This segment discusses risk calculations in managing and mitigating risk. We discuss in detail terms such as Mean Time To Repair (MTTR) and Single Loss Expectancy (SLE): what they mean, what they relate to, their function and purpose, their impact, and so on. [toggle_content title="Transcript"] Now we review risk calculations. The first topic we look at is the mean time to repair (MTTR). Periodically, machines fail on the network: a machine might crash, or shut down because of a mechanical failure or some other fault. The mean time to repair is a measure of how long it takes to fix the device and put it back into production; it is the amount of downtime the organization can tolerate for that machine. As best practice, when administrators give a mean time to repair, it should include both the time it takes to fix the device and the time it takes to test it. The mean time between failures (MTBF) is a measure of how long you can use a device before it fails. It applies to devices that can fail and be repaired; you want to know how long you can use the device between failures. The mean time to failure (MTTF) is similar, but it applies to devices that you do not plan to repair: how long can you use the device before it fails for good? That is the end of that device. The mean time between failures and the mean time to failure are figures given by the manufacturers, and we usually use them for purchase decisions: "Why should I buy this device and not that device?"
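These metrics are easier to compare with a quick calculation. As a minimal sketch (the formula is not given in the course itself), the standard steady-state availability relationship, Availability = MTBF / (MTBF + MTTR), ties the two repairable-device metrics together; the device names and figures below are hypothetical:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a repairable device: the fraction
    of time it is up, given its mean time between failures (MTBF)
    and mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical purchase comparison: same repair time, different reliability.
cheap_unit = availability(mtbf_hours=500.0, mttr_hours=8.0)
reliable_unit = availability(mtbf_hours=5000.0, mttr_hours=8.0)
print(f"cheap: {cheap_unit:.4f}, reliable: {reliable_unit:.4f}")
```

A higher MTBF (or a lower MTTR, including the testing time) pushes availability closer to 1, which is one concrete way to justify paying more for the more reliable device.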
You want to know the mean time between failures; it is not just about picking the cheaper option, because the cheaper option is not always the best option for you. You want to consider how long you can use the device before it fails. You do not want a device you paid a cheaper price for but have to fix every day: you are not using it while you are fixing it; you are only using it when it is doing what it is meant to do. You want devices with a higher mean time between failures. That is how these figures support purchase decisions, and when devices do go offline, the mean time to repair tells you how soon they can be back online and back in production. That figure is determined by your network administrators and should also include the time required for testing. Next, we talk about the annualized loss expectancy (ALE). This is a measure of how much, in terms of cost, will be lost annually if an incident were to happen: what do you expect to lose each year? In some cases you might be given a figure covering two years; to get the annualized loss expectancy, you divide that figure by two so you can tell what the expected loss is for each year. Then we talk about the single loss expectancy (SLE). Within any network, threats could exploit vulnerabilities. If a threat exploits a vulnerability, what do you expect to lose, in dollars, from that single occurrence? That is your single loss expectancy: each time it happens, what do we lose?
The annualized rate of occurrence (ARO) is a measure of how many times this happens in a year: what is the rate of occurrence annually? You want to do calculations to know how many times in a year a threat could exploit a vulnerability; that is the annualized rate of occurrence, the ARO. When we calculate risk, we use quantitative analysis or qualitative analysis. With quantitative analysis we are calculating risk based on numeric values: there is a dollar amount; you are using numeric values to measure how much loss is experienced. Qualitative analysis is analysis based on experience or individual opinion, and it can be very subjective, because the same threat could be measured by different people with different results: what some people experience could be different from what others experience of the same incident. These are the two forms of analysis we could do. Always bear in mind that when we use numeric values, we are doing quantitative analysis; at the end of the day there is a numeric value: if this thing happens, what do we lose? We lose 400 million. A numeric value is used to measure the risk. With qualitative analysis, where you say, "How much did we lose? We lost a lot of information, we lost reputation," you can only measure that on experience, and that could be very subjective. We also have to define vulnerabilities, threats, and risks. A vulnerability is defined as the absence or weakness of a control. If your controls are there but they are weak, you have a vulnerability; if your controls are missing, you have a vulnerability.
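The three figures above fit together in the standard quantitative risk formula ALE = SLE × ARO. A minimal sketch, using hypothetical dollar amounts:

```python
def annualized_loss_expectancy(sle_dollars: float, aro_per_year: float) -> float:
    """ALE = SLE x ARO: the loss expected per year from one
    threat exploiting one vulnerability."""
    return sle_dollars * aro_per_year

# Hypothetical example: each laptop theft costs $2,500 (SLE)
# and happens about 4 times a year (ARO).
ale = annualized_loss_expectancy(sle_dollars=2500.0, aro_per_year=4.0)
print(ale)  # 10000.0
```

An ALE like this is what lets you compare the expected annual loss against the annual cost of a control, which is the basis of the risk-response decisions discussed below.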
An example could be where we have a lock on a door, but random keys could arbitrarily open the lock. It is a lock, it is a control, but it is a weak control. It could also be that the control is not even there: there is no lock at all. The best definition of a vulnerability is the absence or weakness of a control. Vulnerabilities on your network could also come from missing patches, or from people not following best-practice procedures: they leave at the end of the day without logging off their systems, or they leave to use the rest room without locking their screens. Those are vulnerabilities within the network. Threat agents are any agents that could exploit vulnerabilities; any entity that can exploit a vulnerability is a threat. Risk is defined as the likelihood that something negative will happen: there is a probability of it happening; it might happen, it might not. All these factors need to be considered, because when you have vulnerabilities, it is probable that threat agents will be able to exploit them. That is the risk when we look at the network or the facility. The next topic for this portion is risk response: how do we respond to risk? There are several methods by which we respond to risk. One is to mitigate the risk: you put controls in place so that, should a risk materialize, the impact is limited by the controls. You could also decide to transfer the risk: you buy insurance, you take up a policy, so that some other party is responsible for fixing the problem and taking care of whatever the outcome is.
When you avoid the risk, you back out of a planned activity to avoid the risk involved in that activity. Maybe you plan to set up a factory at a location and later learn that the location is prone to natural environmental disasters; you then decide not to go ahead with that setup. You have backed out of that risk. With risk deterrence, you put in controls to deter the threats, such as putting up warning signs so that malicious actors take their actions elsewhere. With risk acceptance, sometimes the cost of putting in controls would outweigh the benefits derived from the asset, so we decide to accept the risk and do nothing further. In some instances we put controls in place up to a certain point and then say, "We have spent enough on controls; at this level we accept any remaining risk." You cannot reduce risk to zero; you can only reduce it to an acceptable level, at which point you decide to let it be, because you cannot justify spending any more to protect the asset. That way you accept the risk. These are the methods by which we respond to risk: mitigate the risk, transfer the risk, avoid the risk, deter the risk, or accept the risk. What are some risks with cloud computing? What is cloud computing? With cloud computing, you are carrying out your business operations across the internet on other people's computers. We have several models of cloud computing: infrastructure as a service, platform as a service, software as a service, network as a service, and security as a service. In all of these, you are carrying out your business and computing operations on other people's computers across the internet, and there are some risks inherent in doing this, one of which concerns confidentiality.
Many of us use email, and the emails sit on corporate servers elsewhere in the world. Can we guarantee confidentiality? No; we do not know who is at the server looking at the emails. It is possible somebody else could be reading your emails on the server. You cannot guarantee confidentiality; you hope the provider is following best practice, but what if malicious individuals there have access to the server and can glean the content of your emails? We also cannot guarantee availability. If the server goes down, you can only wait for them to bring it back up. Yes, you could have service level agreements in place saying the server should never go down, but it can still happen: the server could crash or be brought down for one reason or another. You cannot ensure availability yourself by saying, "Can you go to the server room and turn it back on?" If it were within your premises you could do that, but this is on some other person's server; you can only hope they bring it back up in time. Control of your data is in other people's hands. What are they doing with it? Are they copying your data? Are they backing up your data? You can only hope they are doing so. Even if you have a service level agreement in place to ensure they back up your data, you want to be certain they are following best practice and that access to the backup location is limited. Security for your backup location should be the same as security for your primary location, because someone in possession of your backup tapes is as good as someone in front of your server. These are some security concerns with cloud computing. Cloud computing offers a lot of benefits, but we should also bear in mind that these concerns could create problems for data confidentiality, data integrity, and availability.
In a previous video we talked about virtualization. Virtualization is technology that allows us to build multiple computers within a hypervisor, that is, within a software environment residing on another computer. This way, we can build multiple virtual machines within one machine. There are also some risks associated with virtualization. First, you have orphan virtual machines. The word orphan describes a child with no parents: a child with no parents cannot be well taken care of and will be malnourished. The same applies to virtual machines. If a machine is no longer in use and has been decommissioned, it is not going to receive updates; nobody is updating it, yet the machine is still on the network. What if somebody comes along and uses it? That introduces a vulnerability into your network: a machine lacking updates is a good point of attack onto the network. Then you have VM escape. Virtual machines that have access to the internet should have the same level of security that your host machines have; otherwise, a malicious person could reach the hypervisor through your virtual machines. Somebody could attack a virtual machine and take over the hypervisor, and from there kill other virtual machines or even take over your host PC. Your host operating system should be secure, and at the same time you need to secure the individual virtual machines: giving them updates, hardening their operating systems, and blocking unnecessary services or ports to ensure that malicious persons cannot get into the individual virtual machines, take over the hypervisor, or control the host PC. It is also possible that some of your personnel might want to run prohibited software; running prohibited software inside a virtual machine makes it difficult to detect the use of such software.
Administrators should disable the use of virtualization on machines that are not meant to run virtual machines. Every virtual system should be documented and accounted for. The essence of documentation is that we know where these machines exist on the network, so we can give them their required updates and harden them exactly as we would harden our host PCs. Finally, there is the risk associated with virtualization that concerns best practices and standards. Because we are running virtual machines, we must still pay for licenses: it is best practice that if you are running software, you pay for the licenses for that software. The fact that you are running it in a virtual machine does not mean you do not pay for your license. Organizations have individuals who could blow the whistle and disclose the fact that unlicensed software is in use, and organizations risk being fined if they are not following the best practices and standards that govern the use of virtual machines. Some standards, such as the Payment Card Industry requirements for credit cards and debit cards, require that certain customer information be stored on different machines. Suppose you have two virtual machines, A and B, within one host PC: you store credit card numbers on one and user addresses on the other. These two machines are still on the same physical host, so this does not meet the standard; even though they are in different virtual machines, they are still within the same box. We have to follow best practice, because there is the risk that somebody attacking that one machine gains access to both A and B, thereby compromising the confidentiality of data that should be protected by the Payment Card Industry standards.
The final topic for section 2.1 is the RTO and the RPO: the recovery time objective and the recovery point objective. The recovery time objective, RTO, is a measure of the time within which we must recover a device that is down; it is the amount of time the organization can tolerate a device or server being down. When we look at downtime, the recovery time objective tells us how long these machines could be down without causing a concern. You must know your recovery time objective so that you can evaluate: this device has been down for so long; at what point does that become a major concern to the organization? The recovery point objective deals with the point in time you want to recover from. Say a user has accidentally deleted all the emails in their inbox and tells the administrator, "Please can you recover my emails for me?" The administrator will gladly say, "Yes, but at what point should I recover from?" That is the recovery point objective. "Take me back to January." "January of this year? January of last year, or the year before?" You want to know the point from which the data should be recovered, and that is what we refer to as the recovery point objective. The recovery time objective deals with how long it will take to do that recovery from the backup; the recovery point objective deals with how far back within the backups we should go to recover you. This is the end of section 2.1 of the Security+ syllabus. [/toggle_content]
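The RPO discussion above implies a simple check on a backup schedule: with periodic backups, the worst case is a failure just before the next backup runs, so the backup interval must not exceed the RPO. A minimal sketch with hypothetical numbers (the transcript itself gives no formula):

```python
def worst_case_data_loss_hours(backup_interval_hours: float) -> float:
    """With periodic backups, a failure just before the next backup
    loses up to one full interval of data."""
    return backup_interval_hours

def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """A backup schedule satisfies the RPO only if the worst-case
    data loss does not exceed the acceptable recovery point."""
    return worst_case_data_loss_hours(backup_interval_hours) <= rpo_hours

print(meets_rpo(backup_interval_hours=24.0, rpo_hours=4.0))  # False: nightly backups miss a 4-hour RPO
print(meets_rpo(backup_interval_hours=1.0, rpo_hours=4.0))   # True: hourly backups satisfy it
```

The RTO is then a separate question: given that recovery point, can the restore itself finish within the tolerated downtime?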