The next type of vulnerability we're going to talk about is a configuration vulnerability.
Earlier, when we talked about software vulnerabilities, we were talking about an actual flaw in the coding of the software.
In the case of configuration vulnerabilities, we're not talking about flaws in the code itself, but flaws in the way that the hardware or software is configured or deployed. There's no bug in the actual code, but the way we implemented it is not the most secure way.
Some common examples of this are things like lack of encryption. Maybe we created a website and forgot to turn on encryption.
It's not a flaw in the code; we just forgot to turn on encryption. Another is excessive access: maybe we didn't lock permissions down properly. Or maybe we disabled logging, so the security team can't really see when there are attacks against the system.
Now, mitigation for configuration vulnerabilities: just like you can scan for software vulnerabilities, those same scanning technologies usually have configuration scanning built into them. In this case, we're talking about CIS scanning. CIS stands for Center for Internet Security,
and what CIS does is maintain a list of best practices for how different software and applications are configured. So when you scan your environment for vulnerabilities, you can also come back with a CIS score that tells you, hey, this encryption is disabled, or this isn't password protected, or whatever it is. The same type of regular scanning
that helps identify software vulnerabilities will also help identify configuration vulnerabilities.
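The idea behind that kind of configuration scan can be sketched as a simple baseline comparison: check each system's settings against a hardened baseline and flag every deviation. This is an illustrative sketch only; the setting names and baseline values here are made up for the example and are not real CIS benchmark items.

```python
# Minimal sketch of a CIS-style configuration check.
# The settings and baseline below are hypothetical, for illustration only.

BASELINE = {
    "encryption_enabled": True,       # e.g. TLS turned on for a web server
    "logging_enabled": True,          # security team needs visibility
    "default_password_changed": True,
    "guest_account_disabled": True,
}

def scan_config(system_config):
    """Return a list of findings where the system deviates from the baseline."""
    findings = []
    for setting, required in BASELINE.items():
        actual = system_config.get(setting)
        if actual != required:
            findings.append(f"{setting}: expected {required}, found {actual}")
    return findings

def compliance_score(system_config):
    """Percentage of baseline settings the system satisfies."""
    passed = len(BASELINE) - len(scan_config(system_config))
    return 100.0 * passed / len(BASELINE)

web_server = {
    "encryption_enabled": False,      # someone forgot to turn on TLS
    "logging_enabled": False,         # logging was disabled
    "default_password_changed": True,
    "guest_account_disabled": True,
}

for finding in scan_config(web_server):
    print("FINDING:", finding)
print(f"Compliance score: {compliance_score(web_server):.0f}%")
```

Real scanners work the same way at heart: a library of expected settings, a comparison against what's actually deployed, and a score summarizing how far off you are.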
Standard images are another good mitigation. We have a session later in the course on this, but when we're talking about standard images, or golden images, here's the idea: if you deploy Windows 10 to 5,000 systems in your organization, instead of
going through the entire installation process from scratch and having a custom installation every time, standard images give us a way to create a base image. We know this base image is hardened, it has all the right security features in place, and then we can use that image to deploy
and distribute across the environment instead of trying to rebuild it every time, where we may make mistakes.
It's also critical in configuration vulnerability mitigation to have a go-live and go-dead process. As new systems come into the environment, you should have a process in place with checkboxes that say yes, the system is hardened, I've done all of the right things.
There should be a go/no-go decision that says yes, everything's checked, go ahead and put it in production, and then the system gets added to the environment.
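A go-live gate like the one described above can be sketched as a simple checklist function: the decision is "GO" only when every item is checked off. The checklist items here are hypothetical examples, not a prescribed standard.

```python
# Sketch of a go-live gate. The checklist items are hypothetical examples.

GO_LIVE_CHECKLIST = [
    "hardened_from_golden_image",
    "encryption_enabled",
    "logging_enabled",
    "vulnerability_scan_clean",
]

def go_no_go(completed_checks):
    """Return ('GO', []) only if every checklist item is checked off,
    otherwise ('NO-GO', <list of missing items>)."""
    missing = [item for item in GO_LIVE_CHECKLIST if item not in completed_checks]
    if missing:
        return "NO-GO", missing
    return "GO", []

decision, missing = go_no_go({"hardened_from_golden_image", "encryption_enabled"})
print(decision, missing)   # NO-GO, with the unfinished items listed
```

The point of encoding the gate, even this simply, is that nothing reaches production on someone's memory alone; the missing items are listed explicitly.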
Same thing when a system gets removed from the environment. I've seen a lot of times, especially in virtual environments, where you've got a system you're trying to decommission. You take it off the network and shut it down, but because it's a virtual system, a snapshot of it is still out there, and a lot of times that thing will just pop back up
because someone did not follow through with
deleting that snapshot, archiving it, or moving it to a place where it couldn't accidentally get turned back on. So these things will pop back up, and you'll have these vulnerable systems that you thought you got rid of popping back into the environment. We sometimes call those zombies.
The next type of vulnerability we're going to talk about is a hardware vulnerability. When we talk about hardware, these are much more complex: these are flaws in the way the physical electronic components handle data. Some examples of this are Meltdown, Spectre, and MDS, otherwise known as Microarchitectural Data Sampling.
Let's take a look at Meltdown as an example.
Before we talk about Meltdown, we have to talk about how normal computer operation works. In a normal environment, a request comes into the CPU from an application, and the CPU will go and retrieve the information from RAM.
So there's an application running, maybe with something loaded into active memory. The request comes in,
the CPU goes and retrieves the data from active memory, and then it actually stores it in its cache. The CPU has memory of its own,
and the reason it does this is to speed up processing. When a request comes in for something that's used all the time, the CPU will grab it out of RAM and store it in its local cache, so the next time a request comes in it can process it and respond much more quickly.
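The caching behavior described above can be sketched with a tiny simulation: a slow "RAM" lookup and a fast local cache that gets filled on first access. This is a toy model, not how a real CPU cache is implemented; the data and delay are made up for illustration.

```python
# Toy model of CPU caching: first access goes to slow "RAM",
# repeat accesses are served from the fast local "cache".

import time

RAM = {"user_record": "alice"}    # pretend main memory (slow to reach)
cache = {}                        # pretend CPU cache (fast, starts empty)

def fetch(key):
    """Return (value, source), serving from the cache when possible."""
    if key in cache:
        return cache[key], "cache"
    time.sleep(0.01)              # simulate the slower trip out to RAM
    value = RAM[key]
    cache[key] = value            # keep a local copy for next time
    return value, "ram"

_, first = fetch("user_record")   # first access has to go to RAM
_, second = fetch("user_record")  # repeat access is served from cache
print(first, second)              # ram cache
```

That time difference between a cache hit and a trip to RAM is exactly what Meltdown measures, as we'll see next.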
Now, this was great and it worked well, but as competition to build faster and faster CPUs got more and more fierce, chipmakers hit a limit. There's a limit, in today's technology, to how fast a CPU can actually process requests.
So some chipmakers, not all, but some, introduced the concept of speculative execution
to speed up the CPU even more. Essentially, what speculative execution means is this: when a request comes into a CPU, only certain requests are valid. Operating systems can make requests to CPUs,
but applications can't do so directly. Applications are supposed to go through
the operating system to make a request.
With speculative execution, when a request comes in, the chip will start to actually process that request before it even knows if it's valid. It will simultaneously process the request while it checks whether it's valid, and the reason for that is that the moment it confirms the request is valid, it can instantly respond, without having to then take another step. That
speeds up the overall execution.
It won't actually send the response back until it knows it's a valid request.
But what it will do is mark the request as ready for delivery, and then, assuming the chip determines it was a valid request, it releases it.
So with speculative execution, the same kind of thing happens: a request comes in, the chip speculates, it looks in its local cache to see if it has the answer, it goes to RAM if the answer isn't there, and then it responds to the request.
Well, Meltdown takes advantage of that speculative execution. A malicious application can use the Meltdown vulnerability to send a request to the CPU, and it's a small request. Let's just say, for example, it asks: hey,
is the first letter of the current user's password the letter T?
And at the same time, the malicious application starts a timer.
Now, when the chip starts to speculate, remember, it will mark that request as ready to be responded to. It won't respond to it, but it will mark it as ready, and the malicious code, while it can't see the response, can see that the request is ready to be responded to. That's where the timer comes in.
The malicious code knows that if the request was marked ready within a certain time, the answer had to have been in the local cache on the CPU, which means it's true.
If it took longer for the request to be marked ready,
it knows the CPU had to go retrieve that information from RAM. We're talking about nanoseconds here, a minuscule amount of time, but the malicious code knows that if the marking happens faster than a certain threshold, the answer to the question is true.
So just by timing how long speculative execution took to mark the request as ready for response,
the malicious code can now say: ah, I know the first letter of the password is T. And you can see how doing this over and over again can give the malicious code the entire password. That's a rundown of the Meltdown exploitation.
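The timing trick described above can be simulated in a few lines: if probing a guess comes back "fast", the data was in the cache, so the guess is inferred to be correct, and the attacker recovers the secret one character at a time. This is a toy model of the side channel, not real Meltdown exploit code; the secret, the timings, and the threshold are all made up for illustration.

```python
# Toy simulation of a Meltdown-style timing side channel.
# SECRET, the timings, and the threshold are hypothetical.

import string

SECRET = "t0psecret"   # pretend secret the attacker cannot read directly

def speculative_probe(position, guess):
    """Simulated 'time until the request is marked ready', in pretend ns.

    In real Meltdown, speculation touches the cache when the guess matches,
    so the probe completes faster. Here we just model fast vs. slow.
    """
    FAST, SLOW = 10, 100
    return FAST if SECRET[position] == guess else SLOW

def recover_secret(length):
    """Recover the secret one character at a time by timing each guess."""
    THRESHOLD = 50
    recovered = ""
    for pos in range(length):
        for guess in string.ascii_lowercase + string.digits:
            if speculative_probe(pos, guess) < THRESHOLD:  # fast => cached => true
                recovered += guess
                break
    return recovered

print(recover_secret(len(SECRET)))   # recovers "t0psecret" character by character
```

Notice the attacker never reads the secret directly; every bit of it leaks purely through how long each probe takes, which is why this class of flaw is so hard to fix in software.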
Now, what do you need to know about mitigating hardware vulnerabilities? Well, first of all, you cannot patch hardware. We're talking about a vulnerability in the way the chip itself is actually made, the way it functions, so you cannot patch the hardware itself. You could replace it, but if you're replacing one
manufacturer's part with another's, you may have to replace the underlying motherboard and other components, and it gets very complicated.
You can, however, patch the software that uses that hardware. In this case, you can patch the BIOS or the operating system. For Meltdown specifically, there's a patch for the BIOS and for the Windows operating system that you can apply, but it comes with a downside.
Remember, speculative execution was invented so the CPU runs faster and can return responses faster.
By applying this patch, you're not allowing the chip to do speculative execution, thereby hurting your performance. So you have to weigh the risk versus the reward: what's the likelihood that this is going to be exploited, and the impact if it is, versus the cost of a slower CPU?
Now, the good news is that exploitation of hardware vulnerabilities is much, much rarer than exploitation of other types of vulnerabilities, so you're not going to see exploits nearly as often as you would with software vulnerabilities.
That brings us to the end of our technology vulnerabilities section. Next up, in lesson 1.2, we're going to talk about process vulnerabilities.