This video is focused on infrastructure security for the cloud. We're going to go over key takeaways from a few different domains: Domain 6, Domain 7, and Domain 8. So if I touch on anything that seems a little fuzzy to you, something you don't fully grasp, feel free to jump back and look at any of those earlier videos.
You may recall the major virtualization categories in the cloud; we specifically reviewed each of the four categories and discussed different ways to secure them. For example, in the compute world, you want to make sure the hypervisor level is well secured and that there's a good patch management process to keep the hypervisors themselves up to date.
This allows you to isolate virtual machines from each other,
and additionally, the cloud provider needs internal processes and technical controls that prevent its own administrators from having access to the virtual machines and the volatile memory associated with compute.
We're going to discuss network in a little more detail later in this video, but keep in mind that the cloud provider has certain responsibilities at the physical layer, and it's the cloud provider's job to ensure there are good, strong perimeter security defenses in place, especially at that physical layer and especially around the management plane.
Then there's storage. We talked about the virtualization of storage, the use of SAN and
NFS and some of the other methodologies the provider may use, as well as proprietary methods; securing it means ensuring there's encryption of the data at rest, and maintaining and managing the encryption keys, possibly even in a way where the tenant themselves may or may not have access to those keys.
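To make that key-management idea a little more concrete, here is a toy sketch of the envelope-encryption pattern: a key-encryption key (KEK), held by the provider or the tenant, wraps a per-object data-encryption key (DEK). All names here are illustrative, and the XOR-with-keystream "wrap" is a stand-in only, not real cryptography; actual services use constructions like AES key wrap behind a KMS.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic byte stream from a key (toy construction, NOT real crypto)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def wrap(kek: bytes, dek: bytes) -> bytes:
    """'Wrap' a data-encryption key under a key-encryption key by XORing with a keyed stream.

    XOR is its own inverse, so calling wrap() again with the same KEK unwraps.
    """
    return bytes(a ^ b for a, b in zip(dek, keystream(kek, len(dek))))

# The provider holds the KEK; each stored object gets its own DEK.
kek = secrets.token_bytes(32)
dek = secrets.token_bytes(32)
wrapped = wrap(kek, dek)
assert wrap(kek, wrapped) == dek  # unwrapping recovers the DEK
```

The point of the pattern is that whoever controls the KEK controls access to the data, which is exactly why it matters whether the tenant or only the provider holds it.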
And finally, there were containers, which are a form of compute, but there are some specific things you want to watch for: the platform that's hosting the containers and, if it's a PaaS-based platform, the cloud provider's capabilities; making sure you properly configure the virtualization services; and understanding the isolation capabilities of the container platform itself
and the underlying operating system
that is hosting the different containers. We discussed the importance of managing container images, including third-party container images, using container registries, ensuring the appropriate controls were in place so that the container images stored in these registries were not being manipulated or altered without our knowledge, and then
implementing the appropriate role-based access controls to ensure strong authentication for the containers themselves, for container-to-container communication, and for management of the container repository that is the source of truth.
Diving a little deeper into network management and virtualized networks: we talked about software-defined networks being definitely the preferred way of doing things,
software-defined networking being exactly what the name says, networking defined in software.
It provides a lot more flexibility in how you evolve the structure of your network, putting together virtual networks, creating segmentation or micro-segmentation within and amongst the different virtual networks.
You may recall we talked about virtual appliances and being very wary of routing all the network traffic through a single hub; it's very important that that hub be very resilient and elastic, or it can create real performance bottlenecks for you. We also talked about implementing a deny-by-default posture,
leveraging the provider's cloud firewalls
to control where traffic can flow, what ports it can flow on, and which different nodes in your virtual network can communicate. Ultimately, you're limiting the blast radius in the event that one part of your network, or a particular machine within it, somehow gets breached or compromised;
then the attacker can only get so far and can only move to certain things.
In fact, segregating those networks, having the separate accounts, even using virtual networks, really makes it a lot more difficult for an attacker to traverse and hop from one cloud resource to another and take control of things. And we always want to restrict the traffic between the different workloads, even those using the same subnet. And again,
the network security groups we talked about, application security groups, these kinds of concepts, which really evolved from the cloud providers' firewalls, give you the ability to manage, direct, and cater the flow of traffic between your different cloud resources.
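The deny-by-default behavior of these security groups can be sketched in a few lines: traffic is dropped unless an explicit allow rule matches. The tier names, ports, and rule shape below are invented for illustration, not any provider's actual schema.

```python
# Minimal sketch of deny-by-default rule evaluation, loosely modeled on
# cloud security groups. Rule fields and semantics are simplified.
from dataclasses import dataclass

@dataclass
class Rule:
    src: str   # source network tag, e.g. "web-tier"
    dst: str   # destination network tag, e.g. "db-tier"
    port: int  # allowed TCP port

ALLOW_RULES = [
    Rule("web-tier", "app-tier", 8443),
    Rule("app-tier", "db-tier", 5432),
]

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Traffic is denied unless an explicit allow rule matches (deny by default)."""
    return any(r.src == src and r.dst == dst and r.port == port for r in ALLOW_RULES)

print(is_allowed("app-tier", "db-tier", 5432))  # True: explicitly allowed
print(is_allowed("web-tier", "db-tier", 5432))  # False: web tier can't reach the DB directly
```

Notice that the web tier reaching the database directly simply has no matching rule, which is exactly the blast-radius limiting described above.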
I spoke about immutable workloads and leveraging these whenever possible, especially for your virtual machines, your containers, and the non-PaaS-type solutions. Disabling remote access greatly enhances your security footprint. You're going to integrate security testing into the process of creating these images,
and you set up alarms so that when the integrity of any files on these immutable images somehow changes or drifts,
you're made aware of it. In fact, you can even create automated procedures and processes to isolate and rebuild those workloads, and this dramatically speeds up the patching process: instead of applying patches to images, servers, and containers that are running in real time, you're going to update the source image and redeploy,
as opposed to applying those patches to the running instances.
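The drift-detection idea needs nothing more than file hashes: snapshot the image at build time, re-hash at runtime, and alarm on any difference. This is a simplified sketch of what file-integrity monitoring tooling does, with an invented two-function API.

```python
# Sketch: detect integrity drift on an "immutable" workload by comparing
# file hashes against a baseline captured at image-build time.
import hashlib
from pathlib import Path

def snapshot(root: Path) -> dict:
    """Map each file path under root to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def drifted(baseline: dict, current: dict) -> set:
    """Files that were added, removed, or modified since the baseline."""
    return {
        path
        for path in baseline.keys() | current.keys()
        if baseline.get(path) != current.get(path)
    }
```

In practice the baseline would be produced in the image pipeline and the runtime check would feed an alarm that triggers the isolate-and-rebuild automation described above.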
And since these are immutable and we really don't want content living on these servers themselves, it's important to store the logs externally, somewhere to a nice, safe location that takes into account who can modify it, just in case you need to deal with chain-of-custody considerations in some sort of prosecution.
And, of course, you don't want these log files to get manipulated by a third party,
as they might be cleaning up their fingerprints and covering up the illicit activities they've been performing on your compute resources. The management plane is a big one, and while a lot of the responsibility falls on the customer, the cloud provider needs to make sure that there is perimeter security around the different API gateways and web consoles that they're providing to the consumer
to use and interact with this management plane.
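Before moving on to the consumer-side controls, the earlier point about tamper-evident log storage can be sketched with a hash chain: each entry's hash covers the previous hash, so altering any past entry invalidates every hash after it. This is an illustration of the building block, not a full logging service.

```python
# Sketch: a tamper-evident log using a hash chain. Editing any earlier
# entry changes every subsequent hash, so verification fails.
import hashlib

def chain(entries: list) -> list:
    """Compute the running hash chain over a list of log entries."""
    hashes, prev = [], "0" * 64  # genesis value for the first link
    for entry in entries:
        prev = hashlib.sha256((prev + entry).encode()).hexdigest()
        hashes.append(prev)
    return hashes

def verify(entries: list, hashes: list) -> bool:
    """True only if the entries reproduce the recorded chain exactly."""
    return chain(entries) == hashes

log = ["login admin", "create-vm web-01", "logout admin"]
proof = chain(log)
assert verify(log, proof)
tampered = ["login admin", "delete-vm web-01", "logout admin"]
assert not verify(tampered, proof)  # the cover-up is detectable
```

Shipping the chain head to an external, write-restricted store is what makes the cleanup-their-fingerprints scenario detectable after the fact.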
On the consumer side, you want to set up strong authentication using MFA, and be very sparing with the use of super admin accounts. We have one super account, what we'll call the God-mode account, and then we create sub-accounts, applying the principle of least privilege for these different admin accounts as well as the different service accounts.
It's important that the authentication be over secure channels.
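To illustrate least privilege for a sub-account or service account, here's a sketch of a policy document that grants only read operations on one storage path; everything not listed stays denied. The JSON shape loosely echoes cloud IAM policies, but the field names and action strings are made up for this illustration.

```python
# Illustrative least-privilege policy document (not a provider-specific schema).
read_only_ops = ["storage:GetObject", "storage:ListBucket"]

def least_privilege_policy(principal: str, actions: list, resource: str) -> dict:
    """Grant only the named actions on one resource; all other actions remain denied."""
    return {
        "Version": "2025-01-01",
        "Statement": [{
            "Effect": "Allow",
            "Principal": principal,
            "Action": actions,
            "Resource": resource,
        }],
    }

# A reporting service account gets read access to one path and nothing else.
policy = least_privilege_policy("svc-reporting", read_only_ops, "bucket/reports/*")
```

The God-mode account never appears in day-to-day policies like this; it exists only to create and scope these narrower accounts.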
Single sign-on has great value here. And finally, rotate the authentication tokens for the service accounts regularly. If one does get compromised, if somehow the authentication token gets out, an attacker can leverage that account; even though they're not acting as an individual, they can still perform actions. So rotate those on a regular basis,
especially those that have powers to perform
actions at the management plane level. Right, so this is the ability to create virtual networks, modify virtual networks, create virtual machines, reconfigure or change PaaS providers, those types of accounts, things with that privilege. You want real strong control over those, because if an attacker compromises them, as we learned in some of the real-world examples,
they can cause a whole lot of damage, much more damage than just pulling out your data; they can actually destroy your entire environment
and lock you out from being able to do anything with all those different cloud resources that you own.
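A rotation policy like that can be as simple as an age check, with a tighter window for tokens scoped to management-plane actions. The scope names and rotation windows below are illustrative choices, not a standard.

```python
# Sketch: flag service-account tokens that are past their rotation window,
# with a stricter window for management-plane-scoped tokens.
from datetime import datetime, timedelta, timezone

MAX_AGE = {
    "management-plane": timedelta(days=30),  # can rebuild/destroy the environment
    "workload": timedelta(days=90),          # scoped to one application
}

def needs_rotation(issued_at: datetime, scope: str, now: datetime) -> bool:
    """True once a token's age exceeds the window for its scope."""
    return now - issued_at > MAX_AGE[scope]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
issued = datetime(2025, 4, 1, tzinfo=timezone.utc)  # 61 days old
assert needs_rotation(issued, "management-plane", now)   # past the 30-day window
assert not needs_rotation(issued, "workload", now)       # still within 90 days
```

A scheduled job running this check against the token inventory is what turns "rotate regularly" from advice into an enforced control.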
And then, last but not least, there's cloud continuity. We looked at continuity and the controls you can put in place at each layer of that cloud logical model.
We start with the metastructure layer, making sure we can back up the cloud configurations in the IaaS or PaaS model. We leverage software-defined infrastructure, and this is above and beyond just software-defined networking: we're actually using infrastructure as code, codifying the build-out of the different cloud resources and how they should be configured.
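As a toy illustration of the infrastructure-as-code idea: once the metastructure is captured as data, backing it up is just serializing that data somewhere outside the provider. The resource schema here is invented for the sketch; real deployments would use a tool like Terraform, CloudFormation, or Bicep.

```python
# Sketch: treat the metastructure as data. Capture resource definitions in
# code, then serialize them so the environment can be rebuilt elsewhere.
import json

infrastructure = {
    "network": {"name": "vnet-prod", "cidr": "10.0.0.0/16"},
    "subnets": [
        {"name": "web", "cidr": "10.0.1.0/24"},
        {"name": "db", "cidr": "10.0.2.0/24"},
    ],
    "firewall": {"default": "deny",
                 "allow": [{"src": "web", "dst": "db", "port": 5432}]},
}

backup = json.dumps(infrastructure, indent=2)  # store this artifact off-platform
restored = json.loads(backup)
assert restored == infrastructure              # a faithful blueprint for rebuild
```

Because the blueprint fully describes the environment, replaying it in another region is a redeploy, not a manual reconstruction.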
It's a little different when we're in the SaaS model,
but work with your provider to do something about that, because this is really your backup, your total "oh shoot" fallback. If you ever need to do a cold-site recovery type situation, you can leverage it to rebuild in a different region of the cloud provider when needed, or to completely reconfigure a SaaS deployment.
At the infrastructure layer, we looked at
leveraging what the provider themselves offers in terms of data replication, cross-regional replication, while also being considerate of the risk of an outage relative to the cost, because when you're doing this kind of thing, the cost can nearly double,
especially if you're doing a hot-hot, active-active type failover scenario.
Data replication, when we're looking at infrastructure, is about the information itself getting replicated across regions. Again, providers will give you mechanisms to do this, and in the SaaS world they may completely take care of it for you. Then there are cloud storage and backup capabilities: make sure you're aware of the appropriate tier as well.
So if you say have an active-passive backup,
when you're replicating your data to that passive site, it doesn't necessarily need to be on the highest-performing storage tier; the data is simply there. Then, when you do need to perform a failover, the data is still there, and at that point you can take advantage and upgrade the performance tier
of the data storage that you're on.
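That tier trade-off is easy to put in rough numbers. The per-GB-month prices below are made-up placeholders, not any provider's actual rates:

```python
# Back-of-the-envelope sketch: monthly cost of keeping the passive replica
# on a cold tier versus the hot tier. Rates are illustrative placeholders.
PRICE_PER_GB = {"hot": 0.023, "cold": 0.004}  # USD per GB-month, invented

def monthly_cost(gb: float, tier: str) -> float:
    """Storage cost for one month, rounded to cents."""
    return round(gb * PRICE_PER_GB[tier], 2)

data_gb = 10_000
print(monthly_cost(data_gb, "hot"))   # 230.0
print(monthly_cost(data_gb, "cold"))  # 40.0 — pay for performance only after failover
```

The passive copy sits on the cheap tier year-round; you only pay hot-tier rates on the site that is actually serving traffic.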
This way, you're not paying more than you need to for that cold backup site you have in place. And then finally, the applistructure layer: understanding the limitations of PaaS and some of the lock-in that comes with it,
and, more important than all of it, designing for failure, assuming that failure is going to happen. We even discussed the concept of chaos engineering and that philosophy; you may not be there yet, it's definitely advanced,
but keep in mind in all your designs that resiliency and failure handling are just going to be a standard way you want to think when you're looking at the cloud paradigm,
and ultimately you're designing for the right amount of resiliency and failure tolerance, so that you don't unnecessarily incur excessive costs that couldn't otherwise be justified by the business criticality of the applications and systems you're deploying into the cloud.
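Designing for failure often starts with small patterns like this one: a retry-with-backoff wrapper around a dependency that a chaos experiment deliberately makes flaky. Everything here is a self-contained sketch; the "service" and its failure rate are simulated.

```python
# Sketch: surviving an injected fault with retries, the kind of behavior a
# chaos experiment verifies. The flaky service is simulated and seeded.
import random

def flaky_service(rng: random.Random) -> str:
    """Simulated dependency that fails whenever the injected fault fires."""
    if rng.random() > 0.5:
        raise TimeoutError("simulated outage")
    return "ok"

def call_with_retries(fn, rng: random.Random, attempts: int = 5) -> str:
    """Retry with exponential backoff; here we just grow the delay instead of sleeping."""
    delay = 1
    for _ in range(attempts):
        try:
            return fn(rng)
        except TimeoutError:
            delay *= 2  # in production: time.sleep(delay) before the next attempt
    raise RuntimeError("all retries exhausted")

rng = random.Random(42)  # seeded so the sketch is deterministic
print(call_with_retries(flaky_service, rng))  # prints "ok" after surviving a failure
```

The design question the lecture raises is how much of this machinery each workload deserves: the retry logic is cheap, but multi-region active-active behind it is not, and the business criticality decides.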