In this video, we will cover cloud workload security and its impact on compute: security controls, monitoring and logging, as well as vulnerability assessments.
We've covered the different categories of cloud compute, and it's become clear that tenants share compute nodes and the provider must maintain some form of isolation. This is often done using hypervisors so that the same physical machine can host virtual machines from different tenants without them accessing each other's memory.
Some providers give you the option to dedicate physical machines, which will run only your virtual machines and therefore will not be co-tenant with anybody else.
But generally, you have minimal control over where your workload physically executes within the data center.
The new approach of running workloads in the cloud brings many benefits, but it also impacts traditional security controls. The traditional method of endpoint security, installing agents on machines that do things like run antivirus checks or provide some sort of centralized configuration management or monitoring capability, isn't well suited for workloads running in serverless, container, or platform-based modes.
In many situations, it just won't be feasible to install agents because of performance problems and low-level incompatibilities, even on virtual machines. Agent management must support a high rate of node change. Again, the servers are cattle that are being cycled frequently, so they're coming on and off, and they need to register and deregister from whatever centralized mechanism the agents report into. And finally, the agents should not increase the attack surface by doing things like exposing extra ports. Consider the micro-segmentation philosophy we discussed earlier, which is really about managing network traffic closely: every open port increases the attack surface, and we want to keep this to an absolute minimum.
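To make that churn concrete, here is a minimal sketch of a cattle-friendly agent. The `Registry` class and its `register`/`deregister` API are hypothetical stand-ins for a real agent-management service; the point is that the agent registers on boot, deregisters on termination, and only makes outbound calls, so it opens no listening ports.

```python
import uuid

class Registry:
    """In-memory stand-in for a central agent-management service."""

    def __init__(self):
        self.nodes = {}

    def register(self, node_id, metadata):
        self.nodes[node_id] = metadata

    def deregister(self, node_id):
        self.nodes.pop(node_id, None)


class Agent:
    """Lightweight agent tied to one short-lived node."""

    def __init__(self, registry):
        self.registry = registry
        # Use a stable unique ID, not the node's IP address.
        self.node_id = str(uuid.uuid4())

    def on_boot(self):
        # Outbound call only -- the agent never listens on a port,
        # so it adds no new attack surface.
        self.registry.register(self.node_id, {"role": "web"})

    def on_terminate(self):
        self.registry.deregister(self.node_id)


registry = Registry()
agent = Agent(registry)
agent.on_boot()        # node appears in the central inventory
assert agent.node_id in registry.nodes
agent.on_terminate()   # node disappears when the instance is cycled
assert agent.node_id not in registry.nodes
```

A real deployment would replace `Registry` with calls to your monitoring platform's API, triggered from instance startup and shutdown hooks.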
When we look at monitoring and logging in the cloud, keep in mind that an IP address is not a good identifier; other unique identifiers should be used. Many cloud-native monitoring systems help realize the concept of observability, where an application is designed with health and performance monitoring top of mind.
The ephemeral nature of cloud requires offloading logs.
You just don't know how long the server or the container will be around, so traditional logging architectures may not work in a cloud topology. We talked about agents and some of their shortcomings in the cloud.
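A minimal sketch of what offloading looks like in practice: emit structured log events keyed on a stable instance identifier rather than an IP address, and write them to stdout so a platform log collector can ship them off the node immediately. The `INSTANCE_ID` here is generated locally for illustration; in practice you would read it from your provider's instance metadata.

```python
import json
import sys
import time
import uuid

# Hypothetical stand-in for a provider-assigned instance ID.
INSTANCE_ID = str(uuid.uuid4())

def log_event(level, message, **fields):
    """Emit one structured log line and return it for inspection."""
    event = {
        "ts": time.time(),           # when it happened
        "instance_id": INSTANCE_ID,  # stable identifier, not an IP
        "level": level,
        "message": message,
        **fields,
    }
    line = json.dumps(event)
    # stdout is the offload point: the ephemeral node keeps nothing
    # on local disk; the collector ships the line off-host.
    sys.stdout.write(line + "\n")
    return line

log_event("info", "request handled", path="/health", status=200)
```

Because each event carries the instance ID, events remain attributable long after the node itself, and its IP address, are gone.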
The more distributed you get across different regions within the cloud, the less well a centralized security information and event management (SIEM) system will work. SIEM vendors may have an answer for deploying their solution in the cloud,
but you'll need to make the judgment call on whether it adopts cloud-native mindsets or is just a lift and shift. Beware of the points we talked about for virtual appliances: scalability, failover, and bottlenecks.
Then there are vulnerability assessments. These are tests to determine whether a potential vulnerability exists. A vulnerability assessment is very similar to a pen test, but in a pen test you're not only identifying the vulnerabilities, you also try to exploit them. When you perform a vulnerability assessment against a cloud provider, it's not clear to them whether you're just doing a vulnerability assessment or a pen test.
And for that matter, they don't know if you are a good guy or a bad guy, so these tests will often be limited by the provider, and it's very important that you let the provider know. Inform the provider that you're going to be doing a vulnerability assessment before you actually do it, because it may set off a lot of legitimate alarms as the provider thinks you're trying to hack them.
The default-deny nature of cloud networks also limits the effectiveness of external testing. Hopefully, your teams aren't overriding the default deny with a higher-priority allow-all type rule. Because so many ports will be locked down and traffic routes restricted, the testing process itself will be very constrained. In the grander scheme of things, this is good news, but if somebody does open up network ports or allow additional paths, that can expose a lot of vulnerabilities that your assessment overlooked. That's where putting your focus and energy on assessing the server images is going to pay off. When reviewing the immutable workload pipeline, you may recall that we had security testing integrated with the actual creation of the server images.
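The pipeline step described above can be sketched as a build gate: the vulnerability assessment runs against the server image at build time, and the build fails before the image is ever published. The findings format and `gate_image` function are hypothetical; a real pipeline would feed in the output of an actual image scanner, and only the gating logic is shown here.

```python
# Severity ordering assumed for this sketch.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_image(findings, fail_at="high"):
    """Return True if the image may be published, False to fail the build.

    findings: list of {"id": ..., "severity": ...} dicts from a scanner.
    fail_at:  lowest severity that blocks publication.
    """
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings if SEVERITY_RANK[f["severity"]] >= threshold]
    return len(blocking) == 0

# Simulated scanner output for one image build (fake CVE IDs).
findings = [
    {"id": "CVE-0000-0001", "severity": "medium"},
    {"id": "CVE-0000-0002", "severity": "critical"},
]
assert gate_image(findings) is False                                   # critical finding blocks the build
assert gate_image([{"id": "CVE-0000-0003", "severity": "low"}]) is True  # low-severity image ships
```

Because every running server comes from an image that passed this gate, assessing the image once covers the whole fleet, without black-box scanning through the provider's network.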
This is a great example of having the vulnerability assessments focus on the images instead of working at the cloud provider and examining the system in a black-box manner. These concepts change a little when you're dealing with PaaS and SaaS scenarios: in the same way you'll tell your IaaS provider when you're about to do a vulnerability assessment, you want to do that for your PaaS and SaaS providers as well.
An open line of communication is invaluable when performing vulnerability assessments. In this video, we talked about cloud workload security and its impact on compute: security controls, monitoring and logging, as well as vulnerability assessments.