Okay, so you are massively prepared for anything at this point. Now let's go through what it takes to detect problems and examine them. Specifically, we will talk about building alerts, responding to alerts and analyzing the attack.
When building alerts, you want to know your data sources. Keep in mind there are things the provider will not give you, at the risk of exposing other tenants. But instrumentation in your application code can give you extra insights to fill that gap.
Don't forget to monitor the management plane itself. Even if an attacker doesn't get full control of your management plane, they may get partial control and modify firewall rules. Or they may not get any control at all, but you'll see repeated failures in their attempts to access the management plane and gain control.
Establish automated alerting on unexpected events or behaviors. This may mean integrating with existing monitoring tools, or it may require new monitoring. When it comes to monitoring VMs built in an IaaS environment, your existing tools are likely to work. But monitoring the management plane or the PaaS and SaaS services probably won't work with traditional monitoring tools.
Validate alerts and escalations. Look out for false positives. Too many false positives, and you overlook the real problems when they occur. It's basically a cloud security version of the boy who cried wolf.
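One common way to tame false positives is to alert on repeated behavior within a time window rather than on every single event. Here's a minimal sketch of that idea; the class name, threshold, and window are illustrative assumptions, not values from this course.

```python
# Illustrative sketch: only alert when failed logins repeat within a
# sliding window, instead of firing on every individual failure.
# Threshold and window values here are placeholders -- tune to your data.
from collections import deque

class LoginAlerter:
    def __init__(self, threshold=5, window_seconds=60):
        self.threshold = threshold
        self.window = window_seconds
        self.failures = deque()  # timestamps of recent failed logins

    def record_failure(self, ts: float) -> bool:
        """Return True if this failure should raise an alert."""
        self.failures.append(ts)
        # drop failures that have aged out of the sliding window
        while self.failures and ts - self.failures[0] > self.window:
            self.failures.popleft()
        return len(self.failures) >= self.threshold

alerter = LoginAlerter()
events = [0, 5, 10, 15, 20]  # five failures in 20 seconds
fired = [alerter.record_failure(t) for t in events]
print(fired)  # only the fifth failure crosses the threshold
```

The point is not this particular rule, it's that every alert should encode some judgment about what is genuinely unexpected, so the real problems stand out.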
Leverage automated incident response workflows when possible. Take advantage of the fact that the metastructure is often API driven.
This means you can automate your standard response protocols, for example, creating a snapshot of a virtual machine disk, which could be used later for forensic review, and then automatically replacing that compromised VM. When you find an event, you may also want to copy certain logs off to a safe location.
There are a lot more examples, and I'm sure you'll be able to think of them if you examine your incident response possibilities.
But the key point here is automate wherever possible to make your lives easier.
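As a rough sketch of what such automation looks like, here's the snapshot-then-replace protocol from above expressed in code. The `CloudClient` is a stand-in for your provider's real SDK; its method names, the VM ID, and the log destination are all hypothetical.

```python
# Hypothetical automated response: snapshot the disk, preserve logs,
# then swap in a clean VM. CloudClient is a stub standing in for a real
# provider SDK; every name here is illustrative, not a real API.

class CloudClient:
    """Stub in place of a real cloud provider SDK."""
    def snapshot_disk(self, vm_id):
        return f"snap-{vm_id}"        # forensic copy of the disk
    def copy_logs(self, vm_id, dest):
        return f"{dest}/{vm_id}.log"  # logs preserved off the compromised host
    def replace_vm(self, vm_id):
        return f"vm-new-{vm_id}"      # fresh instance from a known-good image

def respond_to_alert(client, vm_id, log_bucket):
    """One standard response protocol, automated end to end."""
    snapshot = client.snapshot_disk(vm_id)          # 1. preserve evidence first
    log_copy = client.copy_logs(vm_id, log_bucket)  # 2. move logs somewhere safe
    new_vm = client.replace_vm(vm_id)               # 3. swap in a clean VM
    return {"snapshot": snapshot, "logs": log_copy, "replacement": new_vm}

result = respond_to_alert(CloudClient(), "vm-1234", "s3://ir-evidence")
print(result)
```

Notice the ordering: evidence is preserved before anything is replaced, so the automated cleanup never destroys what forensics will need later.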
And once you have this great alerting in place, how do you respond to it?
First thing you want to do is estimate the scope of impact. Keep in mind at this point you haven't done a thorough analysis, but you want to have some rough feel for the impact. You can certainly revise this estimate as you learn more about the incident, but you have to start somewhere. Assign an incident manager to coordinate further. This is your point person for the event.
If there is a flurry of events within a certain time frame, you may build a small team,
but you still want somebody to be the appointed leader.
Designate a communication handler to provide containment and recovery status. This is the person who needs to partner with the incident manager without overburdening the incident manager with constant hounding for status updates.
So an alert fired, and now you're responding to it. Next thing to do is start analyzing the attack. Collect logs and, if you're in the IaaS model, machine images. Many IaaS providers give you the ability to pause a machine, thereby taking it offline but keeping volatile memory around for more thorough forensics later.
Be aware of chain of custody when handling forensic data.
We talked about chain of custody in earlier modules about legal matters. In the event of legal prosecution, the information you are analyzing may become evidence, and you want to make sure it's handled appropriately so it can be submitted in a court of law.
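A small technical piece of chain of custody is proving the evidence hasn't been altered since collection. One common approach is to hash each artifact at collection time and log who handled it and when. This sketch assumes the evidence is a local file; the file name and handler label are made up for the example.

```python
# Minimal chain-of-custody sketch: hash the evidence at collection time
# and record a custody entry. File name and handler are illustrative.
import hashlib
from datetime import datetime, timezone

def record_custody(path: str, handler: str) -> dict:
    """Hash the evidence file and build a custody log entry."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "evidence": path,
        "sha256": digest,  # lets you later prove the file is unchanged
        "handler": handler,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: create a dummy log file and record its custody entry.
with open("access.log", "w") as f:
    f.write("suspicious login from 203.0.113.7\n")

entry = record_custody("access.log", "incident-manager")
print(entry["sha256"])
```

The hash can be recomputed at any later point; if it still matches the recorded value, you can demonstrate the artifact is the same one that was collected.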
Build a timeline for the attack, and determine the extent of potential data loss. Make sure the network isolation and firewall rules you expected to be in place still are. This is where your infrastructure as code is very handy. See if any similar cloud resources were attacked, even if you didn't get alerts about them.
Storage access logs and management plane logs will be invaluable to you in this situation.
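The firewall check mentioned above can be as simple as diffing the live rule set against the one your infrastructure-as-code defines. Here's a sketch where rules are modeled as plain (protocol, port, CIDR) tuples; in practice the "actual" set would come from your provider's API, and these particular rules are invented for illustration.

```python
# Sketch: compare live firewall rules against the expected set from your
# infrastructure-as-code definitions. Rules are simplified to
# (protocol, port, cidr) tuples; the example rules are hypothetical.
def diff_rules(expected: set, actual: set) -> dict:
    """Return rules an attacker may have added or removed."""
    return {
        "unexpected": actual - expected,  # added by someone -- investigate
        "missing": expected - actual,     # removed -- isolation may be gone
    }

expected = {("tcp", 443, "0.0.0.0/0"), ("tcp", 22, "10.0.0.0/8")}
actual   = {("tcp", 443, "0.0.0.0/0"), ("tcp", 3389, "0.0.0.0/0")}

delta = diff_rules(expected, actual)
print(delta["unexpected"])  # a wide-open RDP rule nobody should have added
print(delta["missing"])     # the internal SSH rule that disappeared
```

Because your infrastructure as code already captures the expected state, this kind of comparison is cheap to automate and can even run on every alert.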
In summary, we've covered building alerts, responding to alerts, and then getting your arms around the problem by analyzing the attack.