Video Transcription
In this video, we're going to talk all about preparation. We'll cover preparation basics and then examine the cloud's impact on the preparation process.
Before we get too far, let's keep in mind: when you fail to prepare, you're preparing to fail.
There are many references and variations on this longstanding proverb.
John Wooden was a wildly successful and influential basketball coach at the University of California, Los Angeles (UCLA).
Keep in mind that I'm an alumnus of the crosstown rival, the University of Southern California, but I have great respect for UCLA in many respects, and I honestly admire John Wooden and many of his philosophies. If you ever want to learn more about leadership, going well beyond the sport of basketball, I highly recommend his books.
So, heeding John Wooden's advice, let's get into the specifics of preparing for cybersecurity incidents. The CSA guidance summarizes some valuable points that you'll want to internalize.
Define a process to handle the various types of incidents. Ideally, this is in writing. If you like writing long policy documents, that's great, but having a shorthand decision tree for the team to reference is incredibly valuable.
Handling communications. We've talked about this, and we'll talk about it more in just a moment.
Incident analysis hardware and software. These are the tools you'll want to have to perform forensic activities. They will come in handy during post-incident analysis, but also in figuring out how to quarantine the source of the problem.
Internal documentation on normal behaviors. This is helpful to prevent false positives and to determine when things are back to the expected state.
Training. Have you ever heard the saying "an ounce of prevention is worth a pound of cure"? This especially applies to end-user training, whether we're talking about common business users or developers. Of course, when things happen, which they will, responders need to know what to do.
Proactive system scanning and network monitoring allow you to detect problems early and help you prevent things from escalating from bad to worse.
Finally, subscriptions to third-party intelligence services. These services are extremely helpful resources for getting information about your adversary, helping you classify the problem and determine ways to contain it.
The cloud paradigm has an impact on those fundamentals we just covered. As is the case with so many other parts of the shared responsibilities model, governance and SLAs act as key tools for addressing these impacts.
For starters, it's very important you understand the allocation of responsibilities between the customer and the provider.
This way, your preparation plans can build on assumptions of who's doing what.
Be sure you have support plans with the provider that reinforce responsibilities, setting expectations for things like: how quickly is the provider obligated to respond to and acknowledge an incident report?
When is the provider obligated to notify you of incidents affecting their service, or incidents they observe affecting your tenancy within their platform?
When there is an incident, what logs and data will the customer have access to, and how long will the provider retain those logs after the incident?
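These contractual questions can be captured as concrete, checkable terms. Here is a minimal Python sketch; the class, field names, and the one-hour window are hypothetical illustrations, not terms from any real provider's SLA.

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical SLA terms pulled from a provider support plan.
@dataclass
class IncidentSupportSla:
    acknowledge_within: timedelta  # how fast the provider must acknowledge a report
    notify_within: timedelta       # how fast they must notify you of incidents they observe
    log_retention: timedelta       # how long incident logs are kept after the event

def acknowledged_in_time(sla: IncidentSupportSla, elapsed: timedelta) -> bool:
    """True if the provider's acknowledgement met the agreed window."""
    return elapsed <= sla.acknowledge_within

sla = IncidentSupportSla(
    acknowledge_within=timedelta(hours=1),
    notify_within=timedelta(hours=4),
    log_retention=timedelta(days=90),
)
print(acknowledged_in_time(sla, timedelta(minutes=45)))  # within the 1-hour window
```

Writing the terms down this way makes it obvious during a post-incident review whether the provider actually met its obligations.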
Let's speak further on communication and the kinds of things you want to account for in your communication plans. Demarcate how the provider contacts the customer and vice versa.
Remember, we're talking about security incidents here, so the escalation path may be different from traditional support and customer service channels. This brings us to the next topic: define the incident response teams between the customer and provider. Be sure not to use individuals as contact points; people get sick, go on vacation, and even leave companies altogether.
Making sure to update each provider whenever these things happen can be tedious and error-prone.
Consider a hotline that forwards to a pool of on-call contacts, or an escalation list of multiple individuals.
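The escalation-list idea can be sketched in a few lines of Python. Everything here is illustrative: the function name, the role-based addresses, and the stand-in `notify` callback are all hypothetical.

```python
from typing import Callable, Iterable, Optional

# Hypothetical escalation helper: walk an ordered contact list until someone
# acknowledges, instead of depending on a single named individual.
def page_escalation_list(contacts: Iterable[str],
                         notify: Callable[[str], bool]) -> Optional[str]:
    """Notify contacts in order; return the first one who acknowledges."""
    for contact in contacts:
        if notify(contact):  # notify() returns True on acknowledgement
            return contact
    return None              # nobody answered; fall back to out-of-band methods

# Example: the first role-based contact is unavailable and does not answer.
answered = page_escalation_list(
    ["security-oncall@example.com", "ir-team@example.com", "ciso-office@example.com"],
    notify=lambda c: c != "security-oncall@example.com",
)
print(answered)  # → ir-team@example.com
```

Note the contacts are role-based mailboxes, not individuals, so the list stays valid even as people come and go.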
Establish out-of-band communication methods. In the course of an incident, you may not be able to rely on digital services during an attack: email, chat, etcetera. So have backup methods. If you're a telco provider and being attacked, you may need to rely on different carriers or different technologies altogether.
However, it's very unlikely that you'll need to resort to the can-and-string method you see in the graphic below.
Last, but certainly not least, be sure to test the process before a real incident happens. This is something you should do on an annual basis, or whenever there are large changes to the support channel.
Data and logs provide valuable insights during incident response. Understand the data you can collect; keep in mind your cloud providers aren't going to give you logs that compromise other tenants.
Set expectations on retention periods. Don't assume you can come back any time after an incident and ask for forensics: cloud providers can't keep all data in perpetuity. And your visibility into the logs lessens as you move from IaaS to SaaS.
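To make the retention point concrete, here's a toy Python check. The retention windows below are made up for illustration; real windows vary by provider, service, and contract.

```python
from datetime import date, timedelta

# Hypothetical retention windows per service model, illustrating that log
# visibility shrinks as you move from IaaS toward SaaS.
RETENTION = {
    "iaas": timedelta(days=90),
    "paas": timedelta(days=30),
    "saas": timedelta(days=7),
}

def logs_still_available(model: str, incident_day: date, today: date) -> bool:
    """Can we still request the incident's logs from the provider?"""
    return today - incident_day <= RETENTION[model]

incident = date(2024, 1, 1)
print(logs_still_available("iaas", incident, date(2024, 2, 15)))  # 45 days out: True
print(logs_still_available("saas", incident, date(2024, 2, 15)))  # 45 days out: False
```

The takeaway: collect and export what you need early, because the window to ask closes.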
Regardless of the service model, you don't have access to the physical layer, which means the toolkit you rely on for responding to and isolating incidents will be different.
This is where the cloud jump kit comes into play. For example, Diffy is an open-source tool developed by the Netflix Security Intelligence and Response Team, and it's specifically designed for AWS.
The little logo in the top corner is the icon for Diffy; see if you can spot the differences between the two cats. Be sure your jump kit can examine cloud platform activities at the metastructure layer, and you also want it to be able to interpret and understand activities of the cloud resources themselves: this is the applistructure layer.
You may have heard the term security by design; this is an aspect of that, and we'll call it incident response by design. Your visibility is limited in a cloud environment, so add logging to your applications to help you see what's going on. Be sure to store logs in a secure location that investigators can access but attackers can't.
You don't want them covering their tracks.
We previously talked about network micro segmentation and even full blown isolation strategies. The goal is to minimize the blast radius and impact of an incident.
Immutable servers shouldn't change while they're running. This simplifies detection: did the server change from its base image?
It also simplifies recovery: just reinstate a new server from the base image.
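The "did the server change from its base image?" check can be sketched as a hash comparison. This is a minimal illustration, not any particular tool's implementation; the file path and contents are invented.

```python
import hashlib

# Minimal sketch of immutable-server drift detection: compare file hashes on a
# running server against the manifest captured from the base image.
def manifest(files: dict[str, bytes]) -> dict[str, str]:
    """Map each path to the SHA-256 digest of its contents."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}

base_image = manifest({"/etc/nginx.conf": b"worker_processes 4;\n"})
running    = manifest({"/etc/nginx.conf": b"worker_processes 4;\n# backdoor\n"})

# Any path whose digest differs from the base image indicates drift.
drifted = [path for path, digest in running.items()
           if base_image.get(path) != digest]
print(drifted)  # the config changed after boot, so the server drifted
```

In practice the base-image manifest is captured at build time, and a non-empty drift list is a signal to investigate and to rebuild from the base image rather than patch in place.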
Infrastructure as code provides an approach similar to the immutable server thinking.
IaC technologies let you declare the way things should look. If there are differences between the way things are actually set up versus how you've declared they should be set up,
running these tools will realign the cloud resources appropriately, getting you back to the state you expected.
Whiteboard hacking is a great way to assess vulnerabilities and create theoretical attacks. Threat modeling is a structured way to perform these assessments, and tabletop exercises can be a lot easier and less costly than full-blown penetration testing.
In this video, we covered the preparation basics. Then we went over the impacts of cloud on preparation, focusing on communication, data and logs, and architecture.