Compiling a Final Heatmap Part 1

Video Transcription
Welcome to Part 1 of Lesson 3, Module 3, within the ATT&CK-Based SOC Assessments training course.
In this lesson, we're going to talk about how you can compile a final heat map as part of an ATT&CK-based SOC assessment.
This lesson fits into the fifth phase of our generic ATT&CK-based SOC assessment methodology. Here, you've set the rubric, you've done all the technical analysis of the SOC's components, and you've interviewed staff. Now your task is to bring it all together into one single coverage chart that you can turn over to the SOC
to help them understand where they stand.
This lesson has two primary learning objectives.
Number one: after the lesson, you should understand what's needed before compiling the final results. And number two: after the lesson, you should be able to aggregate heat maps and the interview results together.
So creating a final coverage chart ultimately boils down to three core steps.
Number one: create heat maps denoting what each analytic and each tool will be able to detect.
Here your focus is primarily on the analytics and tools, and less so on the data sources, which we'll talk about a little bit in the next lesson.
then aggregate the results from Step one, creating a combined heat map.
When you're doing this, always choose the highest score when looking at just tools and analytics.
And then, once you have that aggregated heat map, augment the results using anything you discovered looking at policies and procedures, as well as the interviews.
Policies and procedures, if you have them, will
be helpful for discussing how specific tools are used and what potential mitigations might be deployed. Interview results are, of course, helpful as well, because they also go into detail on tools. But they'll also provide other information that might speak to other strengths, or even gaps, that they have in coverage.
And this process is useful, but we can also make it into a formula. Essentially, we'll start with the tool coverage and the analytic coverage and add those together,
add in the positives from the interviews,
subtract out the negatives from the interviews and use that as our final result.
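As a rough illustration, the aggregation half of this formula can be sketched in a few lines of Python. This is a minimal sketch under the assumption that scores are ordered low < some < high; the function name and score labels are ours, not part of any ATT&CK tooling.

```python
# Minimal sketch of the aggregation step: per technique, always keep
# the highest score seen across the input heat maps (tool coverage,
# analytic coverage, and interview positives). Labels are illustrative.
SCORE_ORDER = {"low": 0, "some": 1, "high": 2}

def aggregate(*heatmaps):
    """Combine heat maps, choosing the highest score per technique."""
    combined = {}
    for heatmap in heatmaps:
        for technique, score in heatmap.items():
            best = combined.get(technique)
            if best is None or SCORE_ORDER[score] > SCORE_ORDER[best]:
                combined[technique] = score
    return combined
```

For example, if one tool scores a technique as low confidence and an analytic scores it high, the combined chart keeps high; interview negatives are handled separately in the subtraction step.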
So what does that look like in practice here? We're gonna walk through an example where we're gonna aggregate tool coverage, analytic coverage and interview results.
In this example, we're going to use our go-to rubric,
where we're going to use a heat map with low confidence of detection, some confidence, and high confidence shown in white, yellow, and green.
And then we're also going to focus on only a small subset of the attack matrix just to make it a little bit more visible.
Here on the screen, you can see a coverage chart for tool 1,
as well as a coverage chart for tool 2. From a process perspective, what we're going to do is take the top coverage chart,
add in what's
covered additionally on the middle coverage chart, and get a bottom chart that has everything put together.
So, to walk through what that looks like: we can see tool 2 provides a lot of extra coverage on top of tool 1. In particular, we have coverage for Inter-Process Communication,
Group Policy Modification, Password Spraying, and Credential
Stuffing, all provided by tool 2, that isn't covered at all within tool 1. So we know that in the aggregated coverage chart, those are going to be extra additions.
We can also see tool 2 providing some coverage of JavaScript, but this really isn't that important, because tool 1 provides high confidence of detection.
When you put those together, you get a somewhat more enhanced coverage chart that effectively takes all of tool 1 and most of tool 2 together.
The next step is to bump this one up top, use it as the running
heat map that we're going with, and then add in the results from the analytics. Here we have the same exact process, where we see that there are a lot of
extra things covered by the analytics that haven't been covered by tool 1 or tool 2, and we add those together into our aggregated heat map.
And one of the interesting things to note here is that in the initial heat map, with tool 1 and tool 2 by themselves, we have Inter-Process Communication scored as some confidence,
whereas the analytics have high confidence of detection for Component Object Model and Dynamic Data Exchange.
In the final heat map on the bottom, you can see that we've kept the some/high/high from tools 1 and 2 and the analytics, and more importantly, we haven't upgraded Inter-Process Communication. This goes back to our previous lessons, where we discussed how to work with sub-techniques and make sure we're not over-abstracting or over-inferring
just based on abstraction.
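The sub-technique rule above can also be checked mechanically. A hedged sketch, assuming ATT&CK-style IDs where a sub-technique's ID extends its parent's (e.g. T1559.001 Component Object Model under T1559 Inter-Process Communication, the example from this lesson); the helper names are ours.

```python
def is_subtechnique_of(child_id, parent_id):
    """True if child_id is a sub-technique of parent_id (ATT&CK-style IDs)."""
    return child_id.startswith(parent_id + ".")

def no_over_inference(before, after):
    """Verify that merging in sub-technique scores changed no parent
    technique's score: parents keep only their own evidence."""
    parents = [tid for tid in before if "." not in tid]
    return all(after.get(tid) == before[tid] for tid in parents)

# Tools score the parent; analytics score only the sub-techniques.
before = {"T1559": "some"}  # Inter-Process Communication
after = {"T1559": "some", "T1559.001": "high", "T1559.002": "high"}
```

Here `no_over_inference(before, after)` holds: the sub-technique scores were added without upgrading the parent.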
Now the next step is to take this aggregated heat map and add in the interview results. The way we'll do this is to look at the interview results as a series of bullet points, where each bullet point provides an additional data point we can use to augment our heat map.
The first one is from the red team. They say that the SOC never detects when they escalate privileges.
Here, we're going to downgrade that Group Policy Modification coverage to low confidence of detection. Because this statement from the red team is very assertive, it's pretty good evidence that there's likely a gap there.
In this example, we're of course going with low confidence, but you might look at this and, depending on the context, say that some confidence is a better fit.
The second statement is from the engineering team.
They say that they block all communications over non-standard ports. This one's pretty straightforward and is good evidence for a mitigation being deployed. What we're going to do is go a little bit outside of our normal rubric and use orange to note in the bottom heat map that Non-Standard Port is likely to be mitigated.
The detection team gives us another interesting piece of information. They say that they don't use tool 1 to detect lateral movement.
Here, you can see that we've taken that piece of information and downgraded the coverage for Lateral Tool Transfer, which was originally high confidence of detection but should be bumped down to low confidence, given that that coverage was provided by tool 1.
And lastly, another interesting one from the detection team: they struggle with all types of inter-process communication.
This one is particularly interesting because of that modifier, "all types of inter-process communication." Instead of just saying they struggle with Inter-Process Communication,
they're saying that they struggle with any type of it.
Because of this kind of broad wording, we look at this and say this isn't just a potential hit to Inter-Process Communication, but also to all of its sub-techniques, since those are types of inter-process communication.
Now, using that statement, we've downgraded component object model and dynamic data exchange from high confidence to some confidence.
And when you put all that together, you get this final composed heat map, where essentially all we've done is walk through each of the individual heat maps and put them together.
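The subtraction side of the process can be sketched the same way. A minimal sketch, assuming the same low < some < high ordering; the example findings mirror this lesson's walkthrough, but the function itself is illustrative, not a real assessment tool.

```python
# Minimal sketch of the subtraction step: an interview negative pulls a
# technique's score down to the lower of the two when they disagree.
SCORE_ORDER = {"low": 0, "some": 1, "high": 2}

def subtract(heatmap, negatives):
    """Apply interview negatives, always choosing the lower score."""
    result = dict(heatmap)
    for technique, score in negatives.items():
        current = result.get(technique)
        if current is not None and SCORE_ORDER[score] < SCORE_ORDER[current]:
            result[technique] = score
    return result

aggregated = {"Group Policy Modification": "some",
              "Lateral Tool Transfer": "high"}
negatives = {
    "Group Policy Modification": "low",  # red team: escalation never detected
    "Lateral Tool Transfer": "low",      # detection team: tool 1 unused here
}
final = subtract(aggregated, negatives)
```

Note that `subtract` only ever lowers scores; upgrades from interview positives belong in the aggregation step.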
So we'll now go through a couple of different exercises.
Feel free to pause the video and see what you think on your own. These are intended to be examples, but you can always just give it a shot yourself, see what you think, and then we'll walk through it together.
So now we'll walk through what we think is a good solution here. This one's a pretty straightforward example: we just have two tools, with a little bit of overlapping coverage and a little bit of different coverage. We're going to walk through the same process we walked through in the previous example.
Here we start with tool 1 on the bottom; we've just copied that heat map and pasted it down below,
and we're going to walk through all of the things covered by tool 2 and add them to the bottom heat map.
So here you can see tool 2 covers Supply Chain Compromise with high confidence, where tool 1 only has low confidence; we'll bump that up to high confidence.
For these four in the middle, you can see tool 2 has some confidence there,
where we have low confidence for tool 1, so those in the final chart are all upgraded to some confidence.
Under Exfiltration, this one's a little more interesting.
Tool 2 provides high confidence for Exfiltration to Code Repository and Exfiltration to Cloud Storage. We're going to directly copy those into the bottom chart, but we're going to leave the primary technique at some confidence of detection.
And then, lastly, Network Denial of Service is provided by tool 2, but it's shadowed by the coverage from tool 1,
and that gives us the final heat map. That kind of brings everything together, just doing a little bit of aggregation there.
Here's another example that we can walk through. Again, feel free to pause the video. This one is more around interview results, so we'll start with this initial heat map up top. We have these four bullet points that we've gotten from the interviews, and the question is: how would you modify that initial heat map to account for what we're being told during the interview stage?
So feel free to pause the video, and then we will dive back into the solution.
Welcome back. Again, for this one, we're going to walk through the same process we walked through before: we're starting with the initial heat map on the bottom, and we're going to walk through each of these bullet points individually.
So first,
via the red team, we're told that systems are frequently unpatched and have vulnerabilities. This one immediately screams exploitation to me, and you can see down on the bottom that that corresponds to privilege escalation, credential access, and some lateral movement as well.
This one's interesting in that they only say that the systems are frequently unpatched and vulnerable. They don't say that this is
not detected well, or that
it's always a problem. It's just
that this issue exists, and from that information, we think it's reasonable to conclude that the high confidence scores for Exploitation for Privilege Escalation and Exploitation of Remote Services
should be downgraded a little bit to some confidence of detection.
The second bullet point we're getting is from the engineering team: PowerShell is disabled on all of their Windows endpoints. This one's pretty straightforward; they're saying that effectively PowerShell can't be used at all
on their Windows endpoints, and so PowerShell as a technique should be considered mitigated.
These last two are super interesting in that they kind of relate to each other.
The detection team is saying that they have high-fidelity alerts for picking up cmd.exe.
This says to us that we should maybe consider calling Windows Command Shell high confidence of detection.
However, the red team gives us the opposite.
They say that to get around the PowerShell block, they use the Windows Command Shell. They're saying that using this technique is actually effective.
These two essentially cancel each other out. Maybe we could go a little bit more with the detection team and call it some confidence of detection. But just given these two here, it seems reasonable to err on the side of caution and assume that there is low confidence of detection there.
The other interesting thing to note is that the red team actually acknowledges the PowerShell block, giving further evidence that the engineering mitigation is being deployed successfully.
When you take all that together, you get this final heat map as your end result,
So, a couple of summary points and takeaways to close out this lesson.
Number one: to aggregate and create that final heat map, you should have the following:
an analytic heat map showing analytic coverage (this is essentially just a heat map showing what the analytics cover);
number two, a heat map for each relevant tool that you're looking at; and number three, an understanding of any strengths and weaknesses that came out during the interviews.
Another summary point is that final coverage can easily be summarized in a relatively straightforward formula.
First, we have an aggregation step, where we take the tool coverage, the analytic coverage, and the interview positives and bring those all together, taking whatever is the most covered, or the
highest level of detection, for each technique.
Then, from there we have a subtraction step. We take that initial aggregate and then remove anything that came up as negatives during the interview.
And then, lastly, whenever there are disagreements, always make sure to choose the higher coverage during aggregation and the lower coverage during subtraction.
Lastly, there's one final point to take away: we've only really scratched the surface of how to do aggregation in this lesson. In the next part of this lesson, we'll look at how to do partial aggregation, to start doing a little bit more complex
heat map aggregation.