Compiling a Final Heatmap Part 1

Time: 3 hours 16 minutes
Difficulty: Intermediate
Video Transcription
Welcome to Part 1 of Lesson 3, Module 3 within the ATT&CK-based SOC assessments training course. In this lesson, we're going to talk about how you can compile a final heatmap as part of an ATT&CK-based SOC assessment. This lesson fits into the fifth phase of our generic ATT&CK-based SOC assessment methodology. Here you've set the rubric, you've done all the technical analysis of the SOC's components, and you've interviewed staff; now your task is to bring it all together into one single coverage chart that you can turn over to the SOC to help them understand where they stand.
This lesson has two primary learning objectives. Number 1, after the lesson, you should understand what's needed before compiling the final results. Number 2, after the lesson, you should be able to aggregate heatmaps and the interview results together.
Creating a final coverage chart ultimately boils down to three core steps. Number 1, create heatmaps denoting what each analytic and each tool will be able to detect. Here, your focus is primarily on the analytics and tools and less so on the data sources, which we'll talk about a little bit in the next lesson. Then aggregate the results from step 1, creating a combined heatmap; when you're doing this, always choose the highest score when looking at just tools and analytics. Then, once you have that aggregated heatmap, augment the results using anything you've discovered looking at policies and procedures, as well as the interviews. Policies and procedures, if you have them, will help you understand how specific tools are used and what potential mitigations might be deployed. Interview results are of course helpful as well because they also go into detail on tools, but they'll also provide other information that might speak to other strengths, or even gaps, in coverage.
This process is useful, but we can also make it into a formula. Essentially, we'll start with the tool coverage and the analytic coverage. We'll add those together, add in the positives from the interviews, subtract out the negatives from the interviews, and use that as our final result.
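As a rough illustration of that formula, here is a minimal Python sketch of our own (not tooling from the course): the score names, heatmap shape, and starting scores are illustrative assumptions, though the technique IDs are real ATT&CK identifiers.

```python
# A minimal sketch of the final-coverage formula described above,
# assuming a three-level rubric. Score names, heatmap shape, and the
# example scores are illustrative, not from the course material.
from enum import IntEnum

class Score(IntEnum):
    LOW = 0    # low confidence of detection (white)
    SOME = 1   # some confidence of detection (yellow)
    HIGH = 2   # high confidence of detection (green)

def aggregate(*heatmaps: dict) -> dict:
    """Combine heatmaps, always keeping the highest score per technique."""
    combined: dict = {}
    for technique_map in heatmaps:
        for technique, score in technique_map.items():
            combined[technique] = max(combined.get(technique, Score.LOW), score)
    return combined

def subtract(heatmap: dict, negatives: dict) -> dict:
    """Apply interview negatives, always keeping the lower score."""
    return {t: min(s, negatives.get(t, s)) for t, s in heatmap.items()}

# final = (tool coverage + analytic coverage + interview positives)
#         - interview negatives
tool1     = {"T1059.007": Score.HIGH, "T1570": Score.HIGH}      # JavaScript, Lateral Tool Transfer
tool2     = {"T1059.007": Score.SOME, "T1110.003": Score.SOME}  # Password Spraying
analytics = {"T1559.001": Score.HIGH}                           # Component Object Model
positives = {"T1110.003": Score.HIGH}
negatives = {"T1570": Score.LOW}  # e.g. "we never detect lateral movement"

final = subtract(aggregate(tool1, tool2, analytics, positives), negatives)
# -> {'T1059.007': HIGH, 'T1570': LOW, 'T1110.003': HIGH, 'T1559.001': HIGH}
```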
What does that look like in practice? Here, we're going to walk through an example where we aggregate tool coverage, analytic coverage, and interview results. In this example, we're going to use our go-to rubric: a heatmap with low confidence of detection, some confidence, and high confidence, shown in white, yellow, and green. We're also going to focus on only a small subset of the ATT&CK matrix, just to make it a little bit more visible.
Here on the screen, you can see a coverage chart for tool 1 as well as a coverage chart for tool 2. From a process perspective, what we're going to do is take the top coverage chart, add in what's covered additionally on the middle coverage chart, and get a bottom chart that has everything put together.
Here's what that looks like. We can see tool 2 provides a lot of extra coverage on top of tool 1. In particular, we have coverage for inter-process communication, group policy modification, password spraying, and credential stuffing, all provided by tool 2, that isn't covered at all in tool 1. We know that in the aggregated coverage chart, those are going to be extra additions. We can also see tool 2 providing some coverage of JavaScript, but this really isn't that important because tool 1 provides high confidence of detection. When you put those together, you get a somewhat enhanced coverage chart that effectively takes all of tool 1 and most of tool 2 together.
The next step is to bump this one up top, use it as our running heatmap, and then add in the results from the analytics. Here we have the same exact process: we see that there are a lot of extra things covered by analytics that haven't been covered by tool 1 or tool 2, and we add those together into our aggregated heatmap.
Now, one of the interesting things to note here is that in the initial heatmap, with tool 1 and tool 2 by themselves, we have inter-process communication scored at some confidence, whereas the analytics have high confidence of detection for component object model and dynamic data exchange. In the final heatmap on the bottom, you can see that we've kept the some confidence from tools 1 and 2 and the high confidence from the analytics, and, more importantly, we haven't upgraded inter-process communication itself. This goes back to our previous lessons, where we discussed how to work with sub-techniques and make sure we're not over-abstracting or over-inferring just based on abstraction.
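To make that sub-technique rule concrete, here's a small sketch of our own (T1559, T1559.001, and T1559.002 are the actual ATT&CK IDs for inter-process communication, component object model, and dynamic data exchange; the scoring code itself is an illustrative assumption): aggregation compares scores only for the same technique ID, so a sub-technique's high confidence never rolls up into its parent.

```python
# Sketch: scores are tracked per exact technique ID, so a sub-technique's
# high confidence never upgrades its parent. T1559 (Inter-Process
# Communication) and its sub-techniques T1559.001 (COM) and T1559.002
# (DDE) are real ATT&CK IDs; the scoring code is illustrative.
RANK = {"LOW": 0, "SOME": 1, "HIGH": 2}

tools     = {"T1559": "SOME"}                           # from tools 1 and 2
analytics = {"T1559.001": "HIGH", "T1559.002": "HIGH"}  # from the analytics

def merge(*heatmaps: dict) -> dict:
    out: dict = {}
    for technique_map in heatmaps:
        for tid, score in technique_map.items():
            # Compare only entries with the SAME ID: T1559.001's HIGH
            # is never max()-ed into the parent T1559 entry.
            if tid not in out or RANK[score] > RANK[out[tid]]:
                out[tid] = score
    return out

merged = merge(tools, analytics)
assert merged["T1559"] == "SOME"      # parent not over-inferred upward
assert merged["T1559.001"] == "HIGH"  # sub-techniques keep their high scores
```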
Now, the next step is to take this aggregated heatmap and add in the interview results. The way we'll do this is to look at the interview results as a series of bullet points, where each bullet point provides an additional data point for us to augment our heatmap with.
The first one is from the red team. They say that the SOC never detects when they escalate privileges. Here, we're going to downgrade that group policy modification coverage to low confidence of detection, because this statement from the red team is very assertive; it's pretty good evidence that there is likely a gap there. In this example, we're of course going with low confidence, but you might look at this and, depending on the context, say that some confidence is a better fit.
The second statement is from the engineering team. They say that they block all communications over nonstandard ports. This one is pretty straightforward and is good evidence of a mitigation being deployed. What we're going to do is go a little bit outside of our normal rubric and use orange to note in the bottom heatmap that non-standard port is likely to be mitigated.
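One way to capture that out-of-rubric status in tooling is sketched below. This is our own illustration, not something the course prescribes: T1571 is the real ATT&CK ID for Non-Standard Port, but the color values and dictionary shape are assumptions.

```python
# Sketch: recording an out-of-rubric "mitigated" status (the orange cell
# described above). T1571 is the actual ATT&CK ID for Non-Standard Port;
# the color values and data shape are illustrative assumptions.
COLOR = {
    "LOW": "#ffffff",        # white: low confidence of detection
    "SOME": "#ffff66",       # yellow: some confidence of detection
    "HIGH": "#66cc66",       # green: high confidence of detection
    "MITIGATED": "#ff9933",  # orange: mitigation deployed (outside the rubric)
}

heatmap = {"T1571": "SOME"}

# Engineering team: "we block all communications over nonstandard ports"
# -> good evidence of a deployed mitigation, so mark the cell explicitly.
heatmap["T1571"] = "MITIGATED"

cells = [{"techniqueID": t, "color": COLOR[s]} for t, s in heatmap.items()]
```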
The detection team gives us another interesting piece of information. They say that they don't use tool 1 to detect lateral movement. Here, you can see that we've taken that piece of information and downgraded the coverage for lateral tool transfer, which was originally high confidence of detection but should be bumped down to low confidence, given that its coverage was provided by tool 1.
Lastly, another interesting one from the detection team: we struggle with all types of inter-process communication. This one is particularly interesting because of that modifier, all types of inter-process communication. Instead of just saying they struggle with inter-process communication, they're saying that they struggle with any type of it. Because of this broad wording, we look at this and say this isn't just a potential hit to inter-process communication, but also to all of its sub-techniques, since those are types of inter-process communication. Using that statement, we've downgraded component object model and dynamic data exchange from high confidence to some confidence.
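In code, that broad statement becomes a downward-propagating cap: the parent and every sub-technique get downgraded together. Here's a sketch under the same illustrative assumptions as before; note this is the safe direction, the opposite of rolling sub-technique scores upward.

```python
# Sketch: a broad interview negative ("all types of inter-process
# communication") caps the parent technique AND every sub-technique.
# Propagating downward like this is safe; propagating a sub-technique's
# score upward (see the earlier sketch) is not. IDs are real ATT&CK IDs;
# the scoring code is illustrative.
RANK = {"LOW": 0, "SOME": 1, "HIGH": 2}

heatmap = {"T1559": "SOME", "T1559.001": "HIGH", "T1559.002": "HIGH"}

def downgrade_family(heatmap: dict, parent_id: str, ceiling: str) -> None:
    """Cap the parent technique and all of its sub-techniques at `ceiling`."""
    for tid, score in heatmap.items():
        if tid == parent_id or tid.startswith(parent_id + "."):
            if RANK[score] > RANK[ceiling]:
                heatmap[tid] = ceiling

downgrade_family(heatmap, "T1559", "SOME")
# COM (T1559.001) and DDE (T1559.002) drop from HIGH to SOME;
# the parent stays at SOME.
```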
When you put all that together, you get this final composed heatmap, where essentially all we've done is walk through each of the individual heatmaps and put them together.
We'll now go through a couple of different exercises. Feel free to pause the video and see what you think on your own. These are intended to be examples, but you can always give it a shot yourself, see what you think, and then we'll walk through it together.
Now we'll walk through what we think is a good solution here. This one is a pretty straightforward example: we just have two tools, with a little bit of overlapping coverage and a little bit of different coverage. We're going to walk through the same process that we walked through in the previous example. Here, we start with tool 1 on the bottom; we just copy that heatmap and paste it down below. Then we walk through all of the things covered by tool 2 and add them to the bottom heatmap. Here you can see tool 2 covers supply chain compromise with high confidence; where tool 1 only has low confidence, we'll bump that up to high confidence. For these four in the middle, under exfiltration, you can see tool 2 has some confidence there, while we have low confidence for tool 1, so in the final heatmap those are all upgraded to some confidence.
This one's a little more interesting: tool 2 provides high confidence for exfiltration to code repository and exfiltration to cloud storage. We're going to directly copy those into the bottom heatmap, but we're going to leave the parent technique at some confidence of detection. Then lastly, network denial of service is covered by tool 2, but it's shadowed by the coverage from tool 1. That gives us the final heatmap that brings everything together, just doing a little bit of aggregation there.
Here's another example that we can walk through; feel free to pause the video. This one is more around interview results. We'll start with this initial heatmap up top. We have these four bullet points that we've gotten from the interviews, and the question is: how would you modify that initial heatmap to account for what we're being told during the interview stage? Feel free to pause the video, and then we'll dive back into the solution.

Welcome back. Again, for this one, we're going to walk through the same process we walked through before. We're starting with the initial heatmap on the bottom, and we're going to walk through each of these bullet points individually.
First, via the red team, we're told that systems are frequently unpatched and have vulnerabilities. This one immediately screams exploitation. You can see down on the bottom that this corresponds to exploitation for privilege escalation and credential access, and some lateral movement as well. Now, this one is interesting in that they say the systems are frequently unpatched and have vulnerabilities; they don't say that this is not detected well, or that it's always a problem, just that the issue exists. From that information, we think it's reasonable to conclude that the high confidence for exploitation for privilege escalation and exploitation of remote services should be downgraded a little bit, to some confidence of detection.
The second bullet point we're getting is from the engineering team: PowerShell is disabled on all of their Windows endpoints. This one's pretty straightforward. They're saying that, effectively, you can't use PowerShell at all on their Windows endpoints, so PowerShell as a technique should be considered mitigated.
These last two are super interesting in that they relate to each other. The detection team is saying that they have high-fidelity alerts for picking up cmd.exe. That says to us that we should maybe consider calling Windows command shell high confidence of detection. However, the red team gives us the opposite: they say that to get around the PowerShell block, they use the Windows command shell. They're saying that using this technique is actually effective. These two essentially cancel each other out. Maybe we could lean a little more toward the detection team and call it some confidence of detection, but just given these two statements, it seems reasonable to err on the side of caution and assume that there's low confidence of detection there. The other interesting thing to note is that the red team actually acknowledges the PowerShell block, giving further evidence that the engineering mitigation is being deployed successfully.
When you take all that together, you get this final heatmap as your end result.
A couple of summary points and takeaways to close out this lesson. Number 1, to aggregate and create that final heatmap, you should have the following: first, an analytic heatmap showing analytic coverage (essentially just a heatmap showing what the analytics cover); second, a heatmap for each relevant tool that you're looking at; and third, an understanding of any strengths and weaknesses that came out during the interviews.
Another summary point is that final coverage can easily be summarized in a relatively straightforward formula. First, we have an aggregation step, where we take the tool coverage, the analytic coverage, and the interview positives and bring those all together, taking whatever is the most covered, or the highest level of detection, for each technique. Then, from there, we have a subtraction step, where we take that initial aggregate and remove anything that came up as a negative during the interviews. Lastly, whenever there are disagreements, always make sure to choose the higher coverage during aggregation and the lower coverage during subtraction.
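That disagreement rule maps directly onto code: take the maximum during aggregation and the minimum during subtraction. Here's a closing sketch of our own, reusing the Windows command shell conflict from the exercise (T1059.003 is the real ATT&CK ID; the starting scores are assumed for illustration):

```python
# Sketch of the two-step formula with the disagreement rule, applied to
# the Windows Command Shell conflict from the exercise. T1059.003 is the
# real ATT&CK ID; the starting scores are illustrative assumptions.
RANK = {"LOW": 0, "SOME": 1, "HIGH": 2}

def higher(a: str, b: str) -> str:
    return a if RANK[a] >= RANK[b] else b

def lower(a: str, b: str) -> str:
    return a if RANK[a] <= RANK[b] else b

tools     = {"T1059.003": "SOME"}  # assumed tool coverage
analytics = {}                     # no analytic coverage in this example
positives = {"T1059.003": "HIGH"}  # detection team: high-fidelity cmd.exe alerts
negatives = {"T1059.003": "LOW"}   # red team: cmd.exe bypass works in practice

# Step 1, aggregation: disagreements resolve to the HIGHER score.
final: dict = {}
for technique_map in (tools, analytics, positives):
    for tid, score in technique_map.items():
        final[tid] = higher(final.get(tid, "LOW"), score)

# Step 2, subtraction: disagreements resolve to the LOWER score.
for tid, score in negatives.items():
    if tid in final:
        final[tid] = lower(final[tid], score)

assert final["T1059.003"] == "LOW"  # erring on the side of caution
```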
Lastly, there's one final point to take away: we've only really scratched the surface of how to do aggregation in this lesson. In the next part of this lesson, we'll walk through how to do partial aggregation, to start doing a little bit more complex heatmap aggregation.