Compiling a Final Heatmap Part 2


Time: 3 hours 16 minutes
Difficulty: Intermediate
CEU/CPE: 2
Video Transcription
Welcome to Lesson 4 of Module 3 within the ATT&CK-based SOC assessments training course. This lesson is a continuation of Lesson 3.3, where we talked about how to compile a final heatmap for an ATT&CK-based SOC assessment. This lesson extends the previous one by focusing specifically on aggregation for partially covered techniques. It has one primary learning objective: after the lesson, you should be able to aggregate partially covered techniques into a final heatmap as part of an ATT&CK-based SOC assessment.
To kick off this lesson, and really to focus it, we're going to start with a running example. Here we have three heatmaps for three different tools: Tools 1, 2, and 3. These are, of course, very small heatmaps with only a handful of techniques, but they are a pretty good use case to start from. We were given these three heatmaps and need to figure out how we should aggregate them together into a compiled heatmap. We can walk through this process following the guidance in the previous lesson. Here, we'll look at Tool 1 and see that it provides high confidence for Command and Scripting Interpreter and some confidence for PowerShell; we'll put that over on the right and mark those as coverage there. Tool 3, on the bottom, provides high confidence for Unix Shell, so we'll mark that as well. Then Tools 1, 2, and 3 all provide some confidence of detection for Internal Spearphishing.
This one is interesting. Intuitively, we might look at it and say we want to take the highest coverage across the tools, which is only some confidence of detection, so we'll mark it in yellow that way. However, another way to look at it is that since all three tools provide some confidence of detection, maybe all of the tools in aggregate provide high confidence of detection. You can see we have the two possibilities here. This leads to an interesting question: how do we really aggregate partial coverage? The answer is, it depends.
Detection is not probabilistic. These three tools aren't flipping coins to decide whether or not to detect something; they all have more or less deterministic ways of picking up on these specific techniques. Accordingly, when you're aggregating partial coverage, you can't treat each source as an independent chance of detection that stacks up, and there isn't always a clear answer on whether you should bump up coverage given partial coverage of a specific technique.
Generally speaking, though, you should upgrade the coverage if the sources of coverage use different and complementary detection methods. Otherwise, if the sources use similar or not necessarily complementary detection methods, you should leave the coverage as is.
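To make that rule concrete, here is a minimal Python sketch of one way to encode it. This is not from the course: the three-level rubric, the method labels, and especially the idea of recording which method pairs the assessor judged complementary are all illustrative assumptions. The complementarity judgment itself stays with the human; the code only applies it.

```python
# Minimal sketch of the qualitative upgrade rule, assuming a
# three-level rubric (low < some < high). Each source is a
# (coverage, detection_method) pair; all names are illustrative.
LEVELS = ["low", "some", "high"]

def aggregate(sources, complementary_pairs):
    """Aggregate one technique's coverage across several sources.

    complementary_pairs holds frozensets of methods the assessor
    judged complementary -- that judgment cannot be automated.
    """
    # Baseline: the highest coverage any single source provides.
    best = max(LEVELS.index(cov) for cov, _ in sources)
    methods = {method for _, method in sources}
    # Bump one level only if a judged-complementary pair of methods
    # is actually present among the sources.
    if best < len(LEVELS) - 1 and any(p <= methods for p in complementary_pairs):
        best += 1
    return LEVELS[best]

# Assessor judgment (assumed): behavior- plus signature-based
# detection complement each other; signature plus artifact does not.
judged = {frozenset({"behavior", "signature"})}
print(aggregate([("some", "behavior"), ("some", "signature"),
                 ("some", "analytic")], judged))         # -> "high"
print(aggregate([("some", "signature"), ("some", "artifact")],
                judged))                                 # -> "some"
```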
Also, when you're performing partial aggregation, look at your rubric. A quantitative approach with a large range has a lot more leeway: say you're working on a scale of 1 to 500 and considering going from 300 to 305; there's a lot of room for change there. But with a qualitative approach like we have here, with three primary categories of coverage, you might need a bit more evidence to justify that bump up. Ultimately, if you're unsure whether you should bump up the coverage, leave it the same. It's almost always better to under-report coverage than it is to over-report it.
We'll now walk through a couple of example scenarios showing potential use cases and how we might look at them from a partial-aggregation perspective. First, in Scenario 1, we have a technique that has some coverage. We have one tool that is signature-based, providing some of that coverage; another tool that's behavior-based, also providing some coverage; and two relevant analytics that provide some, though not high, confidence of coverage as well. In this scenario, we probably would want to upgrade the coverage from some confidence to high confidence, because the two tools we're using are in some ways complementary: one is signature-based and one is behavior-based. On top of that, we have two analytics to support potential detection.
In Scenario 2, we have a quantitative coverage scenario. Here we have three tools, one signature-based and two artifact-based, all giving a coverage score of about 50 out of 100. Given that, we don't have a clear reason to give coverage a big boost, but we do have some different detection mechanisms here, signature-based and artifact-based, so there's good justification for a slight upgrade. Making that slight upgrade is something that's afforded to us because we have that big range from 0 to 100 for coverage.
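As a hedged illustration of what such a slight upgrade could look like on a 0 to 100 rubric, here is a small sketch. The +5 bonus per additional distinct detection method and the cap are arbitrary assumed policy choices, not values from the course.

```python
# Illustrative only: a "slight upgrade" on a 0-100 quantitative
# rubric. Sources are (score, detection_method) pairs; the bonus
# size per extra distinct method is an assumed policy choice.
def aggregate_quantitative(sources, bonus=5, cap=100):
    best = max(score for score, _ in sources)
    distinct_methods = len({method for _, method in sources})
    # One small bump per additional distinct detection method.
    return min(best + bonus * (distinct_methods - 1), cap)

# Scenario 2: a signature-based tool and two artifact-based tools,
# each scoring about 50 -> a modest 55 rather than a big jump.
print(aggregate_quantitative([(50, "signature"),
                              (50, "artifact"),
                              (50, "artifact")]))  # -> 55
```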
Then in the last scenario, we again have a technique with some coverage. We have Tool 1, which is signature-based, Tool 2, which is artifact-based, and one relevant analytic. Here, the recommendation should probably be to leave it as is. With a signature-based tool, an artifact-based tool, and one analytic, you could argue that these might be complementary, but it's really not enough evidence to bump the coverage from some to high.
We'll close out this lesson with an exercise. Here are three mini heatmaps covering just a small slice of Lateral Movement. On the left, you have Tool 1, which is behavior-based. In the middle, you have Tool 2, which is signature-based, and on the right you have analytics, which are, of course, just an aggregate of the analytics. Your task is to figure out what the aggregated heatmap might be for these three heatmaps. Feel free to pause the video and work out how you would aggregate them; when we come back, we'll walk through our solution to the aggregation question.
Welcome back. We're now going to walk through how we look at this aggregation question. To solve it, we're going to start with a blank slate: a heatmap of low confidence across the entire slice of the framework we're looking at. What we'll do is walk through each of the techniques and figure out how to resolve them individually. The first three that stick out are Exploitation of Remote Services, Lateral Tool Transfer, and Remote Service Session Hijacking. All three of these are pretty easy to resolve: you just choose the highest coverage among the heatmaps and record it in your final heatmap.
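For techniques like these, where no upgrade is in play, compilation is just a per-technique maximum. A minimal sketch, assuming each heatmap is a dict from technique name to a level on the same assumed low/some/high rubric as earlier; the technique names and values below are illustrative, not the course's data:

```python
# Per-technique maximum across heatmaps; the technique names and
# the low/some/high rubric are assumptions for illustration.
LEVELS = ["low", "some", "high"]

def max_aggregate(heatmaps):
    combined = {}
    for heatmap in heatmaps:
        for technique, coverage in heatmap.items():
            previous = combined.get(technique, "low")
            # Keep whichever coverage level is higher on the rubric.
            combined[technique] = max(previous, coverage, key=LEVELS.index)
    return combined

tool1 = {"Exploitation of Remote Services": "high",
         "Lateral Tool Transfer": "low"}
tool2 = {"Exploitation of Remote Services": "some",
         "Lateral Tool Transfer": "some"}
print(max_aggregate([tool1, tool2]))
# -> {'Exploitation of Remote Services': 'high',
#     'Lateral Tool Transfer': 'some'}
```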
The next one is much more interesting. SSH Hijacking has some confidence of detection across all three of the heatmaps: a behavior-based tool, a signature-based tool, and analytics all provide some confidence of detection. With that many different methodologies, we feel there's a reasonably good justification to bump the coverage up from some to high confidence. Next we have RDP Hijacking, which is also interesting. We again have some confidence of detection, but here it comes from a signature-based tool and analytics. That, in and of itself, isn't really enough evidence to justify a bump up, so we're going to leave that one as is.
Remote Services is also interesting. Here we have Tool 1, which is behavior-based, and the analytics providing some confidence of detection. You could go either way: you could justify bumping it up, since you have a behavior-based tool and analytics, but you could also justify leaving it the same. Whenever I'm on the fence like this, I prefer to leave things as is and err on the side of caution rather than over-report, so I'd report that as some confidence of detection. Remote Desktop Protocol follows RDP Hijacking: again, a signature-based tool plus analytics just doesn't quite meet the bar for upgrading the coverage. SSH has low confidence across the board, so we'll leave that one as is.
Then with VNC we're in the same situation we were in with Remote Services, and here again we're going to lean toward treating it as some confidence. The last four we can resolve fairly easily; each is just a maximum that we ultimately record. With that, we get our final heatmap, showing the general way we do partial aggregation. The one that stuck out as the one we did want to upgrade was SSH Hijacking, because it had coverage across all of the different heatmaps we were looking at.
A few summary notes and takeaways to close out this lesson. First, partial aggregation is very nuanced; it almost always depends on the context of the assessment as well as the underlying rubric. Generally speaking, when you're performing partial aggregation, upgrade the coverage if detection is different or complementary across the heatmaps you are aggregating from. If the sources aren't different or complementary, leave the coverage as is. And when in doubt, leave it out: it's almost always better to err on the side of caution and not upgrade if you're not sure an upgrade is warranted.