Compiling a Final Heatmap Part 2


Video Transcription
00:00
Welcome to Lesson 3.4 of Module 3 within the ATT&CK-Based SOC Assessments training course. This lesson is a continuation of Lesson 3.3, where we talked about how to compile a final heatmap for an ATT&CK-based SOC assessment.
00:15
This lesson extends the previous lesson by focusing specifically on aggregation for partially covered techniques.
00:23
This lesson has one primary learning objective.
00:26
After the lesson, you should be able to aggregate partially covered techniques into a final heatmap as part of an ATT&CK-based SOC assessment.
00:36
To kick off this lesson and really to focus it, we're going to start with a running example.
00:41
Here we have three heatmaps for three different tools: Tools 1, 2, and 3. These are, of course, very micro heatmaps covering only a handful of techniques, but they're a pretty good use case to start from.
00:54
We were given these three heatmaps, and we need to figure out how we should aggregate them together into a compiled heatmap,
01:03
so we can walk through this process following the guidance from the previous lesson.
01:07
Here, we'll look at Tool 1 and see that it provides high confidence for Command and Scripting Interpreter and some confidence for PowerShell. We'll put that over on the right and mark the coverage there.
01:19
Tool 3 on the bottom provides high confidence for Unix Shell. We're going to mark that as well.
01:25
And then Tools 1, 2, and 3 all provide some confidence of detection for Internal Spearphishing.
01:32
Now this one is interesting.
01:34
Intuitively, we might look at this and say, well, we want to take the highest coverage across the tools, and that's only some confidence of detection, so we'll mark it in yellow.
01:44
However, another way to look at it is: since all three tools provide some confidence of detection,
01:49
maybe all of the tools in aggregate provide high confidence of detection. You can see we have the two possibilities here.
01:57
This leads to an interesting question: how do we really aggregate partial coverage?
02:02
And the answer is, Well, it depends.
02:06
Detection is not something that's probabilistic. These tools aren't flipping coins to decide whether or not to detect a technique; they all have different, more or less deterministic ways of picking up on these specific techniques.
02:19
Accordingly, when you're aggregating some coverage, it isn't the case that a source will either always detect the technique or always miss it, and there's not always a clear answer for whether or not you should bump up coverage,
02:31
given partial coverage of a specific technique.
02:36
Generally speaking, though,
02:38
you should upgrade if the sources of coverage use different and complementary detection methods.
02:45
Otherwise, if the sources use similar or
02:47
at least not clearly complementary, detection methods, you should leave the coverage as is.
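To make that rule concrete, here is a minimal Python sketch of the qualitative aggregation logic just described. The level names, the entry format, and the table of which methods count as complementary are illustrative assumptions, not part of ATT&CK or any official tooling; in practice the upgrade decision remains a judgment call.

LEVELS = ["low", "some", "high"]  # ordered qualitative coverage scale

# Illustrative assumption: which detection methods complement each other.
# In this lesson, behavior- plus signature-based coverage justifies an
# upgrade, while signature- plus artifact-based coverage does not.
COMPLEMENTARY = {frozenset(["behavior", "signature"])}

def aggregate_technique(entries):
    """entries: list of (coverage_level, detection_method) pairs, e.g.
    [("some", "signature"), ("some", "behavior"), ("some", "analytic")]."""
    # Baseline: take the highest coverage level across all sources.
    best = max((level for level, _ in entries), key=LEVELS.index)
    # Detection methods that actually contribute partial coverage.
    methods = {method for level, method in entries if level != "low"}
    # Upgrade one step only when genuinely complementary methods each
    # provide partial coverage; when in doubt, under-report.
    if best == "some" and any(pair <= methods for pair in COMPLEMENTARY):
        return "high"
    return best

Under these assumptions, a behavior-based tool, a signature-based tool, and an analytic all at "some" aggregate to "high", while a signature-based tool plus analytics alone stays at "some".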
02:54
You should also, when performing partial aggregation, look at your rubric.
02:59
A quantitative approach with a large range has a lot more leeway. Say you're looking at a scale of 1 to 500 and considering going from 300 to 305;
03:09
there's a lot of room for change there. But with a qualitative approach like the one here, where we have
03:15
three primary categories of coverage,
03:19
you might need a little more evidence to justify that bump up.
03:23
Ultimately, if you're unsure whether you should bump up the coverage, leave it the same. It's almost always better to under-report coverage than to over-report it.
03:34
So now let's walk through a couple of example scenarios showing a potential use case and how we might look at it from a partial aggregation perspective.
03:43
First, in Scenario 1, we have a technique that has some coverage.
03:47
We have one tool that is signature-based providing some of that coverage, another tool that's behavior-based providing some coverage, and two relevant analytics that are also providing some confidence of coverage, though not high confidence.
04:01
In this scenario, we probably would want to upgrade the coverage from some confidence to high confidence,
04:08
because the two tools we're using are in some ways complementary: we have one that's signature-based and one that's behavior-based, and we also have two analytics to support potential detection.
04:18
In Scenario 2, we have a quantitative coverage scenario. Here we have three tools, one that's signature-based and two that are artifact-based,
04:27
all giving a coverage score of about 50 out of 100.
04:30
Given that, we don't have a clear reason to give a big boost in coverage, but we do have somewhat different detection mechanisms here: signature-based and artifact-based.
04:43
So there's a good justification for a slight upgrade.
04:46
This slight upgrade is something that's afforded to us because we have that big range from 0 to 100 for coverage.
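Here is a short sketch of that quantitative case, assuming the 0-to-100 rubric from Scenario 2. The +5 bonus and the two-method test are illustrative choices, not prescribed values; the point is simply that a wide numeric range affords a slight, defensible bump where a full qualitative category jump would not be warranted.

def aggregate_scores(entries, bonus=5, ceiling=100):
    """entries: list of (score, detection_method) pairs on a 0-100 rubric."""
    best = max(score for score, _ in entries)
    methods = {method for _, method in entries}
    # Distinct detection methods justify only a slight upgrade; the wide
    # numeric range leaves room for a small, low-risk adjustment.
    if len(methods) >= 2:
        best = min(best + bonus, ceiling)
    return best

# Scenario 2: three tools at ~50/100, one signature-based, two artifact-based.
print(aggregate_scores([(50, "signature"), (50, "artifact"), (50, "artifact")]))  # 55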
04:56
In the last scenario, we again have a technique with some coverage.
04:59
We have Tool 1, which is signature-based, Tool 2, which is artifact-based, and one relevant analytic.
05:04
And here, the recommendation should probably be to leave it as is.
05:09
A signature-based tool, an artifact-based tool, and one analytic:
05:13
you could argue that these might be complementary, but really, it's not enough evidence to bump the coverage from some to high.
05:21
So now let's close out this lesson with an exercise.
05:26
Here are three mini heatmaps; they're just a small slice of Lateral Movement.
05:31
On the left, you have Tool 1, which is behavior-based; in the middle, Tool 2, which is signature-based; and on the right, analytics, which are, of course, just an aggregate of analytics.
05:41
Your task is to figure out what the aggregated heatmap for these three heatmaps might be.
05:46
So feel free to pause the video.
05:48
See what you think might be the right way to aggregate these three heatmaps.
05:55
And then when we come back, we'll walk through our solution to the aggregation question.
06:03
Okay.
06:03
Welcome back. We're now going to walk through how we look at this aggregation question, and to solve it, we're going to basically start with a blank slate.
06:13
This is just a heatmap of low confidence across the entire slice of the framework we're looking at.
06:19
And what we'll do is we'll walk through each of the techniques and figure out
06:23
how we should resolve them individually.
06:26
So the first three that stick out are Exploitation of Remote Services, Lateral Tool Transfer, and Remote Service Session Hijacking.
06:33
All three of these are pretty easy to resolve: you just choose the highest coverage among them and then record it in your final heatmap.
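For these easy cases, aggregation reduces to a plain maximum over the ordered levels; a self-contained sketch, with the level names and example values assumed for illustration:

LEVELS = ["low", "some", "high"]  # ordered qualitative coverage scale

def max_level(levels):
    # With no upgrade in play, aggregation is just the highest level.
    return max(levels, key=LEVELS.index)

# Hypothetical per-heatmap levels for one of the easy techniques:
print(max_level(["high", "low", "some"]))  # -> high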
06:40
The next one is much more interesting.
06:43
SSH Hijacking has some confidence of detection across all three of the heatmaps.
06:48
Here we have a behavior-based tool, a signature-based tool, and analytics, all providing some confidence of detection.
06:56
With that many different methodologies, we feel there's a reasonably good justification to bump the coverage up from some to high confidence.
07:04
On the next one, we have RDP Hijacking.
07:08
This one's interesting. We also have some confidence of detection, but here we have only a signature-based tool and analytics.
07:15
That, in and of itself, isn't really enough evidence to justify a bump up, so we're going to leave that one as is.
07:23
Remote Services is also interesting. Here we have Tool 1, which is behavior-based, and the analytics, both providing some confidence of detection.
07:32
Here, you can go either way: you could justify bumping it up, since you have a behavior-based tool and analytics, but you could also justify leaving it the same.
07:42
When I look at this, whenever I'm on the fence, I prefer to leave things as is and err on the side of caution rather than over-report. So I'd report that as some confidence of detection.
07:54
Remote Desktop Protocol follows RDP Hijacking: again, a signature-based tool and analytics just doesn't quite
08:01
meet the bar for upgrading the coverage
08:03
SSH has low confidence across the board, so we'll leave that one as is.
08:09
And then with VNC, we're facing the same question as with Remote Services, and here again we're going to lean towards treating it as some confidence.
08:18
And then the last four we can resolve fairly easily; there's a clear maximum that we ultimately end up recording.
08:24
And with that, we get our final heatmap, showing the general way we're doing partial aggregation here. Really, the one that stood out as the one we did want to upgrade was SSH Hijacking, because it had coverage across all of the different heatmaps we were looking at.
08:41
So, a few summary notes and takeaways to close out this lesson.
08:46
first, partial aggregation is very nuanced. Almost always, it depends on the context of the assessment as well as the underlying rubric.
08:54
Generally speaking, if you are performing partial aggregation,
08:58
upgrade the coverage
09:00
if the detection methods are different or complementary across the heatmaps you're aggregating from.
09:05
If they're not complementary or different, then you should leave the coverage as is.
09:09
And then when in doubt, leave it out. It's almost always better to err on the side of caution and not upgrade if you're not sure that an upgrade is warranted.