Welcome to this lesson in Module 3 of the ATT&CK-Based SOC Assessments training course. This lesson is a continuation of Lesson 3.3, where we talked about how to compile a final heat map for an ATT&CK-based SOC assessment.
This lesson extends the previous lesson by focusing specifically on aggregation for partially covered techniques.
This lesson has one primary learning objective.
After the lesson, you should be able to aggregate partially covered techniques into a final heat map as part of an attack based stock assessment.
To kick off this lesson and really to focus it, we're going to start with a running example.
Here we have three heat maps for three different tools: Tools 1, 2, and 3. These are, of course, very micro heat maps covering only a handful of techniques, but they're a pretty good use case to start from.
Here, we're given these three heat maps, and we need to figure out how we should aggregate them into a compiled heat map.
So we can walk through this process following the guidance from the previous lesson.
Here, we'll look at Tool 1 and see that Tool 1 provides high confidence for Command and Scripting Interpreter and some confidence for PowerShell. We'll put that over on the right and mark that coverage there.
Tool 3, on the bottom, provides high confidence for Unix Shell. We're going to mark that as well.
And then Tools 1, 2, and 3 all provide some confidence of detection for Internal Spearphishing.
Now this one is interesting.
Intuitively, we might look at this and say: well, we want to take the highest coverage across the tools, and that's only some confidence of detection, so we'll mark it in yellow that way.
However, another way to look at it is: since all three tools provide some confidence of detection,
maybe all of the tools in aggregate provide high confidence of detection. You can see we have the two possibilities here.
This leads to an interesting question: how do we really aggregate partial coverage?
And the answer is: well, it depends.
Detection is not something that's probabilistic. All three of these tools aren't flipping coins to decide whether or not they detect something; they all have different, more or less deterministic ways of picking up on these specific techniques.
Accordingly, when you're aggregating some coverage, you don't have something that will always detect the technique or always miss it, and there's not always a clear answer on whether or not you should bump up coverage,
given partial coverage of a specific technique.
Generally speaking, though,
you should upgrade if the sources of coverage use different and complementary detection methods.
Otherwise, if the sources use similar or non-complementary detection methods, you should leave the coverage as is.
You should also, when you're performing partial aggregation, look at your rubric.
A quantitative approach with a large range has a lot more leeway. Say you're looking at a scale of 1 to 500, and you're considering going from a 300 to a 305:
there's a lot of room for change there. But with a qualitative approach like the one here, where we have
three primary categories of coverage,
you might need a little bit more evidence to justify that bump up.
Ultimately, if you're unsure whether or not you should bump up the coverage, leave it the same. It's almost always better to under-report coverage than it is to over-report coverage.
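This general rule can be sketched in code. The following is a minimal, hypothetical Python sketch, assuming a three-level qualitative scale; the function name and the idea of passing the analyst's complementarity judgment as a boolean flag are illustrative assumptions, not part of any real ATT&CK tooling:

```python
# Hypothetical sketch of the qualitative aggregation rule.
# LEVELS and the "complementary" boolean are illustrative assumptions;
# in practice, complementarity is an analyst judgment call.

LEVELS = ["low", "some", "high"]  # ordered lowest to highest confidence

def aggregate_partial(levels, complementary):
    """Aggregate one technique's confidence levels across heat maps.

    levels: the confidence each source reports for this technique.
    complementary: analyst's judgment that the sources use different,
    complementary detection methods.
    """
    best = max(levels, key=LEVELS.index)  # start from the highest single source
    # Bump "some" to "high" only when the methods are complementary;
    # otherwise leave the coverage as is (when in doubt, don't upgrade).
    if best == "some" and complementary:
        return "high"
    return best
```

Under this sketch, three sources all reporting some confidence via complementary methods aggregate to high, while similar methods leave the coverage at some.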
So now let's walk through a couple of example scenarios, showing a potential use case and how we might look at it from a partial aggregation perspective.
First, in Scenario 1, we have a technique that has some coverage.
We have one tool that is signature based, providing some of that coverage; another tool that's behavior based, providing some coverage; and two relevant analytics that are also providing some (but not high) confidence of coverage.
In this scenario, we probably would want to upgrade the coverage from some confidence to high confidence,
because the two tools we're using are in some ways complementary (one is signature based and one is behavior based), and we also have two analytics to support potential detection.
In Scenario 2, we have a quantitative coverage scenario. Here we have three tools, one that's signature based and two that are artifact based,
all giving a coverage score of about 50 out of 100.
Given that, we don't have a clear reason to give a big boost to coverage, but we do have somewhat different detection mechanisms here: signature based and artifact based.
So there's a good justification for a slight upgrade.
This is something that's afforded to us by that big range of 0 to 100 for coverage; it gives us the room to make that slight upgrade.
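The quantitative case can be sketched similarly. This is a hypothetical Python example assuming a 0 to 100 scale; the size of the bonus and the cap are illustrative values chosen for the sketch, not prescribed ones:

```python
# Hypothetical sketch of a slight upgrade on a quantitative 0-100 rubric.
# The default bonus size is an assumption; the point is that a wide
# scale leaves room for small, defensible bumps.

def aggregate_scores(scores, complementary, bonus=5, cap=100):
    """Take the best single-source score; if the sources use complementary
    detection methods, apply a slight, capped upgrade."""
    best = max(scores)
    if complementary:
        best = min(best + bonus, cap)
    return best
```

With this sketch, three tools each scoring around 50 with partly complementary methods would aggregate to roughly 55 rather than jumping dramatically.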
In the last scenario, we again have a technique with some coverage.
We have Tool 1, which is signature based; Tool 2, which is artifact based; and one relevant analytic.
And here, the recommendation should probably be to leave it as is.
With a signature based tool, an artifact based tool, and one analytic,
you could argue that these might be complementary, but really, it's not enough evidence to bump it from some to high.
So now let's close out this lesson with an exercise.
Here are three mini heat maps; they're just a small slice of Lateral Movement.
On the left, you have Tool 1, which is behavior based; in the middle, you have Tool 2, which is signature based; and on the right, you have analytics, which are, of course, just an aggregate of analytics.
Your task is to try to figure out what the aggregated heat map might be for these three heat maps.
So feel free to pause the video
and see what you think might be the way to aggregate these three heat maps.
Then, when we come back, we'll walk through our solution to the aggregation question.
Welcome back. We're now going to walk through how we look at this aggregation question. To solve it, we're going to basically start with a blank slate.
This is just a heat map showing low confidence across the entire portion of the framework we're looking at.
What we'll do is walk through each of the techniques and figure out
how we should resolve them individually.
So the first three that stick out are Exploitation of Remote Services, Lateral Tool Transfer, and Remote Service Session Hijacking.
All three of these are pretty easy to resolve: you just choose the highest coverage among the heat maps and record it in your final heat map.
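This "take the highest" resolution for the easy cases can be sketched as follows, assuming (purely for illustration) that each heat map is a dictionary mapping technique names to confidence levels:

```python
# Minimal sketch: for each technique, record the highest confidence level
# reported by any heat map. Heat maps are modeled here as plain dicts;
# techniques missing from a heat map default to "low". This format is an
# assumption for the sketch, not any real tool's layer format.

LEVELS = ["low", "some", "high"]  # ordered lowest to highest

def take_highest(heatmaps):
    techniques = set().union(*heatmaps)  # every technique seen in any heat map
    return {
        t: max((hm.get(t, "low") for hm in heatmaps), key=LEVELS.index)
        for t in techniques
    }
```

Note that this only handles the straightforward cases; techniques where partial coverage might justify an upgrade still need the analyst judgment discussed above.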
The next one is much more interesting.
SSH Hijacking has some confidence of detection across all three of the heat maps.
Here we have a behavior based tool, a signature based tool, and analytics, all providing some confidence of detection.
With that many different methodologies, we feel there's a reasonably good justification to bump up the coverage from some to high confidence.
On the next one, we have RDP Hijacking.
This one's interesting. We also have some confidence of detection, but here we have a signature based tool and analytics.
That, in and of itself, isn't really enough evidence to justify a bump up, so we're going to leave that one as is.
Remote Services is also interesting. Here we have Tool 1, which is behavior based, and the analytics, providing some confidence of detection.
Here, you can go either way: you could justify bumping it up, since you have a behavior based tool and analytics, but you could also justify leaving it the same.
When I look at this, whenever I'm on the fence, I prefer to leave things as is, to err on the side of caution as opposed to over-reporting. So I'd report that as some confidence of detection.
Remote Desktop Protocol follows RDP Hijacking: again, a signature based tool and analytics just doesn't quite
meet the bar for upgrading the coverage.
SSH has low confidence across the board, so we'll leave that one as is.
And then for VNC, we're in the same situation as with Remote Services, and here we're again going to lean towards just treating it as some confidence.
And then the last four we can resolve fairly easily; there's always a maximum that we ultimately end up recording.
And with that, we get our final heat map, showing the general way we're doing partial aggregation here. Really, the one that stuck out as the one we did want to upgrade was SSH Hijacking, because it had coverage across the different heat maps we were looking at.
So a few summary notes and takeaways to close out this lesson
first, partial aggregation is very nuanced. Almost always, it depends on the context of the assessment as well as the underlying rubric.
Generally speaking, if you are performing partial aggregation,
upgrade the coverage
if detection is different or complementary across the heat maps you're aggregating from.
If they're not complementary or different, then you should leave the coverage as is.
And then when in doubt, leave it out. It's almost always better to err on the side of caution and not upgrade if you're not sure that an upgrade is warranted.