Setting a Coverage Rubric

Video Transcription
Welcome to Module 2 of the ATT&CK-Based SOC Assessments training course. In this module, we're going to discuss how you can analyze different SOC components using the ATT&CK framework.
This module has one primary learning objective: after going through its lessons, you should be able to map common SOC technologies back to the ATT&CK framework.
In addition, we have a set of smaller secondary learning objectives that you should walk away from this module with a good understanding of.
First, after these lessons, you should understand how to set and select a coverage scheme for a given assessment.
You should also know how to map informal logging strategies back to the ATT&CK framework, and how to identify the techniques that a given detection analytic might be able to detect.
Lastly, after this module, you should be able to quickly analyze tools to understand what parts of the ATT&CK framework they might be able to cover.
Diving into Lesson 2.1, our focus is going to be on setting a coverage rubric.
This represents the second stage within the generic ATT&CK assessment methodology we discussed in the first module. By now you've framed the assessment and set expectations: you've worked with the SOC and identified that, yes, an assessment is a good thing to do. Before you bring in the technical analysis in the third phase, you need to set what you mean by coverage.
Here our lesson has two primary learning objectives. Number one, you should walk away able to select a coverage scheme for a given assessment. Number two, you should know the difference between technique and sub-technique coverage, and how you can infer coverage between the two.
Diving into it, the core of an ATT&CK-based SOC assessment is really threefold. First, you identify the thing you care about. Second, you analyze the thing. And third, you map it to ATT&CK. Of course, this is very vague and abstract, and really the nuance lies in that third phase of mapping it to ATT&CK.
This ultimately boils down to picking the part of the ATT&CK framework that you really want to map things to.
As an example, you might want to run an assessment and map things specifically to adversaries' tactical objectives, or tactics. A good example of this is that it can be hard for an adversary to exfiltrate files from an air-gapped network.
Alternatively, you might want to map things specifically just to techniques, or how the tactical objectives are achieved.
A good example here is that application isolation and sandboxing can mitigate the impact of exploitation.
Here we have a very straightforward mitigation that goes against a very specific ATT&CK technique.
Or lastly, you might want to focus specifically on sub-techniques, where instead of just the technique at the more abstract level, you are now focusing on very detailed descriptions of specific behaviors that are just a little bit more descriptive than the techniques themselves. And here in this example, we show a potential detection technology that you can use to spot the sub-technique Kerberoasting.
Now, in most ATT&CK-based SOC assessments we tend to work at the sub-technique and technique level, but know that it's possible to work across each of these three.
Now, one of the interesting things about tactics, techniques, and sub-techniques is that they form a very clear abstraction hierarchy, and this abstraction ultimately ties back to coverage itself.
Here, tactics represent the most abstract thing in the framework: when we say that an air-gapped network can prevent exfiltration, we're saying that this specific technology can potentially impact an entire tactic containing techniques and sub-techniques.
At the bottom of the hierarchy, you have the sub-techniques, which describe very specific adversary behaviors.
What's interesting about the abstraction is that if you have coverage at the top level, you can sometimes infer that you get coverage at the lower levels. So by saying we have coverage of Exfiltration, in certain cases (but not always) you might be able to infer that you have coverage of all the techniques and sub-techniques under Exfiltration.
By contrast, because the sub-techniques are the most specific descriptions, just covering them doesn't mean you can always infer that you cover the technique or the tactic.
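As a rough sketch, this asymmetry can be modeled in code. The tiny hierarchy and the function names below are illustrative assumptions, not a real ATT&CK export, and the downward inference is an optional choice rather than a rule:

```python
# Illustrative sketch of the coverage-inference asymmetry. The hierarchy
# below is a hypothetical slice; a real assessment would use the actual
# ATT&CK technique/sub-technique relationships.

SUB_TECHNIQUES = {
    "Exfiltration Over C2 Channel": [],
    "Exfiltration Over Alternative Protocol": [
        "Exfiltration Over Symmetric Encrypted Non-C2 Protocol",
        "Exfiltration Over Unencrypted Non-C2 Protocol",
    ],
}

def infer_down(coverage):
    """Optionally push technique-level coverage DOWN: a technique scored
    'high' lets us assume 'some' coverage for its unscored sub-techniques."""
    result = dict(coverage)
    for technique, subs in SUB_TECHNIQUES.items():
        if result.get(technique) == "high":
            for sub in subs:
                result.setdefault(sub, "some")
    return result

def infer_up(coverage):
    """Deliberately a no-op: in this (skeptical) model, covering every
    sub-technique never implies coverage of the parent technique."""
    return dict(coverage)

scores = infer_down({"Exfiltration Over Alternative Protocol": "high"})
print(scores["Exfiltration Over Unencrypted Non-C2 Protocol"])  # some
```

Whether you enable `infer_down` at all is exactly the assessor preference discussed later in this lesson.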
We can walk through a few examples of this. Here we've taken a very small slice of the ATT&CK framework: just a few techniques and sub-techniques within the Credential Access tactic.
What we're going to do here is use the legend at the bottom to categorize each of these based on our confidence that we can detect them, going between high, some, and low confidence. By default, we're assuming that everything has low confidence of detection.
So as a first example, we might work with a SOC that says, "We do a great job of detecting Brute Force," and from that you can make the pretty quick conclusion that Brute Force has high confidence of detection.
Then one thing you can do, but don't have to, is infer that because you're covering the technique (or rather, because the SOC is claiming to cover the technique), you also have some coverage of the sub-techniques.
As another example, we might work with a SOC that says, "We do a great job of detecting Keychain and Credentials from Web Browsers."
From that statement, you of course say high confidence for those two sub-techniques.
And then, intuitively, you might want to say, "Oh, well, if we're covering both of these, then we might cover the technique itself." But that's not necessarily a reasonable assumption. Just because you cover the sub-techniques doesn't mean you can infer that you cover the primary technique.
And as a last example, you might work with a SOC that tells you, "We can sometimes detect OS Credential Dumping." From that statement, you might infer that we have some confidence of detection of OS Credential Dumping.
Here you might want to perform the same inference as before with Brute Force, but again, it doesn't apply in this case: because "sometimes detect" are the keywords the SOC is using, making an inference of any coverage for the sub-techniques might not make the most sense.
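Pulling the three examples together, one way to record this scoring is a simple dictionary keyed by technique and sub-technique names. The sub-technique lists below are abbreviated for illustration, not the complete ATT&CK entries:

```python
# Scoring the three hypothetical SOC statements against the
# high / some / low legend, defaulting everything to "low" first.
# Sub-technique lists are abbreviated for illustration.

attack_slice = {
    "Brute Force": ["Password Guessing", "Password Spraying"],
    "Credentials from Password Stores": ["Keychain", "Credentials from Web Browsers"],
    "OS Credential Dumping": ["LSASS Memory", "NTDS"],
}

# By default, everything has low confidence of detection.
scores = {name: "low"
          for technique, subs in attack_slice.items()
          for name in [technique, *subs]}

# "We do a great job of detecting Brute Force" -> high on the technique.
scores["Brute Force"] = "high"

# "We do a great job of detecting Keychain and Credentials from Web
# Browsers" -> high on both sub-techniques, but NOT on the parent.
scores["Keychain"] = "high"
scores["Credentials from Web Browsers"] = "high"

# "We can sometimes detect OS Credential Dumping" -> only "some"
# confidence, too weak to justify inferring sub-technique coverage.
scores["OS Credential Dumping"] = "some"

print(scores["Credentials from Password Stores"])  # low
print(scores["LSASS Memory"])                      # low
```

Note that the two final prints stay at "low": neither sub-technique coverage nor a hedged "sometimes" claim propagates anywhere else in this skeptical reading.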
Ultimately, when you're looking at coverage inference either up or down the abstraction hierarchy, you're almost always looking at something that depends on the context in which you're looking at it, as well as the user's preferences for the assessment.
As an example of user preference, you might run an assessment and say, "Hey, I want to be very skeptical of coverage." In this case, you might say, "All right, no coverage inference; I'm always going to assume that things are low coverage."
By contrast, you might have another assessment where the user preference is to be more credulous, and you indeed accept coverage inference as a way to guesstimate, to some extent, more of the results. Ultimately, in the end, whether or not you're using coverage inference depends on the SOC you're looking at as well as the assessor's preferences.
So, a couple of tips for doing ATT&CK mapping. Number one: be as specific as possible when you're doing the mapping. It's almost always better to map to sub-techniques and techniques themselves over tactic-level mapping.
That said, don't worry if pinpoint accuracy isn't possible. If you're not able to map specifically to sub-techniques, it's OK to go to techniques or tactics, or to just not have a good mapping.
Be careful with inferred coverage. I know we've walked through a few examples, but sub-technique coverage does not imply primary technique coverage, and primary technique coverage does not always imply sub-technique coverage. There's a lot of nuance here, and ultimately, if you're in some way unsure about inferred coverage, go the skeptical route and don't assume it exists.
And lastly, when you're doing an ATT&CK mapping, sometimes there isn't a good ATT&CK match, and that's perfectly okay.
Shifting gears a little bit: I know I've said the word coverage plenty of times so far in this course, and I know I'm going to say it many times more.
But what does coverage actually mean?
Ultimately, when we're defining coverage, we really need to specify two things: what it is that we're measuring (are we measuring detection? are we measuring mitigation?) and what range that measurement can take.
So as a couple of examples, here's a coverage scheme that's just simple detection. What we're measuring is "can we detect it?" and the values it can take are a binary yes or no.
Here's another potential coverage scheme where we're looking at detection and mitigation.
What we're measuring here is "are we confident we can detect or mitigate it?"
And the values we're using in this example are categorical: no, partially, mostly, and yes, we are confident.
Lastly, here's a more complex hybrid coverage scheme, where what we might want to measure is "will execution of this technique cause problems?" and we might score it using a numeric, or quantitative, scheme where we say this can take on the values 1 to 100, where 100 is most problematic and 1 is least problematic.
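One lightweight way to capture "what we measure" plus "the range it can take" is a small record type. The class and field names here are illustrative assumptions, not part of any ATT&CK tooling:

```python
# Representing a coverage scheme as the question being measured plus the
# range of values a score can take. The three schemes mirror the lesson's
# examples; the class itself is an illustrative sketch.
from dataclasses import dataclass

@dataclass(frozen=True)
class CoverageScheme:
    question: str   # what the assessment measures
    values: tuple   # the range a score can take

simple_detection = CoverageScheme(
    "Can we detect it?",
    ("yes", "no"),                           # binary
)

detect_and_mitigate = CoverageScheme(
    "Are we confident we can detect or mitigate it?",
    ("no", "partially", "mostly", "yes"),    # categorical
)

hybrid = CoverageScheme(
    "Will execution of this technique cause problems?",
    tuple(range(1, 101)),                    # numeric: 1 = least, 100 = most problematic
)

def is_valid_score(scheme, score):
    """A score is valid only if it falls in the scheme's declared range."""
    return score in scheme.values

print(is_valid_score(simple_detection, "yes"))  # True
print(is_valid_score(hybrid, 150))              # False
```

Making the value range explicit up front keeps every score in the assessment comparable and catches out-of-range entries early.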
A couple of tips for defining coverage. Number one: when first starting out, keep it simple. Can you detect it or not? Have you deployed a mitigation or not? These are two very simple, straightforward schemes that are great for when you're first starting out with an ATT&CK-based SOC assessment.
Don't worry about pinpoint accuracy. I know I use this phrase a lot throughout this course, but it's important to reiterate: assessments are supposed to paint broad strokes, and overly complex coverage charts can ultimately hurt more than they help.
We talk about this a lot in the next module.
And then, lastly, your metric should be defined by your and the SOC's maturity, as well as the overall goals of the assessment that you're running.
One thing I want to close this lesson with is the coverage scheme we tend to use within this course.
This goes back to one of the first slides we had, I think in Lesson 1.1, where we showed a notional coverage scheme that puts things in green for high confidence of detection, yellow for some confidence, and white for low confidence. This is a very simple and straightforward scheme.
We've found it ultimately fairly powerful for presenting results.
Really diving into it, this is a simple scheme that is easy to present: it's easy to show, but it's also easy to score, and coming up with it as the assessor isn't a huge challenge.
It's also useful. It's understandable at almost all levels of stakeholders you might work with, and it also presents a good message that almost everybody can use.
And lastly, confidence of detection is a good way to measure things. It's a little ambiguous, right? It's not perfectly specific about what we're talking about, but it's still useful enough to drive results.
So, a few summary notes and takeaways from this lesson. Number one: be as specific as possible when mapping to ATT&CK. It's always in your best interest to dive in at that specific level whenever you can.
Number two: sub-technique versus primary technique coverage, and the inference between them, can be tricky.
Number three: coverage should primarily be based on your goals and the organization that you're working with.
And then, don't worry about pinpoint accuracy. I know I've said it many times, but it's always worth repeating: when running an ATT&CK-based SOC assessment, due to the nature of the assessment, it's okay to paint broad strokes.