Proposing Recommendations Part 2


Time: 3 hours 16 minutes
Difficulty: Intermediate
CEU/CPE: 2
Video Transcription
00:01
Welcome to Lesson Six in Module Three of the ATT&CK-Based SOC Assessments training course.
00:07
In this lesson, we're going to focus on the second part of proposing recommendations and really focus on recommendations that allow you to expand beyond just technique prioritization.
00:18
This lesson has two primary learning objectives.
00:21
After the lesson, you should be able to propose recommendations to improve coverage
00:26
Then, additionally, you should understand how assessments fit into a larger ATT&CK and SOC ecosystem.
00:34
Looking back at our typical recommendation categories,
00:38
We covered technique prioritization in Lesson 3.5, and now in this lesson we're going to talk a little bit about process refinement, follow-up engagements, and a decent amount on coverage improvement.
00:50
Diving into coverage improvement: our goal with this type of recommendation is to take the existing coverage, the existing heat map,
00:57
and help the SOC go from what they're covering today to what they could cover tomorrow.
01:03
Towards that, there are four main ways we can recommend the SOC do this.
01:07
The first is, of course, for them to add analytics. This is to help them increase coverage, looking for specific techniques, really building off of that prioritization plan we talked about in the previous lesson.
01:19
This is great for SOCs that are looking to complement their existing tooling, and it requires that the SOC have the staff,
01:26
logging, and search functionality needed to actually implement and use analytics.
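To make that concrete, here's a minimal sketch of what a single behavior-based analytic might look like, assuming process-creation events (e.g., Sysmon Event ID 1) are already logged and searchable. The field names and the mapping to T1053.005 (Scheduled Task) are illustrative assumptions, not part of the course material.

```python
# Minimal sketch of one behavioral analytic, assuming process-creation
# events are already ingested and searchable. Field names (Image,
# CommandLine) mirror Sysmon-style logs; the technique mapping is illustrative.
def flag_scheduled_task_creation(event: dict) -> bool:
    """Flag schtasks.exe invocations that create a new task (ATT&CK T1053.005)."""
    image = event.get("Image", "").lower()
    cmdline = event.get("CommandLine", "").lower()
    return image.endswith("schtasks.exe") and "/create" in cmdline

# Toy event stream standing in for a real SIEM search result.
events = [
    {"Image": r"C:\Windows\System32\schtasks.exe",
     "CommandLine": r"schtasks /create /tn Updater /tr C:\Temp\payload.exe"},
    {"Image": r"C:\Windows\System32\notepad.exe", "CommandLine": "notepad"},
]

hits = [e for e in events if flag_scheduled_task_creation(e)]
print(f"{len(hits)} suspicious event(s) flagged")
```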
01:32
The second recommendation for improving coverage is to add new tools.
01:36
This gives them better coverage off the shelf, and it's really the best use case for SOCs that are still primarily using cyber hygiene tools and are starting to branch out into more behavior-based detection.
01:47
This can add significant coverage from a heat map perspective, but usually there's a longer adoption period where the SOC needs to make sure they're onboarding it correctly and really incorporating it into their standard operations.
02:00
Another recommendation to help them improve coverage is for them to ingest more data sources.
02:06
This is to complement adding analytics, really allowing them to increase their visibility into the raw data.
02:12
This is great for SOCs that are looking to grow their analytic program, where they've really gotten a lot of value out of analytics and they either want to start a new program or expand their existing one.
02:23
That said, for a SOC to get the most bang for their buck with this recommendation, they need to have an existing analytic process as well as a data ingestion pipeline.
02:34
And then the last recommendation to help improve coverage is to, of course, implement mitigations.
02:38
Here we want to bypass the recommendations around detection and instead prevent the execution.
02:44
This is really good for SOCs that have great control of their endpoints and devices, but it can sometimes be challenging to verify that the mitigations are indeed deployed and to keep them up to date.
02:54
Diving in a little bit deeper, we have some tips for data source recommendations.
03:00
Number one: always try to identify actionable data sources, those that are easy to ingest. When we say easy, we don't mean to ignore the hard stuff, but rather it's good for the SOC to really,
03:10
you know, balance the return on investment of a data source against how hard it is for them to ingest it into their SIEM platform.
03:19
The second tip is to focus on data sources that offer useful coverage improvements.
03:23
If I'm looking at two data sources, say A and B,
03:27
and A provides coverage of techniques I'm already potentially detecting, while B provides coverage of techniques that I'm not yet detecting at all, then it makes more sense for me to start ingesting
03:38
data source B.
03:38
Of course, you want to balance not just the utility of the data source, but also how difficult it might be to ingest those data sources that offer improvements.
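As a rough illustration of that balancing act, here's a hypothetical sketch that ranks candidate data sources by how many not-yet-covered techniques they enable per unit of ingestion effort. The technique IDs, difficulty scores, and weighting are all invented for the example.

```python
# Hypothetical sketch: rank candidate data sources by marginal coverage
# gain per unit of ingestion difficulty. All IDs and scores are invented.
already_detected = {"T1059", "T1047"}  # techniques the SOC covers today

candidates = {
    # data source: (techniques it would enable, ingestion difficulty 1-5)
    "Data source A": ({"T1059", "T1047", "T1053"}, 2),
    "Data source B": ({"T1021", "T1078", "T1110"}, 3),
}

def marginal_gain(techniques: set, difficulty: int) -> float:
    """Newly covered techniques per unit of ingestion effort."""
    return len(techniques - already_detected) / difficulty

ranked = sorted(candidates.items(),
                key=lambda kv: marginal_gain(*kv[1]), reverse=True)
for name, (techs, diff) in ranked:
    new = sorted(techs - already_detected)
    print(f"{name}: {len(new)} new technique(s) {new}, "
          f"gain {marginal_gain(techs, diff):.2f}")
```

Under these invented numbers, data source B ranks first: even though it's harder to ingest, every technique it enables is new coverage.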
03:51
The third tip is to consider recommending data source collection rollout strategies.
03:57
Here, you don't want to just say, "Hey, go collect these three data sources." You might want to say, "Hey, go collect data source A, then implement, you know, three analytics, then progress to data source B, and then eventually over to C."
04:10
It's not always great to just give bullet-point lists of things to do; it's always helpful for a SOC to see, you know, a strategy for how to actually roll out the recommendation.
04:21
And then the last tip
04:23
is to link data source recommendations to the SOC's tooling and analytics.
04:29
If you're able to draw a connection between the existing tooling and analytic coverage and new data sources, beyond just the generic coverage heat map, you can really make sure the SOC is getting the most benefit from ingesting an additional data source.
04:44
We also have some tips for tooling recommendations.
04:46
Number one: always make sure to weigh the trade-offs between a free and open-source tool versus a commercial one. Of course, there are budgetary concerns, but you also might want to consider support.
04:56
Sometimes it makes more sense to go with something open source, and other times it makes more sense to go with something that's commercial.
05:03
Number two: try to focus on tool types as opposed to specific tools themselves.
05:08
This isn't a hard-and-fast rule, but you don't want to seem pushy with the SOC; instead say, "Hey, you should acquire a tool that does endpoint behavior-based monitoring," as opposed to, "Go acquire this tool offered by this vendor."
05:20
SOCs can sometimes lean different ways on how they work with that kind of recommendation. And, of course, you do want to consider any tools that the SOC is specifically looking at
05:32
when you're coming up with a recommendation on a specific tool.
05:36
And then focus on tools that help increase coverage the most but that also fit within the budget.
05:42
There's a balance here between
05:44
which tool offers the most potential immediate benefit versus, say, how much money the SOC is willing to spend, or even the time they're willing to invest in deploying a new tool.
05:54
And lastly, when you can, always try to include analysis of the tools that the SOC is currently looking at.
06:00
Here you can give the SOC heat maps that say, "Here's your current coverage, here's your coverage when you deploy Tool A, and here's what your coverage would look like when you deploy Tool B."
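One way to produce those before-and-after heat maps is to emit ATT&CK Navigator layer files the SOC can load side by side. The sketch below writes a minimal layer; note that newer Navigator versions may expect additional fields (such as version metadata), and the technique scores here are invented for illustration.

```python
import json

# Current coverage and projected coverage if Tool A were deployed.
# Scores: 0 = none, 1 = low, 2 = some, 3 = high confidence (illustrative scale).
current = {"T1059": 2, "T1047": 1}
with_tool_a = {"T1059": 3, "T1021": 2, "T1078": 1}

def navigator_layer(name: str, coverage: dict) -> dict:
    """Build a minimal ATT&CK Navigator layer from technique scores."""
    return {
        "name": name,
        "domain": "enterprise-attack",
        "techniques": [{"techniqueID": t, "score": s}
                       for t, s in sorted(coverage.items())],
    }

# Merged view: take the better of current and projected coverage per technique.
merged = {t: max(current.get(t, 0), with_tool_a.get(t, 0))
          for t in current.keys() | with_tool_a.keys()}

for fname, layer in [("current.json", navigator_layer("Current coverage", current)),
                     ("with_tool_a.json", navigator_layer("Coverage with Tool A", merged))]:
    with open(fname, "w") as f:
        json.dump(layer, f, indent=2)
```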
06:14
Beyond coverage, there are other recommendations you should also consider supplying, specifically those that help them improve their processes in general.
06:21
These aren't necessarily things that are ATT&CK-focused, and they're really a little bit outside the scope of this training course. That said, you might want to note specific areas the SOC can improve that you came across during the course of the assessment. Examples include whether or not teams communicate well with each other,
06:40
what their analytic development process looks like, how much good documentation they have, whether they have leadership support, whether they have a process for acquiring new tools, and just generally, you know, whether they have good cyber hygiene.
06:51
These are all good things to ask and keep in mind when you're running an assessment; when it comes time to do recommendations, you can call them out as specific areas of improvement for the SOC's general operations.
07:06
And then the last recommendation type is additional engagements.
07:10
We view assessments as a bit of a stepping stone into a cycle where you assess your defensive coverage, identify high-priority gaps, and then tune and acquire new defenses.
07:20
Here we have things like threat intelligence and other capabilities to help with identifying high-priority gaps,
07:26
and then for tuning and acquiring defenses, we have things like writing detections, adding new tooling, and consulting public resources.
07:32
When you get back to assessing defensive coverage, you don't just have to say, "Okay, now it's time for me to run a new assessment"; you can also consider running an adversary emulation exercise.
07:43
Then you can go beyond the scope of a hands-off assessment towards something more hands-on that gives you higher-fidelity results.
07:53
To close out this lesson, we're going to go through a sample exercise: we've conducted an ATT&CK-based SOC assessment, we've come up with a set of prioritized techniques, and now we want to recommend to the SOC we're working with
08:07
a specific tool that they should acquire.
08:09
Here, we're violating our tips a little bit in that we're focusing specifically on three tools: Tool 1, Tool 2, and Tool 3. But for this example, we're going to assume that the SOC had already been looking at these tools to begin with.
08:24
So feel free to pause the video, look at the heat map on the bottom, look at the prioritized techniques, and read through the description of each tool. Again, it's very high-level,
08:33
but try to think about which tool would complement this SOC the most. Then, when you unpause the video, we'll walk through our own solution.
08:43
Okay.
08:46
Welcome back. We're now going to walk through how we look at this: we'll go through each of these tools, do a very quick analysis, and try to count the number of techniques each one might be able to detect.
08:58
So first we'll focus on Tool 1.
09:01
Running through our analysis, the first thing we note is that it runs at the network perimeter.
09:05
This tool is then focused on command and control and exfiltration.
09:09
It uses signature-based detection, which is not exactly what we want to see, and there's likely a low level of detection accordingly.
09:16
From a data source perspective, it reads from packet captures.
09:20
We can then highlight the techniques under command and control and exfiltration to figure out which ones this tool might be able to detect. When you tally it up, you get four relevant techniques: one of them high priority and three low confidence.
09:35
Switching gears towards Tool 2, we'll run through the same process.
09:39
This tool runs on endpoints. We know from this piece of information that it can potentially pick up most techniques, depending on how the tool works.
09:48
It uses artifact-based detection. This isn't fantastic, but you do get some coverage depending on the technique and the way it's executed.
09:56
And then this tool monitors API and system calls.
10:01
When you run through the data source analysis, you'll see it's able to detect a fairly wide variety of techniques. And when you remove all those techniques that already have high confidence of detection, you find that it can potentially pick up two priority techniques, one low-confidence technique, and three some-confidence techniques.
10:20
And then, lastly, we'll look at Tool 3.
10:22
This one also runs on endpoints, so again, most of the tactics are in scope. It uses behavior-based detection, which can give it, you know, some high confidence, some high coverage, depending on the technique. And then it monitors authentication logs, really just one data source.
10:37
Still, when you look at which of these techniques map back to authentication logs, you do get some reasonable coverage, and you ultimately find that
10:46
this tool might be able to detect one priority technique,
10:50
three low-confidence techniques, and two some-confidence techniques.
10:56
When you bring it all together, you kind of get some interesting analysis.
11:00
First, you can look at Tool 1: it has likely low coverage, it's middle-of-the-road cost, and it doesn't cover a lot of techniques.
11:07
Tool 2 is likely some coverage at best in most cases; it's the lowest cost, and it covers a reasonable set of techniques.
11:15
You know, here you can actually see that Tool 2 covers the most priority techniques, whereas Tool 3 covers the most techniques
11:22
between low and priority combined.
11:26
Tool 3, by contrast, is the most expensive, but at that cost it provides maybe a little bit more coverage of the techniques that it might detect.
11:35
And so it really is a bit of a trade-off between Tool 2 and Tool 3; Tool 1 is clearly, you know, not in scope as a good recommendation. Ultimately, the answer to which tool you want to recommend boils down to the budget.
11:50
We generally recommend choosing Tool 2 if the SOC seems cost-sensitive in any way: it offers some decent coverage, it's the lowest cost, and it covers a good amount of the techniques that we care about.
12:01
Tool 3, by contrast, is probably a good recommendation if the SOC has a bigger budget. Admittedly, it doesn't cover as many
12:07
priority techniques, but that's likely a reasonable trade-off, given that you might have higher coverage of the techniques that it does potentially detect.
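If you want to make that trade-off explicit, a weighted tally like the hypothetical one below can summarize it. The counts mirror the walkthrough above, but the weights and the cost-normalized score are invented assumptions, not the course's actual method.

```python
from dataclasses import dataclass

# Hypothetical scoring of the three tools from the exercise. Counts mirror
# the walkthrough; weights and cost normalization are illustrative choices.
@dataclass
class Tool:
    name: str
    cost: int        # relative cost: 1 (low) to 3 (high)
    priority: int    # priority techniques potentially detected
    some_conf: int   # some-confidence techniques
    low_conf: int    # low-confidence techniques

tools = [
    Tool("Tool 1", cost=2, priority=1, some_conf=0, low_conf=3),
    Tool("Tool 2", cost=1, priority=2, some_conf=3, low_conf=1),
    Tool("Tool 3", cost=3, priority=1, some_conf=2, low_conf=3),
]

def score(t: Tool, w_priority=3, w_some=2, w_low=1) -> float:
    """Weighted coverage gain per unit of cost (weights are assumptions)."""
    gain = w_priority * t.priority + w_some * t.some_conf + w_low * t.low_conf
    return gain / t.cost

for t in sorted(tools, key=score, reverse=True):
    print(f"{t.name}: score {score(t):.1f}")
```

Under these weights, Tool 2 scores highest, consistent with recommending it for a cost-sensitive SOC; a SOC with a bigger budget might weight confidence of detection more heavily, which favors Tool 3.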
12:18
So, for your summary notes and takeaways to close out this lesson:
12:22
Number one: to help the SOC enhance coverage, consider recommending the following. First, build new analytics to detect high-priority techniques;
12:30
second, acquire new tools to help them remediate gaps;
12:33
third, ingest additional logs to enhance visibility; and fourth, deploy mitigations to potentially prevent techniques that are harder to detect.
12:43
Always keep in mind that when you do recommend the SOC acquire new tools, those tools should improve coverage within the budget and the context the SOC is working in.
12:54
Additionally, new data sources should improve coverage, but not at the cost of a
13:00
super uphill battle for deploying the collection of that data source.
13:05
And lastly, whenever possible, recommend non-ATT&CK or non-assessment enhancements when you can. Always keep in mind the bigger picture when you're running an ATT&CK-based SOC assessment and delivering these recommendations.
13:18
With that, we close out Module Three as well as this lesson.