Proposing Recommendations Part 2

3 hours 16 minutes
Video Transcription
Welcome to Lesson 6 in Module 3 of the ATT&CK-Based SOC Assessments training course.
In this lesson, we're going to focus on the second part of proposing recommendations and really focus on recommendations that allow you to expand beyond just technique prioritization.
This lesson has two primary learning objectives.
After the lesson, you should be able to propose recommendations to improve coverage
Then, additionally, you should understand how assessments fit into a larger ATT&CK + SOC ecosystem.
Looking back at our typical recommendation categories,
we covered technique prioritization in Lesson 3.5, and now in this lesson we're going to talk a little bit about process refinement, follow-up engagements, and a decent amount on coverage improvement.
Diving into coverage improvement: our goal with this type of recommendation is to take the existing coverage, the existing heat map,
and help the SOC go from what they're covering today to what they could cover tomorrow.
Towards that, there are four main ways we can recommend the SOC improve.
The first is, of course, for them to add analytics. This helps them increase coverage looking for specific techniques, really building off of the prioritization plan we talked about in the previous lesson.
This is great for SOCs that are looking to complement their existing tooling, and it requires that the SOC have the staff,
logging, and search functionality needed to actually implement and use analytics.
The second recommendation for improving coverage is to add new tools.
This gives them better coverage off the shelf, and it's really the best fit for SOCs that are still primarily using cyber hygiene tools and are starting to branch out into more behavior-based detection.
This can add significant coverage from a heat map perspective, but usually there's a longer adoption period where the SOC needs to make sure they onboard the tool correctly and really incorporate it into their standard operations.
Another recommendation to help them improve coverage is to ingest more data sources.
This complements adding analytics, really allowing them to increase their visibility into the raw data.
This is great for SOCs that are looking to grow their analytic program, where they've gotten a lot of value out of analytics and they either want to start a new program or expand their existing one.
That said, for a SOC to get the most bang for their buck with this recommendation, they need to have an existing analytic process as well as a data ingestion pipeline.
And then the last recommendation to help improve coverage is, of course, to implement mitigations.
Here we want to bypass the recommendations around detection and instead prevent execution outright.
This is really good for SOCs that have great control of their endpoints and devices, but it can sometimes be challenging to verify that the mitigations are indeed deployed and to keep them up to date.
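To make the four recommendation types concrete, here is a minimal sketch (not from the course) of how you might track the effect of a recommendation on a coverage heat map. The confidence scale and the technique-to-level mappings are illustrative assumptions; the technique IDs are real ATT&CK IDs used only as examples.

```python
# Detection-confidence scale, ordered from worst to best.
# This four-level scale is an illustrative assumption, not a course standard.
CONFIDENCE = ["none", "low", "some", "high"]

# Hypothetical starting heat map: technique ID -> current confidence.
heatmap = {
    "T1059": "low",   # Command and Scripting Interpreter
    "T1003": "none",  # OS Credential Dumping
    "T1041": "some",  # Exfiltration Over C2 Channel
}

def apply_recommendation(heatmap, techniques, new_level):
    """Raise (never lower) the detection confidence for each technique
    that a new analytic, tool, data source, or mitigation is expected
    to cover, and return the updated heat map."""
    updated = dict(heatmap)
    for t in techniques:
        current = CONFIDENCE.index(updated.get(t, "none"))
        proposed = CONFIDENCE.index(new_level)
        updated[t] = CONFIDENCE[max(current, proposed)]
    return updated

# e.g. a new credential-dumping analytic raising T1003 coverage:
after = apply_recommendation(heatmap, ["T1003"], "some")
print(after["T1003"])  # -> some
```

The key design point, matching the goal stated above, is that a recommendation only moves techniques from what the SOC covers today toward what they could cover tomorrow; it never downgrades existing coverage.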
Diving in a little bit deeper, we have some tips for data source recommendations.
Number one: always try to identify actionable data sources, those that are easy to ingest. When we say easy, we don't mean to ignore the hard stuff, but rather that it's good for the SOC to
balance the return on investment of a data source against how hard it is for them to ingest it into their SIEM platform.
The second tip is to focus on data sources that offer useful coverage improvements.
If I'm looking at two data sources, say A and B,
and A provides coverage of techniques I'm already potentially detecting, while B provides coverage of techniques that I'm not yet detecting at all, then it makes more sense for me to start ingesting
data source B.
Of course, you want to balance not just the utility of the data source, but also how difficult it might be to ingest those data sources that offer improvements.
The third tip is to consider recommending data source collection rollout strategies.
Here you don't want to just say, "Hey, go collect these three data sources." You might instead say, "Hey, go collect data source A, then implement, you know, three analytics, then progress to data source B, and then eventually on to data source C."
It's not always great to just give bullet-point lists of things to do, and it's always helpful for a SOC to see a strategy for how to actually roll out the recommendation.
And then the last tip
is to link data source recommendations to the SOC's tooling and analytics.
If you're able to draw a connection between the existing tooling and analytic coverage and new data sources, beyond just the generic coverage heat map, you can really make sure the SOC gets the most benefit from ingesting an additional data source.
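The "useful coverage vs. ingestion difficulty" balance from tips one and two can be sketched as a simple scoring pass. The data source names, technique mappings, and difficulty weights below are hypothetical placeholders, not course data.

```python
def rank_data_sources(sources, covered):
    """Score each data source by the number of not-yet-covered techniques
    it would unlock, discounted by ingestion difficulty (1 = easy,
    5 = a super uphill battle), and return sources best-first."""
    scores = {}
    for name, info in sources.items():
        newly_visible = set(info["techniques"]) - covered
        scores[name] = len(newly_visible) / info["difficulty"]
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example: which data source gives the best return?
sources = {
    "process_monitoring": {"techniques": {"T1059", "T1053", "T1003"}, "difficulty": 1},
    "packet_capture":     {"techniques": {"T1041", "T1071"}, "difficulty": 4},
}
already_covered = {"T1059"}  # techniques the SOC can already detect
print(rank_data_sources(sources, already_covered))
# process_monitoring unlocks two new techniques at low cost, so it ranks first
```

A real assessment would weigh more than a single ratio, but the shape of the decision is the same: new coverage in the numerator, ingestion pain in the denominator.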
We also have some tips for tooling recommendations.
Number one: always make sure to weigh the trade-offs between a free and open source tool versus a commercial one. Of course there are budgetary concerns, but you also might want to consider support.
Sometimes it makes more sense to go with something open source, and other times it makes more sense to go with something commercial.
Number two: try to focus on tool types as opposed to specific tools themselves.
This isn't a hard and fast rule, but you don't want to seem pushy with the SOC; instead say, "Hey, you should acquire a tool that does endpoint behavior-based monitoring," as opposed to, "Go acquire this tool offered by this vendor."
SOCs can lean different ways on how they work with that kind of recommendation. And, of course, you do want to consider any tools the SOC is specifically looking at
when you are coming up with a recommendation on a specific tool.
Then, focus on tools that help increase coverage the most but that also fit within the budget.
There's a balance here between
which tool offers the most potential immediate benefit versus, say, how much money the SOC is willing to spend, or even the time they're willing to invest in deploying a new tool.
And lastly, when you can, always try to include analysis of the tools that the SOC is currently looking at.
Here you can give the SOC heat maps that say: here's your current coverage, here's what your coverage would look like when you deploy Tool A, and here's what it would look like when you deploy Tool B.
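That "current vs. with Tool A vs. with Tool B" comparison reduces to a set difference over technique IDs. A minimal sketch, with made-up tool names and technique sets:

```python
def newly_covered(current, tool_techniques):
    """Techniques a candidate tool would add on top of current coverage."""
    return set(tool_techniques) - set(current)

current = {"T1059", "T1041"}  # hypothetical current SOC coverage
candidates = {
    "Tool A": {"T1003", "T1055", "T1059"},  # hypothetical endpoint tool
    "Tool B": {"T1071", "T1041"},           # hypothetical network tool
}
for name, techs in candidates.items():
    gained = newly_covered(current, techs)
    print(f"{name} adds {len(gained)} techniques: {sorted(gained)}")
```

Rendering `current | gained` per tool as a colored heat map layer gives the SOC exactly the side-by-side view described above.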
Beyond coverage, there are other recommendations you should also consider supplying, specifically those that help the SOC improve their processes in general.
These aren't necessarily things that are ATT&CK-focused, and they're really a little bit outside the scope of this training course. That said, you might want to note specific areas the SOC can improve that you came across during the course of the assessment. Examples include whether or not teams communicate well with each other,
what their analytic development process looks like, how good their documentation is, whether they have leadership support, whether they have a process for acquiring new tools, and just generally whether they have good cyber hygiene.
These are all good things to ask and keep in mind when you're running an assessment; then, when it gets time to do recommendations, you can call them out as specific areas where the SOC can improve their general operations.
And then the last recommendation type is additional engagements.
We view assessments in general as a bit of a stepping stone into a cycle, where you assess your defensive coverage, you identify high-priority gaps, and then you tune and acquire new defenses.
Here we have things like threat intelligence and other capabilities to help with identifying high-priority gaps,
and then for tuning and acquiring defenses, we have things like writing detections, adding new tooling, and consulting public resources.
When you get back to assessing defensive coverage, you don't just have to say, "Okay, now it's time for me to run a new assessment"; you can also consider running an adversary emulation exercise.
Now you can go beyond the scope of just the hands-off kind of assessment towards something more hands-on that gives you higher-fidelity results.
To close out this lesson, we're going to go through a sample exercise: we've conducted an ATT&CK-based SOC assessment, we've come up with a set of prioritized techniques, and now we want to recommend to the SOC we're working with
a specific tool that they should acquire.
Here we're violating our tips a little bit in that we're focusing specifically on three tools: Tool 1, Tool 2, and Tool 3. But for this example, we're going to assume that the SOC had already been looking at these tools to begin with.
So feel free to pause the video, look at the heat map on the bottom, look at the prioritized techniques, and read through the description of each tool. Again, it's very high level,
but try to think about which tool you think would complement this SOC the most. Then, when you un-pause the video, we'll walk through our own solution.
Welcome back. We're now going to walk through how we look at this: we'll walk through each of these tools, do a very quick analysis, and try to count the number of techniques each one might be able to detect.
So first we'll focus on Tool 1.
Running through our analysis, the first thing we note is that it runs at the network perimeter.
This tool is thus focused on command and control and exfiltration.
It uses signature-based detection, which is not exactly what we want to see, and there's likely a low level of detection accordingly.
From a data source perspective, it reads from packet captures.
We can then highlight the techniques under command and control and exfiltration to figure out which ones this tool might be able to detect. When you tally it up, you get four relevant techniques: one of them high priority and three low confidence.
Switching gears towards Tool 2, we'll run through the same process.
This tool runs on endpoints. From this piece of information, we know it can potentially pick up most techniques, depending on how the tool works.
It uses artifact-based detection. This isn't fantastic, but you do get some coverage depending on the technique and the way it's executed.
And then this tool monitors API and system calls.
When you run through the data source analysis, you'll see it's able to detect a fairly wide variety of techniques. When you remove all those techniques that already have high confidence of detection, you find that it can potentially pick up two priority techniques, one low-confidence technique, and three some-confidence techniques.
And then, lastly, we'll look at Tool 3.
This one also runs on endpoints, so again most of the tactics are in scope. It uses behavior-based detection, which can give it some high-confidence coverage depending on the technique. And then it monitors authentication logs, really just one data source.
Still, when you look at which techniques map back to authentication logs, you do get some reasonable coverage, and you ultimately find that
this tool might be able to detect one priority technique,
three low-confidence techniques, and two some-confidence techniques.
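The per-tool tally above is just a bucketing exercise: take the techniques a tool might detect and count how many land in each priority bucket. A rough sketch, where the technique-to-priority mapping and the per-tool detection sets are illustrative placeholders rather than the exercise's actual data:

```python
from collections import Counter

def tally(detected, priorities):
    """Count how many of a tool's detectable techniques fall into each
    priority/confidence bucket from the assessment."""
    return Counter(priorities.get(t, "other") for t in detected)

# Hypothetical assessment output: technique ID -> bucket.
priorities = {
    "T1041": "priority",
    "T1003": "priority",
    "T1055": "some-confidence",
    "T1071": "low-confidence",
}

# Hypothetical detection set for an endpoint tool like Tool 2.
tool_2_detects = {"T1003", "T1055", "T1041"}
print(tally(tool_2_detects, priorities))
# -> Counter({'priority': 2, 'some-confidence': 1})
```

Running the same tally per tool is what lets you compare, say, Tool 2's two priority techniques against Tool 3's broader but lower-priority coverage.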
When you bring it all together, you get some interesting analysis.
First, you can look at Tool 1: it likely has low coverage, it's middle of the road on cost, and it doesn't cover a lot of techniques.
Tool 2 likely provides some coverage at best in most cases; it is the lowest cost, and it covers a reasonable set of techniques.
Here you can actually see that Tool 2 covers the most priority techniques, whereas Tool 3 covers the most techniques
between low and priority.
Tool 3, by contrast, is the most expensive, but at that cost it provides maybe a little bit more coverage of the techniques that it might detect.
So it really ultimately is a bit of a trade-off between Tool 2 and Tool 3; Tool 1 is clearly not in scope as a good recommendation. Ultimately, the answer to which tool you want to recommend boils down to the budget.
We generally recommend choosing Tool 2 if the SOC seems cost sensitive in any way: it offers some decent coverage, it's the lowest cost, and it covers a good amount of techniques that we care about.
Tool 3, by contrast, is probably a good recommendation if the SOC has a bigger budget. Admittedly, it doesn't cover as many
priority techniques, but that's likely a reasonable trade-off, given that you might get higher coverage of the techniques that it does potentially detect.
So for your summary notes and takeaways to close out this lesson.
To help the SOC enhance coverage, consider recommending the following: number one, build new analytics to detect high-priority techniques;
number two, acquire new tools to help them remediate gaps;
number three, ingest additional logs to enhance visibility; and number four, deploy mitigations to potentially prevent techniques that are harder to detect.
Always keep in mind that when you do recommend that the SOC acquire new tools, those tools should improve coverage within the budget and the context the SOC is working in.
Additionally, new data sources should improve coverage, but not improve coverage at the cost of a
super uphill battle to deploy collection of that data source.
And lastly, recommend non-ATT&CK or non-assessment enhancements when you can; always keep in mind the bigger picture when you're running an ATT&CK-based SOC assessment and delivering these recommendations.
With that, we close out this lesson as well as Module 3.