Proposing Recommendations Part 2
Difficulty: Intermediate
Video Transcription
Welcome to Lesson 6 in Module 3 of the ATT&CK-based SOC assessments training course. In this lesson, we're going to focus on the second part of proposing recommendations, and really focus on recommendations that allow you to expand beyond just technique prioritization.

This lesson has two primary learning objectives. After the lesson, you should be able to propose recommendations to improve coverage. Additionally, you should understand how assessments fit into a larger ATT&CK plus SOC ecosystem.

Looking back at our typical recommendation categories, we covered technique prioritization in Lesson 3.5. Now, in this lesson, we're going to talk a little bit about process refinement and follow-up engagements, and a decent amount about coverage improvement.
Diving into coverage improvement, our goal with this type of recommendation is to take the existing coverage, the existing heat map, and help the SOC go from what they're covering today to what they could cover tomorrow. Towards that, there are four main ways we can recommend the SOC improve.

The first is, of course, for them to add analytics. This helps them increase coverage by looking for specific techniques, really building off of the prioritization plan we talked about in the previous lesson. This is great for SOCs that are looking to complement their existing tooling. It requires that the SOC have the staff, logging, and search functionality needed to actually implement and use analytics.
The second recommendation for improving coverage is to add new tools. This gives them better coverage off the shelf, and it's really the best fit for SOCs that are still primarily using cyber-hygiene tools and are starting to branch into more behavior-based detection. This can add significant coverage from a heat-map perspective, but there's usually a longer adoption period, where the SOC needs to make sure they're onboarding the tool correctly and really incorporating it into their standard operations.
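When proposing a new tool, it often helps to hand the SOC a heat-map layer showing exactly what the tool would add. Here's a minimal sketch that builds an ATT&CK Navigator layer file for a candidate tool; the tool name, technique IDs, and confidence levels are all hypothetical, and the layer fields follow the public Navigator layer format, so double-check them against the Navigator release you actually use.

```python
import json

# Hypothetical coverage a candidate tool might add, keyed by ATT&CK
# technique ID, with an invented confidence label for each technique.
tool_coverage = {"T1071": "some", "T1041": "low"}

# Map confidence labels to numeric scores so Navigator can color them.
scores = {"low": 1, "some": 2, "high": 3}

# Build a minimal Navigator layer describing the projected coverage.
layer = {
    "name": "Projected coverage: candidate tool",
    "domain": "enterprise-attack",
    "techniques": [
        {"techniqueID": tid, "score": scores[conf]}
        for tid, conf in tool_coverage.items()
    ],
}

print(json.dumps(layer, indent=2))
```

Loading a layer like this next to the SOC's current-coverage layer makes the "before and after" comparison concrete rather than anecdotal.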
Another recommendation to help improve coverage is to ingest more data sources. This complements adding analytics, really allowing the SOC to increase their visibility into the raw data. This is great for SOCs that are looking to grow their analytic program: they've gotten a lot of value out of analytics, and they either want to start a new program or expand their existing one. That said, for a SOC to get the most bang for their buck with this recommendation, they need to have an existing analytic process as well as a data ingestion pipeline.
The last recommendation to help improve coverage is, of course, to implement mitigations. Here we want to bypass detection entirely and instead prevent execution. This is really good for SOCs that have great control of their endpoints and devices, but it can sometimes be challenging to verify that the mitigations are indeed deployed and to keep them up to date.
Diving a little deeper, we have some tips for data-source recommendations. Number 1, always try to identify actionable data sources: those that are easy to ingest. When we say easy, we don't mean to ignore the hard stuff, but rather that it's good for the SOC to balance the return on investment of a data source against how hard it is to ingest into their SIEM platform.
The second tip is to focus on data sources that offer useful coverage improvements. If I'm looking at two data sources, say A and B, where A provides coverage of techniques I'm already potentially detecting and B provides coverage of techniques I'm not yet detecting at all, then it makes more sense for me to start ingesting data source B. Of course, you want to balance not just the utility of a data source, but also how hard it might be to ingest.
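The A-versus-B reasoning above can be sketched as a simple scoring pass: rank candidate data sources by how many new techniques each would let the SOC detect, divided by an estimated ingestion difficulty. The technique IDs, difficulty ratings, and the scoring formula here are all our own invention for illustration, not a standard.

```python
# Techniques the SOC already detects (hypothetical ATT&CK IDs).
covered = {"T1059", "T1021"}

# Candidate data sources: (techniques they enable, ingestion difficulty 1-5).
candidates = {
    "A": ({"T1059", "T1021", "T1047"}, 2),  # mostly overlaps existing coverage
    "B": ({"T1003", "T1055", "T1134"}, 3),  # all-new techniques, harder to ingest
}

def score(name):
    techniques, difficulty = candidates[name]
    new = techniques - covered       # coverage the SOC doesn't have yet
    return len(new) / difficulty     # crude return-on-investment ratio

ranked = sorted(candidates, key=score, reverse=True)
print(ranked)  # data source B wins: three new techniques outweigh the difficulty
```

Even a rough ratio like this makes the "useful coverage per unit of effort" trade-off explicit when you present the recommendation.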
The third tip is to consider recommending data-source collection roll-out strategies. Here, you don't want to just say, "Hey, go collect these three data sources." You might instead say, "Hey, go collect data source A and implement three analytics. Then progress to data source B, and eventually over to C." It's not always great to just give bullet-point lists of things to do; it's always helpful for a SOC to see a strategy for how to actually roll out the recommendation.
The last tip is to link data-source recommendations to the SOC's tooling and analytics. If you're able to draw a connection between their existing tooling and analytic coverage and the new data sources, beyond just the generic coverage heat map, you can really make sure the SOC gets the most benefit from ingesting an additional data source.
We also have some tips for tooling recommendations. Number 1, always make sure to weigh the trade-offs between a free and open-source tool versus a commercial one. Of course, there are budgetary concerns, but you also might want to consider support. Sometimes it makes more sense to go with something open-source, and other times it makes more sense to go with something commercial.

Second, try to focus on tool types as opposed to specific tools themselves. This isn't a hard-and-fast rule, but you don't want to seem pushy with the SOC: instead of "go acquire this tool offered by this vendor," say, "you should acquire a tool that does endpoint behavior-based monitoring." SOCs can lean different ways on how they work with that kind of recommendation. Of course, you do want to consider any tools the SOC is specifically looking at when you come up with a recommendation on a specific tool.
Then focus on tools that help increase coverage the most but also fit within the budget. There's a balance between which tool offers the most potential immediate benefit and, say, how much money the SOC is willing to spend, or even the time they're willing to invest in deploying a new tool.

Lastly, when you can, always try to include analysis of the tools the SOC is currently looking at. Here you can give the SOC heat maps and say, "Here's your current coverage, here's what your coverage would look like when you deploy tool A, and here's what it would look like when you deploy tool B."
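That "current versus projected" heat-map comparison boils down to a merge: take the SOC's existing confidence level per technique and the level a candidate tool might add, and keep the higher of the two. A minimal sketch, with invented technique IDs and confidence values:

```python
# Ordered confidence levels, lowest to highest.
LEVELS = {"none": 0, "low": 1, "some": 2, "high": 3}

# Hypothetical current coverage and per-tool projected additions.
current = {"T1059": "low", "T1021": "none", "T1003": "some"}
tool_a  = {"T1021": "some"}
tool_b  = {"T1059": "high", "T1003": "high"}

def project(current, tool):
    """Return coverage after deploying `tool`, taking the better level per technique."""
    merged = dict(current)
    for tid, level in tool.items():
        if LEVELS[level] > LEVELS[merged.get(tid, "none")]:
            merged[tid] = level
    return merged

print(project(current, tool_a))  # T1021 rises from none to some
print(project(current, tool_b))  # T1059 and T1003 rise to high
```

Rendering `current` and each `project(...)` result as separate heat-map layers gives the SOC the side-by-side view described above.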
Beyond coverage, there are other recommendations you should also consider supplying, specifically those that help the SOC improve their processes in general. These aren't necessarily ATT&CK-focused and are really a little outside the scope of this training course. That said, you might want to note specific areas for improvement that came about during the course of the assessment. Examples include whether teams communicate well with each other, what their analytic development process looks like, how good their documentation is, whether they have leadership support, whether they have a process for acquiring new tools, and, just generally, whether they have good cyber hygiene. These are all good things to ask and keep in mind when you're running an assessment. Then, when it's time to deliver recommendations, you can call them out as specific areas of improvement for the SOC's general operations.
The last recommendation type is additional engagements. We view assessments as a bit of a stepping stone into a cycle where you assess your defensive coverage, you identify high-priority gaps, and then you tune or acquire new defenses. Here we have things like threat intelligence and other capabilities to help with identifying high-priority gaps. Then, for tuning and acquiring defenses, we have things like writing new detections, adding new tooling, and consulting public resources. When you come back around to assessing defensive coverage, you don't just have to say, "Now it's time for me to run a new assessment." You could also consider running an adversary emulation exercise. Now you can go beyond the scope of the hands-off assessment towards something more hands-on that gives you higher-fidelity results.
To close out this lesson, we're going to go through a sample exercise. We've conducted an ATT&CK-based SOC assessment, we've come up with a set of prioritized techniques, and now we want to recommend a specific tool that the SOC we're working with should acquire. Here we're violating our tips a little, in that we're focusing specifically on these three tools: Tool 1, Tool 2, and Tool 3. For this example, we're going to assume that the SOC had already been looking at these tools to begin with. Feel free to pause the video. Look at the heat map on the bottom, look at the prioritized techniques, and read through the description of each tool. Again, it's very high level, but try to think about which tool you think would complement this SOC the most. When you unpause the video, we'll walk through our own solution.
Welcome back. We're now going to walk through how we look at this. What we're going to do is walk through each of these tools, do a very quick analysis, and try to count the number of techniques each one might be able to benefit.

First, we'll focus on Tool 1. Running through our analysis, the first thing we note is that it runs on the network perimeter. This tool is thus focused on command and control and on exfiltration. It uses signature-based detection, which is not exactly what we want to see, and accordingly there's likely a low level of detection. From a data-source perspective, it reads from packet captures. We can then highlight the techniques under command and control and exfiltration to figure out which techniques this tool might be able to detect. When you tally it up, you get four relevant techniques: one of them high priority, and three low confidence.
Switching gears to Tool 2, we'll run through the same process. This tool runs on endpoints, and from that piece of information we know it can potentially pick up most techniques, depending on how the tool works. Otherwise, it uses artifact-based detection. This isn't fantastic, but you do get some coverage depending on the technique and the way it's executed. This tool monitors API and system calls. When you run through the data-source analysis, you'll see it's able to detect a fairly wide variety of techniques. When you remove all the techniques that already have high confidence of detection, you find that it could potentially pick up two priority techniques, one low-confidence technique, and three some-confidence techniques.
Lastly, we'll look at Tool 3. This one also runs on endpoints, so again, most of the tactics are in scope. It uses behavior-based detection, which can give it some high confidence, some high coverage, depending on the technique. It monitors authentication logs, really just one data source. Still, when you look at which of these techniques map back to authentication logs, you do get some reasonable coverage. You ultimately find that this tool might be able to detect one priority technique, three low-confidence techniques, and two some-confidence techniques.
When you bring it all together, you get some interesting analysis. First, you can look at Tool 1, which likely has low coverage. It's middle-of-the-road in cost and doesn't cover a lot of techniques. Tool 2 likely offers some coverage at best in most cases; it's the lowest cost, and it covers a reasonable set of techniques. Here you can actually see that Tool 2 covers the most priority techniques, whereas Tool 3 covers the most techniques between low and priority. Tool 3, by contrast, is also the most expensive, but at that cost it provides maybe a little more coverage of the techniques it might detect. Ultimately it's a bit of a trade-off between Tool 2 and Tool 3; Tool 1 is clearly not in scope as a good recommendation.

Ultimately, the answer to which tool you want to recommend boils down to the budget. We'd generally recommend choosing Tool 2 if the SOC seems cost-sensitive in any way. It offers some decent coverage, it's the lowest cost, and it covers a good amount of the techniques we care about.
Tool 3, by contrast, is probably a good recommendation if the SOC has a bigger budget. Admittedly, it doesn't cover as many priority techniques, but that's likely a reasonable trade-off given that you might get higher coverage of the techniques it does potentially detect.
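The tally we just walked through can be sketched as a simple weighted count: score each tool by how many prioritized techniques it might detect at each confidence level, then compare against cost. The per-tool counts mirror the example above, but the weights and the scoring formula are our own invention for illustration.

```python
tools = {
    # tool: (priority techniques, low-confidence, some-confidence, cost 1-3)
    "Tool 1": (1, 3, 0, 2),
    "Tool 2": (2, 1, 3, 1),
    "Tool 3": (1, 3, 2, 3),
}

def value(name):
    priority, low, some, _cost = tools[name]
    # Weight priority techniques heaviest and low-confidence ones least.
    return priority * 3 + some * 2 + low * 1

for name in sorted(tools, key=value, reverse=True):
    print(name, "value:", value(name), "cost:", tools[name][3])
```

Under these invented weights, Tool 2 comes out ahead of Tool 3, with Tool 1 well behind, matching the trade-off described above; in practice you would tune the weights to the SOC's priorities and budget rather than trust a single number.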
A few summary notes and takeaways to close out this lesson. Number 1: to help the SOC enhance coverage, consider recommending the following. First, build new analytics to detect high-priority techniques. Second, acquire new tools to help remediate gaps. Third, ingest additional logs to enhance visibility. Fourth, deploy mitigations to potentially prevent techniques that are harder to detect.

Always keep in mind that when you do recommend that the SOC acquire new tools, those tools should improve coverage within the budget and the context the SOC is working in. Additionally, new data sources should improve coverage, but not at the cost of a super uphill battle to deploy collection of that data source. Lastly, whenever possible, recommend enhancements beyond ATT&CK and the assessment itself. Always keep the bigger picture in mind when you're running an ATT&CK-based SOC assessment and delivering these recommendations. With that, we close out Module 3 as well as this lesson.