Leveraging External Resources for Analytics


Time
4 hours 42 minutes
Difficulty
Intermediate
CEU/CPE
5
Video Transcription
>> Hello and welcome to Lesson 2.8, Leveraging External Resources. During this lesson, we will discuss important considerations when using publicly available analytics. While they can vary in quality, a key benefit of open-source analytics is that they are transparent and can be readily understood, validated, critiqued, extended, and more by the community at large. It is important to remember that not all analytics are created equal, and it is imperative to apply good critical thinking skills to ensure that any analytic you choose to adopt meets the standards and requirements of your target operational and analytic environment.
The following list highlights some well-known analytic repositories and adversary emulation tools that can be used when conducting the type of research and analysis described in this module. These, along with other published resources, can be very useful and help save significant time in engineering detections for your hunt.
Now, let's take a look at an example analytic from the MITRE Cyber Analytics Repository, also known as CAR. Here we see the CAR analytic named Service Outlier Executables, which is mapped to a specific ATT&CK tactic, technique, and sub-technique that is applicable to it. While this example is mapped to a single ATT&CK technique, others can map to several techniques across the framework with varying levels of applicability.
CAR also provides an analytic coverage comparison matrix that lists a number of published analytics relevant to a certain ATT&CK technique from sources such as Sigma, ES, Splunk, and even CAR itself. It is important to note that the total number of analytics does not provide any insight into how much coverage against the ATT&CK technique the analytics provide. It is necessary to evaluate any analytics you use to determine how easily they can be evaded and whether they make sense as-is or need to be modified for your specific needs.
Continuing with the scheduled task technique example from previous lessons, here we see an Execution with schtasks analytic published in CAR that may be useful to us. This analytic is intended to detect all invocations of schtasks.exe. While this seems like it should have pretty high recall, we know that schtasks is a native Windows program that is also used for many benign functions. Thus, this analytic will not do well separating benign activity from malicious activity as-is.
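To make the trade-off concrete, here is a minimal Python sketch of what an "all invocations" analytic like this amounts to. The event schema (dictionaries with `exe` and `command_line` keys) and the sample events are illustrative assumptions, not the CAR pseudocode or any particular SIEM's data model.

```python
# Sketch of a CAR-style "Execution with schtasks" analytic: match every
# process-creation event whose image name is schtasks.exe. The event
# field names below are illustrative assumptions.

def matches_schtasks_analytic(event):
    """Return True for any process-creation event for schtasks.exe,
    whether the invocation is benign or malicious."""
    return event.get("exe", "").lower() == "schtasks.exe"

# Synthetic process-creation events for illustration.
events = [
    {"exe": "schtasks.exe", "command_line": "schtasks /query"},
    {"exe": "SCHTASKS.EXE",
     "command_line": "schtasks /create /tn updater /tr c:\\temp\\x.exe"},
    {"exe": "notepad.exe", "command_line": "notepad.exe"},
]

hits = [e for e in events if matches_schtasks_analytic(e)]
print(len(hits))  # 2 -- both schtasks invocations match, benign and malicious alike
```

Note that the benign `/query` and the suspicious `/create` event both fire, which is exactly the recall-versus-precision problem described above.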
Although they vary in completeness, the detection sections in ATT&CK are a useful starting point for understanding how one might detect a given technique. In this case, ATT&CK provides data sources. Process creation, the data source that the CAR analytic is based around, is one of them, but ATT&CK calls out others that we could also consider. ATT&CK also mentions native Windows configurations and log sources that can provide additional data.
Bringing all of this information back, let's consider what we know about the Execution with schtasks analytic we were examining. We know that it uses process creation as a data source and that there are other relevant data sources that we may want to cover with additional analytics. We also saw in the pseudocode that the analytic looks for a single binary name, which has the potential to be bypassed or evaded. Finally, we also noted that the analytic does not appear to meaningfully distinguish between benign and malicious activity as presented.
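One common way to address the last point is to keep the process-creation match but add command-line criteria that narrow the results toward more adversary-typical use, such as task creation or targeting remote systems. The sketch below illustrates the idea; the specific flags chosen and the event field names are illustrative assumptions, not a vetted detection rule.

```python
# Hedged sketch: narrow the broad schtasks analytic with command-line
# criteria. The flag list and field names are illustrative assumptions.

SUSPICIOUS_FLAGS = ("/create", "/change", "/run")

def suspicious_schtasks(event):
    """Flag schtasks invocations that create, modify, or run a task,
    or that include a remote-system argument (/s)."""
    if event.get("exe", "").lower() != "schtasks.exe":
        return False
    cmd = event.get("command_line", "").lower()
    # Task creation/modification -- especially against a remote host --
    # is a much smaller slice than every invocation of the binary.
    return any(flag in cmd for flag in SUSPICIOUS_FLAGS) or " /s " in cmd

events = [
    {"exe": "schtasks.exe", "command_line": "schtasks /query"},
    {"exe": "schtasks.exe",
     "command_line": "schtasks /create /tn updater /tr c:\\users\\public\\x.exe"},
]
print([suspicious_schtasks(e) for e in events])  # [False, True]
```

Even a refinement like this still keys on a single binary name, so it remains evadable (for example, by copying or renaming the binary); that residual weakness is part of what the evaluation questions below are meant to surface.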
Questions that we should consider addressing if we move forward with this analytic include: What results does it provide in my environment? What other open-source analytics could complement this one? Is there any additional technique research we could or should do? How much coverage do we need to successfully detect this activity in our environment? Is it worth investing more time and effort into modifying this analytic, or should we look for another analytic that could provide a higher return on investment? The answers to these questions will vary based on the organization, the purpose of the hunt, and available resources.
In summary, using publicly available analytic resources can save you time and effort in the long run, but it is important to carefully evaluate these analytics to determine whether they meet your requirements.
This concludes all of the lessons from Module 2. In summary, developing sound hypotheses is fundamental to the threat hunting process and guides future data collection and analytic development efforts. Finding low-variance behaviors through open-source research and hands-on investigation can help in developing more robust analytics that cannot be easily evaded by an adversary. Refining hypotheses and developing abstract analytics can help set you up for success as you progress in the threat hunting methodology. Finally, using publicly available analytics is another option that could save you time and effort in this process.