Detection Approaches


Time: 4 hours 42 minutes
Difficulty: Intermediate
CEU/CPE: 5
Video Transcription
>> Welcome to Lesson 1.2 of Threat Hunting Fundamentals: Detection Approaches. In this lesson, we'll discuss background and context around detecting malicious activity, including some key terms and complementary approaches. Let's start by defining some key terms we'll use throughout this course.
Precision is a metric for an analytic that indicates how few false positives it returns. To calculate precision, we run the analytic, count the number of true positives in the results, and divide by the total number of results, both true and false positives. An analytic with good precision will not produce very many false positives. If a precise analytic detects something, it's more likely to be worth the analyst's time to investigate. One way to remember this is that the "cision" part of precision has the same root as incision and means to cut. Precise analytics cut out the irrelevant and confusing false positives from the data before you see them.
Recall is a different metric for analytics; it indicates how few relevant events the analytic misses. Just like with precision, a larger recall number is generally better than a low one. We compute recall by running the analytic and counting the number of true positives returned, just like for the numerator of the precision computation, but then dividing by the total number of relevant events it should have detected. I'm often reminded of the movie Total Recall with this metric, because it indicates the analytic's ability to remember, or recall, all of the relevant events and not forget anything important.
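These two definitions can be sketched in a few lines of Python. The function names and the counts in the example are illustrative, not from the course:

```python
# Precision and recall as defined above. Both start from the number of
# true positives; they differ only in what they divide by.

def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of the analytic's results that are actually malicious."""
    return true_positives / (true_positives + false_positives)

def recall(true_positives: int, false_negatives: int) -> float:
    """Fraction of all malicious events the analytic actually found."""
    return true_positives / (true_positives + false_negatives)

# An analytic that returned 9 true hits and 7 false alarms, while
# missing 12 malicious events entirely:
print(precision(9, 7))   # 9/16 = 0.5625
print(recall(9, 12))     # 9/21 ≈ 0.4286
```

Note that the numerator is the same in both; only the denominator changes, which is why tuning an analytic tends to trade one metric against the other.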
Now, it would be great if we could create analytics that all had perfect precision and perfect recall. If we could, then we could confidently spend resources responding to every detection and rest easy knowing we weren't missing any. However, in practice, it is almost impossible to write an analytic that has perfect precision or perfect recall, and improving one often comes at the cost of making the other one worse. For example, an analytic might include a lot of specifics about a particular piece of malware and gain good precision. However, if the malware is altered in a small way, those specifics might change and the malware variant would be missed, thereby resulting in lower recall.
We might try to improve the recall of the analytic by removing some of the specifics. But doing so makes it more likely the analytic will detect a similar but benign binary, resulting in lower precision. Throughout this course, we'll refer to these two measurements of analytic accuracy, and we'll see many examples where they're in tension with each other.
To help understand the concepts of precision and recall,
00:00
we'll take a simple graphical example.
00:00
For this example, there are benign events represented by
00:00
green smiley faces and
00:00
malicious events represented by red triangles.
00:00
We'll represent our analytic in the form
00:00
of a circle centered in the rectangle.
00:00
It detects anything within the circle.
00:00
We would like a circle that includes
00:00
as many red triangles as possible to get good recall,
00:00
but with as few false positive green smileys as possible.
00:00
In this example, we know everything about what's
00:00
going on so we can compute precision and recall.
00:00
We have about 56 percent precision
00:00
because a little over half of
00:00
the detected results are actually
00:00
red triangles and hence true positives.
00:00
We have about 42 percent recall,
00:00
because our circle analytic
00:00
detects a little fewer than half of the red triangles.
To improve the precision of this circle analytic, we can make it smaller. We can make it so small that there are no false positives, which gives us perfect precision. However, by making the circle small enough to exclude those false positives, we also made it so small that many red triangles are missed, so our recall drops to just 17 percent. We could improve the recall of the circle analytic by increasing its radius to make it big enough to include almost all of the red triangles. Now the recall has gone up to 75 percent, but our precision has dropped back down to just 50 percent. That's a lot worse than our precise circle, and even a little worse than our original circle.
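The radius trade-off can be reproduced with a small simulation. This is a hypothetical sketch: the point counts, positions, and radii below are invented and won't match the slide's exact percentages, but the pattern is the same: growing the circle raises recall while precision suffers.

```python
import math
import random

random.seed(0)

# Benign "smileys" are scattered uniformly across the rectangle; the
# malicious "triangles" cluster loosely near the center. All values are
# fabricated for illustration.
benign = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(40)]
malicious = [(random.gauss(0, 3), random.gauss(0, 3)) for _ in range(12)]

def score(radius):
    """Precision and recall of a circle analytic of this radius at the origin."""
    inside = lambda p: math.hypot(p[0], p[1]) <= radius
    tp = sum(map(inside, malicious))   # triangles caught
    fp = sum(map(inside, benign))      # smileys wrongly flagged
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / len(malicious)
    return prec, rec

for r in (1.5, 3.0, 6.0):
    prec, rec = score(r)
    print(f"radius {r}: precision {prec:.0%}, recall {rec:.0%}")
```

Because a bigger circle strictly contains a smaller one, recall can only go up with radius, while the precision it buys back depends entirely on how many smileys the extra area sweeps in.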
Now, every time this big circle analytic detects something, there's a 50 percent chance that it's a false alarm. That could cause a lot of analyst fatigue, and it might get ignored so often that even the true positives end up going unnoticed.
There's an additional factor to consider for detecting malicious cyber activity: there is usually a lot more benign activity on systems than malicious activity. As a result, any circle is more likely to have more false positives than in the more balanced examples from the previous slides. In this example, we have 10 times as much benign activity as malicious activity. In practice, there might be thousands, millions, or even more times as much benign activity as malicious. This extremely low base rate of malicious activity makes it even more important to keep precision and recall in mind when developing, evaluating, and improving analytics.
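The base-rate effect is easy to see with some quick arithmetic. The analytic below is hypothetical: the 90 percent detection rate and 1 percent false-alarm rate are assumed numbers, not from the lesson.

```python
# How the base rate drives precision: an analytic with fixed detection
# and false-alarm rates looks great at a 10:1 benign ratio but produces
# almost entirely false alarms at realistic base rates.

def precision_at_base_rate(events, malicious_fraction, tpr, fpr):
    """Precision given a true-positive rate (tpr) and false-positive rate (fpr)."""
    malicious = events * malicious_fraction
    benign = events - malicious
    tp = malicious * tpr   # malicious events correctly flagged
    fp = benign * fpr      # benign events wrongly flagged
    return tp / (tp + fp)

# The same analytic (90% detection, 1% false alarms) at two base rates:
print(precision_at_base_rate(1_000, 1 / 11, 0.9, 0.01))     # ~10:1 benign:malicious
print(precision_at_base_rate(1_000_000, 1e-4, 0.9, 0.01))   # 1:10,000 — precision collapses
```

At the rarer base rate, the handful of true positives is swamped by the one percent of a vastly larger benign population, which is exactly why a tolerable-sounding false-positive rate can still bury analysts in alerts.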
Now, there are historically at least three categories of detection approaches. One common type is signature-based, where the analytic is often expressed as a pattern to match. In our simple examples from the previous slides, we could think of this as drawing a small circle around each red triangle after it's been discovered by some other means. A second approach has been to try to define everything that is allowed and detect and block anything that deviates from that baseline or allow list. In our simple example, we could think of this as trying to list all of the green smileys and then detecting anything new that shows up in the rectangle.
Finally, there's an anomaly-based approach that is similar to the allow list, but instead of trying to specifically catalog all allowed items, the list of allowed things is defined as anything that is statistically similar to the items in the baseline. Anything that occurs outside of that statistically normal activity is then considered suspicious. This approach, when applied to red triangle detection in our example, could be thought of as finding clusters of items in the rectangle, whether they're smileys or triangles, and drawing circles around them. Any new smiley or triangle that's outside of those circles, or outside of those clusters, is then considered suspicious.
>> Expanding on these approaches a bit: what we mean by signature-based detection is the approach of defining malicious observables with fairly atomic indicators. You can think of signature-based detection as a regular expression, although for efficiency it's often not implemented that way. A signature-based analytic might look for a specific string in a binary, or a domain name. Lists of file hashes or IP addresses known to be used in malicious activity fall into the category of signature-based detection. Many commercial products that analyze network activity or host artifacts utilize this approach.
In theory, this approach can have very good precision. The hash of a binary that is known to be malicious is unlikely to be shared by a benign binary and therefore cause a false alarm. It's also often fairly easy to implement this approach, since a tool can simply ingest a list of hashes, domain names, or other signatures to quickly use in future scans.
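A toy sketch of the hash-list idea follows. The byte strings stand in for real malware samples, and nothing here reflects how a real product implements signature matching:

```python
import hashlib

# Signature-based detection in miniature: ingest a list of known-bad
# SHA-256 hashes, then flag any content whose hash appears in the list.
samples = [b"stand-in malicious sample", b"another stand-in sample"]
known_bad = {hashlib.sha256(s).hexdigest() for s in samples}

def is_flagged(content: bytes) -> bool:
    """Exact SHA-256 lookup against the known-bad list."""
    return hashlib.sha256(content).hexdigest() in known_bad

print(is_flagged(b"stand-in malicious sample"))            # True: exact match
print(is_flagged(b"stand-in malicious sample" + b"\x00"))  # False: one-byte variant evades
```

The second lookup illustrates the brittleness discussed below: changing even one byte of the sample produces a completely different hash, so the signature no longer matches.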
However, it is often difficult to determine good signatures to use. First, the binary, IP address, or domain name must be determined to be malicious by some other means, because at this point in the investigation we don't have the signature yet; we're trying to find the signature. As a result, this approach is not very helpful for the first victim of an attack. Once that first infection is discovered, it might require some reverse engineering or other research to determine a unique signature associated with that malicious activity.
Once the signature is shared, the receiving organization must implement a signature management program to ensure that they are using all of the latest signatures and retiring any that might have expired. The most advanced malicious actors today make a practice of regularly altering their infrastructure, binaries, and other artifacts to evade this approach. Sometimes they are changing their artifacts faster than it takes defenders to discover the initial infection, develop the signatures, share them, and get them implemented on other systems and networks.
A second approach is to explicitly define what is allowed and then detect or prohibit anything that deviates from that. While signature detection describes the malicious things that should be blocked, this approach describes the inverse: it describes what is allowed. On the upside, this approach can be very effective at constraining attackers, forcing them to work within the allow list of binaries, domain names, etc. If the allow list is very small and strictly controlled, this approach can be effective.
However, in practice, many exceptions are required to accommodate different software for different users and other common legitimate use cases. As a result, the allow list becomes large and difficult to maintain, and attacks can often succeed without deviating from it. Other downsides include the management cost of maintaining an accurate allow list and the constraints imposed on legitimate users trying to get the job done.
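The core of the allow-list approach fits in a few lines; the binary names below are hypothetical examples, not a recommended policy:

```python
# Allow-list detection: anything not explicitly permitted is blocked.
# Contrast with signature-based detection, which enumerates the bad.
allow_list = {"notepad.exe", "excel.exe", "chrome.exe"}

def decide(binary_name: str) -> str:
    """Allow only binaries on the list; block everything else."""
    return "allow" if binary_name in allow_list else "block"

print(decide("chrome.exe"))    # allow: on the list
print(decide("mimikatz.exe"))  # block: never seen, never approved
```

The code is trivial; the operational cost lives in the `allow_list` itself, which in a real enterprise grows by an entry every time a user legitimately needs a new tool.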
A third approach is anomaly-based detection. This approach is similar to the allow list or profile-based approach from the previous slides, but instead of manually listing the allowed items and activities, an algorithm defines what is allowed based on what is statistically normal. This definition of normal might occur just once during baselining, or it might be updated periodically or even continuously. Deviations from this normal activity are considered suspicious and might generate an alert.
For example, the algorithm might define normal network activity as staying below some volume threshold, and any network flow with traffic exceeding that threshold might trigger an alert.
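One common way to implement such a threshold is a simple standard-deviation test over a baseline. This is a sketch: the flow volumes and the three-sigma cutoff are assumptions for illustration, not from the lesson.

```python
import statistics

# Baseline the volume of past network flows, then flag any new flow
# more than three standard deviations above the mean. Flow volumes
# (in bytes) are fabricated for illustration.
baseline_flows = [1200, 900, 1500, 1100, 1300, 1000, 1400, 950]
mean = statistics.mean(baseline_flows)
stdev = statistics.stdev(baseline_flows)

def is_anomalous(flow_bytes: float, sigmas: float = 3.0) -> bool:
    """Flag flows far above the statistically normal volume."""
    return flow_bytes > mean + sigmas * stdev

print(is_anomalous(1600))   # False: within normal variation
print(is_anomalous(50000))  # True: an exfiltration-sized flow
```

The choice of `sigmas` is exactly the precision/recall dial from earlier in the lesson: a lower cutoff catches more malicious flows but also flags more of the naturally variable benign traffic.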
The advantage of this approach relative to the others is that it has a chance of detecting new attacks that don't have signatures, without the management burden and constraints of a strict allow list. However, in practice, this approach can be extremely difficult to implement with sufficient precision and recall, because many networks and systems have high variability in their benign activity.
As a result, the statistically defined normal activity must either be broad enough that it also includes malicious activity, or so narrow that the benign activity itself produces many false alarms. This situation is made worse by the fact that sophisticated attackers know defenders are using anomaly-based detection, and therefore make efforts to blend in with the normal activity of their target victim systems.
We can visualize these different approaches mapped to two dimensions of detection. First, whether to focus on defining what's benign, what's malicious, or what's anomalous. Second, whether to characterize what's benign, malicious, or anomalous based on indicators low on the Pyramid of Pain, like IP addresses, or higher up, like TTPs. The allow list approach focuses on defining what's benign in terms of things like domain names and tools. The signature-based approach focuses on defining what's malicious, in similar terms to the allow list approach. Anomaly detection obviously focuses on finding anomalous things, potentially across all of the levels of the pyramid. This methodology focuses on defining malicious TTPs, but will also include some elements of the other approaches to define allowed exclusions, detect anomalous behaviors, and filter based on specific host and network artifacts.
In this lesson, we learned about the terms precision and recall, and how improving one often comes at the expense of making the other worse. We also reviewed some traditional approaches to detecting malicious activity in cyberspace and their relative pros and cons. What could we do to improve this situation and make better analytics?