Refining Hypotheses

Video Transcription
>> Hello and welcome to Lesson 2.6, Refining Hypotheses. In this lesson, we will analyze key considerations in hypothesis development and refinement, and discuss how to improve hypotheses using refinement.
When examining the implementation of a technique while re-examining our hypothesis, it's beneficial to think through the activities that occur before, during, and after the core activity we are examining. In particular, you must explore how the technique can be invoked by an adversary, as we did in previous lessons.
For the Scheduled Task technique, ATT&CK lists seven different sub-techniques that can be used for execution, with two specific to Windows. From those, there are at least four different identified interfaces that can be used to invoke those sub-techniques. Here it's important to consider what is common across them; focusing analysis on those common characteristics will help subsequent analytics be more robust.
Invoking the core behavior often triggers second-order activities that can be observed, so examining what the system does in response is also useful. For example, Windows actions typically trigger events like registry modifications, DLL loads, file writes, and new network connections. These actions can be noisy, but useful event sequences can be identified that can help with detection efforts.
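To make that concrete, here is a minimal sketch (not from the lesson) of what correlating a task-scheduling event with its second-order effects might look like. The event fields, the schtasks.exe example, and the five-second window are all illustrative assumptions; real telemetry schemas will differ.

```python
from datetime import datetime, timedelta

# Hypothetical, simplified event records; the field names are assumptions and
# depend on your sensor (e.g., Sysmon or EDR telemetry).
events = [
    {"host": "ws-01", "time": datetime(2021, 1, 4, 9, 15, 0), "type": "process_create",
     "detail": "schtasks.exe /create /tn Updater /tr C:\\temp\\payload.exe"},
    {"host": "ws-01", "time": datetime(2021, 1, 4, 9, 15, 1), "type": "registry_set",
     "detail": "HKLM\\...\\Schedule\\TaskCache\\Tree\\Updater"},
    {"host": "ws-01", "time": datetime(2021, 1, 4, 9, 15, 1), "type": "file_write",
     "detail": "C:\\Windows\\System32\\Tasks\\Updater"},
]

WINDOW = timedelta(seconds=5)  # assumed correlation window

def second_order_sequence(events):
    """Yield (trigger, followers): a task-scheduling process creation plus the
    second-order events seen on the same host shortly afterwards."""
    triggers = [e for e in events
                if e["type"] == "process_create" and "schtasks" in e["detail"].lower()]
    for trig in triggers:
        followers = [e for e in events
                     if e["host"] == trig["host"]
                     and e is not trig
                     and timedelta(0) <= e["time"] - trig["time"] <= WINDOW]
        yield trig, followers

for trig, followers in second_order_sequence(events):
    print(trig["detail"], "->", [f["type"] for f in followers])
```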
Finally, by examining the invocation of a technique, its core behavior, and the second-order effects, it's helpful to identify which activities are unavoidable and which are optional in achieving the desired end state. It's important to capitalize on those unavoidable events to maximize the resiliency of our analytic against evasion and to reduce data collection and processing requirements. We call these actions invariant behaviors.
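As a rough sketch of what keying on an invariant behavior might look like, the check below flags a task-registration artifact rather than any one tool's process name. The specific artifact used here, a file write under the Tasks directory, is an illustrative assumption; pick whichever registration artifact your telemetry reliably captures.

```python
# Key the analytic on an artifact that appears no matter which interface
# (command line, API, GUI) registered the task.
TASKS_DIR = r"C:\Windows\System32\Tasks"  # example invariant artifact location

def is_task_registration(event):
    """Flag events that look like a task being registered, regardless of the
    tool that did the registering."""
    return (event.get("type") == "file_write"
            and event.get("path", "").lower().startswith(TASKS_DIR.lower()))

sample = {"type": "file_write", "path": r"C:\Windows\System32\Tasks\Updater", "host": "ws-01"}
print(is_task_registration(sample))  # True
```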
As we continue to define and refine, it's helpful to consider why an adversary may exhibit this behavior. Knowing why provides context and may hint at other behaviors that occur before, with, or after this activity. Contextual information can be used to refine and improve the precision and/or recall of our analytic approach, or to help investigate our initial detections. Knowing the why can also help us think through how malicious behavior differs from benign usage.
If an adversary is scheduling tasks to help execute code remotely as they move laterally through a network, they may create a series of events, while a system administrator scheduling tasks across the entire enterprise might simultaneously create tasks on all of the machines from their own machine. Those two cases, with different purposes, might look different when those events are examined.
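One hedged way to surface that difference is to group remote task-creation events by source host and look for a single host fanning out to many destinations in one burst, which is more typical of administration than lateral movement. The record format, the 60-second window, and the threshold of three destinations are all assumptions for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical remote task-creation records: (time, source_host, dest_host).
remote_task_events = [
    (datetime(2021, 1, 4, 10, 0, 0), "admin-ws", "srv-01"),
    (datetime(2021, 1, 4, 10, 0, 1), "admin-ws", "srv-02"),
    (datetime(2021, 1, 4, 10, 0, 2), "admin-ws", "srv-03"),
    (datetime(2021, 1, 4, 11, 5, 0), "ws-07", "ws-12"),
    (datetime(2021, 1, 4, 11, 40, 0), "ws-12", "ws-19"),
]

WINDOW = timedelta(seconds=60)

def fan_out_sources(events):
    """Return sources that created tasks on several destinations in one burst."""
    by_source = defaultdict(list)
    for time, src, dst in sorted(events):
        by_source[src].append((time, dst))
    bursts = {}
    for src, items in by_source.items():
        first = items[0][0]
        dests = {dst for time, dst in items if time - first <= WINDOW}
        if len(dests) >= 3:  # assumed threshold
            bursts[src] = sorted(dests)
    return bursts

print(fan_out_sources(remote_task_events))  # {'admin-ws': ['srv-01', 'srv-02', 'srv-03']}
```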
As we refine our hypothesis, it's useful to consider a range of benign use cases for the technique. One way to explore that space is to consider how each of several typical users might exhibit this behavior. For example, a typical user, an administrator, and a developer or power user may each exhibit this behavior differently in the course of their normal daily activities. Consider what types of users or activities occur in the network that may be confused with malicious behavior. What would differ between those uses and malicious activity? Can we collect additional contextual data to help us distinguish?
If a malicious actor steals the credentials of an authorized administrator on the network, how can we distinguish their subsequent behavior from that of the administrator? Finally, can we influence how authorized users behave to help distinguish them from malicious users? These are all important things to keep in mind.
Based on ATT&CK information, we developed a simple initial hypothesis concerning malicious task scheduling activity that incorporates the tactics the adversary is trying to accomplish. This hypothesis is very high level and too vague to really implement as it stands, but now that we've thoroughly researched the Scheduled Task technique, we can do some refinement to improve it.
In our earlier research of the technique, we found relevant information about the possible objectives of the adversary in malicious task scheduling, including potential ways they can abuse system functionality or access elevated privileges: scheduling a task on a host machine, scheduling a task remotely (often using elevated privileges), and scheduling a task to run as a different specified user.
Here it's useful to understand what benign task scheduling looks like, to help us distinguish between those behaviors and possibly malicious ones. A typical Windows system may have hundreds of benign tasks scheduled, set to run as a wide variety of users, including some with elevated privileges. It's worth thinking through how to identify malicious use of the technique amongst the benign activity. Getting a baseline of known good for comparison is risky, as every new benign task will need investigation, and there's potential for malicious tasks to sneak by.
To effectively detect these activities, we need to think through how malicious activities can be differentiated and look for features that can indicate malicious use, such as requests to run with maximum privileges or requests to schedule tasks on remote machines.
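A minimal sketch of that kind of feature check on command-line telemetry is shown below. The schtasks.exe switches used (/rl HIGHEST for an elevated run level, /s for a remote system) are real, but treating them as the only interesting features is a simplifying assumption, and API- or GUI-based scheduling won't surface these strings at all.

```python
import re

HIGH_PRIV = re.compile(r"/rl\s+highest", re.IGNORECASE)
REMOTE = re.compile(r"/s\s+\S+", re.IGNORECASE)

def suspicious_features(cmdline):
    """Return the malicious-use indicators present in a task-creation command line."""
    features = []
    if "schtasks" in cmdline.lower() and "/create" in cmdline.lower():
        if HIGH_PRIV.search(cmdline):
            features.append("runs with highest privileges")
        if REMOTE.search(cmdline):
            features.append("targets a remote machine")
    return features

print(suspicious_features(
    r'schtasks.exe /create /s FILESRV01 /tn Backup /tr C:\temp\b.exe /rl HIGHEST'))
# ['runs with highest privileges', 'targets a remote machine']
```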
To focus our scope, it could be helpful to look at the potential use cases for malicious task scheduling and refine our hypothesis accordingly. The most general use case is locally scheduling a malicious task on the target machine. Here, our best approach is to focus on the invariant behaviors we identified earlier and refine our hypothesis accordingly.
For remote task scheduling, consider the fact that there will be two computers involved: the source machine, which is scheduling the task, and the destination machine, where the task will actually be scheduled and run. The destination machine will see the same invariant behaviors as in the local case, but on the source machine we can look for specific implementations of task scheduling and see when they are associated with a remote scheduling event. That is reasonable to do with the command line, but may be more difficult if the adversary uses the API or GUI implementations. This use case is an example of when we have a sub-optimal approach but can still derive a method to detect this behavior on the source machine. Keep in mind, it may need to be changed through future iterations of this methodology.
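One way to sketch those two vantage points together is to pair a source-side remote-scheduling command with the matching registration seen on the destination shortly afterwards. Both record formats below are assumptions: 'source_cmd' stands in for source-machine command-line telemetry, 'task_registered' for the destination's registration artifact.

```python
from datetime import datetime, timedelta

source_events = [
    {"time": datetime(2021, 1, 4, 14, 0, 0), "host": "ws-07",
     "type": "source_cmd", "target": "srv-02", "task": "Updater"},
]
dest_events = [
    {"time": datetime(2021, 1, 4, 14, 0, 2), "host": "srv-02",
     "type": "task_registered", "task": "Updater"},
]

WINDOW = timedelta(seconds=30)  # assumed clock-skew tolerance

def correlate(source_events, dest_events):
    """Pair a remote-scheduling command on the source with the matching
    registration seen on the destination shortly afterwards."""
    for s in source_events:
        for d in dest_events:
            if (d["host"] == s["target"] and d["task"] == s["task"]
                    and timedelta(0) <= d["time"] - s["time"] <= WINDOW):
                yield s, d

for s, d in correlate(source_events, dest_events):
    print(f'{s["host"]} scheduled "{s["task"]}" on {d["host"]}')
```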
In trying to distinguish between malicious and benign use cases, we already know that there are lots of legitimate reasons that a system administrator may have for scheduling tasks remotely. Our first thought may be to exclude all of the associated activity coming from sysadmin hosts. Adding this exclusion will probably significantly improve our precision, as we cut out a lot of the benign system noise, but it can potentially worsen our recall by creating a blind spot in our analysis that can be exploited by an adversary who is misusing legitimate credentials. This is a TTP that we know from our research is employed by real-world adversaries who are trying to evade detection.
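The trade-off is easy to see in a small sketch of the exclusion itself. The host list here is hypothetical; in practice it would come from an asset inventory or an admin-host group.

```python
SYSADMIN_HOSTS = {"admin-ws", "it-jump-01"}  # hypothetical admin hosts

def filter_remote_scheduling_alerts(alerts, exclude_admin_hosts=True):
    """Drop alerts whose source is a known sysadmin host.

    Precision goes up (less benign admin noise), but recall can go down:
    an adversary operating from a compromised admin host, or reusing stolen
    admin credentials on one, now falls into a blind spot.
    """
    if not exclude_admin_hosts:
        return list(alerts)
    return [a for a in alerts if a["source_host"] not in SYSADMIN_HOSTS]

alerts = [
    {"source_host": "admin-ws", "dest_host": "srv-01"},
    {"source_host": "ws-07", "dest_host": "ws-12"},
]
print(filter_remote_scheduling_alerts(alerts))  # only the ws-07 alert remains
```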
This example of a refined hypothesis narrows the scope to try to identify when the adversary may be attempting to schedule a task as a different user in order to escalate their privileges. As you can see, this is not possible across all implementations, but it is a valid use case that may be of particular interest in our environment.
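A rough sketch of that refinement is shown below: flag task creations where the task will run as someone other than the account that created it, especially a privileged account. The event fields (creating_user, run_as_user) and the privileged-account list are assumptions; map them to whatever your task-creation telemetry actually records.

```python
PRIVILEGED = {"SYSTEM", "Administrator"}  # assumed high-value accounts

def runs_as_different_user(event):
    """Flag task creations where the task runs as a different user than its creator."""
    creator = event["creating_user"]
    run_as = event["run_as_user"]
    if creator.lower() == run_as.lower():
        return None
    severity = "high" if run_as in PRIVILEGED else "medium"
    return {"task": event["task"], "creator": creator, "run_as": run_as,
            "severity": severity}

print(runs_as_different_user(
    {"task": "Updater", "creating_user": "CORP\\jdoe", "run_as_user": "SYSTEM"}))
# {'task': 'Updater', 'creator': 'CORP\\jdoe', 'run_as': 'SYSTEM', 'severity': 'high'}
```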
As previously mentioned, it can be helpful to refine your hypothesis by focusing on use cases and other more specific conditions that can significantly improve the precision of your results. It is important, however, to keep in mind that doing so could overly narrow the results of your detection and cause your analytic to become brittle with respect to changes in the implementation of the technique. Finding the right balance between precision and recall is a key factor throughout this entire process.
In summary, hypothesis refinement and improvement is an iterative process that you continue through as you learn more about the technique at hand. It is key to strike a balance between precision and recall in order for this methodology to be successful.