Hello and welcome to the next module in the Cyber Threat Intelligence course.
In this module, we're going to be taking a deeper dive into the Cyber Kill Chain.
We'll look a little bit more at where this concept came from.
We'll be talking about the passive discovery of data, the detection
of malware, for instance,
and ways to degrade or disrupt
the actions of an adversary.
And then we'll wrap up with the courses of action matrix, and an expanded course of action matrix that has come from MITRE recently.
So let's learn more here about the Cyber Kill Chain from Lockheed Martin.
They call this their intelligence driven defense model.
It's certainly a very popular model, and has been for some time now.
There are some criticisms, of course, that it
perhaps focuses too much on the perimeter.
That is a decent critique, but it's still a great way for an organization to
gain some familiarity with an industry standard.
On the right, we see the seven steps of the Cyber Kill Chain,
and these are basically the TTPs,
the tactics, techniques, and procedures, of your adversary.
We covered this a little bit in an earlier module.
Some of the intervening steps that you might engage in,
as they relate to the Cyber Kill Chain, or CKC7,
are also very important,
beginning with the passive
discovery of data.
There are lots of data sources within a typical environment
where useful information can be gleaned,
if you know where to look for it and how to interpret it.
Starting off with the website visitors.
If you're using tools like Google Analytics or other similar
vendor-provided assets (your hosting provider may have some tools as well),
you can gather different kinds of metrics.
Google Analytics is very popular because it can capture
tremendous amounts of detail about how your website operates.
Which pages are popular, which pages are not being navigated to
very often, even the overall patterns of navigation for your users, can be a
way of getting some insight into how your
public-facing asset might be
used by your customers, by your users, and, of course, by adversaries.
So one of the challenges here has to do with
identifying the proper metrics to gather.
Looking at how long someone stays on a page,
for instance, might not seem very relevant.
But if you put that in the context of
someone appearing to visit a page for exactly 0.25 seconds before switching to another page,
there's probably not a human behind that activity.
It stands to reason that that kind of activity is automated.
Being able to detect this is also important here:
think of someone who is using a tool to crawl a website, for instance.
There are many tools out there for this.
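To make that concrete, here's a minimal sketch of the kind of check an analyst might run over exported analytics records. The record layout, the visitor identifiers, and the half-second threshold are all illustrative assumptions, not fields from any particular analytics product.

```python
from statistics import mean

# Hypothetical analytics records: (visitor_id, page, dwell_seconds).
# Field names and values are invented for illustration.
visits = [
    ("10.0.0.5", "/home", 0.25),
    ("10.0.0.5", "/about", 0.25),
    ("10.0.0.5", "/contact", 0.24),
    ("192.0.2.7", "/home", 14.2),
    ("192.0.2.7", "/pricing", 33.9),
]

def likely_bots(visits, max_dwell=0.5):
    """Return visitors whose average page dwell time is under max_dwell seconds."""
    by_visitor = {}
    for visitor, _page, dwell in visits:
        by_visitor.setdefault(visitor, []).append(dwell)
    return {v for v, dwells in by_visitor.items() if mean(dwells) < max_dwell}

print(likely_bots(visits))  # the 10.0.0.5 visitor looks automated
```

A real crawler would also show up through other signals (user agent, request rate, ignoring robots.txt), but uniform sub-second dwell times alone are a useful first flag.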
This technique involves going through the website's
source code page by page
and enumerating any links that are discovered.
Each link is then catalogued by the search engine, for instance, or by the tool that the attacker is running.
This should look different,
when analyzing metrics and other kinds of analytics, compared to the behavior of an actual human being on the other end.
Something even more suspicious would be detecting someone trying to copy a website.
Again, there are lots of tools for this,
and the copying of an entire website will certainly look suspicious to the various monitoring tools that you'll probably have in your organization.
NetFlow data, for instance, would show large amounts of data going outbound to a single IP address.
That looks like something strange is going on and should be investigated.
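As a sketch of that kind of detection, the snippet below totals outbound bytes per destination from simplified flow records. The tuple layout and the byte threshold are assumptions for illustration; real NetFlow export fields vary by collector and version.

```python
from collections import defaultdict

# Hypothetical flow records: (src_ip, dst_ip, bytes_out).
flows = [
    ("172.16.1.10", "203.0.113.50", 900_000_000),
    ("172.16.1.10", "198.51.100.20", 40_000),
    ("172.16.1.11", "203.0.113.50", 850_000_000),
]

def flag_heavy_destinations(flows, threshold=500_000_000):
    """Sum outbound bytes per destination and flag any over the threshold."""
    totals = defaultdict(int)
    for _src, dst, nbytes in flows:
        totals[dst] += nbytes
    return {dst: n for dst, n in totals.items() if n >= threshold}

print(flag_heavy_destinations(flows))  # one destination stands out
```

In practice the threshold would be tuned against a baseline of normal traffic, since large legitimate transfers (backups, CDN syncs) would otherwise trip the same check.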
So finding the patterns
is really what we're getting at here, to be fully effective. But also think about
what kinds of capabilities and actions an adversary plans for the future.
We can't know for certain until the event happens, but there might be some ability to do some prediction,
or some intelligent guessing.
For instance, the idea of reverse engineering malware
is very critical. For most organizations, this is a large
and broad body of knowledge (or broad and deep, you might also say),
but good analysis and reverse engineering of malware provides a lot of clues.
There could be domain names that were discovered in the malware,
or IP addresses that are related to it,
maybe information that links back to the attacker's command and control servers, for instance.
Also, there could be an understanding of how the malware operates:
what connections does it try to make, what files does it try to change,
which registry entries are affected when it tries to install itself.
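One small piece of that analysis, pulling candidate network indicators out of strings recovered from a sample, can be sketched like this. The sample text and the regular expressions are simplified assumptions (real indicator extraction needs defanging, allow-listing of benign domains, and far more robust patterns).

```python
import re

# Strings as they might be recovered from a sample (illustrative, not real malware).
sample_strings = """
POST /update HTTP/1.1
Host: evil-c2.example.com
connect 198.51.100.99:4444
Software\\Microsoft\\Windows\\CurrentVersion\\Run
"""

IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
DOMAIN_RE = re.compile(r"\b(?:[a-z0-9-]+\.)+[a-z]{2,}\b", re.IGNORECASE)

ips = set(IPV4_RE.findall(sample_strings))
domains = set(DOMAIN_RE.findall(sample_strings)) - ips

print("IPs:", ips)          # candidate C2 addresses
print("domains:", domains)  # candidate C2 domains
```

Indicators like these feed directly into blocklists and detection signatures, which is part of why this kind of analysis is so valuable.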
All these clues have value to the analyst,
because then they can try to understand: okay, now we know a little bit more about how the
malware operates, so we can
better understand how the infection happened to begin with, and hopefully
also understand how to prevent
similar malware from being successful in the future.
Weaponization of malware:
generally, we think about that as occurring at the attacker's end of the conversation.
They create a malicious payload,
a PDF or something else like this,
and then their goal is just to get it delivered to the victim.
Sometimes weaponization happens at the victim's machine
because some file that exists already on that system is the target of the malware insertion process.
Or maybe it's a common program that the victim uses,
a program like Word, or maybe even the calculator,
or some other operating system tool, perhaps.
So when the weaponization happens at the victim's machine, there is expected to be some kind of residual evidence.
It could be changes to the registry, as I just spoke about a minute ago.
There could be files and folders that weren't there before:
new files and folders, or missing files and folders.
There could be changes to system configuration files.
These, as we talked about in earlier chapters, might be considered indicators of compromise.
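A very simple way to picture that kind of residual evidence, new or missing files relative to a known-good baseline, is a snapshot diff. This is a bare-bones sketch; real integrity monitoring would also hash file contents and, on Windows, watch the registry.

```python
import os

def snapshot(root):
    """Collect the full set of file paths under a directory tree."""
    paths = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            paths.add(os.path.relpath(os.path.join(dirpath, name), root))
    return paths

def file_changes(baseline, current):
    """Compare a stored baseline snapshot against the current state."""
    return {"new": current - baseline, "missing": baseline - current}
```

Taking a snapshot before and after an incident window makes unexpected additions or deletions easy to surface.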
They relate back to this idea that there's
some tangible evidence of a change or changes made to a system
when malware was either delivered and activated, or perhaps created and activated, at the victim's machine.
Other considerations would be to look at the actual timeline,
if it's known.
You'll need access to a lot of information to be able to do this sort of analysis properly,
but it might be possible to look at when malware was created,
when it was tested, and when it was deployed.
If there's enough information available,
this can again provide some insight into how the
attacker's methodology operates. What kinds of capabilities do they appear to have?
What is their level of sophistication?
And also, how long does it take them to
create malware once they've gained access to a system?
These are all good clues, because they help to inform a properly created defense, and also act to inform
detection methods and to refine them so that they're more timely in the future.
So these artifacts, and these different clues that are left behind,
should be collected and
plugged into the timeline as best as possible.
This allows the analyst
to work with other individuals, maybe security engineers or
even developer teams and so on.
And they can start to piece together how the attack happened, how long it took to
develop the capability that caused the problem,
and maybe even how long it took before detection was possible.
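As a rough sketch of plugging artifacts into a timeline, the snippet below sorts time-stamped observations and measures the gap from the earliest activity to detection. The artifacts, sources, and timestamps are invented for illustration.

```python
from datetime import datetime

# Hypothetical artifacts gathered from different sources; values are examples.
artifacts = [
    ("2023-03-12T09:15:00", "new registry Run key observed"),
    ("2023-03-10T22:04:00", "dropper binary compiled (PE header timestamp)"),
    ("2023-03-11T03:30:00", "first beacon to suspected C2 address"),
    ("2023-04-02T14:00:00", "alert raised by EDR"),
]

# Sort artifacts chronologically to build the incident timeline.
timeline = sorted(artifacts, key=lambda a: datetime.fromisoformat(a[0]))
first, last = timeline[0], timeline[-1]
dwell = datetime.fromisoformat(last[0]) - datetime.fromisoformat(first[0])
print(f"time from first artifact to detection: {dwell.days} days")
```

Even this simple ordering makes the detection gap visible, which is exactly the number you want to drive down release over release.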
In the case of APTs, or
advanced persistent threats,
sometimes detection can take days, weeks, months, even years.
And knowing how that timeline operates is vital to
improving on it,
or making incremental improvements, in the future.