Now we can think about trying to deny a threat access to your environment. To begin with, there are several different ways to go about this. One obvious choice would be to just analyze the delivery mechanism.
If enough information has been gathered during a forensics investigation, you've got your log files from firewalls, proxies, and so on. You could look for the obvious paths, like emails that have malicious attachments or links to malicious websites, or someone installing software which appears to be safe to use but actually has malware embedded in it. This could even be commercial software; that's been known to happen. It's somewhat rare, but still a possibility.
So the delivery mechanism should be understood, to see if that is part of the vulnerability. Maybe it's something that people are doing, so it's not completely a technological problem. And through this understanding, through this analysis, we want to think about which assets or which individuals were actually targeted.
If enough technical information is available, you can look at various forms of logging, different events from your SIEM device or IDS and IPS, and so on, to see which assets were targeted and which ones were not. During later analysis, maybe there's some understanding of why that is.
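To make that triage concrete, here's a minimal sketch of counting how often each internal asset shows up as a destination in firewall logs. The log format, field order, IP addresses, and asset list here are all invented for illustration, not any particular product's format.

```python
# Hypothetical sketch: tally which internal assets appear as targets
# in firewall log lines. Format assumed: "<timestamp> <src> -> <dst> <action>".
from collections import Counter

def count_targeted_assets(log_lines, internal_assets):
    """Count how often each internal asset appears as a destination."""
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 4 and parts[2] == "->":
            dst = parts[3]
            if dst in internal_assets:
                hits[dst] += 1
    return hits

logs = [
    "2023-01-05T10:00:01 203.0.113.7 -> 10.0.0.5 DENY",
    "2023-01-05T10:00:02 203.0.113.7 -> 10.0.0.5 DENY",
    "2023-01-05T10:00:03 198.51.100.9 -> 10.0.0.8 ALLOW",
]
print(count_targeted_assets(logs, {"10.0.0.5", "10.0.0.8"}))
```

A skew in the counts like this, where one asset is hit far more than the others, is exactly the kind of thing that prompts the "why that asset?" question during later analysis.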
That might also be an interesting thing to study, because there's the typical phishing attack or social-engineering-based attack, and these are treated as very nearly synonymous with each other, since a phishing attempt does involve some social engineering. But it's not always that straightforward. It could be social engineering done in person, and that's not really what we would normally associate with phishing, which is usually done through email.
Anyway, knowing what the targeting was, try to estimate what the intentions of the adversary or attacker were. Are they going after databases to try to exfiltrate customer data? Are they trying to get information about the public-facing websites because there's a pathway there to get into a database, for instance?
They might be trying to learn about the network topology, trying to map it all out, because they are involved in an APT, and part of an APT campaign would of course be to spend time mapping out the entire network topology and the entire network infrastructure, so that you know where the firewalls are, you know where the IDS or IPS might be located, you'll know if there's host-based IPS and IDS in use, and you might figure out where all the routes go and where all the gateways are. These are all valuable things for the attacker to try to discover to help with their long-term goals.
One thing that I need to place some emphasis on is the value of using NTP, or Network Time Protocol, because I've discussed all these different ways to gather information and all these different data sources, and they lose a lot of their usability if you're not synchronized to a single time source. A well-run organization should have a system within the perimeter that synchronizes with an external time source, typically provided by a university or a government agency. These time sources are accurate to within thousandths of a second, so they make an excellent reference point. That system within the perimeter will then be used to synchronize all the rest of the systems within the perimeter; they all point to it as their time source.
There are many reasons this is so important, but one reason is that certain protocols will not function correctly if the time between two systems is not very, very closely aligned: things like Kerberos or PKI. Other applications may break; authentication mechanisms may break. So it's important for that reason.
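As a rough illustration of the Kerberos case: the KDC rejects requests whose timestamp differs from its own clock by more than an allowed skew (five minutes is the customary default in common implementations), so drifting clocks break authentication outright. This is only a simplified model of that check, not actual Kerberos code.

```python
# Simplified model of Kerberos-style clock-skew checking: a request is
# rejected when the client's timestamp is too far from the server's clock.
MAX_SKEW_SECONDS = 300  # five minutes, the customary Kerberos default

def within_allowed_skew(client_ts, server_ts, max_skew=MAX_SKEW_SECONDS):
    """Return True if the client timestamp is acceptably close to the server's."""
    return abs(client_ts - server_ts) <= max_skew

print(within_allowed_skew(1_000_000, 1_000_120))  # 2 minutes off -> True
print(within_allowed_skew(1_000_000, 1_000_400))  # ~6.7 minutes off -> False
```

Without NTP keeping every host inside that window, authentication starts failing for reasons that look nothing like a clock problem.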
But the primary reason why NTP is so important is this: if you're gathering data from ten different sources, all different pieces of infrastructure in your environment, and trying to do a forensic investigation, you want to know with a very high degree of certainty exactly which events happened and in which order. If my clock is two minutes off on this system, and three minutes off on this other system, and maybe another system is a minute ahead, now I've got to figure all the time deltas out before I can properly analyze the data. That just causes a lot of extra work, possibly some confusion, and maybe even incorrect conclusions as well.
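Here's a small sketch of what that extra work looks like, using hypothetical per-system offsets: each source's measured clock error has to be backed out before the events sort into their true order. The system names, offsets, and events are all made up.

```python
# Sketch: before correlating events from systems with drifting clocks,
# apply each system's measured offset so events sort into their true order.
# Offsets are hypothetical: seconds the system's clock runs AHEAD of true time
# (so fw1 at -120 is two minutes behind, proxy1 three minutes behind, etc.).
offsets = {"fw1": -120, "proxy1": -180, "web1": 60}

events = [
    ("proxy1", 1000, "outbound request to suspicious domain"),
    ("fw1", 950, "inbound connection allowed"),
    ("web1", 1210, "admin login"),
]

def normalize(events, offsets):
    """Subtract each source's offset to map all timestamps onto one timeline."""
    return sorted((ts - offsets[src], src, msg) for src, ts, msg in events)

for ts, src, msg in normalize(events, offsets):
    print(ts, src, msg)
```

Notice that the raw timestamps suggest the proxy event came after the firewall event by only fifty seconds, but once the offsets are applied the true sequence and spacing are quite different. With NTP in place, the offsets are effectively zero and none of this reconstruction is needed.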
Moving on, we can think about this: now we've studied some instances where, by following the CKC7 methodology, we know that an attacker got in and did some things, and we got some information about that. How do you deal with the process of delaying or degrading these kinds of actions in the future? That's an important consideration. The organization should always be trying to find ways to make incremental improvements in these areas, so that as time goes on, all these processes become more mature, and there should be less time wasted on trying to figure out what to do, because it's already a known methodology and everyone should be well versed in their job functions.
We could start with security awareness training. This is a minimal requirement for most individuals who work in any kind of IT capacity. Sometimes the security awareness training is not very effective, however. It may be the same training for several years in a row, and everyone just knows the answers to the questions, so they skip to the end and get their proof that they took the training. That's obviously not going to benefit the organization as much as having more interactive training, which does change from time to time and requires thinking and problem solving.
We can also consider secure coding practices; that's an obvious choice for any organization that develops their own software in house. Compliance audits and vulnerability scans should also be regularly scheduled activities within most organizations, and occasionally penetration testing as well.
I may have already mentioned the two different triggers for audits, vulnerability scanning, and pen testing. In general, these are time based, so it's an annual thing, or a biannual thing, or a quarterly thing that's being done. The other trigger is event based: there's been a large incident, and now audits, vulnerability scanning, and pen testing may all have to happen again, because something's happened and the organization needs assurance that their controls and their people are performing correctly. That's the best way to get that kind of information.
Every endpoint in the organization, whether it's a server or a workstation or even a mobile device, can be studied as far as its configuration goes. There will be patterns of behavior for its use, and, of course, endpoints should be hardened as much as possible to make it more difficult for an intruder to gain access to begin with. Effectively, you're trying to reduce the attack surface, so that a penetrator just doesn't see as many opportunities to gain the ability to interact with a system in an unauthorized manner.
It stands to reason also that, as I mentioned in the previous section, after reverse engineering malware you're then trying to block command and control, or block malicious websites that perhaps the malware is trying to connect to. These are all standard actions that are taken to better deal with these kinds of threats.
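As a toy example of that last idea, here's a hedged sketch of checking DNS query logs against a command-and-control blocklist. The domains, hosts, and blocklist are invented for illustration; a real deployment would feed this from threat-intelligence data and the resolver's actual logs.

```python
# Hypothetical sketch: flag outbound DNS queries whose domain (or any
# parent domain) matches a known command-and-control blocklist.
C2_BLOCKLIST = {"evil-c2.example", "beacon.bad.example"}

def flag_c2_queries(dns_queries, blocklist=C2_BLOCKLIST):
    """Return (host, domain) pairs that match the blocklist."""
    flagged = []
    for host, domain in dns_queries:
        labels = domain.split(".")
        # Check the full domain and every parent domain against the blocklist,
        # so www.evil-c2.example still matches an entry for evil-c2.example.
        candidates = {".".join(labels[i:]) for i in range(len(labels) - 1)}
        if candidates & blocklist:
            flagged.append((host, domain))
    return flagged

queries = [
    ("10.0.0.5", "www.evil-c2.example"),
    ("10.0.0.8", "docs.python.org"),
]
print(flag_c2_queries(queries))
```

The same matching logic works whether you're alerting on the queries after the fact or sinkholing them at the resolver so the beacon never reaches its server.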