Hi, everyone. Welcome back to the course. In the last module we wrapped up our discussion on operating system forensics, and in this module we're going to talk about network forensics.
So just a quick pre-assessment question: real-time analysis occurs after an attack is complete. Is that true or false?
Alright, so that's false, and the name kind of gives it away there. Real-time analysis would actually happen in real time, while the event is still occurring.
So, network forensics. Basically, as the name implies, this is related to the monitoring or analysis of network traffic, and it's used to discover the source of attacks or other problems. So if a crime is committed and we believe the evidence might be in packets on the network... for example, think of something like child ***.
The bad actor is going out to this child *** website and downloading files.
We may be able to grab some good information about that communication stream from inside the network traffic.
One thing to keep in mind is that network traffic is volatile. Generally, our best bet is going to be real-time analysis if we can get it; however, we can do postmortem analysis as well.
Speaking of both of those: real-time, as I mentioned, means the event is actually still occurring. The attacker is still actively visiting that website, or still hacking our systems, whatever the case might be.
And then postmortem, as the name implies, is after the fact. Nobody dies, hopefully, but postmortem analysis happens after the event is over.
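To make the postmortem idea concrete, here is a minimal sketch (not from the course, and assuming the classic little-endian libpcap file layout) of reading a saved packet capture after the fact. Real tools like Wireshark also handle the byte-swapped magic number and the newer pcapng format.

```python
import struct

def read_pcap_packets(data: bytes):
    """Parse a classic (libpcap) capture file and yield per-packet info.

    Assumes the little-endian magic 0xA1B2C3D4; production tools must
    also handle the swapped magic and the pcapng format.
    """
    # Global header (24 bytes): magic, version major/minor, tz offset,
    # timestamp accuracy, snaplen, link-layer type.
    magic, vmaj, vmin, _tz, _sig, snaplen, linktype = struct.unpack_from(
        "<IHHiIII", data, 0)
    assert magic == 0xA1B2C3D4, "unexpected magic / byte order"
    offset = 24
    while offset + 16 <= len(data):
        # Per-packet header (16 bytes): timestamp sec/usec,
        # captured length, original length on the wire.
        ts_sec, ts_usec, incl_len, orig_len = struct.unpack_from(
            "<IIII", data, offset)
        offset += 16
        payload = data[offset:offset + incl_len]
        offset += incl_len
        yield ts_sec + ts_usec / 1e6, orig_len, payload
```

The point for forensics: once the traffic is written to disk like this, it stops being volatile, and you can analyze it long after the event.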
So, log files as evidence. You just want to understand how we can use log files as evidence, and some of the aspects of doing that.
The Federal Rules of Evidence, or FRE as I've abbreviated there, cover the hearsay rule. Normally,
things are not admissible if they're hearsay. We'll talk about the exclusion in a moment, but if you think about it in that context, you can't really question a log server, right?
You can't go up to it and say, "Hey, were you really at the club on Friday night at 3 a.m. to witness this?"
The log server is just a machine; it doesn't talk to you. So in that context, they had to put in an exclusion for things like that.
The exclusion is for records collected as part of normal business operations. So you have to be able to prove that you always collect logs, that you didn't just collect them on Friday at 3 a.m. at the club, but on Monday, Tuesday, Wednesday, Thursday, Friday, that you've done this forever, and that you also collected them after the event as well.
So basically, and this is not legal advice by any means, the way the logs become admissible is that you have to prove you've been collecting them before, during, and after the event.
You have to be able to produce that documentation, et cetera, and that establishes trustworthiness as well: to show that yes, we are doing it this way, these are the logs, this is the information, and these have not been altered. It goes back to the chain of custody that we've hammered on throughout this entire course,
making sure that this data is the data
that actually came from the logs.
Event correlation. There are a few approaches you want to know here; there are others as well. As I mentioned throughout the course, you have the free notes and everything in the supplemental resources. Download those and study them; they're going to help you immensely for the CHFI exam if you decide to take it. And even if you decide not to take it, they're definitely going to be helpful
for understanding different data points for a career in digital forensics.
So: codebook-based, rule-based, automated field correlation, Bayesian, and also time/role-based. We'll talk about each one of those at a high level.
Codebook-based: as the name implies, a codebook stores sets of events as codes. Think of it like a master codebook, or a cheat-code book for your games.
Rule-based: as the name implies, rules are used to correlate different events.
Automated field correlation: this compares different fields of the data and determines if there's any actual correlation. Almost think of it as an AI type of thing.
Bayesian: this one uses statistics and probability, so make sure you memorize that aspect for your exam. If you ever see a question about which one uses statistics, it's going to be the Bayesian approach.
And then time/role-based: this one monitors user or computer behavior for abnormal activity.
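As a small illustration of the automated field correlation idea above, here is a naive sketch (my own example, not from the course) that buckets events from different log sources by a shared field, such as a source IP, and flags values seen by more than one source. The event shapes and source names are hypothetical.

```python
from collections import defaultdict

def correlate_by_field(events, field="src_ip"):
    """Naive automated field correlation: bucket events by a shared
    field and keep values reported by two or more distinct sources --
    a hint that the events may describe the same activity."""
    buckets = defaultdict(list)
    for event in events:
        value = event.get(field)
        if value is not None:
            buckets[value].append(event)
    return {v: evts for v, evts in buckets.items()
            if len({e["source"] for e in evts}) > 1}

# Hypothetical events from a firewall and an IDS:
events = [
    {"source": "firewall", "src_ip": "10.0.0.5", "action": "deny"},
    {"source": "ids",      "src_ip": "10.0.0.5", "alert": "port scan"},
    {"source": "firewall", "src_ip": "10.0.0.9", "action": "allow"},
]
hits = correlate_by_field(events)  # only 10.0.0.5 appears in both sources
```

Real correlation engines are far more sophisticated, but the core move is the same: line up fields across heterogeneous log sources.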
Network Time Protocol. You basically just want to know what NTP stands for, for the exam. It synchronizes the clocks across all the network devices, syncing them to Coordinated Universal Time, or UTC. Again, UTC is something you just want to memorize: it stands for Coordinated Universal Time.
Memorize that for your exam.
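One detail worth knowing about NTP, beyond the course's exam tip: NTP timestamps count seconds from January 1, 1900, while Unix time counts from January 1, 1970, so correlating NTP-sourced times with log timestamps means applying a fixed offset. A minimal sketch:

```python
# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_TO_UNIX_OFFSET = 2_208_988_800

def ntp_to_unix(ntp_seconds: float) -> float:
    """Convert an NTP timestamp (seconds since 1900, UTC) to Unix time."""
    return ntp_seconds - NTP_TO_UNIX_OFFSET

def unix_to_ntp(unix_seconds: float) -> float:
    """Convert Unix time back to an NTP timestamp."""
    return unix_seconds + NTP_TO_UNIX_OFFSET
```

This is exactly why synchronized, UTC-anchored clocks matter in forensics: without a common time reference, events from different devices can't be ordered reliably.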
So what devices have logs? A whole lot of them, essentially, is the answer: your router, your firewall, your intrusion detection or prevention systems, your honeypots, DHCP, ODBC (which is Open Database Connectivity), et cetera. Essentially almost all the devices on your network are going to have some type of logging,
which presents an inherent challenge, right?
If all of them are sending us logs, what do we do?
So, talking about challenges: again, all of those devices are sending us logs, so we have a variety of logs, and the sources of the data are distributed as well. The data sources also change a lot: depending on what we're plugging into our network, the sources can shift, or even with updates,
a vendor may change the way the logs are being disseminated
from a machine. So all of those things can be very, very fluid.
Sensitivity of data is another challenge. If we're working with, say, classified information, but not all of our systems handle classified data, what do we do? How do we handle that?
formatting of the log data? So, you know, the files could come back at the logs themselves, come with different formatting on the files.
Log fatigue is also a challenge. As a network admin or, excuse me, security engineer or analyst, you're going to be inundated: at a larger company, possibly terabytes of data coming in daily. And it's like, okay, how do I look through all of this? You get log fatigue in the sense that everything sort of jumbles together and starts looking the same. That's where things like AI and custom scripts are important, to try to reduce some of that for you.
And retention of logs. If we're getting terabytes of data daily, where are we storing it? How long do we want to store certain log information? When can we purge it, to clear up space and keep the cost down?
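The retention/purging idea can be sketched in a few lines. This is my own illustrative example, not from the course; the 90-day window and `*.log` naming are assumptions, since real retention periods are usually set by policy or regulation.

```python
import time
from pathlib import Path

def purge_old_logs(log_dir, max_age_days=90, dry_run=True):
    """List (and, with dry_run=False, delete) *.log files whose
    modification time falls outside the retention window.

    max_age_days=90 is purely illustrative; retention periods are
    normally dictated by regulation, contract, or company policy.
    """
    cutoff = time.time() - max_age_days * 86_400
    purged = []
    for path in Path(log_dir).glob("*.log"):
        if path.stat().st_mtime < cutoff:
            purged.append(path.name)
            if not dry_run:
                path.unlink()
    return sorted(purged)
```

Defaulting to a dry run is a deliberate safety choice: in a forensic context you never want a cleanup script deleting potential evidence without review.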
And then centralized logging is one solution to all of this, where we have all the
logging information come to a single central location. Think of a tool like Splunk, for example: we point all our log sources at Splunk, and Splunk gives us that wonderful dashboard where we can go in, set up custom scripts, and have it spit out just the information we actually care about.
Syslog is something else
you can use. Basically, this separates log generation, log storage, and log analysis. It gives you a central repository for logs from printers, routers, et cetera.
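To show what a syslog record actually looks like, here is a minimal sketch (my own example, simplified from RFC 3164) of building one. The priority value is real syslog: PRI = facility × 8 + severity. For production use, Python's standard library already provides `logging.handlers.SysLogHandler`.

```python
import time

def syslog_message(facility, severity, hostname, tag, text, when=None):
    """Build an RFC 3164-style syslog line.

    PRI is facility * 8 + severity; e.g. facility 4 (auth) with
    severity 6 (informational) gives <38>. The timestamp format here
    is simplified (RFC 3164 space-pads single-digit days).
    """
    pri = facility * 8 + severity
    stamp = time.strftime("%b %d %H:%M:%S", time.localtime(when))
    return f"<{pri}>{stamp} {hostname} {tag}: {text}"

# Hypothetical auth-facility info message from a host called host1:
line = syslog_message(4, 6, "host1", "sshd", "login ok")
```

Because every device emits this same simple line format, a central syslog server can accept logs from printers, routers, and servers alike, which is exactly the separation of generation, storage, and analysis described above.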
So just one post assessment question
Routers are the only devices on a network that do not have logs. Is that true or false?
Alright, so that one's actually really easy; it's obviously false, because routers do have logs, and almost every device on the network should have a log attached to it.
So in this video we covered network forensics at a high level, along with some of the key points you want to know for your exam.
In the next module, we're going to go over the investigation of web attacks.