Host & Application Security (part 2.2) Host-Based IDS/IPS



Video Description

This lesson continues discussing host-based software solutions and focuses specifically on host-based intrusion/prevention systems (HIDS/HIPS). This lesson discusses the advantages and disadvantages to this kind of system. Participants also learn about the importance of good spam filters and host security measures and concerns: - Data exfiltration - Asset management

Video Transcription
Now, when we talk about intrusion detection, this is true of network-based systems as well as host-based systems. There are just a few points to make sure you're aware of with any type of intrusion detection system. I mentioned this in an earlier module: all an intrusion detection system is, is a sniffer. It's a glorified sniffer, but what it adds to the sniffing capabilities is an analysis engine.
Whereas a sniffer simply captures packets on the network and allows an administrator to view those packets, an intrusion detection system analyzes what's been captured. There are two ways that an IDS determines whether or not traffic is malicious: it either looks at signatures, or it looks at behaviors.
Now, with antivirus programs we're used to the idea of virus definition files updating. Those definition files are the signature files, because signatures are essentially descriptors of known attacks. So an intrusion detection system, or an antivirus system, that is signature based looks for known patterns; as a matter of fact, sometimes you'll hear these called pattern based. Now, the problem with a signature-based system is that it's not going to be good against a zero-day attack, right? It can't detect what it doesn't know; it only knows what it knows. So signature-based systems are
no good
for zero-day attacks and must be kept up to date.
Certainly those are downsides you have to be very aware of: these systems are only as good as their most recent update.
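To make the signature idea concrete, here is a minimal sketch of pattern-based matching, assuming a toy signature list; the names and byte patterns below are made up for illustration, and real engines use far larger, regularly updated databases:

```python
# Toy signature database: name -> known byte pattern (illustrative values only).
SIGNATURES = {
    "eicar-test": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",
    "fake-shellcode": b"\x90\x90\x90\x90\xcc",  # hypothetical pattern
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the names of all known signatures found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

# A payload containing a known pattern is flagged; anything not in
# SIGNATURES (e.g. a zero-day) slips through unnoticed.
print(match_signatures(b"xx" + b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE" + b"xx"))
print(match_signatures(b"completely novel attack payload"))
```

Notice the limitation the lesson describes: the second payload returns no matches even if it is malicious, because the engine only knows what's in its database.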
So an alternative to that is a behavior-based system. These are sometimes referred to as heuristic
systems, and heuristic means "rule of thumb."
With a behavior-based analysis engine, it usually revolves around setting a baseline: capturing routine baseline performance, maybe over the course of a day or a week or whatever, and then traffic that's out of the norm, that's different from the baseline traffic, would be flagged as an attack.
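The baseline idea can be sketched in a few lines, assuming traffic is summarized as a packets-per-second count and using a simple standard-deviation threshold (the sample numbers and the 3-sigma tolerance are illustrative choices, not values from the lesson):

```python
import statistics

def build_baseline(samples):
    """Baseline = mean and standard deviation of routine traffic samples
    (e.g. packets/sec measured over a day or a week)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, tolerance=3.0):
    """Flag traffic more than `tolerance` standard deviations from baseline."""
    return abs(value - mean) > tolerance * stdev

# Routine weekly measurements (illustrative numbers).
mean, stdev = build_baseline([100, 110, 95, 105, 98, 102])
print(is_anomalous(500, mean, stdev))  # far outside the norm: True
print(is_anomalous(104, mean, stdev))  # within routine variation: False
```

The `tolerance` parameter is exactly the "realm of tolerance" the lesson mentions: set it too tight and you drown in false positives, too loose and you miss attacks.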
The downside there is that you can get a lot of false positives with these behavior-based systems, because, honestly, who can describe exactly what normal network traffic looks like? There's always some sort of variation. Of course there are thresholds of tolerance, but ultimately you're almost assuredly going to get some false positives with behavior-based systems. So there are pros and cons. So what's the answer? The answer is
that most of these products today combine approaches. They have signatures that they pull from for the most common, known attacks, but they also have the capability of looking at traffic and saying, "Wait a minute, this is abnormal behavior." They're also anomaly based, and what anomaly-based systems do is look for a protocol
that is not behaving according to its RFC (Request for Comments).
What's important to know about Requests for Comments is this: if we look at a protocol like TCP/IP, TCP/IP has been around for years, and it has truly become the de facto standard protocol for use on networks today. That doesn't mean it's on every network, but it's pretty much the default. Why?
Well, there are a lot of reasons. First of all, it's an open protocol, and what that means is that Bill Gates never owned it; neither did Novell, and neither did Steve Jobs. This was a protocol originally designed for the government and the military,
but it's been open, meaning that at any point in time you can see the specification, you can see the requirements, you can see the functionality.
Not only that, you can propose additions to that functionality. That doesn't mean you can just recommend a change and it's immediately implemented; of course, there's a submission process and a ratification process and so on and so forth. But with an open protocol, the entire community has access to the specification, and the entire community
can propose modifications,
given the validity of the proposed changes, and that's a good thing. So these anomaly-based intrusion detection systems know the three-way handshake, for instance, and they know it's not a two-way handshake, not a one-way handshake. So for anything that's not behaving according to its RFC,
that anomaly-based system would say, "Wait a minute, this must be an attack."
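The handshake check can be sketched as a tiny conformance test: the TCP three-way handshake per RFC 793 is SYN, then SYN/ACK, then ACK, and any observed flag sequence that starts differently is flagged. This is a deliberately simplified model (real detectors track per-connection state, sequence numbers, and timeouts):

```python
# Expected opening of a TCP connection per RFC 793.
EXPECTED_HANDSHAKE = ["SYN", "SYN/ACK", "ACK"]

def handshake_anomalous(flags_seen: list[str]) -> bool:
    """True if the observed flag sequence does not open with the
    RFC-specified three-way handshake."""
    return flags_seen[:3] != EXPECTED_HANDSHAKE

print(handshake_anomalous(["SYN", "SYN/ACK", "ACK"]))  # conforms: False
print(handshake_anomalous(["SYN", "ACK"]))             # skipped a step: True
print(handshake_anomalous(["SYN/ACK", "SYN", "ACK"]))  # out of order: True
```

Because the rule comes from the protocol specification rather than from attack signatures, this style of detection can catch malformed traffic it has never seen before.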
So what you really get today when you buy an antivirus program is a set of features that protect against spyware, and you're really getting intrusion detection, and often prevention, because these products will block and even terminate attacks. You're getting a lot more, whereas
ten years ago you would have had one program for spyware, one program for
antivirus, and a different product for intrusion prevention. Now, spam filters: I can't say enough about a good spam filter in a production environment. Spam is a lot more than just a hassle. It's a waste of time, it clogs up our mail servers, and it clogs up our inboxes.
It's also a way of distributing hoaxes,
which can be very costly on your network. Hoaxes often ask users to respond, and they respond in such a way that either furthers the clogging up of mailboxes, or does real damage:
"There's a virus! Please go to your C: directory and run the command deltree windows," or whatever.
They encourage users to do silly things, essentially. So spam filters, and there are a million of them out there, are very much a part of network security.
Other things we've got to look out for: data exfiltration. Out of everything that went on with the breach at Target, I think the thing that boggles my mind the most is that nobody noticed the export of 70 million records.
Wouldn't you think that would trigger something, and somebody would go,
"50 gigabytes leaving our network? That seems odd. Maybe we should investigate."
So the idea is that we need means in place to watch not just what comes onto our network, but also what's leaving our network, and through what type of device or mechanism. We call that watching for data exfiltration: as an attacker, if I'm hitting your database, I want what's in that database, and I have to have a way of pulling it off.
We want to watch for that.
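A bare-bones sketch of that egress watching, assuming per-host outbound byte counts and a made-up daily quota (real tooling such as DLP products or NetFlow analysis is far richer than this):

```python
from collections import defaultdict

# Hypothetical alert threshold: 1 GB of outbound data per host per day.
EGRESS_QUOTA_BYTES = 1 * 1024**3

outbound = defaultdict(int)  # host -> cumulative outbound bytes today

def record_egress(host: str, nbytes: int) -> bool:
    """Accumulate outbound bytes for a host; return True if the host
    has exceeded its quota and should raise an alert."""
    outbound[host] += nbytes
    return outbound[host] > EGRESS_QUOTA_BYTES

record_egress("10.0.0.5", 200 * 1024**2)        # routine traffic, no alert
print(record_egress("10.0.0.5", 50 * 1024**3))  # 50 GB leaving: True, alert
```

Even a crude counter like this would have made the "50 gigabytes leaving our network" scenario above impossible to miss.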
Another thing to be concerned with is asset management: the importance of physical security, knowing what your resources are, knowing when they're missing, keeping records of your hardware, and following up. I think that's pretty self-explanatory. It comes down to availability: if something's missing, it's not available.
Up Next

In our online CompTIA CASP training, you will learn how to integrate advanced authentication, how to manage risk in the enterprise, how to conduct vulnerability assessments and how to analyze network security concepts and components.
