Video Description

This lesson covers technical protections, also called logical protections. These are hardware or software controls that allow or prevent access to a resource. There are several technical control classifications; examples include mandatory access controls and discretionary access controls. The lesson also covers protecting the contents of a database through database views, authentication methods, managing biometric systems (and the drawbacks of biometrics), and network access protection methods. Participants also learn about network firewalls, which allow access to a network based on certain factors.

[toggle_content title="Transcript"]
What kind of technical protections might we think about? The terms "logical protection" and "technical protection" are more or less interchangeable: these are basically hardware or software controls that allow or prevent access to some resource.

We start off with mandatory access controls, or MACs. These controls are not discretionary. They're in place, they're always there, and they're mandatory because if you want access to something, you have to deal with them. They are essentially rules that identify which subjects have access to which objects and how that access is defined. Then there are discretionary access controls, or DACs. Here access is typically at the discretion of the data owner, who decides what access is allowed and who has it. Then we have RBACs, or role-based access controls: your job function defines your access level and what you're allowed to do with the resources you can reach. Then we have task-based access controls, or TBACs, where you define the discrete steps someone takes to do their job, call those steps tasks, and control how each task is performed. Then there are ABACs, or attribute-based access controls. These are similar to role-based controls in that the person performing a particular job function has different aspects, or attributes, of that job that can be defined in an access control. As a simple example, if your job is backup operator, one of your attributes might be the ability to handle the storage and retrieval of media.

Then we have controls for applications. We're trying to understand the identity of a user, what kind of authentication is required for them to get access, what they're authorized to do once they have access, and how we establish accountability, meaning we can audit all of their activities and close the loop when there's a problem or other work needs to be done. One common type of application requiring this kind of work is the database. What we can do is define different views within a database. This is typically done by the database administrator or the database designer, not usually the database operator. They look at the business requirements for different roles and different types of users and define the views appropriate for each particular user. The view might be presented through a restrictive interface, meaning you only see the options available to you; other options are grayed out or not visible at all. We also have to think about how security labels might apply in this environment. We can say that data is public, for official use only, private, or classified.
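To make the label idea concrete, here is a minimal Python sketch of a label check, anticipating the read-only, read/write/execute, and no-access outcomes described next. The four labels come from the lesson; the ordering, the specific access rules, and all names are illustrative assumptions, not any particular product's behavior.

```python
# Minimal sketch, assuming labels form a simple ordered hierarchy.
# Label names follow the lesson; everything else is hypothetical.
LABELS = ["public", "for official use only", "private", "classified"]

def access_for(user_clearance: str, data_label: str) -> str:
    """Decide what a user may do with a resource carrying a label."""
    user_level = LABELS.index(user_clearance)
    data_level = LABELS.index(data_label)
    if user_level > data_level:
        return "read/write/execute"  # clearance dominates the label
    if user_level == data_level:
        return "read-only"           # same level: view but don't modify
    return "no access"               # label exceeds the user's clearance

print(access_for("private", "for official use only"))  # read/write/execute
print(access_for("public", "classified"))              # no access
```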
Those labels interact with the user's security level to control what they're allowed to do with the resource: maybe read-only access, maybe read, write, and execute access, or maybe no access at all. We also think about authentication in relation to getting access to applications and interacting with data. First there's identification, which means claiming an identity through some means: I say who I am, and that claimed identity is searched against all the identities known to the authentication mechanism. Authentication, on the other hand, tries to prove an exact match between my identity and who I claim to be. It's more of a one-to-one process; I have to be an exact match, and there are lots of different mechanisms for performing this function. For instance, with login credentials, anybody can claim my identity by knowing my login name, but knowing the password that goes with it is a much more difficult task, so the pair is a better demonstration of identification and authentication at the same time.

We have different authentication types. The easiest way to sum them up is something you have, something you know, and something you are. Something you have could be a badge that you swipe or hold near a badge reader (a proximity badge). Something you know would be a login and a password. Something you are would be a retina scan, a thumbprint, or a palm scan. These attributes can all be used together to provide very strong authentication: the person is who they claim to be, they know the required information, and they have the physical attributes the authentication mechanism requires.

Speaking of physical attributes, that's where biometrics comes in. We have things like fingerprints and palm prints; a palm print looks at the entire surface of your palm, not just one finger or thumb. Then there are retina scans, which look at the pattern of blood vessels at the back of your eyeball. Those patterns can change over time, which is one of the challenges of retina scanning. We also have the iris scan, which looks at the colored area around your pupil, examining the individual patterns, texture, and color that are unique to each person's eyes. Facial scans are also a very reliable technique, for high-end systems anyway. You might even use signature dynamics: someone might forge your signature so that it looks perfect, but they wouldn't produce it with the same speed and pen pressure as the originator, so signature dynamics adds the pressure and speed of the pen as additional factors for authenticating that person's identity. Voice recognition is another good technique for certain situations.

Now, biometric systems can be difficult to manage; there are issues with calibration and expense. First we have to decide whether biometrics are even feasible. It could be that the data we're trying to protect isn't critical enough to warrant the expense and management overhead of a biometric system, or it could be that you're protecting information that's extremely sensitive and money is no object, so you just want the best possible protection; those are the two ends of the scale. Then we have to think about what requirements the biometric system has.
Are we going to use facial features? Are we going to use a palm print and a retina scan in addition to a voice-recognition component? It could be that physical access to the biometric system itself has to be protected as well, so that it can't be tampered with. We need to think about how the biometric system is implemented, and select one that suits our needs and has the features and capabilities required for the security objectives. When you register, or enroll, a user with the biometric system, you capture some information, like a retina scan or a palm print, and that becomes a template for authenticating the user down the road when they want to access some resource. The configuration of the system can be a challenge. Calibration is required, there can be cases where users try to enroll and it doesn't work correctly, and sometimes additional training is needed, or you get help from the vendor to make sure the system works to the expected level. We also need to monitor logging from biometric systems, looking for certain types of events. So a user gets enrolled, they get trained in how to use the system, their data gets captured, whatever that might be: a retina scan, a palm print, or a voice sample. Then some testing goes on to make sure the system is operating correctly, authenticating legitimate users and rejecting everyone else. After a system has been implemented, there's some time period, maybe annual or even more often depending on the kind of system, when it needs to be re-accredited and re-certified to make sure it's meeting the stringent quality standards the organization has defined. Once you decide to get rid of a biometric system, you have to deal with media sanitization and the proper disposal of all the components so that no sensitive information is retained. What you wouldn't want is a record of everyone's fingerprints still sitting in the database that biometric system was using; it would be dangerous for that information to be exposed to an unauthorized person.

So what are some of the drawbacks? One of the most difficult things is the enrollment process. Some people just don't want their iris scanned or their fingerprints taken; they have personal reasons, privacy concerns, and so on. The failure-to-enroll rate, abbreviated FTER, measures how often someone tries to enroll but it doesn't work for whatever reason, or they do it wrong and there are problems. We also have a statistic called the false rejection rate, or FRR: a legitimate user tries to authenticate, but the system rejects them. The false rejection rate is what's called a type 1 error. A type 2 error is the false acceptance rate, or FAR: the system accepts a user it should not. Someone puts their palm down, and even though it's the wrong palm print, the system accepts it as a valid authentication and grants access. That's a really bad thing to happen. Then we have the equal error rate (EER), also known as the crossover error rate (CER). These are two names for the same measurement: the point where the false rejection rate and the false acceptance rate are equal. It's the standard way to quantify how well a biometric system is performing, because tuning the system toward accuracy or toward convenience pushes these two rates in opposite directions.
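The relationship among these rates is easy to see with numbers. Below is a small Python sketch using made-up match scores rather than real biometric data; sweeping the decision threshold shows FRR and FAR trading off, and the point where the two meet is the crossover (equal) error rate.

```python
# Hypothetical sample data: higher scores mean a stronger biometric match.
genuine  = [0.91, 0.85, 0.78, 0.95, 0.88, 0.70]  # legitimate users
impostor = [0.40, 0.62, 0.55, 0.30, 0.72, 0.48]  # impostors

def rates(threshold: float):
    # FRR (type 1 error): legitimate users rejected (score below threshold).
    frr = sum(s < threshold for s in genuine) / len(genuine)
    # FAR (type 2 error): impostors accepted (score at or above threshold).
    far = sum(s >= threshold for s in impostor) / len(impostor)
    return frr, far

# Sweep the threshold; the crossover error rate is where FRR meets FAR.
for t in [0.5, 0.6, 0.7, 0.8]:
    frr, far = rates(t)
    print(f"threshold={t:.1f}  FRR={frr:.2f}  FAR={far:.2f}")
```

Raising the threshold makes the system stricter (FAR falls, FRR rises); lowering it does the reverse, which is why a single number like the CER is useful when comparing systems.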
The crossover error rate should be as low as possible, and in general, the more expensive and sensitive a system is, the lower its crossover error rate will be. So if you're evaluating systems, you might want to look at these parameters to decide whether it's worth spending the money to get the desired level of protection.

So what are some of the ways we can protect our networks? Kerberos is a very popular solution, built into many operating systems, including Windows. It's a single sign-on technology that uses tickets. A user wants to get access to a resource, so they go to the authentication server, which authenticates them through a token or some other mechanism; perhaps they log in to the authentication server to begin their workday. The authentication server then gives them a ticket-granting ticket, which is presented to the ticket-granting server. The ticket issued there is only valid for a short period of time, perhaps on the order of five minutes or less. The user presents that ticket to the resource they want to authenticate to; the resource looks at the ticket, sees that it's valid and still within its time limit, and grants the user access. Kerberos is much more secure than a typical login and password because the password doesn't go across the wire. (A simplified sketch of the ticket-lifetime idea appears after the firewall discussion below.)

We also have to think about the different types of firewalls. Firewalls have a lot of advantages: basically, they allow or disallow access to networks, or across the border between two different security zones. Some disadvantages are that there are ways to circumvent their protections, ways to probe through the firewall to see what's on the other side, and, if the correct firewall type is not being used, certain attacks can bypass the protections altogether. Starting with the first-generation firewall, the packet filter: all it uses are source and destination addresses, the protocol (TCP, UDP, ICMP, and so on), and a port number. That's a very basic type of firewall providing a minimal level of protection (a rule-list sketch also follows below). The second-generation firewall, or application proxy filter, tries to look at the content of the packet: it examines the header and some of the content to make sure the traffic meets the requirements for compliance. Then we've got the stateful inspection firewall, the third generation. It keeps track of all the sessions the firewall knows about. If two systems are communicating, it knows they built their session up properly and keeps that in a table, so it has a historical record to refer back to when it sees other traffic that might appear to be part of a session but isn't. A stateful inspection firewall is therefore more resistant to spoofing attacks. Then we have the fourth-generation firewall, the adaptive response firewall. It might be linked or synchronized with your intrusion detection and prevention systems so it can decide dynamically whether to allow access, based on factors like the volume of traffic or the behavior of the connections involved. And lastly we have the fifth-generation firewall, the kernel proxy. It sits in the kernel of the operating system at the lowest possible level and makes its access decisions based on the processing of information the operating system is doing.
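Here is the promised ticket-lifetime sketch. This toy Python check is not the Kerberos protocol: there is no cryptography, no realm configuration, and no key distribution center exchange. It only illustrates why a short validity window limits the value of a stolen ticket. All names, and the 300-second lifetime derived from the lesson's "five minutes or less", are assumptions.

```python
# Toy illustration of time-limited tickets; NOT real Kerberos.
import time

TICKET_LIFETIME = 300  # seconds; "five minutes or less" per the lesson

def issue_ticket(principal: str) -> dict:
    # In real Kerberos the ticket-granting server issues this, encrypted
    # with the target service's key; here it is just a dictionary.
    now = time.time()
    return {"principal": principal, "issued": now,
            "expires": now + TICKET_LIFETIME}

def accept_ticket(ticket: dict) -> bool:
    # The resource checks that the ticket is still within its window.
    return time.time() < ticket["expires"]

ticket = issue_ticket("alice@EXAMPLE.COM")
print(accept_ticket(ticket))  # True while the short validity window lasts
```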
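And since a basic firewall comes down to checking an ordered rule list, here is a minimal sketch of a first-generation packet filter. The rule format, the addresses, and the helper names are illustrative assumptions, not any vendor's syntax; rules are evaluated top-down and the first match wins.

```python
# Minimal packet-filter sketch: rules match on source/destination address,
# protocol, and port. All rule values here are made up for illustration.
import ipaddress

RULES = [
    # (source,      destination,    protocol, port,  action)
    ("10.0.0.0/8",  "192.168.1.10", "tcp",    443,   "allow"),
    ("any",         "192.168.1.10", "tcp",    23,    "deny"),   # block telnet
    ("any",         "any",          "any",    "any", "deny"),   # default deny
]

def field_matches(rule_value, packet_value) -> bool:
    if rule_value == "any":
        return True
    if "/" in str(rule_value):  # CIDR network match for addresses
        return ipaddress.ip_address(packet_value) in ipaddress.ip_network(rule_value)
    return rule_value == packet_value

def filter_packet(src: str, dst: str, proto: str, port: int) -> str:
    for r_src, r_dst, r_proto, r_port, action in RULES:
        if (field_matches(r_src, src) and field_matches(r_dst, dst)
                and field_matches(r_proto, proto) and field_matches(r_port, port)):
            return action
    return "deny"  # nothing matched: fail closed

print(filter_packet("10.1.2.3", "192.168.1.10", "tcp", 443))    # allow
print(filter_packet("203.0.113.5", "192.168.1.10", "tcp", 23))  # deny
```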
That access control list is what gets checked for the different things that are allowed or disallowed. Now we'll consider some of the different firewall designs. We start with the screened host. This is basically the simplest version of a firewall: a user wants to access a resource, the screened host sits in the middle, and the traffic has to pass through it to reach the resource, so we have just a single host being protected in this scenario. If we move on to a dual-homed firewall, we now have at least two interfaces: one might be a public interface, one might be a private interface, and no routing is enabled between the two. Traffic comes in one interface, gets analyzed against the list of firewall rules, and is then either allowed out the second interface or denied. Then we have the DMZ, or screened subnet. This is what you use when you've got public-facing servers, maybe a web server, an email server, SharePoint, a file server, and multiple interfaces are involved: one coming from the Internet into the DMZ, another from the DMZ out to the Internet, and an interface from the DMZ to an internal network. This provides protection among a semi-trusted network, the internal trusted network, and the public network, so we can broker connections between those three zones.

In some cases we still have to deal with remote dial-up access, so you might have a remote access server, or RAS. Users come in through analog or digital phone lines to connect to a modem and get into the network. These systems were typically phased out in the early to mid-2000s in favor of Internet-based access, but they might still exist, so it's important to understand the considerations. The modem, whether analog or digital, might allow access to the network in a way that circumvents the other security controls, so this is definitely an area the auditor needs to pay attention to, making sure a risk analysis is done and perhaps some penetration testing as well.

Then we have VPN access, which of course is a good replacement for remote dial-up. A VPN creates an encrypted tunnel between two points. It could be host-to-host, host-to-gateway, or even one network to another, which is gateway-to-gateway; it just depends on the requirements. Typically, if you're using a VPN for remote access, it's host-to-gateway: you're at home, and you connect to the gateway at work to get onto the network that way. We have different VPN types. There's the Point-to-Point Tunneling Protocol, or PPTP. There's the Layer 2 Tunneling Protocol, or L2TP, which operates at the data link layer of the OSI model. Then we have Secure Sockets Layer, or SSL, at layer five, the session layer. And there's the IPSec VPN at OSI layer 3, the network layer. Different layers are used depending on the protocol and the requirements for the connection, so there's a lot of variety to choose from. IPSec VPNs are very common. They allow for inbound and outbound traffic, and it could be that your ISP provides the connection for you, so you're basically doing a gateway-to-gateway connection from your ISP to the remote network you're connecting to.
IPSec VPNs offer two different modes. The first is transport mode. In this case the payload of the packet is encrypted, but the header with the addressing information is not. This means you can route those packets the way you would route any other traffic; you can even use network address translation, or NAT. The other mode is tunnel mode, which is more of a point-to-point connection: the entire packet is encrypted, and the encrypted packet is encapsulated in ESP, the Encapsulating Security Payload. Intermediate devices can't route this traffic based on the original header, because that header is hidden inside the tunnel; you go into a gateway and come out the other side, point to point. So there are different requirements for each mode. Just understand that in transport mode the header is exposed, so the packets can be routed.
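To keep the two modes straight, here is a purely conceptual Python sketch of which bytes each mode protects. It is not an IPSec or ESP implementation; the encrypt stand-in and all names are illustrative only.

```python
# Conceptual sketch only: which parts of a packet each IPSec mode protects.

def encrypt(data: str) -> str:
    # Stand-in for a real cipher; it only marks the data as protected.
    return f"<encrypted {len(data)} bytes>"

def transport_mode(ip_header: str, payload: str) -> dict:
    # Transport mode: payload encrypted, original IP header left in the
    # clear, so ordinary routing (and NAT) still works on the packet.
    return {"header": ip_header, "body": encrypt(payload)}

def tunnel_mode(ip_header: str, payload: str) -> dict:
    # Tunnel mode: the entire original packet is encrypted and wrapped in
    # ESP behind a new outer header naming only the two gateways.
    inner = encrypt(ip_header + " " + payload)
    return {"header": "src=gatewayA dst=gatewayB", "body": f"ESP({inner})"}

print(transport_mode("src=10.0.0.1 dst=10.0.0.2", "hello"))
print(tunnel_mode("src=10.0.0.1 dst=10.0.0.2", "hello"))
```
[/toggle_content]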

Course Modules