An Overview of Information Security Program Management

Video Activity

This lesson offers an overview of information security program management and talks about what will be covered over the entire module: - Security architecture - Security models - Evaluation criteria - Certification and accreditation - Technical project management - Cost benefit analysis - Trusted computing/protected access - Vulnerabilities


2 hours 5 minutes

Video Transcription
All right, moving on with our third chapter of Cybrary's Certified Information Security Manager course. Our third chapter is on information security program management: everything from the design and the architecture of a system through its creation,
the models on which we build these systems, how we evaluate the systems, and how we determine that they're ultimately to be certified and accredited,
as well as how we manage this project from beginning to end with security in mind. So there's a lot of good information in this chapter. Now, the first thing I just want to review again very quickly is cost-benefit analysis. In the risk management chapter, we talked about that, right? We talked about the idea that
you don't want to spend more money to secure a system
than the value of what's being protected, right? And we start off by identifying the value of our asset, we look at threats, we look at vulnerabilities, and then we try to find a cost-effective solution. I just wanted to reiterate that
because you have to have an honest understanding of cost-benefit analysis before you start on your design.
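The cost-benefit reasoning referenced here can be sketched in a few lines. This is a minimal illustration using the standard annualized loss expectancy (ALE) terms from the risk management chapter; the dollar figures are made up for the example.

```python
# Minimal sketch of the cost-benefit check described above, using the
# standard risk terms: SLE (single loss expectancy), ARO (annual rate of
# occurrence), ALE (annualized loss expectancy). Numbers are illustrative.

def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Annualized loss expectancy: expected yearly loss from a threat."""
    return single_loss_expectancy * annual_rate_of_occurrence

def countermeasure_value(ale_before, ale_after, annual_cost):
    """Value of a safeguard: risk reduced minus what the safeguard costs.
    A negative result means we would spend more than the asset warrants."""
    return (ale_before - ale_after) - annual_cost

# Hypothetical: a $50,000 asset, breach expected once every two years.
before = ale(50_000, 0.5)   # 25,000 per year without the control
after = ale(50_000, 0.1)    # 5,000 per year with the control in place
print(countermeasure_value(before, after, annual_cost=8_000))  # 12000.0
```

A positive result says the control is worth it; a negative result is the "security guard in front of my house" situation, where the protection costs more than it saves.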
If you'll remember the question I asked in the last module: how much security is enough?
And even though so many people are tempted to say you can never have enough security, you can. The amount of security that we're going to enforce is just enough. What I mean by that is we're going to make absolutely certain that we protect our assets to the degree that is warranted.
We're not going to spend any more money than we have to in doing so, though.
The whole purpose of security is to support our organization, and if we spend too much on security, we're not supporting our company, for the same reason that I don't have a security guard in front of my house and a retina scan to get into my
kitchen, or whatever that might be. We find the balance and we provide the amount of security that's enough,
right? We don't want too much security. Why? Well, because it costs money, but
the other costs associated with security are maybe less tangible, but they're every bit as important to understand. Performance: you will almost always decrease performance when you add security. It takes longer.
You know, every now and then, if I just run out to my car to grab something, I don't want to lock my house door to go out to my car. Why?
Because it takes too long to lock and to unlock for the ten seconds I'm going to be outside, right? Security slows you down.
Is it worth it? Yeah, up until the degree where it's no longer worth it, if you know what I mean. So the idea is, we expect that performance will suffer, and we find that balance between the need for performance
and our need for security. There's always a mix. Now, we don't really go into a lot of discussion here on separation of duties, not in this chapter, but separation of duties is so important because every role in the organization has its own purposes. One of the biggest problems I see in a lot of organizations
is they don't separate the role of network administrator
from security administrator,
because they don't want to spend the money to hire two people with similar skill sets. And they say, why not? Our network person knows how to secure the network. Yes, but your network person's top priority is availability and performance. You know, I did many years as a network administrator. I knew I was doing my job when the phone wasn't ringing. It was quiet.
Everybody had what they needed to have. But as a security administrator, that's the opposite of what my goals are. As a security administrator, I want to make things secure. I want to lock them down. I don't want everything to be quick and easy to get to. So when we think about a network administrator whose primary goal
is performance
versus a security administrator being focused on security, those are cross purposes. So when we do this cost-benefit analysis, we have to realize we will sacrifice performance. It's just the way that it is with security, but we need to make sure we're making a good decision on how much performance we will sacrifice,
because if at some point people can't do the work that they need to do, that's obviously not acceptable.
So the point I want to make is you've got to have an honest discussion about what sort of degradation of performance is acceptable, and you've got to have that discussion with somebody other than the person whose job rests on high performance, if you know what I mean. So
that's another cost: performance. You will almost always lose some performance
to security. Security slows things down.
Okay, ease of use. It is much easier to access things in a non-secure network. If you've ever gone to a friend's house and wanted to get on their WiFi network, for instance, you have that friend who has the 35-character
WiFi password with uppercase, lowercase, alphanumeric, non-alphanumeric, all kinds of stuff.
That's very difficult.
It's hard to use. Can't you just have your WiFi password be "password"? Make it easy, right? People like ease of use. They like things to work right out of the box. That's another cost for security, right? So when you're designing a system,
you have to understand that there will be some tradeoffs depending on the value of what you're protecting.
You may in some cases say, well, this is not the most secure option, but we're doing it for ease of use. You know, take passwords as a whole. It's much more secure to have smart cards or biometrics or any of numerous other, more secure methods than passwords. But why do we use passwords? Because they're easy, because they're cheap.
And let me tell you, as somebody that's been in IT for a long time,
a multitude of decisions come down to those two words. We like cheap and we like easy, and when we get them together, a lot of decisions get made. So even in design, these are things you have to understand in order to meet the requirements of the product.
All right, backwards compatibility. Often as we move forward, we get stronger technology, better technology, faster technology, more secure technology that's not always backwards compatible with older systems and older services. So if we do need this to work in an older environment,
sometimes we have to back off on security in order to coexist,
and that happens in many instances. So a lot of times when we configure a system,
we usually configure it to work in a couple of different modes, where its primary or preferred mode is more secure. But then, in certain situations where that more secure setting is not available, maybe it can back down to a less secure setting. You know, it really just depends. But we have to understand, again as part of system design and architecture,
what our goals are.
User acceptance. Users don't like to have to jump through a lot of hoops. You know, users already have a ton of passwords to keep up with. Think of all the websites users go to, all the different passwords they have to be aware of. CAC cards: that's one more thing I have to carry and not lose and keep up with, and put it in the reader, and remember to take it out of the reader.
I know all this sounds very silly,
but we do have to consider how much responsibility we're going to continue to place on end users, because it's cumbersome for them. So as IT professionals, we want to implement security, but we also want to make this as easy on the users as possible. Once users start to resent the security features we implement,
their next step is to start figuring out a way to bypass them.
Our goal is to keep our users happy whenever at all possible. Okay, so these are discussions that we have to have before the design of the system, before we begin the concepts and the architecture and figure out the goals and how we're going to implement them, because it's an important consideration: how much security is enough?
Just enough, so that we can provide the other services as well.
All right, now, when we start talking about security architecture, this doesn't get really in-depth; this is just very appropriate for managers. But we do have to understand some of the ways that we implement security into the design of certain elements.
We'll talk a little bit about the idea of trusted computing, layering and isolation,
and then boundaries and enforcing these boundaries. So when we design a system, what we want to understand from the start is: are there elements that we consider to be very secure, elements that we consider to be sort of secure, and elements we consider to be not secure at all? Or at least that's how things have traditionally been designed.
A good example of this is to talk a little bit about a concept called ring
architecture. Most operating systems are designed on a ring architecture, and this is purely conceptual. The idea is these rings indicate different layers of trust. Originally, Windows was designed upon a four-ring architecture,
and so, ultimately, another way to think about that was four layers of trust within the operating system.
So the elements that were considered to be part of ring zero, so to speak, are in this little center ring, visually, and this is called the trusted computing base, the TCB.
It would fall in ring zero, and that's a term that comes to us from a book called the Orange Book, the Trusted Computer System Evaluation Criteria.
That's a book that the government used to use as a means of evaluating systems based on their security implementations. So the TCB, the trusted computing base: these are those elements most highly trusted.
You know, if you think about it,
there are some things in your system that have to be beyond reproach, and there are other things that we know just aren't trustworthy at all.
Okay, so, for instance, I don't care how secure your operating system is. I don't care how secure your memory is, how secure this, that, or the other is. If your system BIOS isn't secure,
and I'm able to corrupt that BIOS and cause your system to boot to another location, for instance, none of that other stuff matters. So your system BIOS must be highly, highly trustworthy.
Your processor: if your processor isn't trustworthy,
you don't have a trustworthy system, no matter what you do.
Your RAM, your memory: if that's not trustworthy, you're dead in the water. All of those elements that have the highest trust are considered to be part of the trusted computing base, and architecturally speaking, they're in ring zero. Okay,
these are the most protected elements.
Now there are things that we don't trust quite as much, like maybe file system drivers. I know the slide says device drivers, but I would really think more like file system drivers, like drivers for the NTFS file system if you're a Windows person. Still part of the operating system:
the memory manager, so still fairly trusted elements, but not
as trusted as the TCB. So they're in ring one. At ring two we become a little less trusted again. So: very trusted, a little less trusted, a little less trusted, not trusted at all. At ring two you've seen things like device drivers that provide the interface between your hardware and your operating system. And then at ring three
are applications. Now, applications are not trustworthy at all. I don't know who wrote this application.
I don't know how they tested it. I don't know how it addresses memory issues and so on. So out here are these items that we don't trust at all. So this was how the original Windows operating system was designed: four different layers of trust.
So the principle behind this layering is you also get isolation. You have this little conceptual boundary between ring zero and ring one, then another boundary between one and two. And the idea is less trusted items cannot directly access more trusted items.
Okay? Less trusted can't access more trusted
unless it's given an interface
to travel through.
And that's a principle we're going to talk about a lot in this chapter: the idea of protected access. You know, think about if you work at a bank. You're brand new; you've just been hired as a bank teller. Somebody gives you $10,000 to deposit.
As a bank teller who's been there two weeks, do you have keys to put that money in the vault?
No, hopefully not. But the money needs to go in the vault, and you've got the money. So what do you do? You hand off the money to the bank manager, who's a trusted intermediary or a trusted interface, and he accesses the vault. Okay, so I don't let an application directly access memory
unless it goes through what we call a trusted interface.
And you may have heard the term API, application programming interface. That's what those do: application programming interfaces allow your outer-layer applications to have secured access to inner-layer items. Hopefully that makes sense.
But there's this whole idea that you don't allow untrusted items
to access your precious resources; they must go through an interface. And that comes to us from a security model called Clark-Wilson, and we'll talk about the Clark-Wilson security model in more depth in just a bit.
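As a rough sketch, the bank-teller analogy can be modeled in code: the untrusted caller never touches the vault directly, and all access is mediated by one sanctioned interface that validates the request first. The class names here are purely illustrative, not part of any real model or library.

```python
# Toy model of protected access: an inner-ring resource (the vault) is only
# reachable through a trusted interface (the bank manager), which validates
# every request at the boundary before performing the privileged action.

class Vault:
    """Inner-ring resource; nothing outside should touch this directly."""
    def __init__(self):
        self._balance = 0

class TrustedInterface:
    """The 'bank manager': the one sanctioned path into the vault."""
    def __init__(self, vault):
        self._vault = vault

    def deposit(self, amount):
        # Validation happens at the boundary, before the privileged action.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._vault._balance += amount
        return self._vault._balance

teller = TrustedInterface(Vault())
print(teller.deposit(10_000))  # 10000
```

The same shape shows up in real systems as APIs and system calls: the outer layer asks, the trusted layer checks and then acts.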
Okay, so this idea of boundaries and layering and isolation: operating systems are built on these concepts to separate out trusted from untrusted. Now, Windows functions on simply a two-ring model. You're either trusted or you're untrusted.
You know, let's forget these little subtle layers of trust: you're either fully trusted or you're not trusted at all.
And so really, that forces anything from those outer rings
to be secured in their function and in their methods to access inner-ring items, and that really is a better measure. It's interesting: early on, before they decided on the four-ring architecture, the Windows architects originally wanted to have a 64-ring architecture,
which I think is kind of interesting. It gives you a little insight into
the mindset of software developers: let's have 64 different layers of trust. Ultimately they settled on four, and now they've simplified that even more with two layers. Okay, so this idea of isolation and protection happens not just with the ring architecture of the operating system,
but also for processes
as well.
You know, just to give you a couple of definitions real quick (they're not on the slides, but just to give you this idea): I go out and I buy an application. I go out and buy Microsoft Word, for instance. Okay, that's a program. So a program and an application, those are synonymous.
If I open Microsoft Word, now it gets loaded into memory. It becomes a process.
And then every individual instruction within Word is a thread, like printing or changing orientation; those are each threads. So a program is just an application; you open that program up, and now it's running in memory, so it's a process; and individual instructions are threads. Those are just a couple of definitions I would
want you to have.
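These definitions can be shown in a few lines. Here is a small Python sketch demonstrating that multiple threads run inside one and the same process; the thread names are just stand-ins for the "printing" and "changing orientation" instructions mentioned above.

```python
# One program loaded into memory is one process; the threads inside it are
# its individual streams of execution, all sharing that process's memory.

import os
import threading

def task(name, results):
    # Each thread records the id of the process it is running in.
    results[name] = os.getpid()

results = {}
threads = [threading.Thread(target=task, args=(n, results))
           for n in ("print", "orientation")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every thread reports the same process id: one process, many threads.
print(len(set(results.values())))  # 1
```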
But every process needs its own set of resources, and really, processes are a lot like children. You know, if you've ever had to babysit two kids, or if you have two or more kids at home: I actually took my niece and my nephew out swimming a few weeks back,
and they were standing on opposite ends of the pool doing totally different things, both of them screaming,
"Look at me, look at me, look at me."
That's how processes are. Processes want to pretend they're the only process on your system. They don't want some other process stealing their limelight, stealing their resources, stealing their glory. So if we're going to allow these two processes to exist at the same time, we have to find a way of isolating them.
We're almost tricking processes into thinking they're the only ones
on the system.
So we do that by isolating them and giving them each their own stack of memory.
You might hear it called the stack, or a set of memory. Each one has its own buffer of memory, if you will. They have their own time with the processor; they have their own time with configuration files. Again, kind of like kids:
each of the children wants their own space. They want their own resources, their own toys, their own attention.
So that's another element of isolation that has to be built into a system's design: a means for the operating system to allow multiple processes to run, but still allow them to run independently so they don't interfere with each other. You can get a lot more in-depth with those ideas, but again, we're going to keep this at the management level.
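The "own set of memory" point can be demonstrated directly. In this sketch, a child process is launched as a separate Python interpreter; its variable is a private copy in its own address space, so the parent's variable is untouched. The variable name is purely illustrative.

```python
# Process isolation in action: the child process has its own memory, so the
# value it assigns to 'counter' never touches the parent's 'counter'.

import subprocess
import sys

counter = 41

# The child runs in its own interpreter, i.e. its own address space.
child = subprocess.run(
    [sys.executable, "-c", "counter = 99; print(counter)"],
    capture_output=True, text=True,
)
print(child.stdout.strip())  # 99  (the child's private copy)
print(counter)               # 41  (the parent's memory is untouched)
```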
All right, so:
vulnerabilities. When you have these systems, there are many different types of vulnerabilities that are actually attacks, or allow for attacks, on the actual architecture itself. One of the most common attacks that really exploits the system's architecture is an attack called the covert channel.
Now, a covert channel is exactly what it sounds like: a hidden channel. It's a hidden path for communication. Really, it's a means of communicating between processes across a path, sometimes in a manner that wasn't intended.
Now, there are two types: a covert storage channel and a covert
timing channel.
A covert storage channel is about where the data is placed, where it's stored. So, for instance, there was an attack a while back called the Loki attack: L-O-K-I, like Thor's brother in the movies. The Loki attack
used the ICMP header space, the space in an ICMP header, to store and transmit data. That's not where data goes. It doesn't go in an ICMP header; there's a specific part of a packet that's designated for the data, the payload.
So because data was stored somewhere that wasn't designed for the storage of that data,
that was a covert storage channel.
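To make the idea concrete, here is a hedged sketch (not the actual Loki tool) of tucking data into ICMP echo header fields, where data was never meant to live. It only builds bytes in memory and sends nothing; the choice of the identifier field is just one plausible hiding spot.

```python
# Covert storage channel sketch: two secret bytes are smuggled inside the
# identifier field of an ICMP echo header instead of the payload, where
# data is supposed to go. This constructs bytes only; nothing is sent.

import struct

def build_icmp_echo(hidden):
    """Pack two hidden bytes into the identifier field of an ICMP echo
    header. Layout: type (8 = echo request), code, checksum, id, sequence."""
    identifier = int.from_bytes(hidden[:2], "big")
    return struct.pack("!BBHHH", 8, 0, 0, identifier, 0)

def extract_hidden(packet):
    """The receiving side of the channel: read the bytes back out."""
    _, _, _, identifier, _ = struct.unpack("!BBHHH", packet[:8])
    return identifier.to_bytes(2, "big")

pkt = build_icmp_echo(b"hi")
print(extract_hidden(pkt))  # b'hi'
```

A detector looking only at payloads would see an ordinary, empty ping, which is exactly what makes the channel covert.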
Now, a covert timing channel is even more sophisticated, because what that does is one process communicates to another process through the modulation of system resources. So what that means is one process
might spike processor utilization
up to 100%,
then drop it down, then spike up, down, up, down, up, down, according to a certain predefined pattern.
So it's using the processor almost like Morse code to communicate with another process. That's a high-end, pretty sophisticated attack, but that's called a covert timing channel.
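The spike-and-drop pattern can be modeled deterministically: replace real time slots with a list, where 1 means "utilization spiked" and 0 means idle. A real covert timing channel would modulate actual CPU or disk load and the receiver would measure it; this toy version only shows the Morse-code-style encoding.

```python
# Deterministic toy model of a covert timing channel: the sender encodes a
# message as a sequence of busy/idle "utilization slots", and the receiver
# recovers the message by reading those slots back.

def send_bits(message):
    """Encode each character as 8 slots: spike = 1, idle = 0."""
    return [bit
            for ch in message.encode()
            for bit in map(int, f"{ch:08b}")]

def receive_bits(slots):
    """Group slots back into bytes and decode the hidden message."""
    chunks = [slots[i:i + 8] for i in range(0, len(slots), 8)]
    return bytes(int("".join(map(str, c)), 2) for c in chunks).decode()

slots = send_bits("hi")
print(receive_bits(slots))  # hi
```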
Other types of attacks on system architecture: maintenance hooks. These don't start out as being an attack. A lot of times a developer or programmer will leave a little maintenance hook, which is a quick way in, so that they don't have to go through all the processes of authentication; just a quick way that they can jump into the code to make a change.
But if you leave that maintenance hook in there and it's still available, someone can exploit it.
Now, race conditions.
Race conditions, as in "it's a race to the finish line," are attacks on the timing of a system,
and there are lots of different types of race attacks. You know, if you just look at how a system should work generally:
I should identify to that system with my user name.
I will authenticate using a password.
Once I've authenticated and my password's been verified, now I'm authorized to perform certain activities and actions, right? So: identify, authenticate, authorize.
But if I can break the architecture of a system and cause the authentication process to slow way down
and speed up authorization,
then I might be able to get authorized based on the user name alone, rather than knowing a password. That's a race condition. Anything that works on the timing of system events is a race condition.
There's another type of race condition, a specific one called TOC/TOU, and that stands for time of check, time of use. So what happens with time-of-check/time-of-use attacks?
Let me give you an example. Let's say that I have five $20 bills,
okay? And you have a $100 bill.
Man, I'm tired of carrying around all these twenties. If I give you five twenties, will you give me the $100 bill? And you say yes. So I make a big production: 20, 40, 60, 80, 100. I count that money out and I put it on the desk.
You turn around to get the $100 bill out of your bag, and when you do that, I grab one of your twenties and tuck it back in my pocket.
By the time you give me the $100 bill and take what's on the table and put it in your pocket, I've created a variance between when you verified the money was there and when you turned back around and were actually able to use the money. So there's a difference between when you check and when you use.
Well, in this example, the way to keep that problem from happening is, as soon as I put down 20, 40, 60, 80, 100, you grab that money and stick it in your pocket, right?
The same is true with software. This might happen when a process verifies that a configuration file is accurate and complete
but doesn't immediately use that configuration file. If it checks the file, goes and does something else, and then comes back half an hour later to use it, there's a big window of time in which a variance could be created. So the moral of the story with TOC/TOU attacks is: don't allow a lapse between when something is checked and when it's used.
This is system architecture design that keeps this from happening.
So for you as an end user, you as a manager, this isn't a configuration setting, but it's so essential that our architects understand these types of attacks and these types of conditions, because we need to make sure that we eliminate, as much as possible, any of those lapses that would allow an attacker
to have an advantage, or to have an opportunity to create an exploit.
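In code, the check-then-use gap looks like this. The vulnerable version leaves a window between the existence check and the open, during which the file can be swapped; the safer pattern collapses check and use into one operation, which is the "grab the money immediately" advice from the example. File names and contents are illustrative.

```python
# Time-of-check/time-of-use in miniature: the vulnerable reader checks the
# file, leaves a gap, then uses it; the safer reader opens immediately and
# handles failure, so check and use are a single operation.

import os
import tempfile

def read_config_vulnerable(path):
    if os.path.exists(path):          # time of check
        # ...window here: an attacker could swap the file before the open...
        with open(path) as f:         # time of use
            return f.read()
    return ""

def read_config_safer(path):
    try:
        with open(path) as f:         # check and use collapsed into one step
            return f.read()
    except FileNotFoundError:
        return ""

with tempfile.NamedTemporaryFile("w", suffix=".cfg", delete=False) as tmp:
    tmp.write("timeout=30")
    path = tmp.name
print(read_config_safer(path))  # timeout=30
os.unlink(path)
print(read_config_safer(path))  # (empty string: file gone, handled safely)
```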
The rest of what's on this slide is just another idea about secure architecture: a system should be designed to fail in such a way that its resources are secure. In a little while, we're going to talk about security models, which are concepts on which we build or design a system,
and there's a security model called the secure state model.
Ultimately, what this model says is: if a system starts securely,
and if a system runs securely,
and if a system fails or shuts down securely, then it's secure.
Now, I know that kind of sounds like a no-brainer, and in some ways it is. But what's so important to understand about that is: unless a system is secure in all of those states, then it's not secure at all.
So again, I don't care how secure your operating system is and all the security mechanisms and features you have: if I can compromise your BIOS and force you to start insecurely, then you're not a secure system,
right? Or if I can create a failure so that your system fails in such an unexpected way that it leaves all the doors open, so to speak, then it's not a secure system.
So, according to the secure state model, we need to make sure that we implement design strategies to secure during startup. And startup is actually the hardest time to secure a system, because all the security mechanisms haven't been loaded yet.
So how are you going to secure a system where the security mechanisms haven't been loaded? What do you do?
Right. And then we have to function securely, and then we have to shut down securely. Now, when we do talk about shutting down securely, what we mean is that the system is able to fail, even unexpectedly,
in such a manner that all of its processes are closed so that no further breach can happen. That is actually called maintenance mode. Maintenance mode is when the system fails in such a way that its processes are not able to be further accessed.
No further compromise can happen. And that's the goal.
We refer to that as being fail secure. Now, fail secure in this context is very different from what fail secure means in physical security. Okay, you might also hear this called fail safe. Don't confuse fail safe or fail secure
for a system with fail safe or fail secure in physical security, like with automatic door locks, something like that. It's totally different.
So when we say fail safe or fail secure, we mean that the system fails into maintenance mode, so no further breach can happen.
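As a final sketch, the fail-secure/maintenance-mode idea can be expressed as a coding pattern: on an unexpected failure, close every open resource before letting the error propagate, so the system lands in a closed state rather than an open one. The SecureService class and its attributes are hypothetical, purely for illustration.

```python
# Fail-secure pattern: an unexpected failure drives the system into a
# "maintenance mode" where sessions are closed and no further access is
# possible, instead of leaving the doors open.

class SecureService:
    def __init__(self):
        self.open_sessions = ["alice", "bob"]
        self.mode = "running"

    def fail_secure(self):
        """Enter maintenance mode: close everything so no breach can follow."""
        self.open_sessions.clear()
        self.mode = "maintenance"

    def process(self, request):
        try:
            if request == "bad":
                raise RuntimeError("unexpected failure")
            return "ok"
        except Exception:
            self.fail_secure()   # fail into the closed, secure state
            raise                # then surface the error

svc = SecureService()
try:
    svc.process("bad")
except RuntimeError:
    pass
print(svc.mode, svc.open_sessions)  # maintenance []
```

The physical-security meaning is the opposite trade-off (a fail-safe door unlocks so people can get out), which is why the instructor warns not to confuse the two.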
Up Next
Enterprise Security Architecture

A framework for applying a comprehensive method of describing the current and future structure for an organization's security processes so that they align with the company's overall strategic direction.
