So when we're talking about secure design, of course, we have to think about risks associated with the design process in and of itself. One of the risks that we would consider is code reuse. You know, millions of lines of code go into
various operating systems, you know, Windows Vista, I think, has over 60 million lines of code.
And if we're recreating the wheel every single time, obviously that's very cumbersome and very burdensome. So it's much easier to copy, paste, and take advantage of code that's already written. But the considerations there are: Was it well written? Is it well suited for the environment in which we're going to use it?
You know, what are the elements that we might be bringing over with us that we don't intend? So anytime we're reusing instead of creating from scratch, we're taking advantage of what's there as a time-saving technique. But we may be getting more, or even less, than what we bargained for.
Also, with design, we have to think about the idea of flaws versus bugs.
And a lot of times those terms are used interchangeably. I don't really have a big problem with that out in the field; I think a lot of times people do consider them to be the same thing. But really, there is a difference. And the difference is that a flaw is an inherent fault with the code: the code has an internal vulnerability.
A bug has to do with how it's implemented. So if it's implemented in an insecure manner, then that's a bug. If the code is inherently weak, then that's a flaw. So any time you implement something
in a vulnerable environment, like maybe I put something out in the DMZ of my organization and there are no other protective mechanisms, we look at that as being a bug: it's improperly implemented. Whereas if the code itself doesn't do input validation,
that's more of a flaw.
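To make that distinction concrete, here's a minimal sketch in Python (the `db.query` interface and function names are hypothetical, just for illustration). The first version carries a flaw, an internal weakness that travels with the code to whatever environment it's deployed in; the second removes it by validating its input:

```python
import re

def get_user_record_flawed(user_id, db):
    # Flaw: the code itself never checks user_id, so a crafted value
    # like "1 OR 1=1" flows straight into the query string, no matter
    # how well protected the surrounding environment is.
    return db.query("SELECT * FROM users WHERE id = " + user_id)

def get_user_record(user_id, db):
    # The input validation lives in the code itself: only digit
    # strings are accepted, so this weakness is gone wherever the
    # code is deployed.
    if not re.fullmatch(r"\d+", user_id):
        raise ValueError("user_id must be a string of digits")
    return db.query("SELECT * FROM users WHERE id = " + user_id)
```

Deploying the flawed version behind extra protective layers would be compensating for a flaw; deploying the validated version with no other safeguards in an exposed spot would be a bug in implementation.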
All right. Other things to think about with our design of software: do we use open source or closed source? And this is really an argument that's as old as time, or at least as old as computing. What are the benefits of open versus closed?
Well, the folks in the open source community say: Look, if we make our code open, we open it up to the entire computing community, the entire cryptographic community, the entire IT community. They can look at our code.
Yes, they can break it, but they can also help us put it back together
stronger. You know, we think about the idea of peer review, and that is very beneficial.
But I want to stress: sometimes we say open code is better, or closed code is better, and really, well-written code, whether it's open or closed, is best. Just because it's open code doesn't mean that it's more secure. If you look at the compromise with OpenSSL,
you know, it was OpenSSL, so we assumed, hey, it's better protected because it's out there for peer review,
except there really wasn't any peer review happening. Open source doesn't mandate that peer review happen; it just makes peer review possible. So OpenSSL was essentially poorly written, but the peers that were supposed to do the review never did. Making something open does not guarantee that it's more secure.
Now, the premise behind closed design is: if you can't see it, you can't break it.
Why would I let you see my code? Because once you know my code, then you'll know its vulnerabilities and you'll know how to attack it. Now, what we're doing with closed design is something called security through obscurity,
security through obscurity. And again, it's that belief that if you can't see it, you can't break it.
And we know that's not true. It's like disabling your Wi-Fi SSID broadcast as protection: just because you can't see my network's name doesn't mean you can't break into my network, and we know that that's not true. There's a lot more to
attacking a system than just what's visible and what's there. So the premise that if I hide it, it's indestructible is a very faulty premise. But again, I want to stress that whether it's open or closed doesn't inherently make software more secure. If it's open code, we have the potential for greater security, because we have the potential for a larger community to investigate and to improve the code.
When it's closed, we may not have that peer review, but maybe we've done thorough peer review internally, and that may help us. And I'll tell you, the government tends to go with the side of closed design,
you know, for encryption algorithms, for instance. If you're gonna crack my cryptography, you need to know the algorithm and the key.
Why would I give you either of those pieces of information? I'm not gonna tell you my algorithm or the key; that's making it harder for you.
The cryptographic community, for the most part, believes in a principle called Kerckhoffs's principle.
And what Kerckhoffs's principle essentially says is: you've got an algorithm and a key. Let one of those be open; you don't have to protect them both. And usually it's the algorithm that's known and the key that's kept secret.
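A quick sketch of that principle using only Python's standard library: HMAC with SHA-256 is a completely public, well-documented algorithm, and all of the protection rests in the secret key.

```python
import hashlib
import hmac
import secrets

# Kerckhoffs's principle in practice: everyone knows the algorithm
# (HMAC-SHA-256); the key is the only thing we keep secret.
key = secrets.token_bytes(32)
message = b"transfer $100 to account 42"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# An attacker who knows the public algorithm but not the key cannot
# produce a matching tag for the same message.
wrong_key_tag = hmac.new(secrets.token_bytes(32), message,
                         hashlib.sha256).hexdigest()
print(tag == wrong_key_tag)  # False: same public algorithm, wrong key
```

The design benefit is practical: a leaked key can be rotated in a moment, while a leaked secret algorithm would force you to redesign the whole system.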
So there are just different schools of thought, open versus closed design. I would tell you most people in the security field prefer open design. It's easier for code review and peer review, and it has the benefit of having more eyes on it.
I just want to stress to you that it's neither open nor closed that makes a product secure; it's how well it's written and tested.
All right. Now, when we talk about controls evaluation, we're looking at the controls that have been implemented. We want to find out if they're efficient, and we want to find out if they're effective.
Economy of mechanism means, again going back to that efficiency of our controls, making sure that we're providing just the amount of security that's necessary, based on cost-benefit analysis. We also consider psychological acceptability, and basically that becomes important in a couple of different ways.
At some point in time,
users will become frustrated with security.
You know, I've worked on a lot of military bases, and if I go to a base and I'm in line for 30 minutes before I can get onto the base because of their security requirements, that's very frustrating to me. Now, that doesn't mean they're gonna change their policy at the gate. But it does mean that we have to consider
how frustrated are we making our users?
Because when we frustrate our users, they'll do one of two things. They'll leave and they'll go somewhere else where things aren't so frustrating or they'll find a way to bypass security.
Now, I will never forget this. I was training a class of nurses at Wake Medical Center in Raleigh, North Carolina. It was years ago, in the mid to late nineties, and I was training them on going from Windows 3.1 to Windows 95.
And so we really spent the better part of the morning discussing the miracle of the right mouse click. This was not a really savvy group of computer users, but back in the nineties, many people weren't. I'm not making any judgment with that; I'm just trying to say these weren't hackers. These were folks who were new to computers, and they were kind of learning the steps.
Now, at lunch, I needed to go to eBay. I was totally goofing off at lunch; I can't remember if I was buying something or selling something. I just wanted to go to eBay and check it out.
So I went to eBay and I got a message up on my screen that essentially said the hospital's proxy server had blocked me from accessing this site. I don't care, I just won't go; I'll check it out at home. I didn't know the hospital cared if I went to eBay,
but as I got that message, one of the nurses that I'd been working with, you know, helping her with right click versus left click, walked by and said, "Here, go to www.proxy7.com."
And that would take me to a proxy that I could filter my request through
to trick the hospital's proxy into letting me access eBay's website.
Now, in all seriousness, this is a woman who did not know how to right-click a mouse, but knew how to bypass her hospital's proxy server.
So what that tells us is: if security is too cumbersome for our users, they will find shortcuts. I worked at another place where, in order to track incident responses, the user who wanted to report a potential issue to the support team
had to go through, I believe it was somewhere around
18 different screens to actually get to the location where they would enter the material information about the incident they were reporting.
So basically, I've got to go through 17 or 18 different screens until I get to the screen that's meaningful to me. You know what I'm gonna do? I'm just not gonna use that software. I'm gonna send an email to my support staff and say, hey, here's a problem I'm having, can you take a look at it?
Because once things become too cumbersome for our users, they'll find shortcuts. So we want to make sure that the controls we put in place do have that psychological acceptability.
And don't forget, when we're making our considerations for design, think about the CIA triad, but also think about the triple A: authentication, authorization, and accounting.
And then don't forget the secure design principles. We covered these under the tenets of security: things like
the principles of least privilege and good enough security, defense in depth, and avoiding single points of failure. So these we've already talked about, but again, this is just a little refresher.