Security Architecture & Design (part 3.2) Integrity Models



2 hours 5 minutes
Video Description

For this subsection we discuss in depth the main integrity models of the day. We begin with the Bell-LaPadula confidentiality model as an assured component for the CISSP exam. We'll discuss its three (3) property rules and what access and permissions they grant to users, the Biba integrity model and how its trust life cycle works, and finally the Clark-Wilson integrity model, understanding its basic premises and how they play into user access of network resources.

Video Transcription
So let's talk about the one I think is most likely to be on the exam, and this is Bell-LaPadula.
And we have to understand that Bell-LaPadula was designed to be a confidentiality model, because it was designed for systems protecting state secrets. So obviously confidentiality is going to be the greatest concern here.
And ultimately, what this model is going to do is provide for confidentiality
through three rules, and there's actually even a fourth that I'll mention here. It has a simple security property, a star security property, and a strong star property.
Now, what you want to understand about Bell-LaPadula is you can implement any of these rules: one of the rules, all of the rules, none of the rules. The strong star property
is a very comprehensive property, where the simple property and the star property are a little less so. So it's not like you're applying all three of these rules to the same system.
The thing to understand is, don't try to break your brain too much real-worlding some of these security models,
because even though you'll see elements of them in practice, you're very rarely going to see all the rules on a single system.
Okay, so the first rule of Bell-LaPadula is the simple security property. When you see a rule or a property with "simple,"
you'll know that simple is always going to talk about read access for a system.
When you see a property or a rule with "star," like the star security property, star is always about write access. So simple is read, star is write. And the way I remember that is the phrase "it's written in the stars." So if you think about that phrase, it'll help you remember.
All right, the simple security property for confidentiality should make sense: no read up.
Well, I've got secret clearance; I can't read top secret data. That makes perfect sense.
All right, the star security property says no write down, and no write down prevents somebody at an upper level from leaking secrets down to lower levels of clearance. So if I have top secret knowledge, writing documents or storing documents down below could leak, or could cause confidentiality problems.
Again, it's not all or nothing. You might apply just the simple property, or just the star property might be applied.
Now, the strong star property says no read or write, up or down, so that's very comprehensive. What that model says is: stay where you are.
And then there's a fourth property called the tranquility property with Bell-LaPadula, and the tranquility property says security labels, which are the foundation of Bell-LaPadula, cannot be arbitrarily changed.
Of course not. I can't arbitrarily give myself top secret clearance instead of secret. So that's just a very basic property. And that's Bell-LaPadula. Most people feel like Bell-LaPadula makes sense, because they get the idea of confidentiality.
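The Bell-LaPadula rules above can be sketched as a few simple comparisons. This is a minimal illustration, assuming a numeric ordering of labels; all names here are made up for the example, not from any real API.

```python
# Illustrative Bell-LaPadula checks: higher number = more sensitive label.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def simple_security(subject: str, obj: str) -> bool:
    """Simple security property: no read up."""
    return LEVELS[subject] >= LEVELS[obj]

def star_property(subject: str, obj: str) -> bool:
    """Star (*) property: no write down."""
    return LEVELS[subject] <= LEVELS[obj]

def strong_star(subject: str, obj: str) -> bool:
    """Strong star property: read/write only at your own level."""
    return LEVELS[subject] == LEVELS[obj]

# A subject with secret clearance cannot read top secret or write confidential:
print(simple_security("secret", "top_secret"))   # False: no read up
print(star_property("secret", "confidential"))   # False: no write down
```

Notice each rule is independent, which matches the point above: a system might enforce only one of them.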
So if Bell-LaPadula makes sense, that's great, because we're going to turn it on its head for Biba,
and the reason we're going to reverse it almost exactly backwards is because Biba doesn't care about confidentiality. Biba is not about protecting secrets. I want you to think about Biba more in the academic world.
Okay, think about it as protecting the knowledge base, the sanctity of the knowledge base, and you can sum up Biba by saying "down data is dirty."
Down data is dirty. So what Biba says is, it has a simple integrity axiom,
and the simple integrity axiom says no read down.
Why? Because that'll poison your knowledge base.
If I were writing a paper for my thesis
and I sent it to the professor and the professor sees that I'm quoting Wikipedia
that affects the integrity of that paper.
So I should go to more trusted knowledge base in the academic world.
So no read down.
Can I read up? Absolutely. That's what's encouraged.
Biba also says no write up. Why? Because I'm going to pollute or poison the knowledge base above me.
So this is about knowledge. It's about keeping the integrity and the accuracy of knowledge. Don't read down, because that's less trusted. Don't write up, because you're less trusted. If that makes sense, then the invocation property says no read or write above.
Hey, kind of like the strong star property, but it doesn't address what you can do below.
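Biba's rules are Bell-LaPadula's comparisons flipped. Here's a minimal sketch under the same assumptions as before, with an illustrative trust ordering (names invented for the example):

```python
# Illustrative Biba checks: higher number = more trusted knowledge source.
TRUST = {"random_blog": 0, "wikipedia": 1, "peer_reviewed_journal": 2}

def simple_integrity(subject: str, source: str) -> bool:
    """Simple integrity axiom: no read down (don't consume dirtier data)."""
    return TRUST[source] >= TRUST[subject]

def star_integrity(subject: str, target: str) -> bool:
    """Star integrity axiom: no write up (don't pollute a cleaner base)."""
    return TRUST[target] <= TRUST[subject]

# The thesis example: a journal-level paper shouldn't cite Wikipedia,
# but Wikipedia may happily cite the journal.
print(simple_integrity("peer_reviewed_journal", "wikipedia"))  # False
print(star_integrity("wikipedia", "peer_reviewed_journal"))    # False
```

Compare the inequality directions against the Bell-LaPadula sketch: integrity flows exactly backwards from confidentiality.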
Now, the Clark-Wilson security model. A couple of phrases to associate with Clark-Wilson: well-formed transactions,
separation of duties,
forcing collusion.
Those are all ideas from Clark-Wilson. This idea of well-formed transactions goes back to the idea that I can't trust a user to make a well-formed transaction,
and with databases, for instance, garbage in garbage out.
So how can I get a user to make a well-formed transaction if that's not within the user's capability?
Well, I force that user to use an interface, and I write that interface well so that it can only create well-formed transactions. Users go through an interface to access precious data or precious resources, right?
So the access triple is: user goes through an interface to access trusted resources.
But of course, we can't use that basic language; we have to make it a little more confusing. The user goes through something called the TP,
and the TP is a transformation procedure. That's your interface,
to access a constrained data item, your trusted resource. That's your precious data. So the access triple is: a user goes through a transformation procedure to access your constrained data items.
separation of duties, Clark Wilson says. Ah, if you'll remember the layered architecture, er, an untrusted application can't access trusted memory without what an interface on application programming interface. That's Clark Wilson
bank teller can't access the bank vault, they go through the bank manager. Do you see how that's forcing collusion and that separation of duties? So Clark Wilson is a good thing
And three basic integrity goals: we're going to keep unauthorized users from making any modifications,
we're going to keep authorized users from making improper modifications, and we're going to try to make sure that we maintain internal and external consistency.
If I look in the database and it says I have three widgets on the shelf, I want to be able to go to the shelf and see three widgets.
So those are the three goals of integrity, and Clark-Wilson gives me that.
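The access triple can be sketched in a few lines. This is an illustrative example, not a real Clark-Wilson implementation: the class, users, and widget counts are all made up. The point is that the user never touches the constrained data item (CDI) directly; every change flows through a transformation procedure (TP) that only permits well-formed transactions.

```python
# Illustrative Clark-Wilson access triple: user -> TP -> CDI.
class InventoryCDI:
    """The constrained data item: the widget count on the shelf."""

    def __init__(self, widgets: int = 3):
        self._widgets = widgets  # internal state, kept consistent with reality

    def adjust_stock(self, user: str, delta: int) -> int:
        """The TP: the only sanctioned interface for modifying the CDI."""
        if user not in {"teller", "manager"}:   # only authorized users may run this TP
            raise PermissionError(f"{user} may not run this TP")
        if self._widgets + delta < 0:           # enforce well-formedness: no negative stock
            raise ValueError("stock cannot go negative")
        self._widgets += delta
        return self._widgets

inv = InventoryCDI()
print(inv.adjust_stock("teller", -1))  # 2: a well-formed transaction
```

Garbage in, garbage out is prevented because the TP validates every change, and the `_widgets` attribute is never written to directly.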
Now, the final model that I think is of the utmost importance is the Brewer-Nash model. I'm pretty sure you'll see something about it on the test; they like to ask about these four. The whole purpose of the Brewer-Nash security model is to prevent conflict of interest,
and we're only going to see this in very particular types of databases.
So in a database that houses competitors' information, you're going to see Brewer-Nash controls put in place. For instance, if I work for the FDA, I have information on a lot of different pharmaceutical companies, and I shouldn't be able to go to 20 different pharmaceutical companies gathering information about each one.
That indicates maybe I'm doing some insider trading.
Maybe there's some sort of abuse of power. So what Brewer-Nash says is: if there is a database that your employees have access to with competitors' information, limit them to accessing only one of those competitors; it doesn't matter which one.
So, New York Stock Exchange: let's say I'm collecting information about credit card companies, Visa, American Express, MasterCard, Discover. I could access any one of those records, but as soon as I access Visa, I'm locked into just Visa.
I can't hop from competitor to competitor to competitor.
Okay, and that's Brewer-Nash. It's sometimes called the Chinese Wall model, because the idea is when I access one of the competitors, a wall drops down behind me and locks me into just that set of records. It could be for a minute, an hour, a day, a week; however
that's configured by an administrator.
That's Brewer Nash, and it's to prevent conflict of interest.
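A minimal sketch of that wall dropping, with invented names: companies are grouped into conflict-of-interest classes, and a user's first access within a class locks them to that company. (Real Brewer-Nash policies also handle expiry, as the administrator point above suggests; this toy version keeps the wall forever.)

```python
# Illustrative Brewer-Nash (Chinese Wall) check.
CONFLICT_CLASS = {"visa": "cards", "mastercard": "cards",
                  "amex": "cards", "pfizer": "pharma"}

# (user, conflict class) -> the one company that user is locked into
wall: dict[tuple[str, str], str] = {}

def request_access(user: str, company: str) -> bool:
    cls = CONFLICT_CLASS[company]
    chosen = wall.setdefault((user, cls), company)  # first touch drops the wall
    return chosen == company

print(request_access("kelly", "visa"))        # True: first access in the class
print(request_access("kelly", "mastercard"))  # False: the wall is down
print(request_access("kelly", "pfizer"))      # True: a different conflict class
```

Note that access to a company in a different conflict class is still allowed; the wall only blocks competitors of what you've already touched.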
All right. Now, a few other models just to mention. The information flow model protects information as it flows across boundaries. What that tells you is how to implement security for information that flows. Like with Bell-LaPadula, you know, we said Bell-LaPadula says no read up.
Well, can I read down? Yeah. So when I read down to lower levels of clearance, we kind of consider that to be information flowing across boundaries. The information flow model says that has to be controlled and monitored and restricted; otherwise, you could have a confidentiality breach.
The noninterference model says you have to isolate each layer of security from the others so that information doesn't leak in a manner that goes against security policy. I always think of this model as "what happens in Vegas stays in Vegas."
So what it does is anything that happens at top secret level
doesn't trickle down to secret or confidential or whatever it's isolated.
And then the phrase I want you to have with lattice models is "upper and lower boundaries." These are for operating systems, and usually when we're talking about clearance and classification, these are systems that use labels to indicate different levels.
So secret, top secret: how are those differentiated? They're differentiated by labels.
And there's that lattice structure that indicates I have upper and lower boundaries of access. If I have secret clearance, my upper boundary prevents me from going into top secret. So "lattice" is the phrase they like for that: upper and lower boundaries.
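The upper-and-lower-boundary idea can be shown in a couple of lines. Another illustrative sketch (invented names, a simple total ordering rather than a full lattice): a subject may only access labels inside its band.

```python
# Illustrative lattice boundary check over a simple label ordering.
LABELS = ["unclassified", "confidential", "secret", "top_secret"]

def within_bounds(label: str, lower: str, upper: str) -> bool:
    """Allow access only between the subject's lower and upper boundaries."""
    i = LABELS.index(label)
    return LABELS.index(lower) <= i <= LABELS.index(upper)

# Secret clearance: the upper boundary stops me short of top secret.
print(within_bounds("secret", "unclassified", "secret"))      # True
print(within_bounds("top_secret", "unclassified", "secret"))  # False
```

A real lattice is a partial order (labels can also carry compartments), but the upper/lower boundary intuition is the same.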
Hey, those are your security models. Please review them. I only anticipate, and my guess would be, you might see 10 questions from security architecture and design. It's not a really heavy chapter, but those 10 questions are going to come from your security models, and then they'll come from the
evaluation criteria, which we'll talk about next.
Please be sure you spend some time there.
Up Next
Enterprise Security Architecture

A framework for applying a comprehensive method of describing the current and future structure for an organization's security processes so that they align with the company's overall strategic direction
