00:03
Hello and welcome to the secure coding course. My name is Sonny Wear, and this is OWASP Top Ten for 2013,
00:12
A7, Missing Function Level Access Control. So first, let's take a look at our definition from OWASP.
00:19
This category describes how many applications check URL access rights before rendering protected links and buttons. However,
00:29
applications need to perform similar access control checks each time these pages are accessed; otherwise, attackers will be able to forge URLs to access these hidden pages. We're actually going to see this in our demo lab.
00:48
Now, if we take a look
00:49
at the OWASP chart, we can see that the attack vector exploitability is very easy,
00:57
and we will actually see how easy that is in the demo
01:02
Now, the technical impact can be moderate; it really depends on how much information is leaked due to the vulnerability.
01:11
Now, the security weakness states that applications are not always protecting page requests properly. Sometimes URL protection is managed via configuration, and the system is misconfigured.
01:25
Sometimes developers must include the proper code checks, and they forget.
01:30
Detecting such flaws is easy, so that's the good news. The hardest part is identifying which pages or URLs exist to attack.
01:42
Before we go any further, I wanted to take a moment to actually address the term access control.
01:49
Access control is a technique that is used to restrict who can access what. Now, I'm simplifying it here, but basically the who is generally
02:00
either an individual or a group or role.
02:05
And the what is, generally speaking, some sort of resource, whether it's a file, a directory
02:13
or any other type of data or information.
02:16
Now, there are different implementations of access controls. A couple of examples include access control lists, or ACLs, and these are usually used with, say, firewall rules, where you configure in the firewall
02:36
a particular IP address that is allowed to connect to a particular backend
02:42
through that firewall. And so you actually set up a rule that states: yes, this IP can connect to this IP.
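The firewall ACL idea, one IP explicitly allowed to reach another, can be sketched as a default-deny lookup. The rule data here is hypothetical, just to illustrate the concept, not any real firewall's API:

```java
import java.util.Map;
import java.util.Set;

public class FirewallAcl {
    // Hypothetical rules: source IP -> set of destination IPs it may reach
    private static final Map<String, Set<String>> RULES = Map.of(
        "10.0.0.5", Set.of("192.168.1.10"),
        "10.0.0.6", Set.of("192.168.1.10", "192.168.1.11")
    );

    /** Default-deny: a connection is allowed only if an explicit rule permits it. */
    public static boolean isAllowed(String srcIp, String dstIp) {
        return RULES.getOrDefault(srcIp, Set.of()).contains(dstIp);
    }
}
```

The important design point is the default-deny stance: an unknown source IP matches no rule and is simply refused.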
02:51
Another example of an access control is a role-based access control matrix, something termed RBAC, role-based access control, and this is usually implemented inside of applications.
03:07
So applications like Web applications will design all of their functions
03:14
and operations and tie them to a particular role.
03:19
And that helps to ensure that certain roles are not allowed to perform certain actions. Now, if we take a look at this chart, this is an example of an access control matrix that could be used for RBAC.
03:35
A couple of security principles get implemented when you design your application using a matrix.
03:43
One is the principle of least privilege. This basically makes sure that people only have enough authority or power to do their job, but no more.
03:55
It can also ensure the separation of duties. For example, if you take a look at the matrix, you can see that the application administrator (you could think of them as the sysadmin) is allowed to configure the application,
04:12
but is not allowed to do anything within the application, including creating new records, reading records, updating records, etcetera.
04:20
So this creates a separation of duties,
04:25
and this ensures that the application administrator
04:29
is not actually inside of the application.
04:32
Now, this does mean additional design, because you need to ensure that the application administrator
04:40
is not allowed to configure themselves
04:43
to basically be a superuser.
04:46
In cases where this is not possible,
04:49
you would then implement monitoring. So monitoring would basically capture
04:56
every single action that's done by users, in particular superusers or users that have more privilege than other users.
05:06
And the monitoring would capture those queries or capture
05:11
the operations done by them on a daily basis, and have that monitoring done by another group, maybe an information security group, for example.
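The role-to-action matrix described above can be sketched as a simple default-deny lookup. The roles, actions, and mapping below are hypothetical stand-ins for the chart, showing how the admin can configure but cannot touch records (separation of duties), while regular users can work with records but never configure (least privilege):

```java
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

public class AccessMatrix {
    enum Role { APP_ADMIN, USER }
    enum Action { CONFIGURE, CREATE_RECORD, READ_RECORD, UPDATE_RECORD }

    // Hypothetical RBAC matrix: the admin may only configure; users may
    // only work with records. Neither role gets the other's privileges.
    private static final Map<Role, Set<Action>> MATRIX = Map.of(
        Role.APP_ADMIN, EnumSet.of(Action.CONFIGURE),
        Role.USER, EnumSet.of(Action.CREATE_RECORD, Action.READ_RECORD, Action.UPDATE_RECORD)
    );

    /** Default-deny check against the matrix. */
    public static boolean isAllowed(Role role, Action action) {
        return MATRIX.getOrDefault(role, Set.of()).contains(action);
    }
}
```

Every protected operation in the application would call `isAllowed` before proceeding, rather than relying on hidden links or buttons.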
05:19
So now let's get into some of the attacks. The first is failure to restrict URL access.
05:26
This is also known as forced browsing. It's a very simple concept. Basically, say you're logged into your bank account and you notice that your account number follows at the end of the URL.
05:44
If you change the account number and it actually permits the viewing of another account,
05:49
this is a failure to restrict URL access. Obviously, the only person who should see their bank account details would be the owner of that account. This is a very good example of how the application code, or the actual application programmer,
06:04
has failed to restrict the operation, to confine it to a particular individual. And that usually comes in the form of authorization checks, which we have spoken about in other modules.
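A minimal sketch of the missing authorization check, using a hypothetical in-memory account store: the server verifies that the account number in the URL actually belongs to the logged-in user, instead of trusting the URL parameter:

```java
import java.util.Map;

public class AccountService {
    // Hypothetical store: account number -> owning user ID
    private static final Map<String, String> OWNERS = Map.of(
        "1001", "alice",
        "1002", "bob"
    );

    /**
     * Server-side authorization check: the session identity, not the URL,
     * decides whether the account details may be viewed.
     */
    public static String viewAccount(String sessionUserId, String accountNumber) {
        String owner = OWNERS.get(accountNumber);
        if (owner == null || !owner.equals(sessionUserId)) {
            throw new SecurityException("Access denied for account " + accountNumber);
        }
        return "Details for account " + accountNumber;
    }
}
```

With this check in place, changing the account number in the URL gets an attacker a SecurityException instead of someone else's account.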
06:19
Now I want to extend this thought
06:23
beyond just individuals, and I want to talk about it
06:27
in regard to binary files.
06:30
So the CERT secure coding standard
06:33
SEC57-J states: do not grant untrusted code access to classes in inaccessible packages.
06:46
So this is really about class loading issues.
06:50
This is where you can see in the chart that particular class loaders may or may not perform access checks. Now what does this mean?
07:04
Languages can have security policies,
07:10
and they can have security managers that enforce these policies.
07:15
And so, unfortunately, there can be situations, sometimes for legitimate reasons, where the class loader may not actually do any kind of access check against the policy when the class is loaded.
07:31
We can see that that's true in the case of the bootstrap class loader and the extension class loader,
07:39
but not in the case of the system class loader. There, any class that's loaded
07:44
must go through the security manager check, and the security policy is then checked for its permissions.
07:54
And then the URLClassLoader has a "maybe." It's a "maybe" because it's really left up to the application programmer
08:03
to perform that optional check.
08:07
So looking a little bit more into the description,
08:11
the rule basically states that a binary using java.lang.ClassLoader
08:16
has the ability to bypass the Java security manager and its access control policy.
08:24
And, of course, the example as we saw in the chart,
08:26
is the URLClassLoader. Specifically, its constructors can bypass the optional step
08:33
of the package access check available
08:37
in the checkPackageAccess call of the SecurityManager class. If you've ever heard of unprivileged applets or sandbox breakouts, where malicious code is allowed to leak through the sandbox and then be installed on victims' machines,
08:56
this is one of the ways that it could get through, because
09:01
there is this bypass being done of checkPackageAccess. And in a lot of cases, it could just be the fact that the application programmer doesn't realize they need to put this check in place. If we take a look at some specific non-compliant code,
09:20
we can see here that in step number one,
09:24
we're going to read the required bytecode from a socket connection and create a new custom class loader.
09:31
So what's coming in is the untrusted code, and you can see it's even labeled as untrusted code, with a call to class.getClassLoader(), just to point out that it's now coming into the application.
09:48
Neither the constructor nor any other method performs any security manager checks, allowing the class loader to load existing classes and define new classes in any package, including restricted packages such as java.lang and java.security.
10:07
This can allow an attacker-crafted JAR file
10:11
to very easily be loaded by the class loader.
10:16
And then the third step is
10:18
it overrides the getPermissions method without delegating to the superclass, thus avoiding the use of the default, more restrictive security policy.
10:31
It basically bypasses getPermissions altogether by overriding it with a local version,
10:39
thus ensuring that the superclass doesn't get called.
10:43
All of these things combined
10:46
make for a perfect storm for the creation and loading of malicious code. Now, as I said, there could be legitimate reasons for allowing bytecode
10:58
to be loaded by a class loader. There certainly are
11:03
inversion of control techniques and things like this,
11:07
where it is permitted.
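A minimal sketch of the non-compliant pattern described above; the class name and constructor here are illustrative, not the exact CERT example. No security manager checks are performed, and getPermissions is overridden without delegating to the superclass, so anything this loader defines effectively runs with AllPermission:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.security.AllPermission;
import java.security.CodeSource;
import java.security.PermissionCollection;
import java.security.Permissions;

// Non-compliant sketch: no checkPackageAccess/checkPackageDefinition
// calls anywhere, and the default, more restrictive policy is bypassed.
public class UnsafeLoader extends URLClassLoader {
    public UnsafeLoader(URL[] urls) {
        super(urls); // no additional security manager checks performed
    }

    @Override
    public PermissionCollection getPermissions(CodeSource cs) {
        // Does NOT delegate to super.getPermissions(cs); instead it
        // grants AllPermission to every class this loader defines.
        Permissions perms = new Permissions();
        perms.add(new AllPermission());
        return perms;
    }
}
```

Any attacker-supplied bytecode defined through this loader would receive full permissions regardless of the installed security policy.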
11:09
But when you take it in the context of, in particular, an applet, you can see how easy it is for malicious code to be introduced. So if we take a look at the compliant version of the code, we've got two steps here. We can see that there are two critical security manager checks being done. Number one:
11:28
it uses the SecurityManager's checkPackageDefinition method
11:33
in the class loader's loadClassData method,
11:39
and that checkPackageDefinition method throws a SecurityException if the code is not granted the required permission.
11:48
And step number two: you see that the second security manager check involves checking package-level access, with the
11:58
checkPackageAccess method.
12:01
It's critical to check whether the class loader is allowed to load classes from restricted packages. The security check requires that the java.lang.RuntimePermission be allowed
12:13
with the target package name.
12:16
And of course, if no check is done, that's where we get our malicious code.
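A minimal sketch of the compliant package access check (illustrative, not the exact CERT example). Note that SecurityManager is deprecated since Java 17 and System.getSecurityManager() returns null by default, so this check only fires in an environment where a manager is actually installed:

```java
import java.lang.SecurityException;

// Compliant sketch: consult the security manager before loading any
// class, so access to restricted packages can be denied by policy.
public class SafeLoader extends ClassLoader {
    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            int dot = name.lastIndexOf('.');
            if (dot != -1) {
                // Throws SecurityException unless the caller holds
                // RuntimePermission("accessClassInPackage." + pkg)
                sm.checkPackageAccess(name.substring(0, dot));
            }
        }
        return super.loadClass(name, resolve);
    }
}
```

A fully compliant loader would also call checkPackageDefinition before defining new classes, mirroring the two steps described above.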
12:22
Now, the case study comes from GitHub. GitHub actually had a bug bounty.
12:28
A bug bounty is a program
12:30
where a company allows people to find security vulnerabilities in its website,
12:35
and they can actually get some sort of monetary compensation for finding them.
12:43
What's interesting is that in this bug bounty there were many missing function level access control problems that were found. These are the top four bug bounty recipients,
12:56
and they were the ones that basically identified them. You can see them listed there: insufficient authorization checks,
13:03
organization member disclosure, timeline event disclosure, etcetera. So this gives you an idea of how prevalent
13:13
and how easy it is for this particular vulnerability to be an oversight on the part of the application programmers.
13:22
Now let's move into the demos portion of our module.