Time
10 hours 19 minutes
Difficulty
Intermediate
CEU/CPE
12

Video Transcription

00:05
Code reviews. Developers don't like code reviews because they cause them work. You usually find things during a code review that have to be fixed. However, security people love code reviews because they prevent work down the road. You're preventing all of these
00:22
vulnerability exploits and attacks against your applications
00:26
by doing code reviews.
00:29
So a code review is the notion that you're gonna go and look at your application source code to try to proactively identify potential vulnerabilities. You're trying to find spots where an attacker could do SQL injection or buffer overflows.
00:45
You're looking at the application to figure out what could go wrong
00:50
at the code level. This is a crucial part of the software development lifecycle, the SDLC, because it's a preventative measure: it helps eliminate the opportunities that attackers have to take advantage of your application or exploit it in the wild.
01:07
There are a lot of different testing techniques out there, a lot of different code review techniques. The first we'll talk about is black box versus white box testing. Black box testing is the idea that you have no prior knowledge of the system, but you're going to try to test against it or attack it.
01:26
White box testing is a slightly different version of that, where you do have knowledge of the application. Perhaps you have the code sitting in front of you, or a network diagram, but you've got some knowledge of the system that you are testing,
01:42
and that allows you to do a more targeted, focused test, as opposed to the broader approach
01:49
that you would do with black box testing.
01:53
There is dynamic and static testing from a code review perspective. Dynamic testing is where you're looking at the system and observing it in real time while executing the program, so you're kind of stepping through things. Dynamic testing is done through sandboxing,
02:10
usually virtual sandboxing, where you've got a copy of the program running.
02:15
You're doing memory analysis on it, you're looking at the code through a debugger; that's dynamic testing. Static testing is where you're looking at the system or the code while it's not being executed. So you've got just a copy of the source code in front of you, and you're looking through it; that's static testing.
02:35
There's also manual versus automated testing. Manual testing is the idea that you've got somebody who's manually going through the source code,
02:46
typically with pen and paper, or maybe keyboard and screen, and looking at it line by line to see what could be wrong with it. There's automated testing as well, where you can run specialized programs to do code reviews, or fuzzers that will actually go through and do automated testing
03:06
on the code. Good code reviews, the very in-depth code reviews, usually use some combination of
03:12
all of these. Good code reviews will almost always do automated testing and then supplement it with manual testing of critical functions, because the automated testing can't catch everything, and there's no computer power out there like what's in between your ears. So the manual testing,
03:30
if you find good code reviewers,
03:32
can go a long way to supplementing what you do with the automated side.
03:38
Fuzzing. We talked about this a little bit, but fuzzing is this idea that you're using automated methods to find input validation errors. With fuzzing, what you're essentially doing is throwing all sorts of combinations of input at an application and seeing where it craps out on you.
03:58
It's a really popular technique for finding operating system bugs and application bugs.
04:02
If you look at the diagram here in the center, this really exemplifies what's going on with fuzzing. Here on the left, you're starting the actual program. You start the fuzzing process by providing some type of test input, you know, strings or URLs or something, to the application.
04:23
And then you're looking at the program: is it still running? Did it crash? If it's okay and the program is still running, that means our input was benign.
04:32
We'll come up with another set of input and try it again.
04:35
And what we're looking for is the combinations of input strings or, you know, actions that caused the program to either crash or hang,
04:46
because those are the interesting cases. Those are the ones where, you know, the input did not produce the expected outcome.
04:53
Right. So when you check the program health, if you see that it hangs, you essentially kill the program,
05:00
restart it, and start your test again. But note the fact, you know, that the input you provided caused the program to hang, because now you can provide that to the developer, and they can go fix it because it was an unexpected case. If the program actually crashes,
05:17
this is an interesting one, because that means that the application
05:21
has crashed, and maybe there's an opportunity to write some type of buffer overflow or heap overflow or something like that. There's some opportunity to write an exploit for this that might be able to take advantage of your program, whatever it is that you're fuzzing. So you want to triage that crash and see
05:42
if you can understand what that crash is actually caused by,
05:46
what the parameters around it are such that maybe it could or could not be exploited. But you want to save that test as well and store it with your test results, because that's definitely an input case that you want to look at
05:59
and have the developers fix as well. This is why I said that this is a nightmare for developers. They don't like this because it causes them work every time you find one of these hangs or crashes. But from a security perspective, I like this because if I can find this now,
06:15
you know, a year from now, when my program's out there in the wild, I'm not gonna be responding to
06:20
security incidents as a result of, you know, this bug.
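To make that feedback loop concrete, here's a minimal sketch of the fuzzing cycle described above, written in Python. The target binary name, the input generator, and the five-second hang threshold are all assumptions for illustration; a real fuzzer such as SPIKE or AFL does this far more intelligently.

```python
# Minimal sketch of the fuzz/check/record loop: generate input, run the target,
# and save any input that makes it crash or hang. "./target_app" is a hypothetical binary.
import random
import string
import subprocess

def random_input(max_len=2000):
    """Generate a random test string, biased toward long runs of one character."""
    if random.random() < 0.5:
        return random.choice(string.printable) * random.randint(1, max_len)
    return "".join(random.choices(string.printable, k=random.randint(1, max_len)))

interesting_cases = []  # inputs that caused a crash or hang, saved for triage

for _ in range(1000):
    data = random_input()
    try:
        result = subprocess.run(
            ["./target_app"], input=data.encode(),
            capture_output=True, timeout=5,      # a hang shows up as a timeout
        )
        if result.returncode < 0:                # killed by a signal (e.g. SIGSEGV): a crash
            interesting_cases.append(("crash", data))
    except subprocess.TimeoutExpired:
        interesting_cases.append(("hang", data)) # kill it, note the input, restart the loop
    
print(f"{len(interesting_cases)} inputs caused a crash or hang; hand these to the developers")
```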
06:27
So, a good example of this: you can find these tools on the Kali Linux distribution from Offensive Security. This is a program called SPIKE. And here in the screenshot, on the left side, we're actually fuzzing an FTP server with a set of known inputs.
06:46
And so what we're doing is providing,
06:48
you know, for
06:50
the USER input string, instead of providing a username, all sorts of combinations of data, just to see if the FTP service will choke on it. So it's going to try different character combinations, different letter combinations, and at some point
07:10
you may or may not find that the program crashes.
07:14
In this particular example, the program does crash, and it was with an input of a bunch of capital A's, or hex character 41.
07:24
How would you find that if you were doing this manually?
07:27
Simply by luck; you just happen to stumble on it. Or, if you were actually reviewing the source code, you might see that there's, you know, an unbounded string copy or something like that in the code, and you could think, yeah, well, maybe that will cause a crash down the line.
07:46
But by using an automated fuzzer like SPIKE,
07:49
you can craft all of those inputs, or even have the program generate random inputs for you, and just throw them at the program. Let it run overnight and see what it does.
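As a rough illustration of the idea in that screenshot (not SPIKE itself), here's a bare-bones sketch that sends ever-longer USER strings of capital A's (hex 41) to an FTP service and watches for it to stop responding. The host, port, and length steps are placeholder assumptions, and you'd only point something like this at a server you own and are authorized to test.

```python
# Connect to a lab FTP server and fuzz the USER command with growing runs of "A" (0x41).
# If the service stops answering, the last length is the input case to triage.
import socket

HOST, PORT = "192.168.1.10", 21   # hypothetical lab FTP server

for length in range(100, 5000, 100):
    payload = "A" * length
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as s:
            s.recv(1024)                              # read the FTP banner
            s.sendall(f"USER {payload}\r\n".encode())
            s.recv(1024)                              # server's response to USER
    except (socket.timeout, ConnectionError, OSError):
        print(f"Server stopped responding at USER length {length} -- triage this case")
        break
```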
08:03
Testing considerations: before you run off and fire a
08:09
code review or fuzzing application at the service you want to test, take a look at the attack surface. Understand, you know, where user input occurs, what type of application you're actually running. Look at the results that your testing tools will provide to you.
08:28
Are the tools usable? I mean, do they make sense in your organization from a testing
08:33
perspective?
08:35
Do the tools support the technology that the application was built on? You can't use, say, a Python tester on an assembly-level program; it just won't work. So make sure that you're using technologies that are compatible with each other and that you're selecting tools that work,
08:54
and also look at the performance and resource utilization.
08:58
If these tests require massive amounts of computational power because they're trying so many combinations, or you need to parallelize things because you've got so many potential test cases, make sure you consider the fact that you might need some additional resources to run this test.
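If you do end up parallelizing, something as simple as a worker pool can spread independent test cases across cores. This is just a sketch under the assumption that your cases are independent; run_case and TEST_CASES here are hypothetical stand-ins for whatever harness and inputs your own tooling provides.

```python
# Spread independent test cases across a pool of worker processes.
from multiprocessing import Pool

TEST_CASES = [f"case-{i}" for i in range(10_000)]   # hypothetical inputs

def run_case(case):
    # In a real harness this would launch the target with the case's input
    # and report whether it crashed, hung, or ran cleanly.
    return case, "ok"

if __name__ == "__main__":
    with Pool(processes=8) as pool:                 # size the pool to the CPU/RAM you actually have
        results = pool.map(run_case, TEST_CASES)
    failures = [c for c, outcome in results if outcome != "ok"]
    print(f"{len(failures)} cases need triage")
```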
09:16
All of these considerations should go together when you're starting to think about
09:20
Well, what tool am I gonna use for this? Am I gonna use Immunity Debugger to do things manually? Am I gonna use SPIKE to do automated fuzzing? Am I going to go grab 10 people, put them in a room, and throw pizza at them for two weeks while they do a code review?
09:37
All of these types of considerations should go into figuring out whether you are,
09:41
you know,
09:43
using the right tools or not.
09:46
From a system-level testing perspective, you want to make sure that the testing you're doing is accounting for all the possible cases that you can think of. Now, if you think of all the potential inputs that your program might see, it's huge. It's a huge space,
10:05
but look at the things that are most important to you and use that to kind of refine what tests you actually run
10:11
So you might be looking specifically at things like privacy-related issues: can I compromise data out of this thing? Performance issues or stress conditions?
10:24
Usability concerns, compatibility with other applications, or interface testing; all of those sorts of things you might want to use to refine your test and come up with a smaller test space. Otherwise, it's a pretty big, unbounded problem.
10:39
And then maintenance considerations. So once you've run your test, that's great, but there are things that you need to do to follow up on that. Every time you revise your application, it's probably a good idea to revise your testing plan and make sure that
10:58
how you're testing your application is still in line with how the application works.
11:03
And that takes close coordination with the developers and testers to make sure that they have the right test plan built.
11:09
Anomaly evaluation. Let's say you start seeing the crashes, you're seeing the hangs while you're doing automated fuzzing. Dig into the code and figure out what is causing that. I'm willing to bet that you're gonna find one of your coders is using, you know, unsafe string functions, or there's some coding practice being used
11:30
that's causing these particular bugs.
11:33
And in that case, if you understand the root cause of that, you can fix it. You know, send the developer to a secure coding class, or specifically look for that type of error, and that will give you a much easier way to reduce your attack surface by eliminating those errors at the source.
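One cheap way to "specifically look for that type of error" is a quick static scan of the source tree for the classic unsafe C string functions. This sketch assumes a C codebase under a hypothetical src directory; a real static analysis tool goes much deeper, but the idea is the same.

```python
# Crude static scan: flag calls to unsafe C string functions that commonly cause
# the kinds of crashes a fuzzer turns up.
import pathlib
import re

UNSAFE = re.compile(r"\b(strcpy|strcat|sprintf|gets|scanf)\s*\(")

for path in pathlib.Path("src").rglob("*.c"):        # "src" is a placeholder source tree
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if UNSAFE.search(line):
            print(f"{path}:{lineno}: unsafe call -> {line.strip()}")
```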
11:50
Once you find problems, make sure that you identify them, track them,
11:56
and actually manage the changes associated with them in some guided fashion, such that when the testers report an error over here, you don't want it to get dropped and not make it into the actual, you know, build release.
12:11
So as you're going through your testing, make sure you're using, pick anything, GitHub, could be an Excel spreadsheet,
12:18
whatever issue tracker and bug tracker you're using, make sure you put all of that information in, such that you can track it all the way through to completion. And when you're iterating on the actual tasks to fix this stuff, you can keep track of it, and you know what's been fixed and what hasn't been fixed.
12:35
And then, of course, update the documentation. You might find that, you know, certain user inputs are coming from users because they don't know how to use the system. So, as part of your testing, make sure you update your documentation. All of these considerations go into effect after you've done your test and you
12:54
need to improve your application as a result of that.
