Code reviews. Developers don't like code reviews because they cause them work; you usually find things during a code review that have to be fixed. However, security people love code reviews because they prevent work down the road. You're avoiding all of those vulnerability exploits and attacks against your applications by doing code reviews.
So a code review is the notion that you're going to go look at your application source code to try to proactively identify potential vulnerabilities. You're trying to find spots where an attacker could do SQL injection or buffer overflows. You're looking at the application to figure out what could go wrong at the code level. This is a crucial part of the software development lifecycle (SDLC) because it's a preventative measure; it helps eliminate the opportunities that attackers have to take advantage of your application or exploit it in the wild.
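To make that concrete, here is a minimal sketch of the kind of flaw a code review is hunting for, using an in-memory SQLite table as a hypothetical stand-in for a real database. The table, names, and payload are all illustrative, not from the lecture.

```python
import sqlite3

# Hypothetical stand-in for an application database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Vulnerable: user input is concatenated straight into the query,
    # so input like "' OR '1'='1" changes the query's meaning.
    query = "SELECT secret FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Fixed: a parameterized query treats the input purely as data.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(len(lookup_unsafe(payload)))  # injection leaks rows
print(len(lookup_safe(payload)))    # parameterized query leaks nothing
```

A reviewer reading `lookup_unsafe` line by line would flag the string concatenation immediately; that's exactly the "spot where an attacker could do SQL injection" the review is looking for.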
There are a lot of different testing techniques out there, a lot of different code review techniques. The first we'll talk about is black box versus white box testing. Black box testing is the idea that you have no prior knowledge of the system, but you're going to try to test against it or attack it. White box testing is a slightly different version of that, where you do have knowledge of the application; perhaps you have the code sitting in front of you, or a network diagram, some knowledge of the system that you are testing. That allows you to do a more targeted, focused test, as opposed to the broad-based approach you would take with black box testing.
There is also dynamic and static testing, from a code review perspective. Dynamic testing is where you're looking at the system and observing it in real time while executing the program, so you're stepping through things. Dynamic testing is usually done through sandboxing, typically virtual sandboxing, where you've got a copy of the program running, you're doing memory analysis on it, you're looking at the code through a debugger. That's dynamic testing. Static testing is where you're looking at the system or the code while it's not being executed. So you've got just a copy of the source code in front of you and you're looking through it; that's static testing.
There's also manual versus automated testing. Manual testing is the idea that you've got somebody who is manually going through the source code, typically with pen and paper, or maybe keyboard and screen, looking at it line by line to see what could be wrong with it. Automated testing is where you run specialized programs, code review tools or fuzzers, that go through and do the testing for you.
Good, in-depth code reviews usually use some combination of all of these. They will almost always do automated testing and then supplement it with manual testing of critical functions, because the automated testing can't catch everything, and there's no computing power out there like what's between your ears. So manual testing, if you find good code reviewers, can go a long way toward supplementing what you do on the automated side.
Fuzzing. We talked about this a little bit, but fuzzing is the idea that you're using automated methods to find input validation errors. With fuzzing, what you're essentially doing is throwing all sorts of combinations of input at an application and seeing where it craps out on you. It's a really popular technique for finding operating system bugs and application bugs.
If you look at the diagram here in the center, it really exemplifies what's going on with fuzzing. On the left, you start the fuzzing process by providing some type of test input, you know, strings or URLs or something, to the application. Then you look at the program: is it still running? Did it crash? If it's okay and the program is still running, that means our input was benign; we come up with another set of input and try again. What we're looking for are the combinations of input strings or actions that cause the program to either crash or hang, because those are the interesting cases: the input did not produce the expected outcome.
So when you check the program health, if you see that it hangs, kill the program, restart it, and start your test again. But note the input that caused the program to hang, because now you can hand that to the developer and they can go fix it; it was an unexpected case. If the program actually crashes, that's an interesting one, because it means maybe there's an opportunity to write some type of buffer overflow or heap overflow exploit, something that might be able to take advantage of your program, whatever it is you're fuzzing. So you want to triage that crash and see if you can understand what the crash was actually caused by, and what the parameters around it are, such that maybe it could or could not be exploited. You want to save that test as well and store it with your test results, because that's definitely an input case that you want to look at.
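The loop just described, generate input, feed it to the target, check health, save the interesting cases, can be sketched in a few lines. This is a toy in-process harness, and `fragile_parse` is a deliberately buggy hypothetical target, not anything from the lecture.

```python
import random
import string

def fragile_parse(data):
    # Hypothetical buggy target: assumes input always contains a ':'.
    key, value = data.split(":", 1)
    return key.strip(), value.strip()

def fuzz(target, rounds=200, seed=1234):
    rng = random.Random(seed)  # fixed seed keeps the run repeatable
    crashes = []
    for _ in range(rounds):
        # Generate a random printable string as this round's test input.
        data = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 20)))
        try:
            target(data)  # still healthy? then the input was benign
        except Exception as exc:
            # "Crash": save the offending input for triage later.
            crashes.append((data, repr(exc)))
    return crashes

crashes = fuzz(fragile_parse)
print(f"{len(crashes)} crashing inputs found")
```

A real harness would run the target as a separate process with a timeout, so that hangs can be detected and killed too, and it would persist the crashing inputs alongside the test results as described above.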
And have the developers fix those as well. This is why I said this is a nightmare for developers: they don't like it, because it causes them work every time you find one of these hangs or crashes. But from a security perspective, I like this, because if I can find this now, then a year from now, when my program is out there in the wild, I'm not going to be responding to security incidents as a result of this bug.
So, a good example of this: you can find these tools on the Kali Linux distribution from Offensive Security. This is a program called Spike. Here in the screenshot on the left side, we're actually fuzzing an FTP server with a set of known inputs. What we're doing is, on the user input string, instead of providing a username, we're providing all sorts of combinations of data just to see if the FTP service will choke on it. It's going to try different character combinations and different letter combinations, and at some point you may or may not find that the program crashes. In this particular example, the program does crash, and it was with an input of a bunch of capital A's, hex character 0x41.
How would you find that if you were doing this manually?
Simply by luck; you'd just happen to stumble on it. Or, if you were actually reviewing the source code, you might see that there's an unbounded string copy or something like that in the code, and think, yeah, maybe that will cause a crash down the line. But by using an automated fuzzer like Spike, you can craft all of those inputs, or even have the program generate random inputs for you, throw them at the program, let it run overnight, and see what it does.
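The kind of input Spike was throwing at the FTP `USER` command can be sketched like this: escalating runs of 0x41 (`'A'`) bytes, the classic overflow probe. This just builds the payloads; the payload sizes here are arbitrary illustrative values, and in practice you would open a socket to the target, send each payload, and watch whether the service still responds.

```python
def user_payloads(max_len=2048, step=256):
    # Yield FTP USER commands with ever-longer runs of 0x41 ('A') bytes.
    for n in range(step, max_len + 1, step):
        yield b"USER " + b"\x41" * n + b"\r\n"

for p in user_payloads(1024, 256):
    # len(p) minus the 7 bytes of "USER " + CRLF framing = size of the A-run
    print(len(p) - 7, "bytes of 0x41")
```

If the service dies partway through the escalating sequence, the last payload sent is the crashing input you save for triage.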
Testing considerations: before you run off and fire a code review or a fuzzing application at your services, you want to take a look at the attack surface. Understand where user input occurs and what type of application you're actually running. Look at the results that your testing tools will provide to you. Are the tools usable? I mean, do they make sense in your organization from a testing perspective? Do the tools support the technology that the application was built on? You can't use, say, a Python tester on an assembly-level program; it just won't work. So make sure that you're using technologies that are compatible with each other and that you're selecting tools that work. Also look at performance and resource utilization. If these tests require massive amounts of computational power because they're trying so many combinations, or you need to parallelize things because you've got so many potential test cases, consider the fact that you might need some additional resources to run the tests.
All of these considerations should go together when you're starting to think about: well, what tool am I going to use for this? Am I going to use Immunity Debugger to do things manually? Am I going to use Spike to do automated fuzzing? Am I going to grab ten people, put them in a room, and throw pizza at them for two weeks while they do a code review? All of these types of considerations should go into figuring out whether you are using the right tools or not.
From a system-level testing perspective, you want to make sure that the testing you're doing accounts for all the possible cases you can think of. Now, if you think of all the potential inputs your program might see, it's a huge space. But look at the things that are most important to you and use that to refine which tests you actually run. So you might be looking specifically at things like privacy-related issues (can I compromise data out of this thing?), performance issues or stress conditions, usability concerns, compatibility with other applications, or interface testing. All of those sorts of things you might want to use to refine your tests and come up with a smaller test space; otherwise, it's a pretty big, unbounded problem.
And then maintenance considerations. Once you've run your test, that's great, but there are things you need to do to follow up on it. Every time you revise your application, it's probably a good idea to revise your testing plan and make sure that how you're testing your application is still in line with how the application works. That takes close coordination between the developers and testers to make sure they have the right test plan built.
Anomaly evaluation. Let's say you start seeing crashes and hangs while you're doing automated fuzzing. Dig into the code and figure out what is causing them. I'm willing to bet you're going to find that one of your coders is using unsafe string functions, or that there's some coding practice being used that's causing these particular bugs. In that case, if you understand the root cause, you can fix it: send that person to a secure coding class, or specifically look for that type of error. That gives you a much easier way to reduce your attack surface, by eliminating those errors at the source.
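"Specifically look for that type of error" can itself be automated with a crude static scan. Here's a minimal sketch that flags classic unsafe C string functions in source text; the function list and the sample C snippet are illustrative assumptions, and a real tool would be far more thorough.

```python
import re

# Classic unbounded-copy C functions a secure code review would flag.
UNSAFE = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

def scan(source):
    # Return (line number, function name) for each unsafe call found.
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        m = UNSAFE.search(line)
        if m:
            findings.append((lineno, m.group(1)))
    return findings

sample = '''
char buf[16];
strcpy(buf, user_input);   /* unbounded copy: overflow risk */
snprintf(buf, sizeof buf, "%s", user_input);  /* bounded: fine */
'''
print(scan(sample))
```

Running a scan like this across the codebase is one cheap way to hunt down every instance of the root cause once the fuzzer has shown you the first one.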
Once you find problems, make sure you identify them, track them, and actually manage the changes associated with them in some guided fashion, such that when a tester reports an error over here, it doesn't get dropped and fail to make it into the actual build release. So as you're going through your testing, make sure you're using an issue tracker and bug tracker; pick anything: GitHub, or it could even be an Excel spreadsheet. Whatever you're using, put all of that information in so you can track it all the way through to completion, and when you're iterating on the actual tasks to fix this stuff, you can keep track of what's been fixed and what hasn't.
And then, of course, update the documentation. You might find that certain bad user inputs are coming from users because they don't know how to use the system. So as part of your testing, make sure you update your documentation. All of these considerations come into play after you've done your test and you need to improve your application as a result of it.