All right. Now, we've looked at functional testing, where we look at whether or not the software performs as it's supposed to. Now we're gonna look at non-functional testing, checking things like whether its performance is going to meet the objectives of the business. Where is the software gonna be a bottleneck?
Will the software satisfy the requirements that we've committed to in the service level agreement? That's performance testing. Then we can also do load testing and stress testing; these two go very closely together. So with load testing, what we're trying to figure out is, how much can the software handle?
How many connections? How much processing? How many users? How many tasks
can the system handle gracefully?
With stress testing, we're looking at: okay, what happens if we exceed those points? How does the software fail? Does it fail securely? Does it fail in a manner where nothing further could be compromised? Can the software recover gracefully? That comes under stress testing.
So with load testing we're looking to see what it can handle,
and with stress testing, we want to see what happens if we exceed those limits we've learned in load testing. So a lot of times, the two go hand in hand.
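That hand-in-hand relationship can be sketched in code. This is a minimal illustration, not a real load-testing tool: the service, its capacity, and the batch sizes are all made-up assumptions standing in for whatever system you're actually testing.

```python
def service(batch):
    # Hypothetical service under test: it handles up to CAPACITY tasks
    # and gracefully rejects the rest instead of crashing.
    CAPACITY = 100
    accepted = batch[:CAPACITY]
    rejected = batch[CAPACITY:]
    return len(accepted), len(rejected)

def load_test(batch_sizes):
    # Load testing: find the largest batch the service handles with
    # zero rejections -- "how much can the software handle?"
    max_ok = 0
    for n in batch_sizes:
        done, dropped = service(list(range(n)))
        if dropped == 0:
            max_ok = max(max_ok, n)
    return max_ok

def stress_test(limit):
    # Stress testing: deliberately exceed the limit we learned in load
    # testing and verify the failure is graceful and predictable.
    done, dropped = service(list(range(limit * 2)))
    return dropped > 0 and done == limit

print(load_test([50, 100, 150]))  # -> 100, the most it handles cleanly
print(stress_test(100))           # -> True, it failed gracefully past it
```

In practice you'd replace `service` with real requests against the system and measure connections, users, and tasks rather than batch sizes, but the shape is the same: load testing finds the limit, stress testing pushes past it.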
Then, scalability testing. Is this a software environment that can grow, or is it limited to our existing structure? Of course, we always want to be forward-thinking; we always want a design that's gonna allow our organization to grow and get past the early limitations of the business.
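As a rough sketch of what a scalability test might measure, assume we've recorded throughput at several instance counts; the 0.8 efficiency threshold and the measurements here are illustrative assumptions, not an industry standard.

```python
def scaling_efficiency(throughput):
    # throughput maps instance count -> measured requests/second.
    # Returns worst-case efficiency relative to perfectly linear scaling
    # from the single-instance baseline (1.0 == perfectly linear).
    baseline = throughput[1]
    return min(tps / (n * baseline) for n, tps in throughput.items())

def scales_well(throughput, threshold=0.8):
    # Can this environment grow, or is it limited to its existing structure?
    return scaling_efficiency(throughput) >= threshold

measured = {1: 100, 2: 190, 4: 350}  # made-up measurements
print(scaling_efficiency(measured))  # -> 0.875
print(scales_well(measured))         # -> True
```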
We'll also test the environment. We want verification that the environment in which we're gonna install the software
is going to support the installation of the software, that it's a secure environment, and that it's gonna be implemented well.
Next, interoperability testing. This application isn't gonna exist in a vacuum. It's going onto a network where lots of other systems, lots of other functions, and other applications and processes are all in place. What we want to find out is, will our application fit in,
or is it gonna cause trouble with other systems?
We want it to fit right in. We want to be interoperable, and one of the main ways to allow interoperability is by following standards. If we try to be very proprietary in our nature, we tend to miss out on following the standards that allow us to fit into most environments.
Disaster recovery testing. Here what we're talking about is: in the event of a failure, are we going to be able to recover the application and its data,
based on the criticality determined within our organization? So essentially, what that comes down to is: can we restore what we need to restore quickly enough to be of value?
When we talk about disaster recovery, we talk about ideas like maximum tolerable downtime, which is the very longest I can be without this component before my company suffers a loss. So when we talk about disaster recovery testing: can I restore within that maximum tolerable downtime?
Or is this software that's gonna take
ages and ages to restore? Because then I may not be able to meet those requirements. Also, if it's a database application, how quickly can I recover the processes? How many processes would be lost in a disaster?
Do I have the appropriate controls so that I can restore data appropriately? All of that's got to be checked.
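Those checks boil down to comparing measured restore times against each component's maximum tolerable downtime. Everything below, the component names, MTD values, and measurements, is a hypothetical example of how that comparison might be recorded.

```python
# Maximum tolerable downtime per component, in minutes (hypothetical values
# that would come from the organization's criticality analysis).
MTD_MINUTES = {"web_frontend": 60, "order_db": 30, "reporting": 480}

def dr_test(measured_restore_minutes):
    # Return every component whose measured restore time exceeded its MTD --
    # these are the ones that fail disaster recovery testing.
    return [name for name, minutes in measured_restore_minutes.items()
            if minutes > MTD_MINUTES[name]]

results = dr_test({"web_frontend": 45, "order_db": 50, "reporting": 120})
print(results)  # -> ['order_db']: restored in 50 minutes against a 30-minute MTD
```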
All right, then: simulation testing.
It's great to do all these tests in the lab, and we absolutely have to do that. But ultimately, the real test is how it works out in production. Well, we can't afford to send a non-tested, non-verified system into production. So the key is really making sure that my lab environment
mimics production
as much as possible. That's not always easy, because we have so many different elements out in production. But a sure sign that I don't have a good match is when we have software that performs very well in the lab, and then it gets to production and it fails. Obviously, there's gonna be some sort of variation between our lab environment
and the production environment.
Our goal is always gonna be to get those two as close as possible.
Then there are other types of testing. We may test for privacy, making sure that sensitive information is protected appropriately; we've mentioned that already. And user acceptance testing, which we've also talked about: letting our end users get the product in their hands, making sure the application will meet their needs.
Remember, these are the folks that are gonna be using the software anyway.
We want to make sure that we get their feedback.
All other testing will be completed before we turn it over to the user. So we've gone through our unit testing, we've done integration testing, we've done regression testing. Really, this is one of the final things we do before turning this product over to production: user acceptance testing. Often,
we want to make sure that this is as close to real-world usage as possible,
because we want our users to have confidence and we want to have confidence in the product that we're producing.
In just a moment, we'll take the final step and talk about security testing.