Here we are at Lesson 5.7. We're going to talk about delivery maturity, specific to the OWASP maturity model. We'll take a look at the maturity model for delivery, look at test verification specifically, and discuss some of the application tests.
So, as I mentioned before, just to reiterate: there are four different levels of maturity. We're going to look at the test verification dimension, since that's the phase we're in, and at its specific sub-dimensions for application tests and dynamic application scanning.
For test verification, the maturity model doesn't list anything at the basic Level 1, but at Level 2 it says you should be doing security tests for some of your important components. As you get to the next level, add integration tests for those same components, and at the highest level you should have high coverage of all security-related modules across your integration tests. You should be doing smoke tests as well, to really mature your application testing.
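A smoke test at this level can be as small as one request that proves the app is up and its baseline security posture is intact. Here's a minimal, self-contained sketch; the `AppHandler` stub, the `/health` path, and the header list are all assumptions standing in for your real application:

```python
import http.server
import threading
import urllib.request

# Hypothetical handler standing in for the application under test;
# it serves a /health endpoint with a couple of security headers set.
class AppHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("X-Content-Type-Options", "nosniff")
        self.send_header("X-Frame-Options", "DENY")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the smoke-test run quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), AppHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def smoke_test(base_url):
    """One cheap request: confirm the app responds and the
    expected baseline security headers are present."""
    required = ["X-Content-Type-Options", "X-Frame-Options"]
    with urllib.request.urlopen(base_url + "/health") as resp:
        missing = [h for h in required if resp.headers.get(h) is None]
        return resp.status, missing

status, missing = smoke_test(f"http://127.0.0.1:{port}")
server.shutdown()
```

In a real pipeline you'd point `smoke_test` at your deployed environment instead of the local stub and fail the build when any required header is missing.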
Then there's another sub-dimension here for dynamic testing. At Level 1 you'd just be doing a simple scan; as you get to the next level, you should be covering client-side dynamic components and then scanning under different roles. As I mentioned, that's about defining the business logic failures
that would be specific to a role, failures that you're not going to see when you're doing
regular DAST scanning, because DAST scanners look for patterns, not business rules.
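To make that concrete, here's a tiny sketch of the kind of role-aware check a pattern-matching scanner won't write for you. The `ROLE_PERMISSIONS` map and the action names are hypothetical stand-ins for an app's real business rules:

```python
# Hypothetical role-to-permission map standing in for an app's business rules.
ROLE_PERMISSIONS = {
    "manager": {"view_invoice", "approve_invoice"},
    "clerk": {"view_invoice"},
}

def can_perform(role, action):
    """Business-rule check: is this action allowed for this role?"""
    return action in ROLE_PERMISSIONS.get(role, set())

# Role-specific assertions a generic DAST pattern scan can't infer:
manager_ok = can_perform("manager", "approve_invoice")
clerk_blocked = not can_perform("clerk", "approve_invoice")
```

The interesting failure here, a clerk approving an invoice, is perfectly valid HTTP traffic, which is exactly why you have to encode the rule yourself and exercise the scan under each role.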
A quick knowledge check here:
what are the benefits of using multiple scanners?
Each scanner has different strengths and weaknesses. I actually showed this in the demo, where we compared the results from the OWASP tool with the findings from the IAST tool and found some delta between the two. The differences may not be about strengths and weaknesses per se; they come down to each tool's ability to find vulnerabilities using whatever patterns and source data it relies on.
As I said, multiple scanners can cover each other's gaps, and the combination of the two can give you a better sense when you're drilling down, a better chance of finding every vulnerability that's out there, and better coverage overall.
But you're really going to need to balance the price, especially if you're not using
open source tools. If you're actually paying for the tools,
you'll want to know how much budget you have and weigh it: is it worth the money I'm putting out there to close those small gaps? That question might get answered during an evaluation phase, where you actually test different scanners against your applications and see what value you're getting from each of them.
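One lightweight way to see that delta during an evaluation is to normalize each scanner's findings and diff them as sets. A minimal sketch; the `rule`/`file`/`line` field names and the sample findings are assumptions, since every scanner exports its own schema:

```python
# Hypothetical normalized findings exported by two different scanners.
scanner_a = [
    {"rule": "sql-injection", "file": "app/db.py", "line": 42},
    {"rule": "xss", "file": "app/views.py", "line": 10},
]
scanner_b = [
    {"rule": "xss", "file": "app/views.py", "line": 10},
    {"rule": "weak-hash", "file": "app/auth.py", "line": 7},
]

def finding_keys(findings):
    """Collapse each finding to a comparable (rule, file, line) key."""
    return {(f["rule"], f["file"], f["line"]) for f in findings}

only_a = finding_keys(scanner_a) - finding_keys(scanner_b)   # A's unique value
only_b = finding_keys(scanner_b) - finding_keys(scanner_a)   # B's unique value
overlap = finding_keys(scanner_a) & finding_keys(scanner_b)  # shared coverage
```

If `only_b` stays empty across a few representative apps, the second scanner is adding cost without adding coverage, which is exactly the budget question above.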
For the dynamic depth sub-dimension, at Level 3 you'd be covering some of the more hidden components and input vectors, looking at sequential operations, and actually implementing multiple scanners, like we just talked about.
And as you get to the highest level, you'll be doing a lot more coverage analysis, and especially if you're running containers with microservices, you'll start looking at service-to-service communication as well, to identify any vulnerabilities inside what we might think of as a quote-unquote trusted arena. You're going to want to look at that.
So we looked at maturing our DevSecOps pipeline specific to the OWASP model,
and in the next lesson we're going to wrap up, because we've reached the end of the module.