come back for the final module.
This is Module 8, the conclusion of the course, so let's revisit the concepts that we learned.
This is Lesson 8.1.
During the course, we started off with an idea that I've been trying to get across the whole time: what you want to be able to do is develop a maturity model for integrating DevSecOps. So take all these concepts we've learned and think about them not as a checklist but as a holistic view: how can I adopt these over time, or evaluate what I already have? How can I mature my pipeline, my whole project, into incorporating some of these concepts? We developed a DevSecOps pipeline in Jenkins, and we demonstrated each one of the stages: how we added the new security tools, and how we actually did the deployment into virtual instances or into containers.
We selected tools to automate security testing and looked at their outputs. We even looked at the planning phase from the perspective of tools we would give a developer so that, as they're writing in the IDE, they could be fixing these bugs. The idea is that we always want to push this as far left as we can: let's fix these issues early on. We differentiated between static analysis and dynamic analysis: looking at the source code before it's built, and then looking at the application after it's built, in a running state, so that we can see if any new vulnerabilities are introduced once it's running and has pulled in the third-party libraries or any other components that are necessary.
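To make the static-versus-dynamic distinction concrete, here is a minimal Python sketch, not one of the course's actual tools: the static check parses the source text without ever running it, while the dynamic check exercises the built, running code with an input and observes what actually happens.

```python
import ast

# Static analysis: examine the source text before it ever runs.
# A toy SAST rule that flags calls to eval().
def static_scan(source: str) -> list:
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(f"eval() call at line {node.lineno}")
    return findings

# Dynamic analysis: exercise the running code with a hostile input
# and watch how it actually behaves.
def dynamic_scan(func, payload: str) -> list:
    findings = []
    try:
        func(payload)
    except Exception as exc:
        findings.append(f"crashed on hostile input: {type(exc).__name__}")
    return findings

source = "def handler(expr):\n    return eval(expr)\n"
print(static_scan(source))                  # flags the eval() on line 2

def handler(expr):
    return eval(expr)                       # the same risky code, now running

print(dynamic_scan(handler, "import os"))   # eval() chokes on a statement
```

The static scanner sees the risk in the text; the dynamic scanner only sees it once the code is actually exercised, which is exactly the trade-off between the two tool families.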
We talked briefly about the idea of continuous integration and continuous delivery, how this fits in with Agile, and how it takes humans out of the process: code is developed, pushed, run through this automated pipeline, and deployed without any humans being needed, unless some problems are encountered.
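As a rough sketch of that hands-off flow, here is a toy pipeline runner in Python; the stage names and checks are invented for illustration. Stages run in order automatically, and a failing stage halts the run, so a human only steps in when something breaks.

```python
# A toy automated pipeline: stages run in order with no human in
# the loop, and a failure stops the run before deployment.
def run_pipeline(stages) -> list:
    log = []
    for name, step in stages:
        ok = step()
        log.append((name, "pass" if ok else "fail"))
        if not ok:
            break  # humans only get involved when a stage fails
    return log

stages = [
    ("build",  lambda: True),   # stand-ins for real build/test/deploy steps
    ("test",   lambda: True),
    ("deploy", lambda: True),
]
print(run_pipeline(stages))
```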
We introduced the idea of an IAST tool running inside the application and performing some of this evaluation of the security findings, looking at them almost as a bridge between the source code tool and the dynamic tool.
And then we wrapped up by mapping our tools and our processes to the NIST Secure Software Development Framework. This is one of the few resources out there that really provides requirements for developing secure code.
We demonstrated the need for third-party library review, called software composition analysis, and identified some of these vulnerabilities running within applications. The idea is that we really need to patch these libraries the same way that we fix bugs and the same way that we patch operating systems; it should just be part of our regular process: reviewing the applications, reviewing what third-party dependencies are being pulled in. Or, from the perspective of the supply chain: if we have JavaScript or something else being pulled from a source that we don't control, should we be doing that?
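Here is a toy sketch of what software composition analysis does under the hood, with made-up package names and advisory IDs; real SCA tools such as OWASP Dependency-Check or pip-audit query live vulnerability databases instead of a hard-coded list.

```python
# Hypothetical advisory data: both packages and CVE IDs are invented.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "CVE-XXXX-0001 (hypothetical)",
    ("parserkit", "0.9.1"): "CVE-XXXX-0002 (hypothetical)",
}

def audit(requirements: str) -> list:
    """Scan a requirements.txt-style string for known-vulnerable pins."""
    findings = []
    for line in requirements.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue  # skip blanks, comments, and unpinned entries
        name, version = line.split("==", 1)
        advisory = KNOWN_VULNERABLE.get((name.lower(), version))
        if advisory:
            findings.append(f"{name}=={version}: {advisory}")
    return findings

print(audit("examplelib==1.2.0\nparserkit==1.0.0\n"))
```

The point is the workflow: the dependency manifest is just another input to review on every build, the same way we review our own code.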
And we introduced RASP as an extension of the IAST tool: instead of running for testing, it runs in an active mode and blocks exploitation of vulnerabilities. We want to use this as a stop-gap measure between the time of discovering a vulnerability and actually being able to fix it, so that we're not sitting exposed. Use it in conjunction with a WAF, and as a whole it's defense in depth: identifying tools and using multiple layers to protect your application, not relying on one single tool.
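To illustrate the RASP idea of blocking at runtime as a stop-gap, here's a minimal Python sketch; the signatures and the handler are invented for illustration, not a real product's rules.

```python
import re

# Toy exploit signatures: a script tag and a SQL UNION SELECT.
BLOCK_PATTERNS = [re.compile(r"(?i)<script"), re.compile(r"(?i)union\s+select")]

def rasp_guard(handler):
    """Wrap a handler and block inputs matching an exploit signature,
    buying time until the underlying bug is actually fixed."""
    def wrapped(user_input: str):
        for pattern in BLOCK_PATTERNS:
            if pattern.search(user_input):
                return "blocked"           # stop exploitation inside the app
        return handler(user_input)         # otherwise run normally
    return wrapped

@rasp_guard
def search(query: str) -> str:
    return f"results for {query}"          # stands in for vulnerable code

print(search("kittens"))                   # normal traffic passes through
print(search("1 UNION SELECT password"))   # exploit attempt is blocked
```

Notice the layering: this in-app guard sits behind the WAF, so an attack has to get through multiple independent checks, which is the defense-in-depth point.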
We talked about infrastructure as code and its benefits: the repeatability, and the ability to have a stable environment from development to production without any drift between them, so that we can identify vulnerabilities throughout the life cycle, or find them early, and new bugs don't get introduced down the line, where we'd have to go back and fix something in production, or work out which difference between development and production is the correct setting.
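One way to picture the drift problem: when the environment is described as code, a difference between development and production is just a diff over data. A minimal Python sketch, with invented settings:

```python
# Toy drift detector: environment definitions are plain data,
# so drift is whatever keys disagree between the two.
def drift(dev: dict, prod: dict) -> dict:
    """Return settings whose values differ between environments."""
    keys = set(dev) | set(prod)
    return {k: (dev.get(k), prod.get(k)) for k in keys if dev.get(k) != prod.get(k)}

dev_env  = {"os_image": "ubuntu-22.04", "tls": "1.3", "debug": True}
prod_env = {"os_image": "ubuntu-22.04", "tls": "1.2", "debug": False}

for setting, (dev_val, prod_val) in sorted(drift(dev_env, prod_env).items()):
    print(f"drift in {setting}: dev={dev_val} prod={prod_val}")
```

With hand-built environments you find these differences the hard way, in production; with infrastructure as code they're detectable on every run.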
We talked about Kubernetes and container orchestration, looked at what microservices mean and the benefits of moving to that type of architecture, and we took a look at the Cloud Native Computing Foundation and some of their projects for operations and monitoring. The reason we looked at those is to find good open-source products out there that are stable and have been tested, so we know we're getting good software that isn't going to fall out of development sometime in the near future, and we also don't get locked into a specific vendor.
So the final quiz: what's the proper order of the phases of DevSecOps? Is it Deliver, Develop, Deploy? Or is it Develop, Deliver, Deploy? Or is it Develop, Deploy, Deliver? I'll give you a chance to think about it, but it should be an easy one to get.
We first develop the code, then we deliver the artifacts to a repository, and then we deploy the application to production. That's the proper order.
So thank you so much for watching this video. I hope you learned a lot. I actually learned a lot myself, because it took a lot of research to pull all these ideas together and get them in the right order, hopefully the right sequence: developing, installing the software myself, setting up that Jenkins pipeline. Thanks for watching. If you need to contact me, have any further questions, or are interested, here's my LinkedIn address.