Video Transcription
Welcome back to Lesson 6.6, where we'll talk about Kubernetes.
So this lesson will describe the basic components, classify some of the steps of setting up a cluster,
examine the Kubernetes cluster, and then demonstrate the need for containerization.
We've talked about Kubernetes quite a bit, on and off. Here's the abbreviation, which I think I may have shown before: K8s.
So Kubernetes is for container orchestration, and you may have seen some discussion of Docker versus Kubernetes. They're not really the same; they're different in the sense that they can do similar things, or somewhat overlap.
But what we've actually found out over time is that they work better together.
Docker had the idea of Docker Swarm, which was similar to Kubernetes in its orchestration, but Kubernetes just seems to work a little bit better, and Kubernetes can manage Docker hosts.
So the way it's set up is automatic container provisioning: it sets up the networking, does the load balancing, and scales across the nodes.
And here's a quote, directly from the project, to hopefully make it a little simpler: "Kubernetes is an open-source system for automated deployment, scaling, and management of containerized applications."
Here are the basics. You create a cluster, which coordinates a collection of containers so they work as a single unit.
Then, when you deploy the app, you set up the configuration, which says: here's how to create it, here's how to update it. You have the deployment controller for self-healing. And the idea of a pod is that it contains the shared resources, so you may have
services that are related or need some type of sharing, and you would put those into a single pod.
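To make the pod idea concrete, here is a minimal sketch of a pod manifest in which two related containers share one volume. The names and images are hypothetical, not from the lesson:

```yaml
# Hypothetical pod: two related containers sharing a scratch volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger        # example name, made up for illustration
spec:
  volumes:
    - name: shared-logs        # shared space both containers can reach
      emptyDir: {}
  containers:
    - name: web
      image: nginx:1.25        # placeholder image
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-shipper        # sidecar that reads what the web container writes
      image: busybox:1.36      # placeholder image
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```

The two containers are scheduled together and share the `shared-logs` volume, which is the kind of sharing that makes them belong in one pod.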
Then you need to expose the app, because by default it's not exposed. You also define whether it's going to be internal or external, whether you want NAT or load balancing, exactly what you want.
And then you set up the way the app can scale by associating pods with the available resources on the nodes.
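Those steps, declaring how to create the app, exposing it, and setting how it scales, can be sketched in a pair of manifests. This is a minimal illustration; the names and images are hypothetical:

```yaml
# Hypothetical Deployment: declares how to create and update the app,
# and how many replicas Kubernetes should schedule across the nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop-web              # example name
spec:
  replicas: 3                 # scaled across nodes with available resources
  selector:
    matchLabels:
      app: shop-web
  template:
    metadata:
      labels:
        app: shop-web
    spec:
      containers:
        - name: shop-web
          image: registry.example.com/shop-web:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
# Expose the app: by default pods are not reachable from outside.
# The Service type (ClusterIP, NodePort, LoadBalancer) is where you
# choose internal vs. external exposure and load balancing.
apiVersion: v1
kind: Service
metadata:
  name: shop-web
spec:
  type: LoadBalancer
  selector:
    app: shop-web
  ports:
    - port: 80
      targetPort: 8080
```

The Deployment handles creation, updates, and self-healing of the pods; the Service provides the stable, load-balanced entry point in front of them.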
And then there's this interesting idea of rolling updates, which we mentioned before.
What that really involves is: you create your container image, you put it in a repository, and then you point Kubernetes at that repository and say, here's where the image is. Then, as the application's containers are running
and you need to update them, instead of taking everything offline and making the customers wait for scheduled downtime,
what you do is make a new container image available. As the old containers are no longer needed, they're torn down, and the new version of the container gets put in. So you have this slow rolling of the application, of these containers being pulled up. If you imagine a
video streaming service, you wouldn't want to just kick everybody off and say, "Stop your movie, you lost it," or
"Hey, don't go watch a movie right now." Instead, you'd have this new version out there, and as
people finish watching their movie or video and stop it, that container goes away, and when a new one is requested, the new version comes up. That's rolling updates.
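In a Deployment, that behavior is controlled by the update strategy. Here is a hedged sketch, with hypothetical names and image tags, showing the knobs that keep some pods serving while others are replaced:

```yaml
# Hypothetical Deployment echoing the streaming example: old pods are
# drained one at a time while new-version pods come up alongside them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: video-stream          # example name
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # at most one old pod torn down at a time
      maxSurge: 1             # at most one extra new pod started early
  selector:
    matchLabels:
      app: video-stream
  template:
    metadata:
      labels:
        app: video-stream
    spec:
      containers:
        - name: video-stream
          image: registry.example.com/video-stream:2.0   # placeholder image
```

Pointing the Deployment at a newer image tag, for example with `kubectl set image deployment/video-stream video-stream=registry.example.com/video-stream:2.1`, then swaps pods a few at a time instead of all at once, so viewers are never all kicked off together.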
So here's a question for you: do you understand the importance of rolling updates as I've explained them?
So the idea is no downtime: you have the ability to slowly migrate or update these units.
You can patch the image; you don't have to patch it in production. And this helps, especially with compliance, because you can do your compliance assessment on that image, knowing that's what's always going to be out there. You don't need to test every version that's out there.
And there's no patch management in production. As I mentioned, this follows along with the infrastructure-as-code idea: we don't patch anything in production.
The next idea is the Kubernetes cluster. You have a master that does the managing: the scheduling, state, and scaling, along with rolling updates. You can actually have multiple masters if you need them for redundancy.
And the term "node" is the worker unit where the apps run. It's indistinguishable whether it's a VM or a physical system; it doesn't really matter. The idea is that there's a kubelet on each one of these that handles communication with the master.
And for the actual application deployment, the master is the one that takes care of the scheduling, deciding when to run things on the nodes, the load balancing, and all that.
And as we mentioned, a pod is related to Docker instances, if you're used to that terminology.
So why should we containerize? Especially if you're moving to a microservices environment, you need the ability to segment and build at scale. It helps simplify building and creating these units, these logical parts of your application.
And when you break it down into those components,
whether it's "this is what handles my orders" or "this is what does the payment processing," breaking them up that way simplifies the work when you're building them. And also, when you're updating, instead of having this whole monolithic application you have to patch,
you can update these individual units, these microservices.
And that idea then brings operations closer to development, because the code being developed in these services is tied very tightly to the architecture. That's where you get better versioning with your infrastructure as code, tying the version to both the code and the infrastructure.
It also simplifies hardening: you don't have to build a whole OS and harden it. You can have these containers with a limited number of libraries and a limited amount of code, and you can harden that
instead of the whole operating system. And obviously you can minimize resources, because it's not a full VM: it doesn't take as much memory, and CPU usage is a little less as well. Containers also launch quicker; if you've ever spun up an instance, you can do it in seconds, versus a VM, which could take minutes.
And that obviously makes auto-scaling possible: because these are smaller resources with a much quicker launch time, we can launch a lot faster.
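Kubernetes can act on that fast launch time automatically. Here is a minimal, hypothetical sketch of a HorizontalPodAutoscaler that scales a Deployment (the `shop-web` name is an assumption for illustration) based on CPU load:

```yaml
# Hypothetical autoscaler: because containers start in seconds rather
# than minutes, new replicas can be added quickly when CPU load rises.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: shop-web-hpa          # example name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shop-web            # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

The same quick-launch property works in reverse: when load drops, surplus pods can be torn down just as fast.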
So we talked about containers and orchestration in this lesson, and this is the end of the module, so we're going to wrap it up in the next lesson.