What are containers, and how can you use them in Azure?
Virtual machines virtualize hardware and are good for migrating legacy applications from on-premises to Azure. But they have some disadvantages.
For example, if your app has components that require different runtime environments, you'll need multiple VMs to run it.
Here's an example where you have an NGINX web proxy,
a Node.js runtime for your business logic,
a Python runtime for your batch processing, and MongoDB as your database.
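As a sketch of how that four-component app could run as containers on a single host, here's a hypothetical docker-compose file. The service names, images, and limits are illustrative assumptions, not from the course:

```yaml
# Hypothetical docker-compose.yml for the four-component example app.
# Each component runs in its own container with its own runtime,
# but all containers share a single host OS.
services:
  web-proxy:
    image: nginx:1.25            # NGINX web proxy
    ports:
      - "80:80"
  business-logic:
    image: node:20-alpine        # Node.js runtime
    command: ["node", "server.js"]
  batch-processor:
    image: python:3.12-slim      # Python runtime for batch jobs
    command: ["python", "batch.py"]
  database:
    image: mongo:7               # MongoDB database
    deploy:
      resources:
        limits:
          memory: 512M           # per-container resource limit
```

Note the per-container resource limit on the database: that's the kind of isolation you can't get when everything is installed on one VM.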
Of course, you could install all of those on the same VM, but you'd have no way to restrict the resource usage of one component to make sure it doesn't impact the others.
Also, installing all the components on the same machine compromises the security of the application.
This is why it's better to use multiple VMs.
But because VMs emulate full computers, tasks like starting and stopping them are slow and often take a few minutes.
Another issue with VMs is that each guest OS consumes resources like CPU and memory that cannot be made available to the applications, impacting the efficiency of the VMs.
If you need to achieve the same isolation as the VM approach but want to increase the efficiency of your infrastructure,
you can use containers.
You can deploy multiple containers on the same host using only a single OS and avoid the overhead of multiple VMs and operating systems.
Containers are lightweight because they do not require a full OS, and they can be created, scaled out, and stopped within seconds.
This allows you to quickly respond to changes in demands.
Because containers don't require an additional OS, all resources are dedicated to the application.
This significantly increases the efficiency of the infrastructure.
Unlike VMs, containers virtualize the operating system and allow you to run multiple applications on top of a single OS.
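To make the "no full OS" point concrete, here's a minimal hypothetical Dockerfile for the Node.js component from the earlier example; the image layers only a slim runtime and the app, not an entire operating system (file and script names are assumptions):

```dockerfile
# Hypothetical Dockerfile for the Node.js business-logic component.
FROM node:20-alpine          # small base image with just the runtime
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev   # install only production dependencies
COPY . .
CMD ["node", "server.js"]    # the container runs a single process
```

The resulting image is typically tens of megabytes rather than the gigabytes a full VM disk would take, which is why containers start in seconds.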
Containers are closely related to a new trend in application architecture called the microservices architecture.
A microservice is a service that has a small, well-defined scope and is loosely coupled from any other service.
Instead of building one monolithic application, you build many small services that each fulfill a single business function.
Then you stitch those services together to provide the business logic of your application.
Each micro service can be deployed as a set of containers that are configured to work together.
Now, this is all well and good, but what are the benefits of using microservices?
Well, first, they can be implemented by separate teams that have the respective expertise to implement the functionality.
They can also use different technologies, frameworks and programming languages.
You're not required to use a single stack for all the services, which helps you leverage your team's expertise and makes hiring developers easier.
You can also release and deploy microservices independently from each other as often as you want, and deployments can be lightweight and don't require a lot of time.
Because microservices are small pieces of business functionality, they require a smaller code base, which makes them easier to maintain and to roll back if a bug is discovered.
and last but not least, microservices can be scaled independently.
You can just increase the number of instances of the microservice that is the bottleneck in your application and leave the rest as is.
Because one application can consist of tens or hundreds of microservices, each of which can be comprised of multiple containers, deploying, managing, and scaling those manually is impractical.
This is why container orchestration solutions like Kubernetes are needed.
With the help of Kubernetes, you can handle the demands of managing containerized applications at scale.
Here's how it works.
A Kubernetes cluster consists of multiple nodes.
Those can be virtual machines that have a container engine installed on them.
One of the most popular container engines is Docker.
Kubernetes manages the placement of pods, which can consist of multiple containers.
You can think of a pod as a single microservice.
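As an illustration, a hypothetical pod manifest with two co-located containers might look like this (the names, images, and port are assumptions for the sketch):

```yaml
# Hypothetical pod: the main app plus a log-forwarding sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: business-logic
  labels:
    app: business-logic
spec:
  containers:
    - name: app
      image: node:20-alpine
      command: ["node", "server.js"]
      ports:
        - containerPort: 3000
    - name: log-forwarder            # sidecar in the same pod
      image: fluent/fluent-bit:2.2
```

Both containers are scheduled together on the same node and share the pod's network namespace, which is what makes a pod a convenient unit for one microservice.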
Because Kubernetes communicates with the nodes as well as the pods, it can dynamically move pods between nodes.
Let's say one of the pods fails;
Kubernetes can automatically restart it.
If a whole node fails, Kubernetes can redeploy the pod on a healthy node.
Kubernetes can do even more.
It can scale a workload by increasing or decreasing the number of pod replicas. It can stage the deployment of a pod to reduce downtime, and it can even roll back the deployment if something fails.
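Scaling, staged rollouts, and rollbacks are typically expressed through a Deployment object. A hedged sketch, with illustrative names:

```yaml
# Hypothetical Deployment: Kubernetes keeps 3 pod replicas running
# and replaces them gradually during updates (rolling update).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: business-logic
spec:
  replicas: 3                  # scale by changing this number
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # stage the rollout to reduce downtime
  selector:
    matchLabels:
      app: business-logic
  template:
    metadata:
      labels:
        app: business-logic
    spec:
      containers:
        - name: app
          image: node:20-alpine
```

If an update misbehaves, a command like `kubectl rollout undo deployment/business-logic` reverts to the previous revision.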
In addition, it can manage storage.
Persistent volumes can be mounted on one or more containers to allow them to persist data between pod restarts.
This way, if the node fails and the pod needs to be redeployed on another node, the data will still be available when the new pod instance starts.
Of course, applications running on Kubernetes can also use any cloud-based storage solution to persist their data.
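A persistent volume is usually requested through a PersistentVolumeClaim and mounted into the container. A hypothetical sketch for the MongoDB component (names and size are assumptions):

```yaml
# Hypothetical claim for 1 GiB of storage that survives pod restarts.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: database
spec:
  containers:
    - name: mongo
      image: mongo:7
      volumeMounts:
        - name: data
          mountPath: /data/db      # MongoDB's data directory
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: mongo-data      # the data outlives the pod
```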
Kubernetes networking plug-ins enable functionality like network isolation, policy-driven network security (such as firewalls), load balancing, and exposing pods to the Internet.
Those plug-ins can also effectively manage name resolution between the pods.
And last but not least, Kubernetes has a rich set of APIs that can be used to automate deployment and management, as well as to extend the platform with richer functionality.
Azure supports Docker containers for Linux workloads and Windows containers for Windows ones, and offers a few services for managing containers.
Azure Container Instances, or ACI, is a PaaS service that allows you to run a container without the need to manage virtual machines or the Docker engine.
You just upload your container image and run it.
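With the Azure CLI, running a container on ACI can be as short as the following sketch; the resource group, names, and region are assumptions, and the image is Microsoft's public hello-world sample:

```shell
# Hypothetical ACI walkthrough -- no VM or Docker engine to manage.
az group create --name demo-rg --location westeurope

az container create \
  --resource-group demo-rg \
  --name hello-container \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --ports 80 \
  --dns-name-label demo-hello   # exposes the container on a public FQDN
```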
Another service for managing containers is Azure Kubernetes Service, or AKS.
AKS is a complete orchestration service for containers that can be scaled to hundreds or thousands of nodes.
The third service Azure offers is the Azure Container Registry, or ACR, which allows you to upload and version your container images.
ACR is similar to Docker Hub and is fully compliant with the Docker container registry API.
Using ACR, you can create your own private container repository and use only approved container images within your applications.
You can configure both ACI and AKS to pull images from Azure Container Registry.
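End to end, that flow could look like this sketch: push a locally built image to ACR, then attach the registry to an AKS cluster so its pods can pull from it. All resource and image names are illustrative assumptions:

```shell
# Hypothetical flow: create a registry, push an image, attach it to AKS.
az acr create --resource-group demo-rg --name demoregistry --sku Basic
az acr login --name demoregistry

# Tag and push a local image using the registry's login server name.
docker tag myapp:1.0 demoregistry.azurecr.io/myapp:1.0
docker push demoregistry.azurecr.io/myapp:1.0

# Grant an existing AKS cluster pull access to the registry.
az aks update --resource-group demo-rg --name demo-aks --attach-acr demoregistry
```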
This covers the container technologies and services available in Azure.
In our next video, we'll look at the platform-as-a-service option for compute: Azure App Service.