In this video we'll cover compute technologies. Specifically, we'll talk about four categories: virtual machines, containers, serverless, and platform-based.
You've probably been exposed to virtual machines before. In this paradigm, the hypervisor coordinates execution of a base machine image (the operating system image) against the underlying host hardware. The hypervisor takes any requests for underlying hardware, such as memory or local disk,
and brokers them accordingly with the underlying host machine
and other infrastructure devices.
The isolation qualities of a hypervisor make multi-tenancy possible. If this isolation were to fail, a core tenet of cloud services, the multi-tenant nature (and the business model, for that matter), would be put into question. Meltdown and Spectre were recent hardware-level vulnerabilities.
Both of these vulnerabilities had to do with exploiting the way CPUs handled access to memory spaces.
Basically, when exploited, this allowed bypassing the isolation of memory spaces. This meant that if either vulnerability were exploited in a multi-tenant cloud environment, one tenant could access the memory being used by another tenant, if those two virtual machines were running on the same host.
Fixes have been put into place for these particular vulnerabilities, but that doesn't mean there won't be future vulnerabilities of a similar nature.
Containers are segregated execution environments that leverage the resources of the host OS they're running on. Think of it as an evolution of the virtual machine, but much lighter weight, which means multiple containers can run on the same machine. This lightweight aspect also allows you to create and destroy container instances more rapidly than a virtual machine.
Docker is by far the most pervasive container technology.
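As a rough illustration of that rapid create-and-destroy lifecycle, here's a minimal Python sketch that shells out to the Docker CLI. It assumes Docker is installed; the `alpine` image and the `echo` command are just placeholders.

```python
import shutil
import subprocess

def run_ephemeral(image, command):
    """Run a command in a throwaway container; --rm removes it on exit."""
    if shutil.which("docker") is None:
        return None  # Docker isn't installed on this machine
    result = subprocess.run(
        ["docker", "run", "--rm", image, *command],
        capture_output=True, text=True,
    )
    return result.stdout.strip()

# Creating and destroying this container takes seconds, versus the
# minutes it can take to boot a full virtual machine.
print(run_ephemeral("alpine", ["echo", "hello from a container"]))
```

The `--rm` flag is the key detail: the container is deleted as soon as the process inside it exits, which is what makes this kind of short-lived, disposable usage practical.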
Then you have other technologies that help you manage horizontally scaling containers, scaling the underlying VM nodes that host those containers, and deploying new versions of the containers. Cloud vendors have PaaS solutions to help with this, such as Amazon's Elastic Container Service or Azure Container Instances.
But Kubernetes is a very popular solution for managing large container clusters
so popular that the large providers even offer Kubernetes PaaS solutions themselves. For example: GKE, Google Kubernetes Engine; AKS, Azure Kubernetes Service; and EKS, Amazon's Elastic Kubernetes Service.
Using these services will still create a degree of vendor lock-in, but not the type of lock-in you'd have if you were using the provider's proprietary PaaS service for managing containers. And you can still take advantage of Kubernetes capabilities, such as sidecar injection, when you go this route, while a proprietary container manager may not provide all of those capabilities.
In full transparency, serverless computing does involve servers, but the cloud user doesn't concern themselves with creating the server, managing the server, or any of the lifecycle functions. It's similar to a container, but the runtime dependencies are also managed for you.
This means there are limitations to the kinds of things you can do with serverless functions. But those limitations continue to be reduced as cloud providers strengthen this offering.
The technology was originally introduced by AWS to take advantage of spare compute cycles available in data center hardware. Serverless functions should start up, perform their processing, and shut down in time intervals of less than five minutes, maybe up to 15 minutes.
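To make the shape of such a function concrete, here's a toy Python handler written in the style of an AWS Lambda handler. The event payload and the summing logic are invented for illustration; a real Lambda handler also receives a `context` argument from the runtime.

```python
import time

# AWS Lambda's execution cap has grown over time; it is now 15 minutes.
MAX_RUNTIME_SECONDS = 15 * 60

def handler(event, context=None):
    """Toy serverless function: sum the numbers in the request payload."""
    return {"total": sum(event.get("numbers", []))}

start = time.monotonic()
response = handler({"numbers": [1, 2, 3]})
elapsed = time.monotonic() - start

print(response)                       # {'total': 6}
assert elapsed < MAX_RUNTIME_SECONDS  # must finish within the cap
```

The point of the time check is the mindset: a serverless function is written to do one short burst of work and exit, not to run as a long-lived process.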
Since the provider controls how this work is distributed across their entire pool of hardware, you can see how they are able to take advantage of these short running bursts of compute
to optimize hardware usage throughout their data center. In this model, you're only paying for the time that the function itself is executing. The provider manages the details of scaling out this execution environment when there are spikes in demand. If you have a predictable workload, this approach may not be as cost-effective as VMs or containers.
But if the workload is scheduled or has peaks and valleys,
there's value in simplifying your own operations: you don't have to worry about the time it takes to start new VMs or the complexity of managing an elastic cluster of containers. The graphic in the middle has three icons. These are the most popular serverless technologies out there: you have AWS Lambda, Azure Functions, and Google Cloud Functions.
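That "pay only while executing" point can be sketched with some back-of-the-envelope arithmetic. All prices below are made-up round numbers for illustration, not any provider's actual rates.

```python
# Hypothetical prices, for illustration only.
VM_HOURLY_RATE = 0.05            # always-on instance, $/hour
FN_RATE_PER_GB_SECOND = 0.00002  # serverless, $ per GB-second of execution

def vm_monthly_cost(hours=730):
    # You pay for the VM whether or not it's doing any work.
    return VM_HOURLY_RATE * hours

def fn_monthly_cost(invocations, seconds_each, memory_gb=0.5):
    # You pay only while the function is actually executing.
    return invocations * seconds_each * memory_gb * FN_RATE_PER_GB_SECOND

# A spiky workload: 100,000 short invocations a month.
print(round(vm_monthly_cost(), 2))              # 36.5
print(round(fn_monthly_cost(100_000, 0.2), 2))  # 0.2
```

The same arithmetic also shows the flip side from the transcript: crank the invocation count and duration up to a steady, always-busy workload and the serverless bill can overtake the flat VM price.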
The CSA defines platform-based workloads as anything running on a shared platform that isn't a virtual machine or a container.
This overlapping definition can cause confusion with serverless. The main thing to remember for your CCSK exam is that in this paradigm, the provider is responsible for securing the platform all the way down to the facilities themselves, just like they would with any other PaaS offering. The first icon is for Azure's Cosmos DB;
stored procedures running in a database PaaS environment like this
would fall into this category. Similarly, if you have machine learning using a PaaS such as Google Cloud Machine Learning, which is the second icon, that would also be considered a platform-based workload. In this video we talked all about compute and the four major categories: virtual machines, containers,
serverless, and platform-based compute.