Compute Technologies

Time: 9 hours 59 minutes
Difficulty: Intermediate
Video Transcription
>> In this video, we'll cover compute technologies. Specifically, we'll talk about four categories: virtual machines, containers, serverless, and platform-based.
You've probably been exposed to virtual machines before. In this paradigm, the hypervisor coordinates execution of the base machine image, that is, the operating system image, against the underlying host hardware. The hypervisor takes any requests for underlying hardware, such as memory or local disk, and brokers them accordingly with the underlying host machine and other infrastructure devices. The isolation qualities of a hypervisor make multi-tenancy possible.
If this isolation were to fail, the core tenet of Cloud services, their multi-tenant nature, and for that matter the business model itself, would be put into question. Meltdown and Spectre were recent hardware-level vulnerabilities. Both of them had to do with exploiting the way CPUs handle access to memory spaces. When exploited, they allowed an attacker to bypass the isolation of memory spaces. This meant that if one of these vulnerabilities were ever exploited in a multi-tenant Cloud environment, one tenant could access the memory being used by another tenant if the two virtual machines were running on the same host. Fixes have been put into place for these particular vulnerabilities, but that doesn't mean there won't be future vulnerabilities of a similar nature.
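
To make the idea concrete, here is a minimal sketch of requesting a virtual machine with the boto3 Python SDK, assuming AWS credentials are already configured; the AMI ID, region, and instance type are placeholders. Notice that you never pick the physical host: the provider's hypervisor decides where the VM lands, potentially alongside other tenants.

```python
# Minimal sketch: requesting a VM from a cloud provider with boto3.
# The AMI ID, region, and instance type below are placeholders; you never
# choose (or see) the physical host. The provider's hypervisor handles
# placement, possibly alongside other tenants' VMs.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder base machine image (the OS image)
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Requested instance {instance_id}; host placement is the provider's concern")
```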
Containers are segregated execution environments that leverage the resources of the host OS they're running on. Think of a container as an evolution of the virtual machine, but much lighter weight, which means multiple containers can run on the same machine. This lightweight aspect also allows you to create and destroy container instances more rapidly than a virtual machine. Docker is by far the most pervasive container technology.
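
As a quick illustration of that lightweight create-and-destroy cycle, here is a minimal sketch using the Docker SDK for Python, assuming a local Docker daemon; the alpine image is purely illustrative.

```python
# Minimal sketch: running and removing a container with the Docker SDK
# for Python (pip install docker). Assumes a local Docker daemon; the
# "alpine" image is purely illustrative.
import docker

client = docker.from_env()

# Containers start in seconds because they share the host OS kernel;
# no guest operating system has to boot.
output = client.containers.run(
    "alpine:3.19",
    ["echo", "hello from a container"],
    remove=True,  # tear the container down as soon as it exits
)
print(output.decode().strip())
```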
Then you have other technologies that help you manage horizontally scaling containers, scaling the underlying VM nodes that host those containers, and deploying new versions of the containers. Cloud vendors have PaaS solutions to help with this, such as Amazon's Elastic Container Service or Azure Container Instances. Kubernetes is a very popular solution for managing large container clusters, so popular that the large providers even offer Kubernetes PaaS solutions themselves: for example, GKE (Google Kubernetes Engine), AKS (Azure Kubernetes Service), and EKS (Amazon Elastic Kubernetes Service).
Using these services will still create a degree of vendor lock-in, but not as tight a lock-in as using the provider's proprietary PaaS service for managing containers. You can still take advantage of Kubernetes capabilities when you go this route, such as sidecar injection, while a proprietary container manager may not provide all of those capabilities.
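
To show why that portability matters, here is a minimal sketch using the official kubernetes Python client, assuming a kubeconfig that already points at a managed cluster (GKE, AKS, or EKS); the deployment name, labels, and image are illustrative. The same API call works against any of those managed offerings.

```python
# Minimal sketch: the same Kubernetes API works against GKE, AKS, or EKS,
# which is why workloads stay relatively portable across managed offerings.
# Assumes the official "kubernetes" Python client and a kubeconfig that
# already points at one of those clusters; names and image are illustrative.
from kubernetes import client, config

config.load_kube_config()  # reads cluster credentials from ~/.kube/config

apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # horizontally scale the containers, not the VM nodes
        selector=client.V1LabelSelector(match_labels={"app": "demo-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```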
In full transparency, serverless computing does involve servers, but the Cloud user doesn't concern themselves with creating that server, managing that server, or any of its life cycle functions. It's similar to a container, but the runtime's dependencies are also managed for you. This means there are limitations to the kinds of things you can do with serverless functions, but those limitations continue to be reduced as Cloud providers strengthen this offering. The technology was originally introduced by AWS to take advantage of spare compute cycles available in datacenter hardware. Serverless functions start up, perform their processing, and shut down in time intervals of less than five minutes, or up to 15 minutes at most.
Since the provider controls how this work is distributed across their entire pool of hardware, you can see how they're able to take advantage of these short-running bursts of compute to optimize hardware usage throughout their datacenter. In this model, you're only paying for the time that the function itself is executing. The provider manages the details of scaling out this execution environment when there are spikes in demand. If you have a predictable workload, this approach may not be as cost-effective as VMs or containers. But if the workload is scheduled or has peaks and valleys, there's value in simplifying your own operations so that you don't have to worry about the time it takes to start new VMs or the complexity of managing an elastic cluster of containers. The graphic in the middle has three icons: these are the most popular serverless technologies out there. You have AWS Lambda, Azure Functions, and Google Cloud Functions.
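
For a sense of what such a function looks like, here is a minimal sketch of an AWS Lambda handler in Python; the event shape and return value are illustrative.

```python
# Minimal sketch of an AWS Lambda handler in Python. The function starts,
# handles one event, and exits; you are billed only for the execution time,
# and the per-invocation timeout limit mentioned above applies.
# The event shape and return value here are illustrative.
import json

def lambda_handler(event, context):
    # "event" carries the trigger payload (API request, queue message, etc.);
    # "context" exposes runtime details such as the remaining execution time.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```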
The CSA defines platform-based workloads as anything running on a shared platform that isn't a virtual machine or a container. This overlap in definitions can cause confusion with serverless. The main thing to remember for your CCSK exam is that in this paradigm, the provider is responsible for securing the platform all the way down to the facilities themselves, just like they would with any other PaaS offering.
The first icon is for Azure's Cosmos DB. Stored procedures running in a database PaaS environment like this would fall into this category. Similarly, if you run machine learning using a PaaS, such as Google Cloud Machine Learning, which is the second icon, that would also be considered a platform-based workload.
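
As a rough sketch of what a platform-based workload looks like from the client side, the following assumes the azure-cosmos v4 Python SDK; the endpoint, key, database, container, and stored procedure name are all placeholders. The stored procedure itself executes inside the provider-managed database platform, not on a VM or container you operate.

```python
# Minimal sketch: invoking a stored procedure that runs inside the Cosmos DB
# PaaS, using the azure-cosmos Python SDK (v4). The endpoint, key, database,
# container, and stored procedure id are all placeholders; the point is that
# the code runs on the provider-managed platform, not on infrastructure you
# operate yourself.
from azure.cosmos import CosmosClient

client = CosmosClient("https://example-account.documents.azure.com:443/",
                      credential="<account-key>")
container = client.get_database_client("demo-db").get_container_client("items")

result = container.scripts.execute_stored_procedure(
    sproc="spAddItem",             # hypothetical stored procedure id
    partition_key="customer-123",  # stored procedures execute within one partition
    params=[{"id": "1", "note": "hello"}],
)
print(result)
```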
In this video, we talked all about compute and the four major categories: virtual machines, containers, serverless, and platform-based compute.