Azure Kubernetes Service (AKS)


Time
14 hours 28 minutes
Difficulty
Intermediate
CEU/CPE
15
Video Transcription
>> Hello Cybrarians. Welcome to Lesson 5.4 of Module 5 of this course, titled AZ-301: Microsoft Azure Architect Design. Here are the objectives that we'll cover in this video.
We'll start by introducing Azure Kubernetes Service: what exactly this service is and what it provides. We'll then briefly discuss the architecture of the service, particularly the control plane and the worker nodes. We'll cover AKS from a security perspective, and we'll cover AKS from an availability perspective, along with the best practices for each. Finally, we'll conclude with the integration that exists between AKS and ACI, and how we can burst our workloads into ACI when deploying them to AKS.
Let's get into this. Azure Kubernetes Service is a managed Kubernetes orchestration service. What does this mean? It means that AKS makes it simple to deploy and manage a Kubernetes cluster in Azure. It does this by allowing us to offload a lot of the provisioning and management responsibilities to Microsoft while we focus on the application.
AKS also offers multiple Kubernetes versions, and as new versions become available in AKS, a cluster can be upgraded using the Azure portal or the Azure CLI. During the upgrade process, nodes are carefully cordoned and drained to minimize disruption to running applications.
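As a sketch of what that upgrade flow looks like from the Azure CLI (the resource group and cluster names below are placeholders, and the target version should come from the first command's output):

```shell
# List the Kubernetes versions this cluster can move to
az aks get-upgrades --resource-group myResourceGroup \
    --name myAKSCluster --output table

# Upgrade the cluster; AKS cordons and drains one node at a time
az aks upgrade --resource-group myResourceGroup \
    --name myAKSCluster --kubernetes-version <version-from-get-upgrades>
```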
Also, native Kubernetes has a rich ecosystem of development and management tools, such as Helm, Draft, and even the Kubernetes extension for Visual Studio Code. The tools that we currently use to work with native Kubernetes work seamlessly with AKS as well. Let's get into these in a bit more detail.
Here is a diagram of the Kubernetes architecture. Now, Kubernetes is an open-source solution that provides APIs that control how and where containerized applications will run. In other words, it allows us to orchestrate a cluster of virtual machines and schedule containers to run on those virtual machines, based on the available compute resources and the resource requirements of each container.
To achieve this, Kubernetes has a control plane, which consists of the master node, and that includes Kubernetes components like the API server, which provides interaction for management tools like kubectl, however you pronounce that. It has the etcd component, which is what maintains the state and configuration of our Kubernetes cluster. It has the kube-scheduler, which determines what nodes can run our workloads and then starts the pods. It also has the nodes, which are the actual virtual machines that run containerized applications and services. This is where AKS [inaudible].
As a managed service, AKS greatly reduces the complexity of deploying a Kubernetes cluster and of the core management tasks. For example, when we deploy an AKS cluster, the Kubernetes master and all the nodes are deployed and configured for us. We don't need to configure components like a highly available etcd store. That's all taken care of by the Azure platform. How so?
The control plane, which is the Kubernetes master that we talked about earlier, is completely managed by the Azure platform. Azure handles critical tasks like health monitoring and maintenance for us. We only manage and maintain the agent nodes, which is what runs our actual containers. This is where it gets even better: the managed Kubernetes master, the control plane, is completely free. You heard that right, completely free. We only pay for the agent nodes within our cluster. We do not pay for the master, and we do not pay for its management. That's all taken care of.
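As a minimal sketch of what that deployment looks like from the Azure CLI (the group name, cluster name, and region are placeholder values):

```shell
# Create a resource group and a two-node managed cluster;
# the control plane is provisioned and managed by Azure
az group create --name myResourceGroup --location eastus
az aks create --resource-group myResourceGroup --name myAKSCluster \
    --node-count 2 --generate-ssh-keys

# Fetch credentials so kubectl can talk to the new cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes
```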
Now, here are some of the elements of orchestration that AKS supports and provides to us. Scheduling, for example, means that it automatically finds a suitable machine with sufficient resources to run our containers. Affinity or anti-affinity means that we can specify that a set of containers should run near each other, maybe for performance reasons, or that a set of containers needs to run sufficiently far apart from each other, maybe for availability reasons, and the enforcement of that configuration will be managed for us.
Also, health monitoring, which means that it watches for container failures and automatically reschedules containers that fail. Failover, which means that it constantly keeps track of what is running on each node and reschedules containers from failed nodes to healthy nodes. Scaling, which means that it can add or remove container instances to match demand, and this can happen either manually or automatically.
Networking, which means that it provides an overlay network so that containers can communicate across multiple host machines. Service discovery, which enables containers to locate each other automatically as they move around between host machines and as their IP addresses change. Also, coordinated application upgrades, which manages container upgrades to avoid application downtime and enables rollback if something goes wrong. These are the different features and functionalities that AKS provides for us.
Here are some of the security best practices for AKS. Number 1, we need to manage who has access to AKS at the platform level. We can do this with existing Azure AD identities, and the principle that we want to follow is the principle of least privilege. Remember, this is at the platform level. The other aspect that we want to take care of is the cluster level. We want to control and limit who has permissions to AKS, or to the Kubernetes API server itself, at the cluster level. We can also integrate this with Azure AD so that we can simplify the management of identities from a single place.
The other best practice is around using pod identities. In some cases, our pods, or the containerized applications that are running within those pods, need access to certain Azure services. It's not a good idea to use fixed credentials to do this. The better approach is to integrate with something called Azure Managed Identity. What that means is that if an application running within our pod needs to access an Azure service, like Azure SQL Database, Cosmos DB, or Azure Storage, it can simply request a bearer token from Azure AD using the pod identity, obtain the token, and then use the token to access the service it needs.
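Under the hood, that token request follows the documented managed identity flow against the instance metadata endpoint. A sketch of what the application (or a debugging shell inside the pod) would run, assuming a pod identity has already been bound to the pod:

```shell
# Request a bearer token for Azure Storage from the managed identity
# endpoint; with pod identities, this request is intercepted and
# answered with a token for the identity bound to the pod
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://storage.azure.com/"

# The JSON response contains an access_token field, which the app
# sends as an Authorization: Bearer header when calling the service
```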
Number 4, limit container access to resources. What this means is that, with the pod identities we talked about, we want to limit the actions that containers can perform within the Azure platform. We provide the least number of permissions, and we want to avoid the use of privilege escalation.
Also, it's a good practice to regularly update to the latest version of Kubernetes. To stay current on new features and bug fixes, we want to regularly upgrade to the Kubernetes version that Microsoft has released as the latest supported version.
We also want to process Linux node updates and reboots using the Kubernetes reboot daemon, kured. AKS automatically downloads and installs security fixes on each Linux node running within our AKS cluster, but it does not automatically reboot if a reboot is necessary. We want to use the Kubernetes reboot daemon to watch for pending reboots and then safely cordon and drain the nodes, allowing them to reboot and apply the updates, so that we can be as secure as possible when it comes to the operating system.
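kured runs as a DaemonSet and is typically installed with Helm; a sketch, assuming the chart repository location that is current at the time of writing:

```shell
# Add the kured chart repository and install the reboot daemon into
# kube-system; it watches each node for a pending-reboot marker and
# cordons/drains the node before rebooting it
helm repo add kubereboot https://kubereboot.github.io/charts
helm repo update
helm install kured kubereboot/kured --namespace kube-system
```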
If we're using Windows Server nodes within our AKS cluster, we want to regularly perform an AKS upgrade operation, which will safely cordon the hosts, drain the pods off them, and then deploy updated nodes.
Here are some availability best practices for AKS. Number 1, plan for AKS clusters in multiple regions. AKS is deployed into a single region by default, but to protect our system from regional failures, we want to deploy our application into multiple AKS clusters across different regions. Now, if we have multiple AKS clusters in different regions, we want to use Traffic Manager to control how traffic flows to the applications that run in each cluster. Azure Traffic Manager is a DNS-based traffic load balancer that can distribute network traffic across regions.
Number 3, use geo-replication for container image registries. Do not forget this when planning for availability, because it's very easy to focus on availability for the running applications and forget that the container images for the applications are actually stored in either Azure Container Registry or a private registry somewhere. You want to ensure that you have availability configured for that as well.
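In Azure Container Registry, geo-replication is a Premium-tier feature; a sketch with placeholder names:

```shell
# Create a registry (geo-replication requires the Premium SKU)
az acr create --resource-group myResourceGroup \
    --name myContainerRegistry --sku Premium

# Replicate the registry into a second region so pulls stay local
# and survive a regional outage
az acr replication create --registry myContainerRegistry \
    --location westeurope
```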
The other thing that we want to do is plan for application state across multiple clusters. Where possible, we don't want to store service state inside the container. Instead, we want to use an Azure platform-as-a-service option that supports multi-region replication.
Finally, we want to replicate storage across multiple regions. If we're using Azure Storage, we want to prepare and test how to migrate our storage from the primary region to the backup region. Our applications might use Azure Storage for their data, and because our applications are spread across multiple AKS clusters in different regions, we need to ensure that we have a way to keep that storage in sync. We also want to test the failover across the different storage clusters.
When it comes to scaling our containerized applications that are running on AKS, the Kubernetes scheduler allocates pods to run on nodes within the cluster. If we run out of resources on the existing nodes, we can add more nodes to the cluster, provided that we have not reached the Azure cluster limit, and I'll be showing you how to do this in the demo. But it may take a few minutes for those nodes to successfully provision before the Kubernetes scheduler is able to run pods on them.
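Scaling the node count can also be done from the Azure CLI; a sketch with placeholder names:

```shell
# Manually scale the cluster's node pool to three nodes; new nodes
# take a few minutes to provision before they can accept pods
az aks scale --resource-group myResourceGroup \
    --name myAKSCluster --node-count 3
```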
There is, however, another option. To rapidly scale our AKS cluster, we can integrate with Azure Container Instances, which allows us to quickly deploy container instances without additional infrastructure overhead. We discussed this earlier. When we connect AKS with ACI, ACI becomes a secured, logical extension of our AKS cluster. What does this mean?
What this means is that we can use ACI as a virtual node for AKS. This is done using something called the ACI connector, which turns ACI into a virtual node, and this is based on the open-source Virtual Kubelet. This will be installed on our AKS cluster, and it's going to present ACI as a virtual Kubernetes node. We can then use AKS virtual nodes to provision pods inside ACI that start in seconds. This enables AKS to run with just enough capacity for our average workload and to burst into ACI as we run out of capacity in our AKS cluster.
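Enabling that integration is exposed as an AKS add-on; a sketch, assuming an existing cluster that uses advanced (Azure CNI) networking with a dedicated subnet set aside for the virtual node (all names are placeholders):

```shell
# Enable the virtual node add-on, backed by ACI, on an existing cluster
az aks enable-addons --resource-group myResourceGroup \
    --name myAKSCluster --addons virtual-node \
    --subnet-name myVirtualNodeSubnet

# The ACI-backed virtual node then shows up alongside the VM nodes
kubectl get nodes
```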
This brings me to the end of this lesson. Thanks very much for watching, and I'll see you in the next lesson.