Time
14 hours 13 minutes
Difficulty
Intermediate
CEU/CPE
20

Video Description

Host Scalability Part 1: This lesson offers a breakdown of the module, which will cover the following:

  • The functions of a DRS cluster
  • The benefits of Enhanced vMotion Compatibility (EVC)
  • Creating a DRS cluster
  • Viewing DRS cluster information
  • Removing a host from a DRS cluster

A DRS cluster is a collection of hosts and virtual machines (VMs); it allows for load balancing. However, a virtual machine (VM) must have DRS enabled in order for it to participate.
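As an aside that isn't covered in the video, the same setup can be scripted. Below is a minimal sketch using the pyVmomi Python SDK to create a DRS-enabled cluster; the vCenter address, credentials, datacenter lookup, and cluster name are placeholders for illustration only.

```python
# Minimal pyVmomi sketch: create a DRS-enabled cluster in vCenter.
# The host, user, password, datacenter choice, and cluster name are assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=context)

content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]  # assumes the first datacenter

# Cluster spec with DRS enabled; valid automation levels are
# "manual", "partiallyAutomated", and "fullyAutomated".
spec = vim.cluster.ConfigSpecEx()
spec.drsConfig = vim.cluster.DrsConfigInfo(enabled=True,
                                           defaultVmBehavior="manual")
cluster = datacenter.hostFolder.CreateClusterEx(name="Lab-DRS-Cluster", spec=spec)

Disconnect(si)
```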

Video Transcription

00:04
Hello, I'm Dean Pompilio. Welcome to Cybrary. We're talking about the Virtualization Installation, Configuration, and Management course, and we're now in Module 12, Host Scalability.
00:16
So in this lesson we'll be describing the different functions of a DRS cluster.
00:22
We'll cover what EVC means; it's Enhanced vMotion Compatibility. We'll talk about some of the benefits that that provides.
00:30
We'll see what's involved with creating a DRS cluster. We'll look at the DRS cluster info
00:35
and remove a host from a DRS cluster so you can see how that works.
00:39
This lesson will be in several parts,
00:42
so stay tuned for the other parts after this one.
00:46
All right, so what is a cluster? We've talked about this a little bit previously. It's a collection of hosts and VMs.
00:52
In this case, we have High Availability and Distributed Resource Scheduler enabled; an HA and DRS cluster is what we're looking at.
01:02
DRS gives us a lot of advantages. When we power up a VM,
01:06
we get a recommendation for its initial placement within the cluster. So if I'm going to power up VM 3,
01:12
DRS can tell me, well, it should probably go on ESX 2 based on the load that ESX 1 and ESX 2 are using.
01:19
How much memory and processor is available determines where DRS thinks the VM should go to best take advantage of your available resources.
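As an illustration not shown in the video, DRS placement recommendations can also be queried programmatically. The sketch below assumes pyVmomi, with `cluster` and `vm` objects looked up as in the earlier connection example.

```python
# Hedged sketch: ask DRS which host it would recommend for placing a VM.
# `cluster` is a vim.ClusterComputeResource and `vm` a vim.VirtualMachine,
# both assumed to be retrieved as in the earlier connection sketch.
def show_placement_recommendations(cluster, vm):
    # RecommendHostsForVm returns ClusterHostRecommendation objects,
    # each pairing a candidate host with a DRS rating.
    for rec in cluster.RecommendHostsForVm(vm=vm, pool=cluster.resourcePool):
        print(f"host={rec.host.name} rating={rec.rating}")
```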
01:30
DRS also allows load balancing.
01:33
So if I have the VMs on one host, let's say all three VMs are on ESX 1,
01:38
DRS might
01:40
decide that one of those VMs would be best moved to the second host,
01:44
so I've got two on one and one on the other. Now the load is balanced a little bit better.
01:49
And you can also use DPM, which is Distributed Power Management, and this will actually power down your hosts when they're not needed. So if you had a multi-host cluster, let's say four or six hosts,
02:01
as your workload builds up during the day,
02:07
all of your hosts will be running. But as the workload tapers off, perhaps some of the VMs could be moved to three hosts, or maybe even two hosts, and the other two hosts could be powered down.
02:17
When the workload rises again, those other hosts could be powered up and VMs moved to them.
02:23
So this lets you save on the power requirements of keeping all four hosts running at all times. It's a great feature.
02:31
We don't get to explore that in the lab, unfortunately.
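For reference, DPM can be switched on with a cluster reconfiguration call. This is a hedged pyVmomi sketch; `cluster` is assumed to be a vim.ClusterComputeResource obtained as in the earlier example.

```python
# Hedged sketch: enable Distributed Power Management (DPM) on an existing cluster.
from pyVmomi import vim

def enable_dpm(cluster):
    spec = vim.cluster.ConfigSpecEx()
    spec.dpmConfig = vim.cluster.DpmConfigInfo(
        enabled=True,
        defaultDpmBehavior="automated",  # or "manual" to only get recommendations
    )
    # modify=True merges this change into the existing cluster configuration
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```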
02:35
Okay, so what kind of prerequisites do we need for DRS?
02:38
First of all, the VMs should all be vMotion compatible. All the requirements for
02:44
vMotion should be met in order to allow the VMs to move between the different hosts when DRS
02:50
makes a recommendation to do so.
02:53
Also, the hosts in the cluster have to be part of the vMotion network
02:58
if you want to use load balancing. So you might define a network called vMotion and, on a VMkernel port, enable that feature, which we'll see in the lab.
03:07
But that needs to be there so that the two hosts can communicate with each other to allow the VMs to move back and forth.
03:15
Both of the hosts, or rather than say both, all of the hosts in the cluster need to be connected to the same shared storage.
03:23
This is because we might be moving the VM files in addition to the VMs themselves being registered with each host.
03:31
All of the disks for the VMs need to be on that same shared storage, so they're accessible to both hosts.
03:39
And then we have to make sure that our shared storage capacity is large enough to hold all the VMs, in this case these three VMs.
03:47
So both hosts have access to the storage, and the storage is large enough to hold all the VMs at once.
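If you want to sanity-check the shared-storage prerequisite from a script, a sketch like the following (assuming pyVmomi and a `cluster` object as before) lists the datastores that every host in the cluster can see.

```python
# Hedged sketch: report datastores visible to all hosts in the cluster,
# i.e. the storage a DRS-migrated VM could land on from any host.
def shared_datastores(cluster):
    common = None
    for host in cluster.host:
        names = {ds.name for ds in host.datastore}
        common = names if common is None else common & names
    return sorted(common or [])
```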
03:53
Then the registration of a VM from one host to another
03:58
can be done with DRS load balancing.
04:00
Pretty, pretty
04:01
amazing when this all works as expected.
04:05
So, one of the things we have to think about is the automation level that we'd like the DRS cluster to utilize.
04:13
You can start with a manual automation level, which means that you manually decide where the VMs go and manually decide to vMotion them around to do your load balancing.
04:24
This is a great way to begin using DRS.
04:27
Just so that you can have it tell you what the recommendation is. But you don't actually apply the recommendation automatically. You decide to do it manually. So this gives you a chance to see how the software works and become more comfortable with it.
04:41
We could make the automation level partially automated, which means that now the placement of initial VM power-ups will be automatic.
04:48
However, the dynamic balancing still has to be done manually.
04:53
And the last option is to fully automate
04:56
the automation level. Now the initial VM placement and the dynamic balancing are done automatically for you.
05:02
You'd only want to go to this level once you're sure that your manual load balancing and initial placement activities were producing the results you wanted.
05:13
You wouldn't want to go to fully automated right away, especially without considering the migration threshold
05:18
settings, because then you might end up with the VMs moving around more often than you wish and causing different kinds of problems because of that.
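For reference, the automation level corresponds to the defaultVmBehavior property in the vSphere API. A hedged pyVmomi sketch, again assuming a `cluster` object from the earlier example:

```python
# Hedged sketch: switch the DRS automation level on an existing cluster.
# Valid values are "manual", "partiallyAutomated", and "fullyAutomated".
from pyVmomi import vim

def set_drs_automation(cluster, level="manual"):
    spec = vim.cluster.ConfigSpecEx()
    spec.drsConfig = vim.cluster.DrsConfigInfo(enabled=True,
                                               defaultVmBehavior=level)
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```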
05:28
Speaking of the migration threshold, this is a slider that you can see when you're looking at your automation level, and we can go from conservative all the way to aggressive.
05:36
At the conservative side,
05:39
DRS uses basically three different
05:42
priority levels for making recommendations.
05:45
If it's conservative, then only those VMs at priority one will be used for recommendations to be moved around.
05:54
If we're somewhere in the middle, then priority one and two VMs will be recommended to move for load balancing or initial placement purposes. And then, if you become aggressive, moving it all the way to the right side, now priorities one, two, and three will be used
06:09
for moving VMs around and their initial placement. So this gives you the ability to tailor how aggressive you want the relocation of the VMs to be.
06:19
And again, this is something you want to
06:23
play around with a little bit before you decide to use it in production, so you can get an idea of how it works and what the appropriate settings are for your particular environment.
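For reference, the migration threshold corresponds to the integer vmotionRate value (1 through 5) in the vSphere API. A hedged pyVmomi sketch follows; note the comment about verifying how the UI slider maps onto the API value.

```python
# Hedged sketch: adjust the DRS migration threshold via DrsConfigInfo.vmotionRate.
# Caution: the mapping between the UI slider (conservative..aggressive) and the
# API value is easy to get backwards, so verify it against the vSphere
# documentation for your version before relying on it.
from pyVmomi import vim

def set_migration_threshold(cluster, rate):
    assert 1 <= rate <= 5, "vmotionRate must be between 1 and 5"
    spec = vim.cluster.ConfigSpecEx()
    spec.drsConfig = vim.cluster.DrsConfigInfo(enabled=True, vmotionRate=rate)
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```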
06:32
All right, moving along to EVC.
06:34
That stands for Enhanced vMotion Compatibility,
06:39
and this is
06:41
giving you the
06:43
option to use incompatible CPUs on the hosts within your cluster.
06:47
Up until a couple of recent versions of ESXi, this wasn't possible from vCenter,
06:55
so this is sort of a newer feature, but it is really powerful. You have three different options here. We can leave EVC disabled, which is what it is by default,
07:06
or you might enable it for your AMD hosts or your Intel hosts.
07:11
So it's one of these three options that you're allowed to choose here
07:15
And effectively, what this means is that if you've got a
07:18
cluster with Intel hosts that have different CPU families,
07:24
you might be able to add a third Intel host to that cluster with a different CPU family altogether if you enable EVC.
07:33
We cannot mix and match Intel and AMD hosts in the cluster, unfortunately,
07:41
but this is a great feature to give you better compatibility when you want to do vMotion between hosts.
07:46
If you don't have EVC enabled, you might have to do a cold migration, which means the VM must be powered off before it can be moved to another host.
07:59
With this enabled, we can leave the VM up and running, and the way that this works
08:03
is that the Baseline CPU compatibility
08:07
gets determined by the hosts in that cluster. So when we enable EVC,
08:13
we can force the hosts to look at all their CPU instruction sets and features and pick one that becomes the baseline.
08:22
So I could have three different Intel CPUs, and one of those becomes the baseline because it's the one that has the most in common with the other two.
08:31
All of the hosts in the cluster will now use the baseline CPU's features, which means that if you've got an older legacy system and you've got some newer systems in the cluster, the legacy system will end up being the baseline. So you don't want to use this feature if you don't have to, because
08:50
ideally your hosts in your clusters should be identical if possible.
08:54
Same processor, same amount of memory. This gives you the most predictable performance, and it's easiest to maintain,
09:01
Whereas if your host cannot be configured to use the baseline, let's say it's just so different from the other hosts in the cluster, then it won't be able to join the cluster. So some checking is done
09:13
during the effort to join a host to a cluster.
09:16
So EVC gives us the ability to be a little flexible in the way we use that.
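As an aside not shown in the video, EVC can also be enabled through the cluster's EVC manager object. A hedged pyVmomi sketch; the mode key used here is only an example, and the valid keys depend on your hardware and vCenter version.

```python
# Hedged sketch: enable EVC on a cluster. "intel-merom" is an example EVC mode
# key; the keys actually available depend on the CPUs and vCenter version.
def enable_evc(cluster, evc_mode_key="intel-merom"):
    evc_manager = cluster.EvcManager()  # returns the cluster's EVC manager
    return evc_manager.ConfigureEvcMode_Task(evcModeKey=evc_mode_key)
```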
09:22
So we have several requirements listed here.
09:24
The CPUs must be from a single vendor in your cluster,
09:28
and for Intel,
09:31
they have to be Core 2 microarchitecture or newer, and for AMD it has to be Opteron
09:37
first generation or newer.
09:39
So keep that in mind when you're
09:41
trying to decide which hardware you would like to use.
09:43
You must be running at least 3.5 Update 2 or later for your ESXi host installation,
09:50
and the hosts all must be connected to vCenter.
09:54
In the BIOS settings for the host, the virtualization hardware setting must be enabled.
10:00
And if you've looked at the BIOS for your host, you've probably noticed it's usually under advanced CPU features. So look for that.
10:07
And then we have the NX/XD bit.
10:11
This means that we can do execute disable, the NX bit for AMD or the XD bit for Intel CPUs. And what this means is, when we expose this setting to the cluster,
10:31
this also changes your vMotion compatibility,
10:35
so we'll talk about that a little bit more in the lab and you'll see what that looks like on the interface
10:41
And then lastly, both of the... or rather, all of the hosts in your cluster must be configured for vMotion in order for this to work, if we want to move the VMs from one host to another.
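If you want to check these requirements from a script, the cluster and host summaries expose the current and maximum EVC modes. A hedged pyVmomi sketch, assuming a `cluster` object as before:

```python
# Hedged sketch: report the cluster's current EVC mode and the highest EVC mode
# each host supports, which helps when validating the requirements above.
def report_evc(cluster):
    print("cluster EVC mode:", cluster.summary.currentEVCModeKey)  # None if EVC is disabled
    for host in cluster.host:
        print(host.name, "max EVC mode:", host.summary.maxEVCModeKey)
```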
10:52
So we talked about what a cluster is, and some of the features that DRS provides: initial placement,
10:56
load balancing and power management.
11:00
Then we have the prerequisites: the VMs must be vMotion compatible, and all the hosts and the VMs must be connected to shared storage.
11:07
We looked at the different automation levels and your migration threshold, so you'll have to play around a little bit to get comfortable with how they work.
11:16
Then we know what EVC does and why that's important for
11:20
using different CPU families from the same vendor.
11:26
We talked a little bit more about what the cluster requirements were.
11:31
as far as which CPU families are compatible, which version of ESXi,
11:35
being connected to vCenter, and so on. All right, stay tuned for the next part. Thank you.

Up Next

Virtualization Management

Our self-paced online Virtualization Management training class focuses on installing, configuring and managing virtualization software. You'll learn how to work your way around the cloud and how to build the infrastructure for it.

Instructed By

Dean Pompilio
CEO of SteppingStone Solutions
Instructor