Hello and welcome to Part two of Module 12. We're still talking about host scalability, but now we're going to go into a little more detail on some of the configuration options. The first thing you want to think about when you build your DRS cluster is where you want the swap file to be.
You can store it with the VM, which is the default setting.
Or you can specify a datastore that you'd like to use; maybe you have an SSD datastore.
That would give you the highest possible performance, so it might be a good option to think about.
You'll want to try different options depending on what your performance
requirements are. Typically, we leave it with the VM.
Then we have our affinity and anti-affinity rules. These are an interesting feature of DRS.
When I define an affinity rule,
that means I'm trying to keep certain VMs together.
An anti-affinity rule means I'm trying to keep certain VMs separate.
A reason for keeping VMs together would be
that they've got similar performance characteristics, and we want the host that they're running on to be easier to manage.
Keeping them separate
would be because the VMs' performance characteristics are very different: one host might be better at hosting this VM or this group of VMs, and another host might be better at hosting a different group of VMs. It depends on your setup,
and you'll want to try different combinations of VMs and rules in order to find the most effective
way to use your resources.
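To make the two rule types concrete, here's a small sketch in Python. This is a toy model of the semantics only (the VM and host names are made up, and this is not the vSphere API): affinity means "all on one host", anti-affinity means "all on different hosts".

```python
# Toy model of DRS affinity / anti-affinity semantics.
# Not the real vSphere API; names below are illustrative only.

def affinity_ok(placement, vms):
    """Affinity rule: all listed VMs must share a single host."""
    hosts = {placement[vm] for vm in vms}
    return len(hosts) == 1

def anti_affinity_ok(placement, vms):
    """Anti-affinity rule: every listed VM must be on a different host."""
    hosts = [placement[vm] for vm in vms]
    return len(set(hosts)) == len(hosts)

# Hypothetical placement: which host each VM currently runs on.
placement = {"web1": "esx01", "web2": "esx02", "db1": "esx01", "db2": "esx01"}

print(affinity_ok(placement, ["db1", "db2"]))         # True: same host
print(anti_affinity_ok(placement, ["web1", "web2"]))  # True: different hosts
print(anti_affinity_ok(placement, ["db1", "db2"]))    # False: both on esx01
```

DRS evaluates essentially this kind of constraint every time it considers a migration.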
When you create these kinds of rules, you need to think about creating DRS groups. I can make a group of VMs, or a group of hosts,
and a VM or host can be part of multiple groups. When you define an affinity or anti-affinity rule, you need to use the groups. You can have just one VM in a group, or one host in a group,
but it must be defined as a group in order to be used with the rule wizard,
and we'll see that when you get to the lab later.
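A quick way to picture this: DRS groups are just named sets of VMs or hosts, and a rule always references group names, even when a group has a single member. The sketch below uses hypothetical names to show that structure.

```python
# Toy model (hypothetical names): DRS groups are named sets of VMs or
# hosts; a rule references groups, never bare VM/host names.
vm_groups = {
    "prod-web": {"web1", "web2"},
    "lone-db":  {"db1"},           # a one-VM group is still a group
}
host_groups = {
    "licensed-hosts": {"esx01"},   # a single host, defined as a group
}

# A rule points at group names, so even one VM or one host must be
# wrapped in a group before the rule wizard can use it.
rule = {"type": "vm-to-host",
        "vm_group": "lone-db",
        "host_group": "licensed-hosts",
        "enforcement": "must"}

print(rule["vm_group"] in vm_groups and rule["host_group"] in host_groups)  # True
```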
Another option that we have
is a VM-to-host rule.
This says that I'm going to make an affinity between a VM DRS group
and a host DRS group, tying VMs to particular hosts,
and this is a good thing for certain reasons. Maybe I want to keep a VM running on one particular host because of licensing requirements, for instance, or I might want to
keep VMs apart for other reasons: for load balancing, or for HA, perhaps.
So you can set up a requirement or a preference. A requirement
says that the VM must run on this host.
When it's a requirement with affinity, it would be "must run on"; I could have a requirement with anti-affinity that says "must not run on".
And when you see the wizard, you can decide whether to force the VM to run on the host or just express a preference for the VM to run on the host.
A preference means it should run there.
So I could say a VM should run on a host for a preference affinity rule, or should not run on a host for a preference anti-affinity rule.
That gives you soft enforcement: since the rule is a preference, it can be violated. If DRS is making recommendations, or High Availability needs to restart VMs,
then the preference can be violated.
If I say it's a requirement, "must run on" or "must not run on", that's strict enforcement that cannot be violated,
so you have to be careful how you design
your VM-to-host rules to get the correct configuration that your software or your environment requires.
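The must/should distinction can be sketched as a host filter. This is an illustrative model, not how DRS is implemented: hard ("must") rules strictly remove hosts from consideration, while the soft ("should") rule only expresses a preference and falls back when no preferred host qualifies.

```python
# Sketch of hard ("must") vs soft ("should") VM-to-host enforcement.
# Hypothetical function and host names; illustrates semantics only.

def candidate_hosts(hosts, must_run_on=None, must_not_run_on=(),
                    should_run_on=None):
    """Return the hosts a VM may be placed on.

    Hard rules filter strictly and can never be violated; the soft
    rule is only a preference and is dropped if nothing satisfies it.
    """
    allowed = [h for h in hosts if h not in must_not_run_on]
    if must_run_on is not None:
        allowed = [h for h in allowed if h in must_run_on]
    if should_run_on:
        preferred = [h for h in allowed if h in should_run_on]
        if preferred:
            return preferred   # preference honored when possible...
    return allowed             # ...but violated rather than blocking

hosts = ["esx01", "esx02", "esx03"]
# Hard rule: esx03 is simply never a candidate.
print(candidate_hosts(hosts, must_not_run_on={"esx03"}))  # ['esx01', 'esx02']
# Soft rule honored when its host is available.
print(candidate_hosts(hosts, should_run_on={"esx02"}))    # ['esx02']
```

Note how a "should" rule still leaves DRS or HA room to place the VM elsewhere, while a "must" rule does not; that's exactly why strict rules need careful design.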
Adding a host to a cluster is very simple. Once the cluster is built, you simply drag and drop the host from within the inventory onto the cluster object,
and a wizard will pop up; you'll have to answer a few questions
about datastores and some other things. It's very simple to use,
and you also get an option to maintain your resource pool hierarchy.
Say you've got a production resource pool and a testing/development resource pool, you've got VMs in various different locations, and you've adjusted your limits and reservations, as we talked about in a previous lecture.
You won't have to recreate all of that just by dragging the host to the cluster,
so you can preserve that hierarchy. When that happens, you'll see that your resource pools get nested into the inventory, and you'll get a message that the pool has been grafted from the host it was on before it joined the cluster.
It's very easy to understand once you see the way the inventory gets updated.
You can go to your cluster's Summary tab
to look at the details of your cluster,
and this is the kind of information you might see. Here I've got a VMware DRS cluster.
It tells me my automation level, which in this example is fully automated.
Whether or not I'm using power management automation will also be noted here; let's say it's set to Automatic.
If I have any pending DRS recommendations, I'll get a number here; right now it's zero.
But as you start to use DRS, you might see a number for DRS faults as well.
Maybe DRS tried to automatically move a VM and couldn't do it for some reason, so you'll get a number there to tell you that you've got some faults.
Then we've got the migration threshold. If you remember, we talked about that slider going from the conservative level to the aggressive level.
If it's at the
aggressive level, then we're going to apply all recommendations automatically;
otherwise it might be some other setting, like manual or partially automated.
Then, when you're looking at load balancing, which we'll see in the lab:
if your cluster is imbalanced, meaning I've got more VMs on one host than I do on another, this "target host load standard deviation"
will show some different values.
The range runs from around 0.1 to 10-point-something, so it's quite a wide range of values.
But when the current host load standard deviation gets large enough
relative to the target, it will show you that the load is either balanced or imbalanced,
and we'll see that when we do the lab. So you can tell that DRS is sensing the balance is off, and then you can decide to take action manually or automatically.
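The balance check above boils down to comparing a standard deviation against a threshold. Here's a minimal sketch with made-up load numbers; the real DRS calculation of per-host load and of the target value is more involved, but the comparison works like this:

```python
# Illustrative balance check (hypothetical numbers, not the exact DRS
# formula): compare the current host load standard deviation against
# the target derived from the migration threshold.
from statistics import pstdev

def is_balanced(host_loads, target_stdev):
    """host_loads: one normalized load figure per host (e.g. demand/capacity)."""
    return pstdev(host_loads) <= target_stdev

# Three hosts with nearly equal load: small deviation, balanced.
print(is_balanced([0.50, 0.52, 0.48], target_stdev=0.1))  # True
# One hot host: deviation exceeds the target, so imbalanced.
print(is_balanced([0.90, 0.20, 0.30], target_stdev=0.1))  # False
```

When the current deviation exceeds the target, DRS flags the imbalance and, depending on the automation level, either recommends migrations or performs them itself.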
Okay, stay tuned for Part three. Thank you.