Hello and welcome to lab number 19.
In this lab we will be working with the high availability functionality of vSphere.
So we'll be creating a cluster out of two hosts
and we'll test some of the HA functionality.
And then we'll look at resource usage statistics
and try to understand a little bit about what happens when you add and remove VMs and hosts from a cluster.
We'll also get to explore admission control
and manipulate the slot size. All right, so let's get signed into our environment.
We're still in the VM here.
So the first thing that we're gonna have to do is create the cluster itself.
So very simple to do
You go to Hosts and Clusters, where we are right now, or you can do Ctrl+Shift+H.
I like those shortcuts.
and select New cluster.
We're just gonna call this "my cluster,"
and we have to enable
high availability, so we'll check this box for HA.
This means that if VMs are
registered to different hosts in our cluster, and one of those hosts goes down,
then those VMs automatically get restarted on the other members of the cluster. We're only gonna make a two-node cluster,
so it's pretty obvious where the VMs will be going.
We're not going to use DRS until a little bit later. We'll do a lab for that.
Okay, we're gonna leave host monitoring in effect;
that means we're using a heartbeat
across something like an NFS datastore.
We'll leave admission control enabled,
and then we're going to change the policy for admission control
to be a percentage of the cluster resources
reserved as failover spare capacity. Okay,
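To get some intuition for that percentage-based policy, here's a minimal sketch of the kind of check admission control performs. The function name and the MHz numbers are made up for illustration; this is not vSphere's actual implementation, just the idea of reserving a slice of cluster capacity as failover headroom.

```python
# Hypothetical sketch of percentage-based admission control: a VM may
# power on only if doing so leaves the configured share of total
# cluster capacity free as failover spare capacity.

def admission_check(total_mhz, reserved_mhz_in_use, vm_reservation_mhz,
                    failover_percent):
    """Return True if powering on the VM keeps enough spare CPU.

    failover_percent is the share of total cluster CPU reserved as
    failover spare capacity (e.g. 25 for 25%).
    """
    spare_needed = total_mhz * failover_percent / 100.0
    available = total_mhz - reserved_mhz_in_use - vm_reservation_mhz
    return available >= spare_needed

# Example: a 10,000 MHz cluster reserving 25% for failover, with
# 6,000 MHz already reserved. A 1,000 MHz VM still fits...
print(admission_check(10000, 6000, 1000, 25))   # True: 3,000 >= 2,500
# ...but a 2,000 MHz VM would eat into the failover reserve.
print(admission_check(10000, 6000, 2000, 25))   # False: 2,000 < 2,500
```

The same check runs against memory reservations as well; a VM is admitted only if both resources pass.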
The restart priority
and isolation response, we're gonna leave those
at the default settings,
VM monitoring also at the default.
Enhanced vMotion Compatibility, or EVC, settings
we're just gonna leave disabled for now.
We'll talk more about that later.
And then for the swap file location, we'll keep it in the same directory as the virtual machine, which is
typically what you'll do.
And then we'll get our summary screen
Okay, so now we have a cluster;
you can see it in the inventory here, my cluster.
And what I can do is just take a host and drag it into my cluster.
Now I've got a cluster of one host.
Okay, so we're gonna go select our cluster
and we'll go to the summary tab
and check the Cluster Status link.
There are various parameters here related to HA,
But right now we're just interested in the cluster status.
The master host is .100. It's the only host, so it has to be the master.
There are no hosts connected to the master.
I can see we have four VMs that are being protected by being members of this cluster:
crux, ESX2 (which is this virtual host),
vCenter, and Win7.
And then our heartbeat datastore.
We don't have one configured yet
because we only have one host.
We're going to show all cluster entries
so we can see that we moved a host into the cluster,
and the user account that did this work.
We also have some events
showing each individual machine moving into the cluster.
Then we have an error here saying that we have insufficient resources to satisfy failover.
That's because the cluster only has one node.
So let's drag the second host into the cluster.
Let's try that again; grab it and drag it right in.
Okay, so now I have a cluster with two hosts.
You see how easy it is to just drag
the other host in.
Now I want to go back to my cluster
and look at my cluster entries.
There's another message about HA failover resources
and messages about network redundancy.
Hosts connected to master now shows one, whereas before it was zero.
I still have four VMs,
and now our heartbeat datastore is configured.
We can see both hosts are connected to NFS
and so they will use a small file within that datastore to establish a heartbeat between these hosts.
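A toy illustration of that datastore heartbeating idea: each host periodically touches its own small file on the shared datastore, and the master considers the host alive if the file was updated recently. File names, the timeout value, and the use of a temp directory as a stand-in for the NFS datastore are all illustrative; this simplifies what vSphere HA actually does.

```python
# Simplified datastore heartbeating: hosts touch a file, the master
# checks how stale each file is. For intuition only.
import os
import tempfile
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds; illustrative value

def touch_heartbeat(datastore_dir, host_name):
    """The host updates (or creates) its heartbeat file."""
    path = os.path.join(datastore_dir, f"{host_name}.hb")
    with open(path, "a"):
        os.utime(path, None)  # bump the modification time
    return path

def host_is_alive(datastore_dir, host_name, timeout=HEARTBEAT_TIMEOUT):
    """The master treats a host as alive if its file is fresh."""
    path = os.path.join(datastore_dir, f"{host_name}.hb")
    if not os.path.exists(path):
        return False
    return (time.time() - os.path.getmtime(path)) < timeout

datastore = tempfile.mkdtemp()            # stand-in for the NFS datastore
touch_heartbeat(datastore, "esx-100")
print(host_is_alive(datastore, "esx-100"))  # True: fresh heartbeat
print(host_is_alive(datastore, "esx-200"))  # False: never heartbeated
```

This is why the datastore heartbeat matters: if a host stops responding on the management network, the master can still tell a crashed host from a merely isolated one by checking whether its heartbeat file keeps updating.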
If the heartbeat datastore goes down, obviously
we'll take some kind of action, depending on what kinds of alerts and alarms we have set up
And the other link that we need to look at is Configuration Issues.
And this is a little bit more what I was talking about earlier. We can see that
we have no management network redundancy,
so this just qualifies as a warning.
It doesn't mean that the cluster can't work.
It's just trying to tell us that
if the management network goes down, there's no redundant path,
so you may lose contact with your host. That's kind of what we're getting at here.
The number of heartbeat datastores
is also only set to one;
we're supposed to have at least two, again for redundancy reasons.
And that basically wraps up what we need to look at as far as
the initial configuration of the cluster goes. We'll go ahead and close that window
and move on to our next task.
Okay, so we can see that
we've got VMs running on each of the hosts in the cluster.
On the Virtual Machines tab
I've got vCenter, the Win7 VM,
ESX2 (which is actually this virtual host),
and then my 2012 server, which is crux.
If I look at the VMs on host .200,
I can see that I've got the Windows 7 clone
number two running here.
Okay, so what we want to do now is test
the high availability functionality. So I'm going to reboot host .200,
and then this Win7 clone 2 should automatically move to the .100 host.
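Before running the test, here's a minimal sketch of the failover behavior we expect to see: when a host fails, HA restarts its protected VMs on a surviving host with free capacity. The host names, memory figures, and the "most free memory" placement rule are illustrative assumptions, not vSphere's actual placement algorithm.

```python
# Toy model of an HA failover: move every VM from the failed host
# to the surviving host with the most free memory.

def failover(hosts, failed_host):
    """Return the new placement map after failed_host goes down."""
    survivors = {h: info for h, info in hosts.items() if h != failed_host}
    for vm, mem in hosts[failed_host]["vms"].items():
        # pick the survivor with the most free memory for this VM
        target = max(survivors, key=lambda h: survivors[h]["free_mem"])
        survivors[target]["vms"][vm] = mem
        survivors[target]["free_mem"] -= mem
    return survivors

hosts = {
    ".100": {"free_mem": 8192, "vms": {"vcenter": 4096}},
    ".200": {"free_mem": 4096, "vms": {"win7-clone2": 2048}},
}
after = failover(hosts, ".200")
print(after[".100"]["vms"])  # win7-clone2 now placed on .100
```

In our two-node cluster the placement decision is trivial, which is exactly why it's a convenient setup for watching the failover happen.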
So first of all, let's open a console to that VM.
That way we can keep an eye on its rebooting activity when we need to.
And I will simply right-click the host and choose Reboot.
A message asks me if I really want to do this; I'll go ahead and say yes.
And my reason for rebooting is testing HA.
So we'll watch this screen for just a moment here.
It says that it's initiated the reboot
down in our tasks area, but nothing's really happening. It takes a few seconds
for it to actually start working. So if I select my cluster now
and go to my Tasks and Events tab,
I can see I've got a bunch of entries here. What I'm interested in mostly is the cluster entries,
and now we can see that I'm getting some messages.
Back here, we've got a message that there are insufficient resources available to satisfy an HA failover.
So that's what just happened.
And then we've got some other entries
changing the status of the different alarms.
So there's my failover action,
and now my alarm has gone from grey to yellow.
So this takes a little while to run. If I go back to host .100,
the VM appears to be running on that host now;
it's not running on the other host anymore.
Let's look at our console. We can see that we're in the process of rebooting.
So within probably less than a minute, the VM was brought up and running on the other host.
It works pretty quickly.
Now, depending on how much
activity is going on with the VM and how busy the hosts are,
this process could take
a considerably longer amount of time.
But right now it's pretty quick.
so we're back to our cluster in the inventory.
And now we're going to select the summary link
And here we can see various things like our total number of hosts in the cluster, total number of processors, number of datastores, and virtual machines.
We've still got a little bit of change going on, but click Cluster Status.
Host .100 is the master.
I've got zero hosts connected because that host is still rebooting;
we can still see that
going on here. It's almost done.
I still have my five protected VMs.