Lab 20 - Configuring Fault Tolerance on a Virtual Machine (VM) (Lab)
Video Activity
Configuring Fault Tolerance on a Virtual Machine (VM) (Lab) This lesson covers configuring fault tolerance on a virtual machine (VM) in a nested ESXi environment. Participants receive step-by-step instructions on how to configure fault tolerance on a virtual machine in this lab-based lesson.

Video Transcription
00:04
Hello and welcome to Lab number 20
00:07
In this lab, we will be configuring
00:10
fault tolerance on one of our VMs,
00:15
and because I'm running a
00:18
nested
00:20
VM environment, a nested ESXi environment,
00:24
I'll go back to my
00:27
Home > Hosts and Clusters view.
00:30
Dot 100 is a physical host,
00:34
dot 200 and dot 201 are our virtual hosts,
00:38
and I've got the VM that I'm going to be
00:42
enabling for fault tolerance running on this cluster.
00:48
So right now it's on the cluster. It's powered on.
00:53
It looks like it's on, uh,
00:56
the second host in the cluster, dot 201.
01:00
I can select the VM,
01:02
click its Summary tab and get that information as well.
01:06
You can see that it's on dot 201.
01:07
So, a few things to say about this,
01:11
uh,
01:11
obviously you can create hosts from the ESXi ISO image just the same way you would
01:18
a physical host.
01:21
For the purposes of this lab, I've created this kind of a setup.
01:23
This isn't something you would want to do in a production environment, for performance reasons.
01:29
It's not really a supported configuration as far as VMware is concerned, either,
01:34
but for the purposes of doing this lab
01:38
and showing how fault tolerance works,
01:40
Uh, I've decided to set up a cluster with these two
01:44
nodes,
01:45
So I reconfigured the cluster that we saw in the last lab
01:48
where we were working with high availability.
01:53
Okay, so the first thing I need to do is shut down my guest operating system.
01:59
And in order for
02:01
this nested ESXi environment
02:06
to work with fault tolerance, I need to make
02:08
some changes to the advanced options on the virtual machine.
02:15
And you can't do those changes until the virtual machine is actually shut down.
02:20
That's why we need to
02:21
go and power this guy down.
02:25
should be just about done.
02:29
All right. Now that the VM is powered down,
02:31
we can right click and go to edit settings.
02:35
First thing we need to look at is our CD/DVD drive.
02:38
We need to verify that it is set to Client Device.
02:42
Fault tolerance is not supported when Host Device
02:46
or a datastore ISO file is being referenced,
02:50
so make sure that's set to Client Device.
02:53
Then we have to make sure that Passthrough IDE is selected.
02:58
This is also required for fault tolerance,
03:01
especially in this nested
03:05
ESXi environment.
03:07
Now I'll go to my Options tab,
03:10
and under the Advanced area, I will select General,
03:19
and we'll go to our Configuration Parameters.
03:29
All right, so we have one parameter we have to change
03:31
first. That's
03:34
replay.allowFT, for allow fault tolerance.
03:38
We make these settings because of the way the lab is set up; if you're running
03:43
on multiple physical hosts in your cluster, you shouldn't have to make these changes
03:47
Now, I actually have to add two more parameters.
03:51
Actually, one of them is already here,
03:53
but
03:54
we'll see how that would be added. You just type in
03:59
replay.allowBTOnly
04:00
and make sure that parameter is set to true.
04:04
Then we'll click the add row button for the second parameter,
04:10
and that one is
04:12
replay.allow
04:14
FT, for allow fault tolerance,
04:16
also set to true.
04:20
Hmm, that didn't
04:21
work, right?
04:29
It is there. Okay, that was strange. It wasn't showing
04:32
at the top of the screen.
04:33
All right, so I've got my three replay parameters: allowBTOnly true,
04:39
allowFT true,
04:41
and replay.supported must also be true.
04:45
I'll click OK
04:47
and click
04:49
OK again.
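For reference, the three advanced parameters set above end up in the VM's .vmx configuration file as key/value pairs. A minimal sketch of the relevant lines (key names are taken from the lab; .vmx files store the values as quoted strings):

```
replay.supported = "true"
replay.allowFT = "true"
replay.allowBTOnly = "true"
```

On multiple physical hosts these overrides shouldn't be needed; they exist here only to let fault tolerance run inside the nested ESXi setup.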
04:50
Next, I need to check
04:53
my management network on both of my hosts
04:56
so that I can enable fault tolerance logging,
04:59
and the logging is basically sending the
05:01
memory changes within the VM
05:04
across the network
05:06
to the other host to provide the fault-tolerant
05:10
uh,
05:11
VM's memory on the other host.
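The logging traffic described here follows a record/replay pattern: the primary VM records its nondeterministic inputs and ships them over the fault tolerance logging network, and the secondary replays them to keep its state in lockstep. A toy sketch of that idea in Python (this is an illustration of the concept, not VMware code; the class and field names are invented for the example):

```python
import random

class PrimaryVM:
    """Records nondeterministic events so a peer can replay them."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.state = 0
        self.log = []          # stands in for the FT logging network

    def step(self):
        event = self.rng.randint(0, 9)   # a nondeterministic input
        self.log.append(event)           # "send" it to the secondary
        self.state += event              # apply it locally

class SecondaryVM:
    """Consumes the primary's log and deterministically replays it."""
    def __init__(self):
        self.state = 0

    def replay(self, log):
        for event in log:
            self.state += event          # same transition, same order

primary = PrimaryVM(seed=42)
for _ in range(5):
    primary.step()

secondary = SecondaryVM()
secondary.replay(primary.log)

# After replay the secondary's state matches the primary's, so it
# could take over transparently if the primary's host failed.
print(primary.state == secondary.state)  # True
```

Because only the event log crosses the network, the bandwidth cost tracks the rate of changes in the VM, which is exactly what the summary tab statistics later in this lab are showing.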
05:14
So we'll select our host, go to Configuration,
05:16
Networking.
05:20
I'll click Properties,
05:25
and I want to go to my
05:30
vMotion network.
05:32
I've got my VM network, but the vMotion network is the one for all of my management traffic, so this is the one I'm going to select.
05:39
There we can see that fault tolerance logging is disabled,
05:42
so I'll click the Edit button,
05:44
check the box for fault tolerance logging,
05:47
and go ahead and click Enable.
05:49
Get a message
05:51
saying that it's not recommended
05:55
to use the same port group or physical NIC
05:58
for fault tolerance logging and for vMotion. So go ahead and say yes to this. That's just an advisory
06:04
statement to let you know that your performance would be better if you
06:09
separated
06:10
the
06:12
Oops, it didn't take my change. If you separate
06:17
those functions using a different interface.
06:23
Now we'll go to my second host,
06:27
Properties.
06:29
Look at my management network
06:35
and I will
06:38
edit this.
06:40
Actually, this network should be called vMotion.
06:45
Better to have that matching.
06:47
And I want to
06:48
enable fault tolerance logging,
06:51
just like I did on the first host.
06:54
Yes, I do want to continue.
07:02
All right, now I know I have VMkernel ports
07:05
in
07:08
port groups named vMotion on each host.
07:13
Now what I want to do is activate fault tolerance on the actual VM. So I can right-click,
07:18
go down to the Fault Tolerance
07:20
menu item, and then Turn On Fault Tolerance.
07:25
I get a message saying that the memory reservation of the VM will be changed to the memory size of the VM and maintained until fault tolerance is turned off. Do you want to turn it on? I say yes.
07:38
So that'll take a little while to run.
07:41
You'll notice that the secondary VM appears for a short period of time.
07:46
It does actually
07:47
create that as
07:49
part of the functionality while we're enabling fault tolerance.
07:55
If I go to my cluster,
07:57
I can see I've got both VMs here,
08:00
both powered off at this point.
08:03
If I look at each host, I've got the secondary running on
08:05
dot 200,
08:09
and the primary is running on dot
08:13
201.
08:13
All right, so our next task
08:16
is to go ahead and power on the VM.
08:18
All right. Before we power on the VM,
08:22
we'll click the Summary tab, and now we can see we have a new Fault Tolerance area in the summary.
08:28
You can see that it's not protected right now, mostly because it's not running.
08:33
We know where the secondary location is.
08:37
And we can watch this area here for changes once we decide to power up the V M.
08:43
All right, so I got it selected. I'll go ahead and click play.
08:52
I can see that the secondary VM is now starting up.
09:00
We'll watch this change.
09:03
Now it says that it is protected,
09:07
shows me where my secondary location is,
09:11
and both VMs are powered on.
09:15
An interesting bit now would be to open a console for each of these side by side.
09:37
And as the VM eventually boots,
09:41
we'll be able to watch
09:43
both Windows update simultaneously.
09:50
It resized the console on me there. I didn't do that.
10:07
All right, so I'm gonna go ahead and log in.
10:11
Okay, so both VMs are still
10:13
powering up
10:16
Or actually, I have logged in, but I'm waiting for the login to
10:18
complete,
10:20
and we can see that
10:22
we've got our
10:24
protected status.
10:26
I can see how much CPU that the secondary VM is consuming, how much memory it's consuming.
10:31
And I can see the amount of
10:35
network bandwidth being utilized to send the memory changes from the primary to the secondary,
10:41
as well as
10:43
an interval. This vLockstep interval shows me
10:46
how much time delay there is
10:48
between one VM and the next
10:52
As the fault tolerance
10:54
functionality is being used.
10:58
Now that I've logged in
11:01
to my systems, I can see,
11:03
for instance, any changes I make on the primary VM.
11:07
I'll open a command prompt.
11:09
I can look at my IP address.
11:16
We can see that both VMs are
11:22
operational.
11:24
Now,
11:26
in order to
11:28
simulate a failover,
11:30
what we can do is right-click on the fault tolerant VM,
11:35
and basically what we're doing is
11:39
simulating one of the two hosts in the cluster going away.
11:45
You could do that by shutting down a host, but this is actually much quicker.
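The failover we're about to test can be sketched as a small state transition: the secondary is promoted to primary, and a fresh secondary is respawned on a surviving host when one is available. A hypothetical Python illustration (the host names and the placement rule are assumptions for the example, not VMware internals; in this two-node lab there is no third host to respawn on, which is why the VM will show as not protected afterward):

```python
# Toy model of an FT pair: on host failure, promote the secondary
# and try to place a new secondary on a surviving host.
def ft_failover(pair, hosts, failed_host):
    survivors = [h for h in hosts if h != failed_host]
    if pair["primary_host"] == failed_host:
        # The secondary takes over as the new primary.
        pair["primary_host"] = pair["secondary_host"]
    # Respawn the secondary anywhere except the new primary's host;
    # with no such host available, the VM runs unprotected.
    pair["secondary_host"] = next(
        (h for h in survivors if h != pair["primary_host"]), None
    )
    return pair

hosts = ["esxi-200", "esxi-201", "esxi-202"]   # hypothetical names
pair = {"primary_host": "esxi-201", "secondary_host": "esxi-200"}

pair = ft_failover(pair, hosts, failed_host="esxi-201")
print(pair["primary_host"])    # esxi-200 (old secondary promoted)
print(pair["secondary_host"])  # esxi-202 (new secondary placed)
```

With only the two nested hosts from this lab in the `hosts` list, the same call would leave `secondary_host` as `None`, matching the unprotected state we see in the Summary tab after the test.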
11:50
So we can test failover,
11:56
go back to my Summary tab for the VM.
12:05
And if I look at my two consoles,
12:07
I can see that
12:09
one of the VMs, now the secondary,
12:13
has been shut down
12:16
because that
12:16
host
12:20
is effectively removed
12:22
from the fault tolerance configuration.
12:28
I can see that the VM is no longer protected because I'm only running on one host.
12:33
You can't
12:33
have both halves of a fault tolerant VM running on the same host; that wouldn't give you a whole lot of benefit.
12:41
And then the last thing we can do,
12:43
even while the VM is running, not a problem, is to go in. And
12:48
I actually have to wait until this process is finished. But you can right-click
12:52
and just disable fault tolerance
12:54
if you want to turn that back off.
12:58
I got a bit of an alert, probably due to, ah, memory usage.
13:05
It's still not letting me do it. We'll just wait a moment for that.
13:09
Okay, so it took a couple of minutes for the failover
13:13
test
13:15
to complete.
13:16
Now we can go ahead and turn
13:20
fault tolerance off.
13:24
It says that we will remove our protection. We want that to happen, so we'll say yes.
13:31
And we notice that the Fault Tolerance pane on the Summary tab is no longer visible. Okay, that concludes Lab number 20. In the next lab,
13:39
we'll be working with the Distributed Resource Scheduler, or DRS.
13:43
So we're going to enable DRS for our cluster
13:46
and then
13:48
create a load imbalance so that we can test the DRS functionality.
13:54
See you in the next lab.