Time
7 hours 58 minutes
Difficulty
Intermediate
CEU/CPE
8

Video Transcription

00:00
Welcome back. In this episode, we're going to take a look at a load balancer demo
00:04
In our demo, we're going to look at creating the load balancer, configuring the load balancer with backend pools, health probes, and rules, and then finally demo the load balancer with a couple of virtual machines running some web servers on them. Let's jump out to the Azure portal.
00:20
Back here in our Azure portal, I want to show you a couple of things before we jump into creating our Azure load balancer. First, I want to show you a couple of virtual machines I have deployed here. This first one is named Web02; you can see it has a public IP address of 137.117.110.173.
00:40
I have a second virtual machine named Web03,
00:45
and it has a public IP address of 52.168.171.72.
00:51
On both of these virtual machines I've installed IIS to act as our web server, and on each one I've added a static image to show that the website is up and running.
01:02
Here in the first tab we can see Web02 with our public IP address on it.
01:07
And here we have Web03 with its public IP address. Each of these servers is in an availability set, and we're going to use that as the target for our load balancer.
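As a side note, if you wanted to script this web server setup instead of doing it by hand, a rough PowerShell sketch run inside each virtual machine might look like the following (the page content here is just a placeholder to tell the two servers apart, not the exact content used in the demo):

    # Install the IIS web server role (Windows Server)
    Install-WindowsFeature -Name Web-Server -IncludeManagementTools

    # Drop a simple default page that identifies this server (placeholder content)
    Set-Content -Path 'C:\inetpub\wwwroot\index.html' -Value "<h1>$env:COMPUTERNAME</h1>"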
01:19
Let's go back to the portal.
01:23
Let's go check out load balancers.
01:27
Let's create our first load balancer.
01:34
As always, we'll select our resource group.
01:38
We'll give our load balancer a name
01:42
and select its region.
01:47
And here we have the options that we talked about back in the slides in the previous episode. First, we have the type: an internal or a public load balancer.
01:56
Again, public load balancers will have a public IP address on them, whereas internal load balancers are used for traffic just inside our virtual networks.
02:05
In this case, since we're going to be using this for our web servers here, I'll leave it at public. And then we have our SKU options of Basic and Standard.
02:13
For this demo I am just going to use Basic, but it's generally recommended now that you just go ahead and use the Standard SKU.
02:21
Since we're creating a public load balancer, it will need a public IP address. I don't have an existing public IP address resource, so we're going to have it create a new one,
02:29
and we'll give it a name.
02:34
For now we'll leave the public IP address as dynamic, but if we wanted to make sure we keep the same public IP address every time, we would choose static.
02:42
Let's go ahead and review and create.
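If you prefer scripting over the portal, the same kind of load balancer and public IP could be created with Azure PowerShell along these lines (the resource group, names, and region below are placeholders, not the ones from the demo):

    # Create a Basic-SKU public IP with dynamic allocation
    # (use -AllocationMethod Static to keep the same address every time)
    $pip = New-AzPublicIpAddress -ResourceGroupName 'rg-demo' -Name 'pip-lb-web' `
        -Location 'eastus' -Sku Basic -AllocationMethod Dynamic

    # Attach it to a frontend IP configuration and create the public load balancer
    $frontEnd = New-AzLoadBalancerFrontendIpConfig -Name 'fe-public' -PublicIpAddress $pip
    New-AzLoadBalancer -ResourceGroupName 'rg-demo' -Name 'lb-web' -Location 'eastus' `
        -Sku Basic -FrontendIpConfiguration $frontEnd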
02:51
Let's go check out our new load balancer.
02:54
First, let's check out some of its settings by looking at the frontend IP configuration.
03:00
Here we have the frontend public IP address that we assigned during its creation. If we wanted to, we could add another one, or go into this configuration
03:12
and choose a new public IP address,
03:15
or go ahead and create a new one to assign.
03:17
Next, let's create a backend pool for our load balancer to use to direct traffic to.
03:25
Since this is going to our production web servers, I'm just going to give it an appropriate name,
03:32
and next we have to choose what we're going to associate it to.
03:36
We have the options we mentioned in the slides: an availability set, a single virtual machine, or a virtual machine scale set.
03:42
I'll select availability set.
03:46
And here I have the availability set that I already created with our web servers behind it, so I'll select it.
03:53
After choosing the availability set, we have to choose the targets inside of it.
03:59
So here is our Web02,
04:01
and we'll choose its private IP address.
04:06
I'll also add Web03 and its private IP address.
04:13
That's all the servers in the availability set for us, so let's click OK.
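The backend pool could also be scripted with Azure PowerShell, roughly like this (the load balancer, NIC, and resource group names are placeholders; with the Basic SKU the pool membership is set on each VM's network interface):

    # Add an empty backend pool to the existing load balancer and save the change
    $lb = Get-AzLoadBalancer -ResourceGroupName 'rg-demo' -Name 'lb-web'
    Add-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name 'pool-web' | Set-AzLoadBalancer

    # Join a VM by pointing its NIC's IP configuration at the pool (repeat per web server)
    $lb  = Get-AzLoadBalancer -ResourceGroupName 'rg-demo' -Name 'lb-web'
    $nic = Get-AzNetworkInterface -ResourceGroupName 'rg-demo' -Name 'web02-nic'
    $nic.IpConfigurations[0].LoadBalancerBackendAddressPools.Add($lb.BackendAddressPools[0])
    Set-AzNetworkInterface -NetworkInterface $nic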
04:21
And if we refresh our backend pool here
04:26
and expand the pool that we created,
04:29
we can see both web servers are in it and are currently running, and this shows their private IP addresses. Now that we have a backend pool, let's go create a health probe so we know if one of these web servers happens to have an issue and stops responding.
04:46
Here in our health probe, I'm going to give it a name. For our purposes, we're going to check port 80 and make sure it's up and running.
04:55
We're going to leave the protocol at TCP, and our port is already set at 80. Next we have an interval, and this is the amount of time between each health probe attempt, basically how often it's going to check to make sure port 80 is open.
05:08
and then next we have our unhealthy threshold.
05:11
So if we have two tests or probes that fail consecutively, we're going to consider the virtual machine unresponsive and unhealthy and take it out of the pool temporarily. This means with an interval of five seconds between each probe
05:28
and two probes that need to fail, it's going to take 10 seconds before the backend pool will stop sending requests to the failed server.
05:35
Let's go ahead and click OK.
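Scripted with Azure PowerShell, that health probe might look something like this (same placeholder names as before):

    # TCP probe on port 80: check every 5 seconds, mark unhealthy after 2 consecutive failures
    $lb = Get-AzLoadBalancer -ResourceGroupName 'rg-demo' -Name 'lb-web'
    Add-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name 'probe-http' `
        -Protocol Tcp -Port 80 -IntervalInSeconds 5 -ProbeCount 2 | Set-AzLoadBalancer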
05:40
Now that we have a backend pool and health probe available, let's go create a load balancing rule to bring this all together.
05:48
We'll need to give our load balancing rule a name.
05:51
This is going to run over IPv4,
05:57
and our frontend IP address is going to be the one we've already created.
06:02
We're going to continue using TCP. We're going to be looking for requests coming over port 80, and we're going to send them to port 80 on the backend, inside of our pool. Next we select the backend pool that we're going to be sending the requests to.
06:17
We're going to associate it with our port 80 health probe.
06:20
And next, we're going to select our session persistence.
06:25
This persistence, or affinity, is going to determine whether the client is sent to the same virtual machine in the backend pool or not.
06:30
If you remember from our last episode, this is the five-tuple hash distribution method that we talked about, which uses the source IP address and port, the destination IP address and port, and the protocol.
06:44
For the demo, I'm going to select Client IP and protocol.
06:49
Next, we have idle timeout, which is how long we want to keep the connection open without relying on the client to send keep-alive messages. So basically, after four minutes, if we haven't heard from the client, we're going to consider it a new connection or session. You can see we've pulled everything together: we have the frontend IP address of our load balancer,
07:08
we have our backend pool that we created,
07:10
and our health probe, along with our session persistence or affinity. We'll click OK to create our load balancing rule.
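For reference, here's a rough Azure PowerShell equivalent of that rule; note that the portal's Client IP and protocol persistence corresponds to the SourceIPProtocol load distribution mode (names are placeholders again):

    # Tie the frontend, backend pool, and probe together into a port 80 load balancing rule
    $lb = Get-AzLoadBalancer -ResourceGroupName 'rg-demo' -Name 'lb-web'
    Add-AzLoadBalancerRuleConfig -LoadBalancer $lb -Name 'rule-http' `
        -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
        -BackendAddressPool $lb.BackendAddressPools[0] `
        -Probe $lb.Probes[0] `
        -Protocol Tcp -FrontendPort 80 -BackendPort 80 `
        -LoadDistribution SourceIPProtocol -IdleTimeoutInMinutes 4 | Set-AzLoadBalancer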
07:21
Now that the rule is complete, let's go back and look at our load balancer overview.
07:27
Here we have the public IP address of our load balancer. Let's select the copy icon next to it to put it on our clipboard.
07:34
And if we open up a new tab
07:38
and go to this IP address,
07:45
we'll see that it hit Web02,
07:46
with an IP address of 137.117.110.173,
07:51
which you can see is different from the public IP address of our load balancer.
07:57
So next, I've already RDPed into our web server here, and you can see in our PowerShell window I've displayed its computer name.
08:05
So what we're going to do is stop the web server,
08:09
and after 10 or 15 seconds, we should be able to refresh this page and see the icon or display image for Web03.
08:30
There we have it on the second refresh. We're still hitting the same public IP address, but since Web02 is now down and unhealthy, it has redirected us to Web03.
08:43
Now, in the back end, I went and restarted the web service on Web02, and if we refresh our public IP address to the load balancer,
08:52
we'll see that we're redirected back to it after it has come back up healthy.
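If you want to reproduce this failover test yourself, one simple way is to stop and later restart the IIS service from a PowerShell session on the VM, for example:

    # Stop the IIS web service so the port 80 probe starts failing
    Stop-Service -Name W3SVC

    # Later, bring it back so the probe marks the server healthy again
    Start-Service -Name W3SVC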
08:58
Back here in the Azure portal, let's take a look at one more concept with our inbound NAT rules.
09:03
As mentioned in the last episode, this is where you can take traffic on an incoming port and translate it to a different port. So let's go ahead and create an inbound NAT rule.
09:13
First, we need to give our NAT rule a name, and what I'm going to do is set up RDP to
09:18
our Web02 virtual machine.
09:22
We already have our frontend IP selected.
09:26
And for the service, you might be tempted to select the RDP service, but what we're setting here is what's going to be on the public or external interface of the load balancer.
09:35
So, for example,
09:37
I might set this to port 50000
09:41
and then select the target virtual machine of Web02,
09:46
select its private IP address, and then we're going to do a custom port mapping,
09:52
and our target port isn't going to be 50000; it's going to be our RDP port of 3389. This means the load balancer is going to be looking for traffic coming in over port 50000, and when it sees that, it's going to redirect it to Web02 inside of our availability set and to our target port of 3389.
10:16
We now have our inbound NAT rule on the load balancer,
10:20
which means we can hit this public IP address of our load balancer over port 50000 and it's going to redirect to Web02 on port 3389, allowing us to RDP to our virtual machine through the load balancer.
10:35
And you could set up a rule for each of your virtual machines that are in the backend pool and have different incoming ports for each one.
10:43
This would allow you to access each one through the load balancer without exposing it directly to the Internet.
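A scripted version of this inbound NAT rule could look roughly like the following; as with the backend pool, the rule is then attached to the target VM's NIC (placeholder names once more):

    # Forward frontend port 50000 on the load balancer to RDP (3389) on the backend VM
    $lb = Get-AzLoadBalancer -ResourceGroupName 'rg-demo' -Name 'lb-web'
    Add-AzLoadBalancerInboundNatRuleConfig -LoadBalancer $lb -Name 'nat-rdp-web02' `
        -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
        -Protocol Tcp -FrontendPort 50000 -BackendPort 3389 | Set-AzLoadBalancer

    # Associate the NAT rule with Web02's NIC so the forwarded traffic has somewhere to land
    $lb  = Get-AzLoadBalancer -ResourceGroupName 'rg-demo' -Name 'lb-web'
    $nic = Get-AzNetworkInterface -ResourceGroupName 'rg-demo' -Name 'web02-nic'
    $nic.IpConfigurations[0].LoadBalancerInboundNatRules.Add($lb.InboundNatRules[0])
    Set-AzNetworkInterface -NetworkInterface $nic

    # Then connect with: mstsc /v:<load balancer public IP>:50000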
10:48
That does it for our demo. Let's jump back to the slides and wrap this up.
10:52
Coming up next, we're going to take a look at another option for load balancing and working with our applications, with an introduction to application gateways.
11:01
See you in the next episode.

Up Next

AZ-300: Microsoft Azure Architect Technologies

Azure Solution Architects are responsible for taking business requirements and turning them into solutions. This course provides an introduction into Azure, Microsoft’s cloud platform.

Instructed By

Jeff Brown
Instructor