Welcome back. In this episode, we're going to take a look at a load balancer demo. We're going to look at creating the load balancer, configuring it with backend pools, health probes, and rules, and then finally demo the load balancer with a couple of virtual machines running web servers on them. Let's jump out to the Azure portal.
Back here in our Azure portal, I want to show you a couple of things before we jump into creating our Azure load balancer. First, I have a couple of virtual machines deployed here. This first one is named Web02; you can see it has a public IP address of 137.117.110.173.
I have a second virtual machine named Web03, and it has a public IP address of 52.168.171.72.
On both of these virtual machines I've installed IIS to act as our web server, and on each one I've added a static image to show that the website is up and running.
Here in the first tab we can see Web02 with our public IP address on it.
And here we have Web03 with its public IP address. Each of these servers is in an availability set right now, and we're going to use that as the target for our load balancer.
Let's go back to the portal.
Let's go check out load balancers.
Let's create our first load balancer. As always, we'll select our resource group, give our load balancer a name, and select its region.
And here we have the options that we talked about in the slides in the previous episode. First, we have the type: an internal or a public load balancer. Again, public load balancers will have a public IP address on them, while internal load balancers are used for traffic just inside our virtual networks. In this case, since we're going to be using this for our web servers, I'll leave it at public. Then we have our SKU options of Basic and Standard. For this demo I'm just going to use Basic, but it's generally recommended now that you go ahead and use the Standard SKU.
Since we're creating a public load balancer, it will need a public IP address. I don't have an existing public IP address resource, so we're going to have it create a new one, and we'll give it a name. For now we'll leave the public IP address as dynamic, but if we wanted to make sure we kept the same public IP address every time, we would choose static.
Let's go ahead and review and create.
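If you'd rather script this than click through the portal, roughly equivalent Azure CLI commands might look like the sketch below. The resource group, region, and resource names are assumptions for illustration, not values from the demo.

```shell
# Create a public IP (Dynamic here; use --allocation-method Static to keep
# the same address every time), then a Basic-SKU public load balancer
# that uses it. All names below are made up for this example.
az network public-ip create --resource-group rg-lb-demo \
  --name pip-lb-web --allocation-method Dynamic

az network lb create --resource-group rg-lb-demo --name lb-web \
  --sku Basic --public-ip-address pip-lb-web \
  --frontend-ip-name feWeb --backend-pool-name bePoolWeb
```

Running this requires an Azure subscription and `az login`; the portal wizard is doing the same thing behind the scenes.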
Let's go check out our new load balancer.
First, let's check out some of its settings by looking at the frontend IP configuration. Here we have the frontend public IP address that we assigned during its creation. If we wanted to, we could add another one, or go into this configuration and choose a different public IP address, or go ahead and create a new one to assign.
Next, let's go create a backend pool for our load balancer to direct traffic to.
Since this is going to our production web servers, I'm just going to give it an appropriate name. Next we have to choose what we're going to associate it to. We have the options we mentioned in the slides: an availability set, a single virtual machine, or a virtual machine scale set. I'll select availability set, and here I have the availability set that I already created with our web servers behind it, so it's selected.
After choosing the availability set, we have to choose the targets inside of it. So here is our Web02; we'll choose its private IP address. And we'll add Web03 and its private IP address. That's all the servers in the availability set for us, so let's click OK.
And if we refresh our backend pool here and expand the pool that we created, we can see both web servers are in it and are currently running, and it shows their private IP addresses. Now that we have a backend pool, let's go create a health probe, so we know if one of these web servers happens to have an issue and stops responding.
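If you were scripting this step, backend pool membership is set on each VM's NIC IP configuration. A hedged CLI sketch, with NIC names, ipconfig names, and resource names assumed:

```shell
# Add each web server's NIC ip-configuration to the backend pool.
# NIC names, ipconfig name, and resource names are assumptions.
for nic in web02-nic web03-nic; do
  az network nic ip-config address-pool add \
    --resource-group rg-lb-demo \
    --nic-name "$nic" \
    --ip-config-name ipconfig1 \
    --lb-name lb-web \
    --address-pool bePoolWeb
done
```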
Here in our health probe, I'm going to give it a name. For our purposes, we're going to check port 80 and make sure it's up and running. We're going to leave the protocol at TCP, and our port is already set to 80. Next we have the interval, and this is the amount of time between each health probe attempt. Basically, how often is it going to check to make sure port 80 is open?
And then next we have our unhealthy threshold. If we have two tests or probes that fail consecutively, we're going to consider the virtual machine unresponsive and unhealthy and take it out of the pool temporarily. So with an interval of five seconds between each probe, and two probes that need to fail, it's going to take about 10 seconds before the backend pool stops sending requests to the failed server.
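That detection math can be written out directly; the numbers are the interval and threshold chosen in the demo:

```shell
# Worst-case time to detect a failed server is roughly
# probe interval x unhealthy threshold.
interval=5    # seconds between probe attempts
threshold=2   # consecutive failures before the VM is marked unhealthy
detection=$((interval * threshold))
echo "Failed server pulled from rotation after ~${detection} seconds"
```

A lower interval or threshold means faster failover, at the cost of more probe traffic and a higher chance of flagging a briefly busy server as unhealthy.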
Let's go ahead and click OK.
Now that we have a backend pool and health probe available, let's create a load balancing rule to bring this all together.
We'll need to give our load balancing rule a name. This is going to run over IPv4, and our frontend IP address is going to be the one we already created. We're going to continue using TCP. We're going to be looking for requests coming in over port 80, and we're going to send them to port 80 on the backend, inside of our pool. Next we select the backend pool that we're going to be sending the requests to, and we'll associate the rule with our port 80 health probe.
And next, we're going to select our session persistence. This persistence, or affinity, determines whether a client is sent to the same backend server or not. You'll remember from our last episode, this is the five-tuple hash distribution method we talked about, which uses the source IP address and port, the destination IP address and port, and the protocol. For the demo, I'm going to leave session persistence at None, which uses all five of those hash inputs.
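As a toy illustration (this is not Azure's actual algorithm, and the backend names are just the demo's VMs), hashing the whole five-tuple means a given flow always lands on the same backend, while different flows spread across the pool:

```shell
# Toy five-tuple hash: deterministic, so one flow maps to one backend.
pick_backend() {
  # $1..$5: source IP, source port, destination IP, destination port, protocol
  hash=$(printf '%s:%s:%s:%s:%s' "$1" "$2" "$3" "$4" "$5" | cksum | cut -d' ' -f1)
  if [ $((hash % 2)) -eq 0 ]; then echo Web02; else echo Web03; fi
}

# The same tuple always yields the same backend; a new source port
# (a new connection) may hash to the other server.
pick_backend 203.0.113.10 49152 137.117.110.173 80 tcp
pick_backend 203.0.113.10 49153 137.117.110.173 80 tcp
```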
Next, we have the idle timeout, which is how long we want to keep the connection open without relying on the client to send keep-alive messages. Basically, after four minutes, if we haven't heard from the client, we're going to consider it a new connection or session. So you can see we've pulled everything together: we have the frontend IP address of our load balancer, the backend pool that we created, and our health probe, along with our session persistence or affinity. Let's click OK to create our load balancing rule.
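The CLI equivalents of the probe and rule we just built might look like this sketch; resource names are assumptions carried over for illustration:

```shell
# Health probe: TCP port 80, checked every 5 seconds, 2 consecutive
# failures marks the VM unhealthy.
az network lb probe create --resource-group rg-lb-demo --lb-name lb-web \
  --name hpPort80 --protocol tcp --port 80 --interval 5 --threshold 2

# Load balancing rule tying frontend port 80 to backend port 80 through
# the pool, with the probe attached. Default distribution is the
# five-tuple hash (no session persistence).
az network lb rule create --resource-group rg-lb-demo --lb-name lb-web \
  --name lbrHttp80 --protocol Tcp --frontend-port 80 --backend-port 80 \
  --frontend-ip-name feWeb --backend-pool-name bePoolWeb \
  --probe-name hpPort80 --load-distribution Default
```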
Now that the rule is complete, let's go back and look at our load balancer overview. Here we have the public IP address of our load balancer; let's select the copy icon to put it on our clipboard.
And if we open up a new tab and go to this IP address, we'll see that it hit Web02, with an IP address of 137.117.110.173, which you can see is different from the public IP address of our load balancer.
So next, what we're going to do: I've already RDPed into our web server here, and you can see in our PowerShell window I've displayed its computer name.
So what we're going to do is stop the web server, and after 10 or 15 seconds, we should be able to refresh this page and see the display image for Web03.
There we have it on the second refresh. We're still hitting the same public IP address, but since Web02 is now down and unhealthy, it has redirected us to Web03.
Now, in the back end, I went and restarted the web service on Web02, and if we refresh our public IP address to the load balancer, we'll see that we're redirected back to it after it has come back up healthy.
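One way to watch this failover from a shell instead of a browser is a simple curl loop against the load balancer's public IP. The address below is the demo's; substitute your own, and note this assumes each server's page contains its own name (as the demo's static images imply):

```shell
# Poll the load balancer every few seconds and print which backend
# answered. While Web02's web service is stopped, every response
# should come from Web03.
for i in $(seq 1 10); do
  curl -s http://137.117.110.173/ | grep -o 'Web0[23]' | head -n 1
  sleep 3
done
```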
Back here in the Azure portal, let's take a look at one more concept with our inbound NAT rules. As mentioned in the last episode, this is where you can take traffic on an incoming port and translate it to a different port. So let's go ahead and create an inbound NAT rule.
First, we need to give our NAT rule a name, and what I'm going to do is set up RDP to our Web02 virtual machine. We already have our frontend IP selected.
And for the service, you might be tempted to select the RDP service, but what we're setting here is what's going to be on the public or external interface of the load balancer.
So, for example, I might set this to 50,000, then select the target virtual machine of Web02, select its private IP address, and then we're going to do a custom port mapping. Our target port isn't going to be 50,000; it's going to be our RDP port of 3389. This means the load balancer is going to be looking for traffic coming in over port 50,000, and when it sees that, it's going to redirect it to Web02 inside of our availability set, to our target port of 3389.
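Scripted, the same inbound NAT rule might look like the following sketch; as before, the resource and NIC names are assumptions:

```shell
# Inbound NAT rule: frontend port 50000 -> backend port 3389 (RDP).
az network lb inbound-nat-rule create --resource-group rg-lb-demo \
  --lb-name lb-web --name natRdpWeb02 --protocol Tcp \
  --frontend-port 50000 --backend-port 3389 --frontend-ip-name feWeb

# Bind the rule to Web02's NIC ip-configuration.
az network nic ip-config inbound-nat-rule add --resource-group rg-lb-demo \
  --nic-name web02-nic --ip-config-name ipconfig1 \
  --lb-name lb-web --inbound-nat-rule natRdpWeb02
```

From a Windows client you could then connect with `mstsc /v:<load balancer IP>:50000`.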
We now have our inbound NAT rule on the load balancer, which means we can hit the public IP address of our load balancer over port 50,000 and it will redirect to Web02 on port 3389, allowing us to RDP to our virtual machine through the load balancer.
And you can set up a rule for each of your virtual machines in the backend pool, with a different incoming port for each one. This would allow you to access each one through the load balancer without exposing it directly to the Internet.
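That per-VM pattern can be sketched as a loop that hands each backend VM its own frontend port; the names and starting port are assumptions:

```shell
# One inbound NAT rule per VM, each with a distinct frontend port
# mapping to RDP (3389) on that VM.
port=50000
for vm in Web02 Web03; do
  az network lb inbound-nat-rule create --resource-group rg-lb-demo \
    --lb-name lb-web --name "natRdp${vm}" --protocol Tcp \
    --frontend-port "$port" --backend-port 3389 --frontend-ip-name feWeb
  port=$((port + 1))
done
```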
That does it for our demo. Let's jump back to the slides and wrap this up. Coming up next, we're going to take a look at another option for load balancing our applications, with an introduction to application gateways.
See you in the next episode.
AZ-301 Microsoft Azure Architect Design