Hello and welcome back to Cybrary's Microsoft Azure Administrator AZ-103 course. I'm your instructor, Will Carlson, and this is Episode 44: Azure Load Balancer.
In today's episode, we're going to talk about the many configuration options available to us as administrators here in Azure relating to load balancers,
and we're going to talk about that as we configure a load balancer here in the portal.
To get started, we're going to step into the portal, come down here to Load Balancers, and select Add load balancer.
As with most resources here, we have to select a subscription and a resource group. I'm going to create a new resource group for our load balancer,
then name our load balancer as well,
and put it in the Central US region.
Our first option here has to do with the type of load balancer that it's going to be. Now, a public load balancer is most likely the one you're familiar with: a load balancer that takes traffic from the public Internet and balances it, or sends it through, to resources
on the internal network.
A good example of this would be a website that goes through a load balancer to multiple web servers that automatically scale using a scale set here in Azure to handle traffic load.
An internal load balancer is one that only routes internal traffic. That could be used for a web server that communicates with multiple database servers, whose traffic is load balanced across those database servers.
Now, we have a couple of SKUs available to us for load balancers here in Azure. The basic load balancer is going to be free.
It's also only going to support connections for up to 100 servers, and all of those virtual machines must be in either the same availability set or the same scale set.
A basic load balancer also does not support HTTPS, and it's going to be open to all traffic by default. If you want to filter that, you'll need to do it through the use of a network security group.
Now, a standard load balancer is going to be able to service up to 1,000 servers across any mix of availability sets and scale sets,
as long as they're in the same virtual network.
A standard load balancer is also going to support HTTPS health probes,
and it's going to have security closed by default. So if you want traffic to come into your standard load balancer, you're going to have to allow that through a network security group rule.
One last thing here has to do with the pricing of these two options: the basic load balancer is going to be free, whereas the standard load balancer is going to be priced based on the number of rules that you use and the amount of data that the load balancer processes.
We can come down here to public IP address and either create a new one or use an existing one. We're going to create a new one,
and we're going to set this IP address assignment to static, just so that the IP address doesn't change.
We don't need to add an IPv6 address, so we're all good there.
And then we could go ahead and select Review + create, but I want to point out some of the options available when we select Standard.
We'll see that the standard load balancer has to be associated with a standard public IP address,
so we don't have the option of whether that's dynamic or static: a standard public IP is static by design.
We also have the option now to select an availability zone.
We can either manually choose the availability zone that we want to deploy the load balancer to, or we can select zone-redundant, which means the load balancer is going to be resilient to the loss of any particular availability zone.
For this example, we're going to go back to Basic and select Review + create.
Once that validation passes, we'll go ahead and create that load balancer. Now that the load balancer is created, we can see more about it by clicking on the Load Balancers blade and clicking into the load balancer that we just created.
Now, there are going to be four things required for us to set up before our load balancer is going to work: the frontend IP configuration, the backend pool, the health probe, and the load balancing rule.
Before we get to that, I want to point out real quickly that a load balancer functions at Layer 4 of the OSI model, so it's going to be looking at TCP and UDP traffic to do its job.
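To make that Layer 4 idea concrete, here's a minimal sketch (an illustration, not Azure's actual implementation) of how a Layer 4 balancer can hash the TCP/UDP 5-tuple so that every packet of a given flow lands on the same backend. The IP addresses and ports here are made up for the example:

```python
import hashlib

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol, backends):
    """Hash the 5-tuple so every packet of one flow maps to one backend."""
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{protocol}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return backends[digest % len(backends)]

backends = ["10.0.0.4", "10.0.0.5"]
# Repeating the same flow always yields the same backend:
first = pick_backend("203.0.113.7", 50123, "52.1.2.3", 3389, "tcp", backends)
again = pick_backend("203.0.113.7", 50123, "52.1.2.3", 3389, "tcp", backends)
assert first == again
```

Note that the balancer never looks inside the payload; the decision is made entirely from addresses, ports, and protocol, which is exactly what "Layer 4" means here.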
Once we've deployed this load balancer, we do not have the ability to change the SKU. We've set this one as Basic, so we cannot change it to Standard. You need to know which of the SKUs you need before setting up and configuring the load balancer here in Azure.
Something else to consider is that the frontends and the backends of the load balancer need to be within the same virtual network.
Load balancer frontends cannot communicate across global virtual network peerings, so we cannot set up a load balancer in one region and expect it to communicate with resources across a global virtual network peering setup.
Now, to finish configuration of this load balancer, we're going to select into the frontend blade, and we'll see that
the IP address has already been created here. Let's click down into it to see what this is all about.
We can see that this IP address has not been assigned just yet.
We can go ahead and select the IP address that we'd like to use and save that, to essentially marry our load balancer to that IP address. This is the IP address that we're going to use to communicate with the load balancer.
Let's go back a level here.
Now we need to configure our backend pool, and the backend pool is going to be the set of resources that the load balancer is actually balancing traffic for.
So we can select Add here,
name this, and then select the resources that I'm going to associate this backend pool with.
I already have an availability set set up from a previous episode,
so we're going to go ahead and use that availability set.
Now I can see that I need to associate the network interfaces from this availability set with this load balancer, so I simply select Add here.
I'm going to select my AVSet01 virtual machine
and then select the IP address of the network interface card that I'd like to use. And remember,
virtual machines can have multiple network interface cards. I'm going to do the same thing here for the second virtual machine in my availability set
and select that network interface as well.
Now I can go ahead and select OK,
and what I've done is create the backend pool that this load balancer is going to be communicating with.
Moving along, I can set up the health probe that's going to be associated with this load balancer.
This health probe is going to communicate from the load balancer to the virtual machines in our setup, and as long as those virtual machines respond, the load balancer knows the machines are online.
We're going to leave this as TCP, but since these are Windows machines, we're going to use a health probe on port 3389. It's going to check every five seconds, and when two of these probes in a row fail, the load balancer is going to know that that particular virtual machine is offline.
At that point, the load balancer will not pass traffic to that virtual machine any longer.
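The interval-and-threshold behavior just described is simple enough to sketch. This is a simplified illustration of the idea (not Azure's code): each probe result is recorded, and two consecutive failures take the machine out of rotation, while a single success puts it back:

```python
class HealthProbe:
    """Track probe results; mark a backend down after N consecutive failures."""
    def __init__(self, unhealthy_threshold=2):
        self.unhealthy_threshold = unhealthy_threshold
        self.consecutive_failures = 0
        self.healthy = True

    def record(self, probe_succeeded):
        if probe_succeeded:
            self.consecutive_failures = 0
            self.healthy = True            # responding again: back in rotation
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.unhealthy_threshold:
                self.healthy = False       # stop sending traffic here
        return self.healthy

probe = HealthProbe()
probe.record(True)     # healthy
probe.record(False)    # one failure: still in rotation
assert probe.healthy
probe.record(False)    # second consecutive failure: out of rotation
assert not probe.healthy
```

The threshold matters because a single dropped packet shouldn't pull a healthy server out of service; requiring consecutive failures filters out that noise.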
I'm going to select OK. Now that our health probe is created, we can move on to the fourth and final step in configuring our load balancer, and that's setting up a load balancing rule. I can select Add here in this blade and name this rule.
For the most part, I can accept all of these defaults, but for our example I'm going to come down here and change the port to 3389.
That's going to be the frontend port that the load balancer is listening on for traffic. I'm also going to change the backend port for the backend pool to 3389 as well.
Then we can see down here the other option of relevance: the session persistence option.
What this is going to do is, whenever a client at a particular IP communicates with the load balancer, the load balancer is going to maintain session state for that client IP and make sure that that client continues to talk to a specific resource behind the load balancer.
This is going to be particularly useful for things like shopping carts,
or anywhere that state is going to matter across the connection.
You can also set this to Client IP and Protocol.
The other option here is the idle timeout: the session is going to be kept open without any communication for four minutes until the load balancer finally drops that session.
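Putting those two ideas together, here's a hypothetical sketch of client-IP persistence with an idle timeout. This is only an illustration of the behavior (the class name, round-robin fallback, and addresses are my own invention, not how Azure implements it): a client that comes back within the timeout is sent to the same backend it used before, and an idle entry is forgotten:

```python
import time

IDLE_TIMEOUT = 4 * 60  # seconds, matching the four-minute default discussed above

class AffinityTable:
    """Remember which backend each client IP was routed to, forgetting
    entries that have been idle longer than the timeout."""
    def __init__(self, backends):
        self.backends = backends
        self.sessions = {}    # client_ip -> (backend, last_seen)
        self.next_index = 0

    def route(self, client_ip, now=None):
        now = time.time() if now is None else now
        entry = self.sessions.get(client_ip)
        if entry and now - entry[1] < IDLE_TIMEOUT:
            backend = entry[0]                # sticky: reuse previous backend
        else:
            backend = self.backends[self.next_index % len(self.backends)]
            self.next_index += 1              # new or expired: pick the next one
        self.sessions[client_ip] = (backend, now)
        return backend

table = AffinityTable(["10.0.0.4", "10.0.0.5"])
first = table.route("203.0.113.7", now=0)
assert table.route("203.0.113.7", now=60) == first    # within timeout: sticky
```

After the idle timeout passes, the next request from that client is routed fresh, which is exactly why a shopping-cart style workload cares about both settings.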
We're going to leave all of these things as default.
I will mention, too, that the floating IP option has to do with particular SQL workloads, but for our example here it's not relevant, so I can select OK.
And that's the last step of setting up our load balancer here in Azure.
One other thing I'll point out while that deployment is finishing is the concept of an inbound NAT rule. Now, I'm sure you're familiar with NAT rules in your production life or even in your home network. A NAT rule basically says: when somebody tries to communicate with my IP address on a particular port, what am I going to do with that traffic?
So right now we're allowing traffic to port 3389 on the public IP of this load balancer,
and the load balancer is going to balance that traffic across the two virtual machines in our availability set.
But what happens if, as an administrator, I need to get to one of those particular servers to do maintenance on it from the public Internet?
Right now, I don't have a way to do that, but through an inbound NAT rule I can set that up. So, for example, if I wanted to get to server number one, I could go to port 3391, or any port of my choice; if I wanted to get to server two, I could go to port 3392.
That rule would route me through to the server of my choosing, instead of via the load balancing rule, where I would just get whichever server I happened to get.
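To illustrate the mapping just described (the port numbers are our example's choices, and the backend addresses are made up), an inbound NAT rule is essentially a per-port lookup from the frontend straight to one specific backend, bypassing the balancing logic:

```python
# Hypothetical inbound NAT table for our example:
# frontend port -> (specific backend VM, backend port)
inbound_nat_rules = {
    3391: ("10.0.0.4", 3389),  # always server one
    3392: ("10.0.0.5", 3389),  # always server two
}

def route_inbound(frontend_port):
    """Return the exact backend for a matching NAT rule, or None to fall
    back to the ordinary load-balancing rule."""
    return inbound_nat_rules.get(frontend_port)

assert route_inbound(3391) == ("10.0.0.4", 3389)
assert route_inbound(3389) is None  # no NAT rule: traffic is load-balanced instead
```

The key contrast with the load balancing rule is determinism: a NAT rule always lands on the one server it names, which is exactly what you want for targeted maintenance.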
Now that we're all set up, we can come back out here to the overview tab of our load balancer, and we can see the public IP address of our load balancer right here.
Now I can open Remote Desktop, navigate to the IP address of our load balancer, and click Connect. As soon as I log in via RDP and accept the certificate, you can see that I'm connecting to one of my RDP servers in my availability set, all via my load balancer.
So in today's episode, we talked about the concepts of using a load balancer, whether external or internal, public or private.
We talked about inbound NAT rules, which allow us as administrators to set a particular port that gets us to a specific resource behind the load balancer.
We talked about the way that a load balancer works and the four necessary components that we have to set up so that the load balancer works: the frontend IP, the load balancing rule, the health probe, and the backend pool.
And we also talked about maintaining session state, in case it's important for session state to be maintained through the load balancer as well. Coming up next, we're going to talk about a similar concept to a load balancer: Traffic Manager. Traffic Manager is a way for us as administrators to
intelligently route where in the Azure environment, or even outside of the Azure environment, our public Internet traffic ultimately terminates.
Thanks so much for joining me today, and I'm looking forward to the next episode.