Welcome back to Microsoft Azure Fundamentals.
This is Module 8, Azure Networking Services.
In this module, you'll learn what Azure virtual networks are and how you can use them in your application designs,
how to provide high availability and resiliency using Azure Load Balancer and Application Gateway,
and how to reduce latency for your global customers using Azure Traffic Manager.
We will also discuss a few networking concepts, like subnets and DNS, and how you can use them in Azure.
In this video, we'll take a look at two more network services in Azure that allow you to deploy highly available applications:
Load Balancer and Application Gateway.
We've talked about availability and resiliency before.
In our video about scale sets, we discussed how you can increase the availability of your application by adding an additional VM to serve web requests. So how do you configure those two VMs to respond to users' requests without the users knowing anything about them?
You use a device called a load balancer.
A load balancer is a device that distributes the traffic between systems in a pool or cluster.
In our particular case, we have two identical web servers and use the load balancer to distribute the traffic between them.
This is very helpful because if one of the machines fails, the other one can continue to serve traffic to the users.
You can have as many machines as you want behind a load balancer, and they don't even have to be identical, although having identical machines is the most common approach, as we saw when we discussed VM scale sets.
There are also different algorithms to distribute the traffic, but the one you'll see most often is called round-robin, where each new request is sent to the next machine in the pool.
Once all the machines have been iterated through, the next request is sent to the first one in the pool, and the cycle starts over again.
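The round-robin idea described above can be sketched in a few lines of Python. This is an illustration only, not how Azure Load Balancer is implemented; the IP addresses are made up for the example:

```python
from itertools import cycle

# Hypothetical pool of two identical backend web servers
# (private IP addresses, made up for illustration).
pool = ["10.0.1.4", "10.0.1.5"]

class RoundRobinBalancer:
    """Minimal round-robin distributor: each new request goes to the
    next server in the pool, wrapping back to the first one."""

    def __init__(self, servers):
        self._servers = cycle(servers)

    def next_server(self):
        return next(self._servers)

lb = RoundRobinBalancer(pool)
# Four consecutive requests alternate between the two servers.
targets = [lb.next_server() for _ in range(4)]
print(targets)  # -> ['10.0.1.4', '10.0.1.5', '10.0.1.4', '10.0.1.5']
```

With more servers in the pool, the same loop simply walks through all of them before wrapping around.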
The load balancer is a very specialized device that does one simple thing,
which allows it to handle high traffic loads and connect many servers behind it.
The load balancer can be exposed to the internet with a public IP address.
Users' requests get forwarded to it, and the load balancer then forwards each request to a web server using its private IP address.
This is called a public load balancer.
You can replicate the same approach for the other tiers of your application.
The only difference is that the load balancers for the application and data tiers do not have public IP addresses, only private ones.
Those are called private load balancers.
One important thing to know about load balancers is that you can specify the port for the traffic you want to balance.
For instance, you can configure port 80 for the public web tier load balancer,
port 8080 for the application tier, and port 3306 for the data tier.
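A load-balancing rule is essentially a mapping from a listening port to a backend pool. Here is a minimal sketch of that idea, with made-up IP addresses and the ports from the example above:

```python
# Hypothetical load-balancing rules: each rule maps a port the load
# balancer listens on to the pool of backend servers for that tier.
rules = {
    80:   ["10.0.1.4", "10.0.1.5"],  # public web tier (HTTP)
    8080: ["10.0.2.4", "10.0.2.5"],  # application tier (private)
    3306: ["10.0.3.4", "10.0.3.5"],  # data tier, e.g. MySQL (private)
}

def backend_pool(port):
    """Return the pool that handles traffic arriving on this port,
    or None if no rule matches."""
    return rules.get(port)

print(backend_pool(80))  # the web tier pool
print(backend_pool(22))  # None: no rule configured for this port
```

In Azure you define such rules in the load balancer's configuration rather than in code, but the mapping works the same way.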
Lastly, you can use another service called Domain Name System, or DNS, to map friendly names to IP addresses.
Using DNS, you can map the IP address of the web tier load balancer to a friendly name.
This way, users don't need to remember numeric IP addresses; they can use the friendly DNS name to access your application.
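Conceptually, DNS resolution is a lookup from a name to an address. This toy sketch uses a made-up domain and IP address; real DNS involves record types, zones, and a hierarchy of servers:

```python
# Hypothetical A records: friendly DNS names mapped to public IPs.
# "www.contoso.com" and the address below are made up for illustration.
dns_records = {
    "www.contoso.com": "52.168.10.20",  # web tier load balancer's public IP
}

def resolve(name):
    """Simplified DNS resolution: look up the address for a name."""
    return dns_records.get(name)

print(resolve("www.contoso.com"))  # -> 52.168.10.20
```

The user types the friendly name; the resolver returns the load balancer's public IP, and the browser connects to that address.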
Azure has two services that you can use for balancing traffic.
The first is Azure Load Balancer.
Azure Load Balancer is a fully managed service that you can use to balance Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) traffic.
This means you can balance not only web traffic but other traffic as well, like database traffic, SSH traffic, and so on.
Azure Load Balancer supports both inbound scenarios, where you balance traffic sent to your application, and outbound scenarios, where you balance traffic sent from your application to external systems.
It also provides low latency and high throughput and scales with your needs;
you can support millions of user requests with Azure Load Balancer.
The most important part is that you don't need to maintain any infrastructure.
You just configure the service and define the rules, and Azure manages the rest.
The other service that you can use to balance traffic in Azure is called Application Gateway.
Azure Application Gateway is designed for web applications only, which means you can load balance only HTTP or HTTPS traffic.
However, Application Gateway provides additional functionality that is very useful for web applications.
This includes a built-in web application firewall that monitors and blocks traffic to protect your application from malicious attacks like SQL injection, cross-site scripting, and denial-of-service attacks.
You can terminate SSL at the application gateway level to reduce encryption overhead, or configure it to provide end-to-end encryption for highly secure applications.
You can configure custom routing based on URL paths, and add, remove, or rewrite the HTTP headers of a request.
You can also configure session affinity to send requests from the same client to the same server and maintain the session.
Keep in mind, though, that this may impact the performance of your application;
it is typical for legacy applications but discouraged for modern ones.
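One common way to implement session affinity is to derive the target server deterministically from something stable about the client, such as its source IP address. This is a generic sketch of that idea, not Application Gateway's actual mechanism (which uses an affinity cookie); the addresses are made up:

```python
import hashlib

# Hypothetical backend pool (made-up private IP addresses).
servers = ["10.0.1.4", "10.0.1.5", "10.0.1.6"]

def affinity_target(client_ip):
    """Source-IP affinity sketch: hash the client's IP so the same
    client is always routed to the same backend server."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return servers[digest[0] % len(servers)]

# The same client always lands on the same server...
assert affinity_target("203.0.113.7") == affinity_target("203.0.113.7")
# ...while different clients may be spread across the pool.
print(affinity_target("203.0.113.7"))
```

The downside the transcript mentions follows directly from this: a "hot" client always hits the same server, so load can become uneven, and a server failure loses its clients' sessions.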
Now you know how you can scale your application and provide availability and resiliency using the load balancing services from Azure.