Welcome back. In this episode, we're going to take a look at application gateways with an Application Gateway demo.
Our objectives include creating an application gateway, and our second objective is to configure some of the features that we learned about in the last episode. Let's jump back to our Azure portal.
Here in our Azure portal, let's go ahead and create a resource
and search for Application Gateway.
The screen should look pretty familiar.
First, let's select a resource group.
Let's give our application gateway a name.
Select our region.
Let's go take a look at what our options are for our tiers.
We have the Standard and Standard V2, and the WAF and WAF V2. Remember, version 2 supports autoscaling, whereas if we select the Standard or WAF version 1, we have to specify the number of instances we want for the application gateway. Let's take a look at what that looks like real quick.
When we choose Standard, we just have an instance count and then a SKU size that we need to choose.
Remember from our slides, we have Small, Medium, and Large.
Let's go back and select Standard V2.
And if we want to enable autoscaling, we have to select the minimum number of instances we want to start with and then our maximum instances.
If we select zero as our minimum, it's always going to autoscale based on demand. For the demo, let's go ahead and select one,
and we'll just go with a max of two.
We also have the option to put our application gateway into an availability zone for higher redundancy. For the demo, I'm just going to leave the options as they are right now.
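The portal steps so far can also be sketched as a single Azure CLI command. This is a rough sketch, not the exact demo configuration; it assumes you're already logged in with an Azure subscription, and every resource name here (resource group, VNet, subnet, public IP) is a placeholder:

```shell
# Sketch: create a Standard_v2 application gateway with autoscaling
# enabled (minimum 1 instance, maximum 2). All names are placeholders.
az network application-gateway create \
  --resource-group demo-rg \
  --name demo-appgw \
  --location eastus \
  --sku Standard_v2 \
  --min-capacity 1 \
  --max-capacity 2 \
  --vnet-name demo-vnet \
  --subnet appgw-subnet \
  --public-ip-address demo-appgw-pip \
  --priority 100
```

Setting `--min-capacity 0` would correspond to the fully demand-driven autoscaling option mentioned above.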
Next, we need to configure the virtual network that the application gateway is going to reside in. Let's select our virtual network.
I'm going to choose the one from our load balancer demo,
and I have a subnet specifically for our application gateway.
Next, let's configure our frontend.
Here we can choose whether we want a public IP address or a private IP address, or if we need both.
Since this is going to be fronting our web servers, let's go ahead and choose Public,
and we need to create a new public IP address resource.
When creating this public IP address, you can see our SKU is going to be Standard, and it's going to give us a static IP address for the application gateway.
This means it doesn't matter if we stop or start the application gateway; we're going to retain the same public IP address for it.
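Creating that public IP from the CLI would look roughly like this; the names are placeholders I'm assuming for illustration:

```shell
# Sketch: a Standard-SKU public IP with a static allocation, matching
# what the portal wizard creates for a v2 application gateway.
az network public-ip create \
  --resource-group demo-rg \
  --name demo-appgw-pip \
  --sku Standard \
  --allocation-method Static
```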
Next, we need to select the backend pool, and I'm going to choose the web servers that we have configured.
Now, we could create our backend pool and not put any resources in it just yet. But since we're here and we have the web servers available, let's go ahead and add them.
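Adding a backend pool with servers in it can be sketched from the CLI as well; the pool name and the private IP addresses below are placeholders, not the demo's actual values:

```shell
# Sketch: create a backend pool on an existing application gateway and
# add two web servers to it by private IP address (placeholder values).
az network application-gateway address-pool create \
  --resource-group demo-rg \
  --gateway-name demo-appgw \
  --name web-backend-pool \
  --servers 10.0.1.4 10.0.1.5
```

Running the same command with no `--servers` argument would create the empty pool mentioned above, to be populated later.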
Let's continue with our configuration
Now for the configuration through this wizard: we already created our frontend IP address and our backend pools. Let's go ahead and create a routing rule.
Much like our load balancing rules, we're creating a rule for the incoming traffic and determining how we're going to route it to our backend target. Let's give our rule a name.
Let's go define the listener; this is where we're going to specify a port and an IP address for the incoming traffic.
We'll specify our public frontend IP address.
We'll stick with HTTP and port 80.
And for right now, for our listener type, we're just going to choose Basic for a single site. But do remember, application gateways can be configured to host multiple sites, and we can create different routing rules for each site.
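The listener and rule from this wizard page map to a few CLI calls. A rough sketch, assuming the gateway and pool names from earlier placeholders, and the default frontend IP configuration name the CLI assigns (`appGatewayFrontendIP`):

```shell
# Sketch: a frontend port, a basic (single-site) listener on port 80,
# and a routing rule tying the listener to the backend pool. With no
# --http-settings given, the rule uses the gateway's default settings.
az network application-gateway frontend-port create \
  --resource-group demo-rg --gateway-name demo-appgw \
  --name port-80 --port 80

az network application-gateway http-listener create \
  --resource-group demo-rg --gateway-name demo-appgw \
  --name basic-listener \
  --frontend-port port-80 \
  --frontend-ip appGatewayFrontendIP

az network application-gateway rule create \
  --resource-group demo-rg --gateway-name demo-appgw \
  --name web-rule --rule-type Basic \
  --http-listener basic-listener \
  --address-pool web-backend-pool \
  --priority 100
```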
Let's go double-check our backend targets.
We'll choose the backend pool that we created.
And we need to configure the HTTP settings that we're going to use when sending traffic to the backend.
We don't have any created right now, so let's click on Create new.
We'll continue to leave this at HTTP for the protocol and port 80.
Instead of using the hash mechanism from our load balancer, we have the option of using cookie-based affinity, which will keep a client on the same server throughout the session. We also have the option of connection draining, where we can stop new connections from being sent to a backend server,
wait until existing connections are gone, and then take down the backend server for maintenance.
We can also configure our request timeout.
Application gateways also have the option of overriding the host name as traffic passes through to the backend servers.
We don't need this for the demo, but let's go ahead and click on Add and create our settings here.
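These HTTP settings, including the cookie-based affinity and connection draining options just described, can be sketched from the CLI like so; the names and timeout values are illustrative placeholders:

```shell
# Sketch: HTTP settings with cookie-based affinity enabled and
# connection draining allowing up to 60 seconds for existing
# connections to finish before a server is removed from rotation.
# --timeout is the request timeout in seconds.
az network application-gateway http-settings create \
  --resource-group demo-rg --gateway-name demo-appgw \
  --name web-http-settings \
  --port 80 --protocol Http \
  --cookie-based-affinity Enabled \
  --connection-draining-timeout 60 \
  --timeout 30
```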
We can also configure path-based routing. Remember this from our slides, where we had our domain name with /pics, and we could route that to the server that hosted our pictures,
and then /documents or /docs, and we could redirect that traffic to the servers hosting our documents.
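The path-based routing from the slides corresponds to a URL path map. A sketch, where the pool and map names are placeholders I'm assuming:

```shell
# Sketch: a URL path map that sends /pics/* to one backend pool, plus
# an extra rule sending /docs/* to another pool on the same map.
az network application-gateway url-path-map create \
  --resource-group demo-rg --gateway-name demo-appgw \
  --name demo-path-map \
  --rule-name pics-rule \
  --paths "/pics/*" \
  --address-pool pics-pool \
  --http-settings web-http-settings

az network application-gateway url-path-map rule create \
  --resource-group demo-rg --gateway-name demo-appgw \
  --path-map-name demo-path-map \
  --name docs-rule \
  --paths "/docs/*" \
  --address-pool docs-pool \
  --http-settings web-http-settings
```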
Let's go ahead and click on Add to finish creating our routing rule.
I don't have any tags that I'm going to apply, so let's review and create.
Now that our deployment has completed, let's go take a look at our new application gateway.
And over here we have the frontend public IP address. Let's go copy it,
and real quick, let's go take a look at the IP addresses for the two virtual machines inside the backend pool.
We have Web02, ending in 110.173,
and then Web03, ending in 94.252.
And if we hit the public IP address of our application gateway,
you can see that we hit Web03. Let's go back to our application gateway.
Let's go under Settings and our Backend pools to remove VM03 from our backend pool.
With our backend pool updated, let's refresh.
Our targets changed to one.
If we go back to our tab over here and refresh,
we can now see it redirects us to Web02. I did that just to show that Web02 and Web03 were both answering from the backend pool, but as soon as we take one out, Web02 will start accepting the rest of the connections.
Let's take a look at the other settings we have for our application Gateway.
Here we can switch between the Standard V2
or the WAF V2 to get our web application firewall.
We can also configure our autoscale settings. We can configure the app gateway to autoscale or set manual minimum and maximum instances, just like when we created it.
Under HTTP settings is where we configure how we want the incoming network requests to be sent to our backend pool.
Here we can configure our frontend IP configurations. Right now we have a public one set, but we can also configure a private IP address.
Next we have our listeners. Listeners are on the front end of the application gateway, accepting the incoming traffic. And remember, from when we created it, we did have the option of creating a multi-site listener. Let's go check that out.
Here we can give the listener a name. Let's say we're adding one for an AZ Tech 300 website.
We can choose the existing frontend IP address that we already have configured.
We can also use the same port.
Then we just need to enter the host name for our website.
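A multi-site listener like this one can be sketched from the CLI as well. The listener name and host name below are placeholders, and it assumes the frontend port and IP configuration names from earlier sketches:

```shell
# Sketch: a multi-site listener distinguishes sites by host name while
# sharing the same frontend IP and port as the existing basic listener.
az network application-gateway http-listener create \
  --resource-group demo-rg --gateway-name demo-appgw \
  --name aztech300-listener \
  --frontend-port port-80 \
  --frontend-ip appGatewayFrontendIP \
  --host-name www.aztech300.example
```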
We also have options for configuring the cipher suites that we want to use for SSL inside of our web app listeners.
We also have our routing rules, which make the connection between our listener and our backend pool, as well as our HTTP settings.
Remember from our slides, we also have the ability to rewrite HTTP URLs,
and we can configure our custom health probes.
We would choose a name for the health probe,
the host that we're looking for,
and a path to test. Then we configure our custom interval and timeout, as well as the unhealthy threshold for how many times the probe has to fail before the backend server is marked as unavailable and unhealthy.
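Those probe fields map to CLI parameters roughly like this; the probe name, host, path, and thresholds are placeholder values for illustration:

```shell
# Sketch: a custom health probe that checks a path every 30 seconds;
# after 3 consecutive failures the backend server is marked unhealthy.
az network application-gateway probe create \
  --resource-group demo-rg --gateway-name demo-appgw \
  --name web-health-probe \
  --protocol Http \
  --host 127.0.0.1 \
  --path /health \
  --interval 30 \
  --timeout 30 \
  --threshold 3
```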
We can also monitor backend health.
Right now, our Web02 is listed as healthy.
What I did is I went ahead and stopped
IIS on our Web02 server, which is the only server left in our backend pool right now.
And those health probes have failed, so it has marked that particular virtual machine as unhealthy.
That does it for the demo. Let's jump back to the slides so we can wrap this up.
Coming up next, we're going to jump into another, more advanced networking topic: how do we integrate with our on-premises networks? See you in the next episode.