Next we have fault tolerance. Fault tolerance is essentially the act of having failover devices available. Fault tolerance means that we have fault-tolerant devices or a fault-tolerant network.
That means that just because something becomes unavailable or something goes offline, we can still recover from that. We're tolerant to that. Fault being a device having an error, getting unplugged, or going offline, and tolerant being we're still good. We still have uptime. We still have availability.
Fault tolerance may mean failover devices. We may have additional routers or switches or servers so that if one device
goes down, we have another device that can take over for it. We have another device that can pick it up and keep running, and our network is still up and still available while we try to fix the original device, repair it and put it back in service, or set up a new device in order to maintain that failover.
Our failover device may be less robust. It may be able to handle fewer connections or have fewer available resources at one time than our primary device, but it can still maintain our network connections. Our network is still up.
It's just a bit degraded,
So it gives us a little bit of breathing room. It gives us a little bit of emergency response time to get our network back up to 100%.
Now, we want our fault tolerance to preferably be automatic.
If we have
a fault-tolerant network and we say, oh, well, we have a secondary router, but it's in somebody's drawer, and if the primary router goes offline we have to take that secondary router out and hook it up, well, that's technically fault tolerant. But you're not going to have that automatic failover.
If nobody's at the office at the time, then either somebody's going to have to drive out there, or you're going to have to wait until morning.
So you need to make sure, or at least very strongly want, that failover and that fault tolerance to be automatic, especially if it's not something that you're checking on 24 hours a day.
If you're not constantly walking into the network room and making sure that the router is up or the switch is up
or the server is up, you may want to have an automatic failover so it actually fixes itself before anybody notices. And then you may get a warning, an alert on your computer that says, hey, by the way, I just had to fail over, or I just became active because the primary device had an issue,
and that allows you time to go out,
fix the primary device, fail back over to it, and then just keep rolling right along. So you want this to be preferably automatic.
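To make the idea concrete, here's a minimal sketch of what an automatic failover monitor might look like. The addresses are made up, and the activation and alert helpers are placeholders for whatever your gear actually exposes: poll the primary, and after several consecutive missed checks, bring up the secondary and alert an admin.

```python
import socket
import time

# Hypothetical addresses for a primary device and its failover twin.
PRIMARY = ("192.0.2.1", 80)
SECONDARY = ("192.0.2.2", 80)

def is_reachable(addr, timeout=2.0):
    """Return True if a TCP connection to addr succeeds within the timeout."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def activate(addr):
    # Placeholder: real activation is device-specific (management API, SNMP, etc.).
    print(f"activating secondary at {addr[0]}")

def notify_admin(message):
    # Placeholder: a real monitor would email or page someone here.
    print(f"ALERT: {message}")

def monitor(check_interval=10, max_failures=3):
    """Poll the primary; after several consecutive misses, fail over
    automatically and alert an admin instead of waiting to be noticed."""
    failures = 0
    while True:
        if is_reachable(PRIMARY):
            failures = 0
        else:
            failures += 1
            if failures >= max_failures:
                activate(SECONDARY)
                notify_admin("primary down, failed over to secondary")
                return
        time.sleep(check_interval)
```

The point of the `max_failures` threshold is to avoid flapping: one dropped probe shouldn't trigger a failover, but several in a row should, without a human in the loop.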
So when we're talking about devices that can have fault tolerance, or points on our network that can be fault tolerant, we want to consider our different network devices, such as our routers and switches and cables and network interface cards. So
all of the actual devices and all of the points that a network connection needs to go through
in order to get from point A to point B, we want to consider those. We also want to consider the actual back end to our network devices.
If we have backup routers and failover switches and additional cables and secondary network interface cards, that's great. But what if our power goes out? Then we may need a backup UPS. We may need a backup power supply.
What if some of our servers fail, or what if they lose data? We want to have data backups available so we can pull that data back really quickly and get back online.
What if we have a natural disaster and our entire site is not accessible? Maybe it's not technically destroyed, but maybe everyone had to be evacuated, a power outage came through, our battery backup ran out, and now the site is no longer functional.
Well, then we may have cold, warm, or hot backup sites.
Now, a backup site is an entire additional location that can serve as a backup to our primary site. So say we're providing file hosting on the Internet, we're providing file transfer servers that people are able to connect to and get files from, or we serve different websites for people.
Then we may actually have an entire backup site
in a different geographical location. In case our primary site goes out, we turn on that backup site and it gets rolling again. Now, there's a bit of a difference between our cold, warm, and hot sites. Our cold sites would be sites that aren't ready to be up and running at the flip of a switch.
A cold site is a practically empty building that we have a security guard go check on once or twice a day, and when we need to activate it, we actually drag servers and drag switches and drag routers there and get it up and running.
With a warm site, we may have a couple of servers there, we may have a couple of network devices there, and it may be almost ready, but we still need to start
moving people into it. We still need to actually go in and turn everything on for it to be good to go. And then last we have our hot sites. A hot site is sort of the dream of backup sites. It's a fully staffed, fully functional,
ready-at-the-flip-of-a-switch type of site, where if we have a primary site go down,
it doesn't matter, because that hot backup site is fully staffed and fully equipped, and we can just flip a switch and everything's good to go. But again,
all of that comes with a cost. Backup sites come at a cost, and the more devices and the more staff we have at a backup site, obviously, the more cost we're going to have associated with it.
Then, last, we have CARP, which stands for Common Address Redundancy Protocol. Now, CARP essentially allows us to have multiple hosts with the same IP address.
This can be used for failover and to provide transparent fault tolerance, because if one device goes down, we have a second device with the same IP address which will start servicing those requests. So if we have, say, a router that's utilizing the Common Address Redundancy Protocol, then if that router goes down,
we have a secondary router with the exact same IP address, which is just going to start picking up the traffic
where the primary router left off. Now,
each device not only has the shared, common IP address, but it also has a private IP address of its own, an additional IP address that we can use for managing that device. If both devices only had the shared IP address, then when we were trying to quickly access one device, we wouldn't know which one we were talking to.
So on the management end, each device has its own unique IP address in addition to the shared one, a management IP address that we can use to access those devices individually. If we need to
direct a request to a single device, if we need to do some management on one device, we use that management IP address. But again, that management IP address wouldn't just be shared with everybody. We wouldn't make it common knowledge; the shared IP address is what our end users would see. So if we had to fail over to that device, it would be very transparent.
Now, CARP can be used not only for failover, but also to help with load balancing. If we have two devices with the same IP address, we may load balance between those two devices so that our network traffic is able to go through either one.
So we can load balance between those devices and share the workload, or we can say,
we want to make sure that we have a backup device, we want to make sure we have fault tolerance, so instead of doing load balancing we're just going to set up one of these devices as a failover. That device will only become active if the primary device goes offline.
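As a rough illustration of the concept, and not the real protocol exchange, you can model CARP-style master election like this: the hosts share one virtual IP, each keeps its own management address, and the live host with the lowest advertisement skew answers for the shared address. All names and addresses here are made up.

```python
# Toy model of CARP-style failover. Hosts share one virtual IP, and the
# live host with the lowest "advskew" (the most aggressive advertiser)
# acts as master for it. An illustration, not the real protocol.

SHARED_IP = "192.0.2.10"  # virtual address that end users actually see

class CarpHost:
    def __init__(self, name, mgmt_ip, advskew):
        self.name = name
        self.mgmt_ip = mgmt_ip  # per-device address, used only for management
        self.advskew = advskew  # lower skew -> preferred master
        self.alive = True

def master_for(hosts):
    """Return the live host currently answering for the shared IP."""
    live = [h for h in hosts if h.alive]
    return min(live, key=lambda h: h.advskew) if live else None

primary = CarpHost("rtr1", "10.0.0.1", advskew=0)
backup = CarpHost("rtr2", "10.0.0.2", advskew=100)

print(master_for([primary, backup]).name)  # rtr1 answers for the shared IP
primary.alive = False                      # primary goes offline...
print(master_for([primary, backup]).name)  # ...rtr2 takes over transparently
```

Giving both hosts the same skew instead of 0 and 100 is roughly the load-balancing case the lecture mentions: neither is strictly preferred, so traffic can be shared between them.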
Now, with fault tolerance, with our different fault-tolerant devices, our different backup devices, as well as our different CARP devices, we need to make sure that those devices actually work, especially since we don't have failovers all the time. We don't have faults all the time.
It doesn't help us if we have an entire secondary backup network.
But then our primary network goes down, only for us to find out that there are several issues in our backup network preventing it from coming up. We need to have scheduled outages that let us see whether our backup network will come online, and we need to have staff on hand during those times,
at least enough staff to be able to manage a
fake outage. And we need to make sure that those fake outages are done during off-hours, so that we can identify any issues that do arise. If we schedule our fake outage, run it, and our backup network doesn't come online, then we can start troubleshooting the backup network to see what went wrong.
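In outline, such a drill can even be scripted. This sketch uses a toy state dictionary in place of real device probes; in practice the helpers would call your equipment's management interfaces. It downs the primary, verifies the backup comes up, and restores the primary no matter what:

```python
# Toy state standing in for real device checks; in a real drill these
# helpers would talk to the routers' management IPs.
state = {"primary": "up", "backup": "standby"}

def take_offline(device):
    state[device] = "down"

def bring_online(device):
    state[device] = "up"

def backup_is_active():
    # Stand-in probe: this toy backup activates whenever the primary is down.
    if state["primary"] == "down":
        state["backup"] = "active"
    return state["backup"] == "active"

def run_failover_drill():
    """Deliberately down the primary during off-hours, verify the backup
    actually comes online, and restore the primary either way."""
    take_offline("primary")
    try:
        if not backup_is_active():
            raise RuntimeError("backup did not come online; troubleshoot it now")
    finally:
        bring_online("primary")

run_failover_drill()
print(state["primary"], state["backup"])  # prints "up active"
```

The `finally` block is the important habit: the drill must put the primary back in service even when the backup fails the test, so a scheduled exercise never turns into a real outage.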
We don't want our backup network to fail, and we don't want the first hint that something is wrong with it to come when we actually need it. So make sure you perform periodic tests on your backup network. Depending on the size of your enterprise, the size of your company, you may actually have a staff
that just maintains the backup network so that it stays online.
Again, it just depends on the cost-effectiveness. It depends on how high of an availability percentage you need and how large your network is, as to how much effort, how much time, and how many resources you put into making sure that you have fault tolerance,
that you have that backup network, and that you have high availability on your network.
So thank you for joining us today on Cybrary. Today we talked about some of our different methods and rationales for network performance optimization. Essentially, we talked about reasons that we need to optimize our network and ways that we do it.
We talked about our latency-sensitive applications as well as our high-bandwidth applications,
and then we finished off by talking about our uptime and our high availability. We talked about ways that we can manage latency-sensitive applications, such as with quality of service; our high-bandwidth applications, such as with traffic shaping and load balancing; as well as ways of making sure that we have a strong uptime and a good high-availability standing with our different
components, such as our fault tolerance,
our failovers, as well as the Common Address Redundancy Protocol. So we need to understand what each of these different rationales means, how they're utilized in our network, and how we can manage them by optimizing our network. So hopefully this
video was informative.
Hopefully it helps you to better analyze your own network, understand how to better optimize it, or do better on the Network+ exam, which this series is geared toward. And we hope to see you here next time on Cybrary.