Okay. The data center itself is the physical infrastructure. When we talk about Infrastructure as a Service, this is what we are leasing from our cloud service provider. We're getting away from hosting our own data on-prem because data centers are expensive to run:
costs, rent, a lot of space, heating and cooling. That's a lot of money.
The maintenance. And if we decide to go with Infrastructure as a Service, we're going to outsource that. So I mentioned, you know, one of the issues, uh, heating and cooling. When you're cooling
a hundred servers, that's a lot of money. So we get the benefit of outsourcing that with Infrastructure as a Service to our cloud service provider, not to mention the cost of all those servers. Right.
So just some ideas. It's important within a data center that we have a certain degree of humidity. Usually you want that right around 50%,
and cooling between 64 and 80 degrees Fahrenheit. You know, 80 degrees is starting to get hot. But one of the things a lot of cloud service providers have found is they save enough money by running temperatures a little hotter,
even if they do have to replace processors or other components from time to time. It still
winds up being cost efficient to run those centers a little hotter. I'm not talking 110 degrees, but, you know, creeping up to 80, 85 degrees, I am going to lose processors. But when we have the type of redundancy that we would have at a CSP, maybe that's not an issue.
All right, so, temperature and humidity. Too much humidity and you're going to get rust; not enough and you're going to get static electricity. Both of those are obviously problems for hardware.
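Just to make those thresholds concrete, here's a minimal sketch of an environmental check using the numbers from the lecture: temperature between 64 and 80 degrees Fahrenheit, humidity near 50%. The function name and the exact humidity band (40-60%) are illustrative assumptions, not any particular CSP's policy.

```python
def check_environment(temp_f: float, humidity_pct: float) -> list[str]:
    """Return a list of warnings for a single sensor reading.

    Thresholds follow the lecture: 64-80 F for temperature, ~50% RH
    for humidity. The 40-60% humidity band is an illustrative guess.
    """
    warnings = []
    if temp_f < 64:
        warnings.append("temperature low: wasted cooling cost")
    elif temp_f > 80:
        warnings.append("temperature high: hardware failure risk rises")
    if humidity_pct < 40:
        warnings.append("humidity low: static electricity risk")
    elif humidity_pct > 60:
        warnings.append("humidity high: corrosion/rust risk")
    return warnings

print(check_environment(72, 50))  # nominal reading: no warnings
print(check_environment(85, 30))  # hot and dry: two warnings
```

Running a center "a little hotter," as described above, is simply a choice to accept occasional warnings on the high-temperature side in exchange for lower cooling spend.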
Um, for the HVAC, we have to make sure that it cools consistently. We have to make sure that it's not bringing in impurities from the outside, that we're able to shut it down, and that it's cost efficient.
Not my problem when I use Infrastructure as a Service. Um, air management: gotta have air circulation, right? Those systems cool themselves by pulling in cool air and pushing hot air out the back, so that air has to be able to move freely. I don't want one computer breathing in
what the other computer is exhaling, so to speak.
Cable management. And we have all seen nightmares of cables unlabeled, just a big mass, a multi-tentacled beast. We have to have good cable management. Various types of cable can have crosstalk issues, can have interference issues.
We want to make sure that, um,
there's just an orderly and organized cable layout and arrangement. Sometimes we run cables underneath the floor. Sometimes we run them overhead, through the ceilings.
The bottom line is we want to make sure
that we don't have hot spots, and that we don't have areas that are susceptible to interference, you know, running twisted pair cabling by fluorescent lighting or heavy equipment. And it's all about the type of cable that we use, too.
Aisle separation and containment: when I talk about hot and cold aisles,
we've got cold aisles, where all the computers inhale, or pull air in, so to speak, and then the hot aisle, where all the computers exhale. So we want to make sure that we have that separation and that isolation. Again, this is not something I configure with Infrastructure as a Service,
But I count on my service providers
to have all of these elements planned for and accounted for. They've got to make sure that the infrastructure itself, the physicality, is very flexible, right? We need that ability to add devices and to remove devices.
Uh, when you're looking at this amount of expense for hosting one of these major data centers,
every penny counts. We have to make sure that it runs efficiently. We've got to have the ability to grow or to shrink, to add, to maintain. Um, we want to maintain our service levels. We sometimes talk about orchestration or automation.
Uh, security of the physical facility is the responsibility of the cloud service provider. So, you know, talk to me about not just onboarding and offboarding employees, but talk to me about your physical security.
CCTV. Talk to me about monitoring devices.
Let me know what elements are in place. Then, of course, there are other elements of a data center: the servers themselves. We may have dedicated storage devices. There's network equipment, backup and recovery infrastructure. You know, again, all of these elements are kind of lumped together
under physical security,
Physical security goes to the cloud service provider.
Now, when we're evaluating a cloud service provider's infrastructure, the Uptime Institute has, ah, a tiered site infrastructure evaluation. And basically they have four tiers, and each tier is a rating of the redundancy
of the data center,
where Tier 1 is the least redundant and Tier 4 is fully redundant, a mirrored facility, and each step in the middle gets a little bit more redundant.
Okay, so if you move over and, um, look at Tier 1, we have the capacity to support the IT load as is, so we don't have built-in redundancy.
But at Tier 2, we have additional capability, this idea of active N+1.
Uh, you know, so each tier we increase, and as you see, when we get to Tier 3, we have sites that are concurrently maintainable, and then if we go all the way to Tier 4, two of everything is running simultaneously; it's fault tolerant. So obviously Tier 4 is most desirable.
But what's the trade off?
Tradeoff is always money, right?
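The four tiers described above can be summarized in a small lookup table. This is a sketch paraphrasing the lecture; the descriptions are simplified, and the Uptime Institute's own Tier Classification is the authoritative source.

```python
# Simplified summary of the Uptime Institute tiers as described in the
# lecture. Wording is paraphrased, not the official definitions.
UPTIME_TIERS = {
    1: "Basic capacity: supports the IT load as is, no built-in redundancy",
    2: "Redundant capacity components: active N+1 for power and cooling",
    3: "Concurrently maintainable: components can be serviced without downtime",
    4: "Fault tolerant: fully redundant, mirrored systems running simultaneously",
}

def describe_tier(tier: int) -> str:
    """Return a one-line description for an Uptime Institute tier (1-4)."""
    if tier not in UPTIME_TIERS:
        raise ValueError("Uptime Institute tiers run from 1 to 4")
    return f"Tier {tier}: {UPTIME_TIERS[tier]}"

print(describe_tier(4))
```

Reading down the table makes the trade-off visible: each step up adds redundancy, and each step up costs more.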
All right. So, um, you know, again, physical infrastructure is the responsibility of the cloud service provider. We have to be knowledgeable; we have to know what we need. So for physical infrastructure, we think about security, we think about monitoring, we think about the HVAC system, and we think about the physical facility itself.