All right. So regardless of whether you're using iSCSI or Fibre Channel, whether you're using a network-attached storage device or a storage area network, there are always considerations we have to think about when bringing additional storage onto our network: redundancy, location redundancy.
Again, one of the chief tenets of security is availability.
And if we only have one place where our data is stored, of course, if we have damage to that one facility or that one location, we're going to lose our data. So we always have to think about backups. And not just the fact that we're going to back up our data, but also where we're going to store the backup,
because sitting it on top of the server that we've just backed up is probably not a good solution.
So, offsite storage of backups. Sometimes we talk about using database shadowing: if we're writing information to a database, we may write it to another database simultaneously, so we have different copies of the same data. Often we'll send it to two different types of storage media
just for additional redundancy. So we always have to think about redundancy.
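To make the shadowing idea concrete, here is a minimal sketch, assuming a simple key-value store: every write lands in both a primary and a shadow copy, so either one can serve the data if the other is lost. The class and key names are hypothetical, purely for illustration.

```python
class ShadowedStore:
    """Toy database shadowing: every write goes to two stores."""

    def __init__(self):
        self.primary = {}   # e.g., the local SAN-backed database
        self.shadow = {}    # e.g., an offsite replica on different media

    def write(self, key, value):
        # Write to both stores "simultaneously" (sequentially here;
        # a real system would replicate transactionally or asynchronously).
        self.primary[key] = value
        self.shadow[key] = value

    def read(self, key):
        # Fall back to the shadow if the primary copy is unavailable.
        if key in self.primary:
            return self.primary[key]
        return self.shadow.get(key)


store = ShadowedStore()
store.write("invoice-1001", "paid")
del store.primary["invoice-1001"]   # simulate losing the primary copy
print(store.read("invoice-1001"))   # still recoverable from the shadow
```

The point of the sketch is just the availability property: losing one copy doesn't lose the data.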
Really, that's the main way that we're going to achieve availability. Multipath: how do I get to my data? You need more than one destination, more than one link, in case there's any sort of failure. That goes again with the idea of redundancy, but multipath is always about the link
for access. Sometimes you'll hear multipath I/O, input/output:
making sure there's more than one path across which we can traverse for access. Deduplication. Deduplication is important. It's not necessarily a word I would work into conversation all that frequently, but deduplication is exactly what it sounds like: we want to eliminate duplicates. So the idea is this. Let's say I have an office in New York,
San Francisco, and Washington, D.C.
So I've got all these different offices, and I have a file that's crucial. And that file is very frequently used, and it's accessed here in D.C., in San Francisco, and wherever else across the world.
Well, the issue with that is: how do I keep that file current in all three locations? I could just have a single instance of the file, but that means every time somebody in San Francisco wants to access that file, they have to go across a slow WAN link to get it,
and that might be hundreds of times a day. That's not really practical.
So a lot of times we have network utilities or tools. Windows has a built-in utility called Distributed File System (DFS). The idea is we take this single document and we host it at multiple physical locations, but we ensure there's some mechanism in place to synchronize,
so that we don't have 10 different versions of the document.
Certainly, versions need to be tracked, but I need some mechanism to keep it in sync. Otherwise, I'm going to have just a single copy stored in one location and make people access it across the slow WAN link. That's not my preference, but I have to make sure
that everybody has a current and up-to-date copy of the file. So that's under the idea of deduplication.
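A rough sketch of the synchronization problem that DFS-style replication solves: detect when replicas of a file have drifted by comparing content hashes, then push the current version out to every site. The site names and the idea of one "authoritative" site are simplifying assumptions for illustration; real DFS Replication negotiates changes between members rather than using a single master.

```python
import hashlib


def digest(content):
    """Fingerprint a file's content so replicas can be compared cheaply."""
    return hashlib.sha256(content.encode()).hexdigest()


def synchronize(replicas, authoritative_site):
    """Copy the authoritative site's version to every out-of-date site."""
    master = replicas[authoritative_site]
    for site in replicas:
        if digest(replicas[site]) != digest(master):
            replicas[site] = master  # replicate the current version


replicas = {
    "new-york": "policy v2",
    "san-francisco": "policy v1",   # stale local copy
    "washington-dc": "policy v2",
}

synchronize(replicas, "new-york")
print(replicas["san-francisco"])    # now matches the current version
```

Each office reads its local copy at LAN speed, and only the synchronization traffic crosses the slow WAN link.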
And regardless of where we're storing the data, whether it's a NAS or a SAN or a regular file server, that's always a concern. Regular SAN snapshots: the idea behind snapshots is having recovery points, very comparable in idea to a backup, but usually much
quicker to restore than restoring from a full backup.
We'll take a snapshot of the contents and of the configuration settings, and if there's a failure, we'll restore from the previous snapshot. You get that with virtual machines; you get that with databases. Again, some of these concepts are just really good security principles, not anything unique to storage, but they're certainly very relevant
in the realm of storage.
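As a final illustration, here is a hedged sketch of the snapshot idea: capture the state of a system at a point in time, then roll back to the most recent recovery point after a failure. Real SAN and VM snapshots use copy-on-write and are far more efficient; this just shows the restore concept, with made-up state fields.

```python
import copy


class Snapshottable:
    """Toy recovery points: save state, roll back on failure."""

    def __init__(self, state):
        self.state = state
        self.snapshots = []

    def take_snapshot(self):
        # Deep-copy so later changes don't alter the saved recovery point.
        self.snapshots.append(copy.deepcopy(self.state))

    def restore_latest(self):
        # Roll back to the most recent recovery point.
        self.state = self.snapshots[-1]


vm = Snapshottable({"disk": "clean", "config": {"cpus": 2}})
vm.take_snapshot()
vm.state["disk"] = "corrupted"   # simulate a failure after the snapshot
vm.restore_latest()
print(vm.state["disk"])          # back to the pre-failure state
```

Restoring here is a single assignment rather than replaying an entire backup, which is the sense in which snapshot recovery is quicker than a full backup restore.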