Lesson 1 Part 1 - Storage
Lesson 1, Part 1: Storage. This lesson discusses storage concepts, including the following storage protocols:
- Fibre Channel
- Fibre Channel over Ethernet (FCoE)
- iSCSI
- NFS
- Direct Attached Storage (DAS)
All of these options have good capabilities, and the distributed services require shared storage. Each host can attach to multiple storage types at one time, so they are very flexible.
Hello, I'm Gene Pompey. Welcome to Cybrary.
We're now in Module 6, covering Lesson 1, where we talk about all of our different storage concepts. We'll start by describing our storage technology and how datastores work. We'll move on to describing our storage devices and the different naming conventions, and follow up with storage maps.
So first of all, if we think about our different storage protocols, we have a lot of choices here.
We have Fibre Channel, which, as I mentioned previously, runs over fiber. The ability to do multipath connections is supported.
We also have Fibre Channel over Ethernet, so we can use Fibre Channel commands for your storage device, but you're actually going over your Ethernet network instead of fiber. This gives you the advantage of reduced cost, since you're using the existing Ethernet network infrastructure, and depending on how your network is built, you have some multipath options built in, because you have redundancy within your network infrastructure.
Our other option is iSCSI, which also uses the Ethernet network, but this time we're sending SCSI commands that would normally go straight to the drive over the network instead.
Then we have NFS, and finally DAS, or direct attached storage.
So as we can see from our different capabilities here, Fibre Channel, Fibre Channel over Ethernet, and iSCSI can all boot from the SAN device. That is a really handy feature to have, because now you're reducing the amount of reliance on local disk for booting, and you can keep one master boot image on your SAN device, which all of your hosts can utilize.
As for vMotion support, all of these technologies will support vMotion, which means moving a virtual machine from one host to another, in terms of where it's registered and where you can access it.
We have HA support, which is high availability. That's something we'll see later in the class. High availability lets me designate a failover host when I've got a cluster, for instance. So if a VM is running on host one and that host goes down, the VM automatically gets moved to host number two, as long as it's properly configured for that.
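The failover behavior just described can be sketched as a toy simulation. This is purely illustrative, assuming a hypothetical `Cluster` class; it is not a vSphere API, and real HA also involves heartbeats, admission control, and restart priorities.

```python
# Toy model of HA failover: when a host dies, its VMs are restarted on the
# surviving hosts. `Cluster`, `register`, and `fail_host` are hypothetical names.

class Cluster:
    def __init__(self, hosts):
        # Map each host name to the list of VM names registered on it.
        self.hosts = {h: [] for h in hosts}

    def register(self, vm, host):
        self.hosts[host].append(vm)

    def fail_host(self, failed):
        # On host failure, spread the orphaned VMs across the survivors.
        orphans = self.hosts.pop(failed)
        survivors = list(self.hosts)
        for i, vm in enumerate(orphans):
            target = survivors[i % len(survivors)]
            self.hosts[target].append(vm)

cluster = Cluster(["host1", "host2"])
cluster.register("vm-a", "host1")
cluster.fail_host("host1")
print(cluster.hosts)  # vm-a is now registered on host2
```

Note that this only works because every host can see the same shared storage: the VM's files never move, only its registration does.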
then we have DRS, which is distributed resource scheduler,
and this is a low bouncing
mechanism, tech or technology that works at the BM level.
So if I've got a cluster,
GRS can allow me to
load balance. The PM's running on each note in that cluster so that the average load remains even or as even as possible.
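The balancing idea can be sketched as a greedy loop: keep moving a VM from the busiest host to the least busy one while that shrinks the imbalance. This is a hypothetical model with made-up names; real DRS weighs CPU and memory demand, affinity rules, and migration cost.

```python
# Illustrative DRS-style balancer. `hosts` maps host name -> list of
# (vm_name, load) tuples; loads are assumed positive. Mutates `hosts` in place.

def balance(hosts):
    def load(h):
        return sum(l for _, l in hosts[h])

    while True:
        busiest = max(hosts, key=load)
        idlest = min(hosts, key=load)
        gap = load(busiest) - load(idlest)
        # Only moves of a VM smaller than the gap reduce the imbalance.
        candidates = [vm for vm in hosts[busiest] if vm[1] < gap]
        if not candidates:
            return
        vm = min(candidates, key=lambda v: v[1])  # smallest helpful move
        hosts[busiest].remove(vm)
        hosts[idlest].append(vm)

cluster = {"h1": [("a", 4), ("b", 3)], "h2": [("c", 1)]}
balance(cluster)
# Both hosts now carry a load of 4.
```

Each move strictly reduces the spread between the extremes, so the loop terminates once the cluster is as even as the VM sizes allow.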
And then we have raw device mapping, and this is a way, as I mentioned earlier, to interface directly with a LUN that's on a SAN without having to go through the virtualization layer of the VMFS file system.
So you can see from this matrix we have some good capabilities. NFS and direct attached storage do not support booting from SAN, which makes sense. And we can't do high availability or DRS with direct attached storage. We do have to have shared storage for all of the distributed services that we see here.
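The capability matrix being described can be written down as a small lookup table. The values below reflect what this lesson states (the dictionary and function names are my own, for illustration):

```python
# Storage capability matrix as described in this lesson (True = supported).
CAPABILITIES = {
    "FC":    {"boot_from_san": True,  "vmotion": True,  "ha": True,  "drs": True},
    "FCoE":  {"boot_from_san": True,  "vmotion": True,  "ha": True,  "drs": True},
    "iSCSI": {"boot_from_san": True,  "vmotion": True,  "ha": True,  "drs": True},
    "NFS":   {"boot_from_san": False, "vmotion": True,  "ha": True,  "drs": True},
    "DAS":   {"boot_from_san": False, "vmotion": False, "ha": False, "drs": False},
}

def supports(protocol, feature):
    return CAPABILITIES[protocol][feature]

print(supports("iSCSI", "boot_from_san"))  # True
print(supports("DAS", "drs"))              # False
```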
Okay, so, looking at our overview of storage, we have our VMs going through the virtualization layer, or VMkernel, on the host. Each host can connect to multiple different storage types at the same time. I could be connected to direct attached disk, which is inside the host itself, or some partitions could be connected through Fibre Channel, Fibre Channel over Ethernet, or iSCSI, and I can still support connections to NFS, all simultaneously.
And this is a very flexible way to do things because, for instance, your direct attached disk is probably going to give you the best performance overall. The host is connected to the disk by just a cable; it's all internal. You're not going over the network, so you don't have any latency issues that you might experience with some of the network-based options. But direct attached disk is limited. As we saw here, I can't do my distributed services like vMotion and DRS. So this is best suited for things like ISO images, templates, or other files that you need for bulk storage requirements. But you're not going to put your VMs there, for the most part, if you want to take advantage of the distributed services.
However, Fibre Channel, Fibre Channel over Ethernet, iSCSI, and NFS can be used for any of those purposes: ISOs, VM files, and templates. And with support for the distributed services, that gives you a lot of flexibility in how to configure your hosts' storage requirements.
You just have to really understand what the organization is trying to do with their applications and what kind of failover technology you need to employ.
That being said, a little bit here on getting an iSCSI LUN or a Fibre Channel LUN allocated to you by your storage administrator. Generally, you ask them, say, for a two-gigabyte LUN, and they will provide that to you, and then you can rescan your adapters to see if the storage shows up. But before you do that, you might want to consider some of these different factors. How big of a LUN can you request? What kind of bandwidth will it support, from an I/O perspective? How many I/O requests per second will the device support? What kind of disk caching is available? Disk caches can vary widely in size, and therefore you get different kinds of performance characteristics, depending on what kind of disk caching is available in your environment.
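Those sizing questions can be folded into a simple pre-request checklist. The sketch below is hypothetical: the field names, limits, and numbers are invented for illustration, not taken from any real array.

```python
# Sketch of sanity-checking a LUN request against the factors discussed above.
# All field names and limit values are hypothetical.

def check_lun_request(request, array_limits):
    """Return a list of problems with a LUN request; empty means it looks OK."""
    problems = []
    if request["size_gb"] > array_limits["max_lun_size_gb"]:
        problems.append("requested LUN size exceeds array maximum")
    if request["iops"] > array_limits["max_iops"]:
        problems.append("required I/O requests per second exceed device limit")
    if request["bandwidth_mbps"] > array_limits["max_bandwidth_mbps"]:
        problems.append("required bandwidth exceeds device limit")
    return problems

limits = {"max_lun_size_gb": 2048, "max_iops": 50_000, "max_bandwidth_mbps": 800}
request = {"size_gb": 2, "iops": 1200, "bandwidth_mbps": 100}
print(check_lun_request(request, limits))  # [] -- the two-gigabyte LUN fits
```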
Zoning and masking refer to some of the ways that LUNs can be presented to, or hidden from, different hosts. That way, you can prevent hosts that you don't want attaching to storage from doing so by masking them, and then you create zones to allow those hosts which you do want attached to storage to be able to see the storage when they scan their adapters.
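The visibility rule just described can be modeled in a few lines: a host sees a LUN only if it is zoned to the array and the LUN is not masked from it. This is a toy model with invented names; in practice, zoning is configured on the FC switch fabric and masking on the storage array.

```python
# Toy model of zoning and masking.
# zones: set of host names zoned to the storage array.
# masks: dict mapping host name -> set of LUNs hidden from that host.

def visible_luns(host, zones, masks, all_luns):
    if host not in zones:
        return set()  # not zoned: the host sees nothing when it rescans
    return all_luns - masks.get(host, set())

luns = {"lun0", "lun1", "lun2"}
zones = {"esx01", "esx02"}
masks = {"esx02": {"lun2"}}

print(visible_luns("esx01", zones, masks, luns))  # sees all three LUNs
print(visible_luns("esx02", zones, masks, luns))  # lun2 is masked away
print(visible_luns("esx03", zones, masks, luns))  # not zoned -> set()
```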
When you've got identical LUNs being presented to a host, you need to understand how that works in a shared storage context, and then whether the storage, or the path itself, is actively or passively connected. That's a conversation to have with your storage administrator when the time comes.