Lesson 1 - Resource Management and Monitoring


Time
14 hours 13 minutes
Difficulty
Intermediate
Video Description

Resource Management and Monitoring This lesson examines the following virtual CPU and memory concepts:

  • Describing CPU and RAM in a virtualized environment

  • Describing RAM overcommitment

  • Identifying technology that improves memory utilization

  • Describing symmetrical virtual multi-processing (SMP)

  • RAM reclamation

  • Hyper-threading

  • CPU load balancing

Video Transcription
00:04
Hello, I'm Dean Pompilio. Welcome to Cybrary. We're in the Virtualization Installation, Configuration, and Management course, beginning in module 10.
00:12
Lesson number one deals with resource management and monitoring.
00:17
We'll have five lessons in this module,
00:20
so starting out with lesson one, we're looking at virtual CPU and memory concepts.
00:27
We want to know how your CPU and RAM work in a virtualized environment. The CPU and RAM on your host, that is.
00:35
We're also gonna talk about what overcommitment means
00:39
and identify a technology that lets you
00:42
better utilize
00:44
the RAM that your host actually contains.
00:47
And then we'll look lastly at the symmetrical multiprocessing configuration of your host and how that applies to the virtual machine environment.
00:57
So we think about the way that your host has its own physical RAM, right?
01:03
Maybe you have 8 gig, 12 gig, 32, whatever you can afford to put in there.
01:08
within your virtual machine.
01:11
I've got my application.
01:12
So I've got the guest operating system.
01:15
It has its virtual RAM that's used by the guest OS, whether it's Server 2012 or Windows 7 or Linux.
01:26
the operating system of the VM itself
01:30
also has physical RAM that's allocated to the guest operating system.
01:34
This all comes from the host's physical RAM. So the way that we subdivide this finite resource among each of the VMs that are being used
01:44
is what we're getting at here.
01:48
And because
01:49
VMware allows for overcommitment of memory resources, you can get away with
01:56
actually allocating more memory to the virtual machines
01:59
that you've configured than the host can actually support if they were all using their maximum allocation of RAM.
02:07
So this is an interesting concept. It doesn't work this way in the real
02:09
physical world,
02:12
it's only possible in the virtual world.
02:15
So, for instance, I could,
02:16
say that my VM needs a reservation of 256 megabytes,
02:23
But maybe I can allocate up to 512 megabytes to that virtual machine. So the way that works
02:30
is,
02:32
for instance, the VM has to have the reservation met in order to boot up in order to start.
02:39
So I've got some minimum level of memory that's required for the virtual machine to actually boot,
02:46
and then I can set
02:47
a maximum or a limit on that amount of RAM so that I don't go beyond 512 meg, in this case.
02:57
If you do exceed the allocated amount of RAM, you might start swapping. A swap file gets created.
03:02
And if you remember,
03:05
the swap file had a file extension of
03:09
.vswp.
03:10
So that only gets created
03:13
when your allocation exceeds
03:17
what was designed when you created the VM or when you edited its configuration settings.
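To make the reservation and limit mechanics concrete, here is a minimal Python sketch of the bookkeeping described above. It is illustrative only, not the actual hypervisor logic, and the function name is made up.

```python
def vswp_size_mb(reservation_mb: int, limit_mb: int) -> int:
    """Hypothetical helper: a VM powers on only if its reservation can be
    met, and the .vswp swap file covers the gap between the limit and
    the reservation, so usage above the reservation can spill into swap."""
    if limit_mb < reservation_mb:
        raise ValueError("limit must be at least the reservation")
    return limit_mb - reservation_mb

# The example from the lesson: 256 MB reserved, up to 512 MB allowed.
print(vswp_size_mb(256, 512))  # 256 -> a 256 MB swap file
```

The point of the sketch is simply that swap only exists to back the slice of memory between what is guaranteed and what is allowed.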
03:24
If you start to swap, obviously that's going to have an effect on your performance. So
03:30
it's best to pick your reservation and your limits for your VM's memory requirements very carefully, or at least start with numbers that are rather small
03:43
and only increase
03:45
if it's required based on your performance monitoring or other tasks.
03:50
Okay,
03:51
now we can think about how do we
03:53
better utilize the physical
03:58
RAM that the host has,
04:00
and actually overallocate that for virtual machines.
04:04
So if my host had,
04:06
let's say, eight gigabytes of RAM,
04:13
I could actually create enough virtual machines that, combined, would say that they really need 12 gigabytes of RAM.
04:19
Because all those VMs aren't using all their available memory allocation at the same time, this is possible.
04:27
If they did try to use all their memory at the same time, and they really did need 12 gigabytes of RAM, then I would have to start swapping and taking advantage of other technologies which help better utilize
04:39
the host's physical memory.
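The overcommitment arithmetic in this example can be sketched as a toy calculation (the numbers mirror the lesson; the function name is invented):

```python
def commitment_ratio(host_ram_gb: float, vm_allocations_gb: list) -> float:
    """Total memory promised to VMs divided by the host's physical RAM.
    A ratio above 1.0 means the host is overcommitted, which works as
    long as the VMs don't all use their full allocation at once."""
    return sum(vm_allocations_gb) / host_ram_gb

# An 8 GB host backing VMs configured for 12 GB in total:
print(commitment_ratio(8, [4, 4, 4]))  # 1.5
```

A ratio of 1.5 means the host has promised 150% of its physical RAM, and reclamation techniques make up the difference under pressure.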
04:42
So we can reclaim memory in various different ways. One of the ingenious ways of doing that is known as TPS, or transparent page sharing.
04:51
And the way this works is
04:54
if I have
04:56
several VMs running on my host,
04:59
there's an excellent chance that if they're running very similar operating systems or similar applications that I will have pages of memory that are actually the same between those different virtual machines.
05:13
So if I have three virtual machines, let's say they're all running Windows 7.
05:17
There's gonna be some portion of the memory
05:21
in this virtual machine.
05:24
Let's say it's like three pages of memory here, three pages of memory here,
05:30
and three pages of memory here.
05:32
They might be identical because they're all running the same operating system.
05:36
What I can do is I can transparently share these memory pages,
05:41
so these get deallocated, they're no longer needed, and all I've got is pointers
05:47
to the actual pages.
05:49
So now three virtual machines can actually share
05:53
the same pages of memory that one virtual machine was using
05:57
If the page of memory changes, if there's some update, then of course we go back to using the memory pages individually.
06:04
But if they are identical, then we're basically deduplicating the memory pages and sharing those pages that are identical among several machines.
06:15
It's a great way to economize your use of the host's memory.
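Transparent page sharing is essentially deduplication by content. Here is a rough sketch of the idea, where hashing stands in for the hypervisor's page comparison (names and data are made up for illustration):

```python
import hashlib

def share_pages(vm_pages: dict) -> tuple:
    """Store each distinct page once; every VM page becomes a pointer
    (here, a content-hash key) into the shared store."""
    store = {}     # content hash -> the single backing copy
    pointers = {}  # (vm name, page index) -> content hash
    for vm, pages in vm_pages.items():
        for i, page in enumerate(pages):
            digest = hashlib.sha256(page).hexdigest()
            store.setdefault(digest, page)  # keep only the first copy
            pointers[(vm, i)] = digest
    return store, pointers

# Three VMs running the same OS share their identical pages:
vms = {"vm1": [b"os-page", b"libs"],
       "vm2": [b"os-page", b"app"],
       "vm3": [b"os-page", b"libs"]}
store, pointers = share_pages(vms)
print(len(pointers), "virtual pages backed by", len(store), "physical pages")
# 6 virtual pages backed by 3 physical pages
```

When a VM writes to a shared page, the real mechanism copies it out and updates that VM's pointer, which is the "go back to individual pages" step the lesson describes.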
06:19
Another feature is the balloon driver.
06:23
So when memory starts to get scarce, before you actually begin
06:28
swapping, for instance,
06:30
I could borrow memory from other VMs. So I might have
06:35
a requirement, I'll use the green again, and I have a requirement
06:41
for an extra
06:43
four pages of RAM here. Let's just say they're a gigabyte each, sorry, a megabyte each.
06:49
Maybe I've got another requirement for four more.
06:54
I can't satisfy all eight pages of memory that's needed by this VM. So what can I do? Let's say I've got some allocated on these VMs. I've got eight here,
07:03
and I've got eight here.
07:06
If this VM needs 12 pages of memory and these two only need six, I can steal two
07:15
from each of these and add them
07:17
to this VM.
07:19
So the balloon basically inflates
07:23
the memory on the VM that needs it the most
07:26
and the VMs that had some of their memory moved over to be used by this VM
07:31
more or less don't suffer for that.
07:35
If the memory requirement now
07:39
is removed and this VM no longer needs that memory,
07:43
then that memory goes back to the original eight and it gets returned to the VM that it was borrowed from. So the balloon inflates to get more memory, uses it, and then it deflates and gives the memory back.
07:57
So it's another neat feature for economizing your memory usage.
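The borrowing in that walkthrough can be sketched with some toy bookkeeping. The page counts mirror the lesson's example; none of this is the real balloon driver, just the accounting idea.

```python
def inflate_balloon(vms: dict, needy: str, pages_needed: int) -> dict:
    """Take idle pages from other VMs and lend them to the needy VM.
    Deflating would do the reverse and hand the pages back."""
    borrowed = {}
    for name, vm in vms.items():
        if name == needy or pages_needed == 0:
            continue
        idle = vm["allocated"] - vm["in_use"]
        take = min(idle, pages_needed)
        if take > 0:
            vm["allocated"] -= take
            borrowed[name] = take
            pages_needed -= take
    vms[needy]["allocated"] += sum(borrowed.values())
    return borrowed

# One VM needs 4 extra pages; the other two each have 2 idle pages.
vms = {"A": {"allocated": 8, "in_use": 8},
       "B": {"allocated": 8, "in_use": 6},
       "C": {"allocated": 8, "in_use": 6}}
print(inflate_balloon(vms, "A", 4))  # {'B': 2, 'C': 2}
print(vms["A"]["allocated"])         # 12
```

Stealing two pages from each donor leaves them with exactly what they are using, which is why, as the lesson says, they more or less don't suffer for it.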
08:01
Another thing that's available is the compression of memory.
08:05
So if I've got pages of memory stored in the host's physical RAM,
08:11
compression algorithms can be used to actually condense the amount of space that those memory pages take.
08:18
So I might have,
08:20
you know, dozens and dozens and dozens of
08:24
pages of RAM here.
08:26
Let's just say it's 20 pages of RAM.
08:28
By using compression, I might be able to reduce this to only using nine or 10 pages of RAM, or maybe 11 or 12
08:37
still storing the same amount of information, but it's being compressed, the same way you compress images into a JPEG file or you compress a file from a CD into an MP3 format. In memory's case, though, the compression has to be lossless: you're not losing any information. You're just using some advanced mathematics to have it take less storage space.
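A quick way to see lossless compression of a page, using Python's standard `zlib` module (the page contents are toy data; real hosts compress 4 KB pages with their own algorithms):

```python
import zlib

# A highly repetitive 4 KB "page" compresses very well.
page = b"\x00" * 4096
packed = zlib.compress(page)

# Unlike a lossy JPEG or MP3, this must round-trip exactly.
assert zlib.decompress(packed) == page  # no information lost
print(len(page), "bytes ->", len(packed), "bytes")
```

Pages that don't compress well simply stay uncompressed or fall through to the next reclamation technique.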
08:56
Another option is to provide swap space on the host using an SSD drive,
09:05
so you can have an additional disk on the host
09:07
using SSD technology, which is currently the fastest storage available,
09:13
much faster than spinning disk.
09:16
And this SSD drive could be dedicated just for swapping.
09:20
So in the event that swapping does happen, why not put it on the fastest possible drive that you can? That's the idea here.
09:28
And ultimately, if these techniques do not provide enough memory for the VMs that need it,
09:35
we will have to
09:37
have each individual virtual machine page its memory to its own storage, just the way that a Windows machine will have a page file created or a Linux machine has a swap file created. The same thing will happen with the VM. This is the worst possible scenario. That means that all these other methods
09:54
have been exhausted as far as what they can provide for
09:58
utilizing memory more efficiently. And now I'm down to my last possible option,
10:05
which means that the performance will really suffer. Although the memory requirements may be met and the virtual machines will continue to run, they will run very slowly until that memory pressure gets alleviated,
10:16
and we can go back to using some of the other technologies that give a more efficient and smooth performance for the virtual machines.
10:26
So one of the things that we're looking at here is the symmetrical multiprocessor configuration of your host and how that relates to the VMs and how they work.
10:37
We can see I've got a uniprocessor VM, a dual-processor VM, and a quad-processor VM.
10:43
These are the virtual processors, the black boxes.
10:46
So in the case of a single-core, dual-socket system, I've got two sockets, two CPU chips.
10:56
Each one has one core each,
10:58
So a single-processor VM would just arbitrarily get scheduled to use one of those two cores. The CPU scheduler decides which one,
11:09
and it might stay with that or move around as needed.
11:11
However, if I have a VM that's got two virtual CPUs,
11:18
I might be better off using a dual core.
11:22
So two cores, but only one socket, meaning there's one physical chip with two cores
11:26
in order to more efficiently use that processor architecture. Each of the virtual CPUs would map to each of the cores,
11:35
That way you get the best possible performance.
11:39
Same thing would apply if I had a dual-core, I'm sorry, quad-core, single socket,
11:43
with a VM with four virtual CPUs. Each one would get assigned its own core within that socket to get the best performance.
11:54
Now, this is the basic way that this works. If you have a processor architecture that allows for hyperthreading, now you've got some other options.
12:03
Now we have
12:03
the ability for each core to execute two threads, or two instructions, at the same time,
12:09
and if we think about each of these cores having two threads running, one at the top edge, one at the bottom edge,
12:16
I could have a dual core
12:18
two cores, one socket,
12:20
with hyperthreading enabled,
12:22
so my single-processor VM would use
12:26
one thread.
12:28
The dual VM
12:30
would use the second thread on the first core and a single thread on the second core.
12:35
So this way I'm still using one processor for each of the virtual CPUs. But I'm
12:39
allowed to do two
12:43
threads on each core at the same time. So this spreads out the load a little bit more evenly.
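The placement just described can be mimicked with a toy scheduler: fill the first hyperthread on each core before doubling up. The real ESXi CPU scheduler is far more sophisticated; the function and names here are invented for illustration.

```python
def place_vcpus(vms: dict, cores: int, threads_per_core: int = 2) -> dict:
    """Assign each vCPU a (core, thread) slot, spreading across cores
    first so second hyperthreads are used only when cores run out."""
    # Slot order: thread 0 on every core, then thread 1 on every core.
    slots = [(core, thread)
             for thread in range(threads_per_core)
             for core in range(cores)]
    placement = {}
    i = 0
    for name, vcpu_count in vms.items():
        for v in range(vcpu_count):
            placement[(name, v)] = slots[i]
            i += 1
    return placement

# A uniprocessor VM and a dual VM on a dual-core, one-socket host:
print(place_vcpus({"uni": 1, "dual": 2}, cores=2))
# {('uni', 0): (0, 0), ('dual', 0): (1, 0), ('dual', 1): (0, 1)}
```

Note how the dual VM's second vCPU lands on the first core's second thread, matching the picture in the lesson.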
12:48
If we think about CPU load balancing in the bigger picture,
12:52
the CPU scheduler is trying to figure out a way to use each virtual CPU independently, based on the available cores and sockets that the host provides.
13:05
It's trying to spread that out,
13:07
And if I've got a multi-virtual-CPU VM, it should try to use two different cores if it can.
13:15
so here I've got a dual core dual socket,
13:18
so two cores each, two sockets,
13:20
hyperthreading architecture.
13:22
My single-processor VM can use either of these two cores. However, the dual one might use the second thread here and the first thread here,
13:33
Or another scenario could be that one of these might actually be used on a different processor altogether, a different processor socket.
13:43
It just depends on what the
13:45
CPU architecture supports and what your
13:48
configuration settings might dictate.
13:52
So, to review,
13:54
we talked about
13:56
how the virtual memory gets used
13:58
from the physical memory of the host,
14:03
and then we know that we can overcommit that memory
14:05
because of the way that we can
14:09
get better use of it with these reclamation technologies.
14:13
And we also know that the VM can only power up if the swap
14:16
file
14:18
is not exceeding the difference between the allocated and the reserved RAM. So if I've allocated 512 for a VM and I need 256 to boot, that means my swap file can't be bigger than 256 in this particular example,
14:33
And the reservation means that's the minimum that I need to get the machine to power up, and I can set a limit for how much it will actually be allowed to use.
14:41
If the limit gets exceeded, then we start using these other technologies to try to make more efficient use of the host's physical RAM.
14:50
And we also covered how multi-virtual-CPU VMs will utilize the sockets and cores in a particular host environment.
15:01
And then with hyperthreading enabled, we've got two threads per
15:05
core.
15:07
And that can help with spreading out the load more evenly, as we've seen in some of these other examples.
15:13
Okay, that concludes lesson one. See you in lesson two. Thank you.