Right, in this block we'll talk about microprocessors. Microprocessors are most commonly referred to as CPUs.
They're the brains of the computer; they do all the heavy lifting, and all the other devices really serve the CPU. The CPU handles billions of numerical calculations per second.
In today's common desktop PC landscape, we have two main vendors for CPUs: Intel and AMD.
In the next couple of slides, we're going to go over a lot of the different characteristics, including the speed of CPUs, how that speed is determined and what it means; what it means to have multiple cores; what cache sizes and types there are for CPUs; what hyper-threading is; 32-bit versus 64-bit, which is pretty common now, with processors being 64-bit, and what that requires; virtualization, which is a big one now, being able to run guest OSes on your machine and run multiple OSes without having to reboot; and what it means to have an integrated graphics processing unit.
So first we'll talk about the memory controller chip, or MCC. This allows the CPU and the RAM to actually communicate. Thinking of it from the very basics: all your data is stored on a hard drive. When you want to perform an action, like opening a program, that data gets loaded into RAM and then sent to the CPU over the address bus and the external data bus. The MCC coordinates all that activity between the various devices.
So we just mentioned the external data bus. What is that? That is the actual interface to the CPU. Think of all the little pins that come off a CPU: when the chip is seated or soldered to a motherboard, those pins connect to little pathways. Those pathways go back to the MCC, and the pathways coming off the CPU are called the external data bus; that's the communication from the CPU to the rest of the system. So when a command is ready to be processed, it comes from RAM: the MCC sees it and sends it over the external data bus to the CPU. And for the CPU itself to do work, it has what are called registers, which are the workspace it uses to store data for the instructions it's processing.
So, timekeeping. How does the processor know it's time to do something? What makes it move? What's the rhythm? It's called the clock wire. The clock wire informs the CPU that new information is waiting to be processed. It's like a clock: on every tick, the processor does something, and it does this via the clock wire. It's a voltage signal sent to the CPU, like a metronome. So on every cycle it's going to check for data coming over the external data bus, which again was sent from the MCC, and this keeps everything rolling. A single charge on the clock wire is called a clock cycle, and that's going to get us into clock speeds eventually.
Now, each CPU is capable of doing a certain number of clock cycles, which is not to be confused with the bus speed, which controls the speed of the peripherals, not the speed of the processor. We look at how many cycles it can process per second: one hertz equals one cycle per second, one megahertz equals one million cycles per second, and one gigahertz equals one billion cycles per second. So the more hertz, the faster the CPU.
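To make those units concrete, here's a tiny Python sketch; the constant names and the helper function are just illustrative, not anything standard:

```python
# Clock-speed units expressed as raw cycles per second.
# 1 Hz = 1 cycle/s, 1 MHz = 1,000,000 cycles/s, 1 GHz = 1,000,000,000 cycles/s.
HZ = 1
MHZ = 1_000_000
GHZ = 1_000_000_000

def cycles_per_second(value, unit):
    """Return the raw cycles per second for a speed like (3.2, GHZ)."""
    return value * unit

print(cycles_per_second(1, MHZ))    # 1000000
print(cycles_per_second(3.2, GHZ))  # 3200000000.0
```

So a 3.2 GHz CPU is being told to do something 3.2 billion times every second.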
Now, the bus speed is separate; we're not talking about the EDB anymore. We're talking about the rest of your devices, like your peripherals, which operate at a certain bus speed. What we do is increase the internal CPU speed by multiplying the bus speed by a multiplier. So say the bus speed is 100 MHz; there's a multiplier that we set, and we multiply the bus speed by it to get our clock speed on the CPU. Now, this has to be set so it doesn't exceed the maximum clock speed the CPU can support, because otherwise it can cause damage to the CPU.
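That bus-speed-times-multiplier relationship can be sketched like this; the specific numbers (a 100 MHz bus, a 35x multiplier, a 3600 MHz cap) are made-up illustrative values, not figures from any real CPU's spec sheet:

```python
# Illustrative bus speed x multiplier calculation (all numbers are assumptions).
BUS_SPEED_MHZ = 100    # the base bus speed derived from the system crystal
MULTIPLIER = 35        # the multiplier set on the motherboard
MAX_CLOCK_MHZ = 3600   # the maximum clock speed this CPU supports

clock_mhz = BUS_SPEED_MHZ * MULTIPLIER
if clock_mhz > MAX_CLOCK_MHZ:
    # Exceeding the rated clock risks damaging the CPU.
    raise ValueError(f"{clock_mhz} MHz exceeds the CPU's {MAX_CLOCK_MHZ} MHz rating")
print(f"CPU clock: {clock_mhz} MHz")  # CPU clock: 3500 MHz
```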
Now, what drives all this? Where do we get the clock speed from? It can't just appear; something has to be generating this rhythm. Just like your watch has a quartz crystal that keeps time, there's the same kind of technology inside a motherboard, and that's what's really driving that rhythm. It's like a beat. That quartz crystal is set to put out a certain beat, and it sends that to the clock multiplier we just talked about. The clock multiplier on the motherboard multiplies that beat to match the system speed. And that system speed, as we said, can't be higher than the maximum cycles per second the CPU supports. So it all comes from the top down; it's all really derived from that system crystal.
You might have heard this term: overclocking is where we try to make the CPU go faster than it was intended to go. This can be dangerous, because it can cause the CPU to overheat, since it's performing outside spec. It will also usually void the warranty, in most cases, unless it's from some specialty shop that supports it. But the whole goal is to make it go faster; it's like tuning your car to maybe go faster than it's supposed to go. You usually see it more with PC enthusiasts and hobbyists; it's very uncommon to see overclocking in a corporate environment. The gains are usually minimal, and a lot of the time the goal is faster performance in video games. You're not going to see a huge increase from overclocking in how fast your Outlook or your Word opens. What we're doing is changing that multiplier to be higher than what the specs say it should be.
When processors first came out, they were very linear: they could only do one task at a time. Now they're capable of parallel execution, which makes them a lot more efficient. Some of the ways they perform this parallel execution are through what are called multiple pipelines, caching, and multiple threads, and we'll talk about what each one of those means.
So, a pipeline. We said multiple pipelines, so what is a single pipeline? Think of it as a conveyor belt. The processor gets an instruction and it goes through the conveyor belt. The first step would be fetch: we get the data off the EDB, the external data bus. Now we've got the data, the instructions. Next is decode: what are we supposed to do with these instructions? Are we doing math? Drawing a picture on the screen? What are we doing? Then, once we've determined what kind of function we're going to do, we actually execute the instruction, because we know what kind of function it is. And then we send it back out to whoever requested it, back out over the EDB. Each one of these steps happens on one of the cycles we talked about, sent from the clock.
So what happens if the execute stage takes a long time? Say it's a lot of math, like a complicated Excel sheet; you do a lot of math in the execute stage. Now your pipeline has stalled. If we have a single pipeline, no other instructions are going to come through the CPU, because they're all going to back up behind execute. And the execute is going to take multiple clock cycles: each one of these steps takes a minimum of one cycle, but they can take multiple cycles if the instruction requires it.
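Here's a toy Python model of that single pipeline. The stage names follow the fetch/decode/execute/write-back steps above, and the cycle counts are made-up numbers purely to show how one slow execute holds up everything behind it:

```python
# Toy model of a single four-stage pipeline (fetch, decode, execute, write-back).
# Instruction names and cycle counts are illustrative assumptions.

def run_pipeline(instructions):
    """Each instruction is (name, execute_cycles); the other stages take 1 cycle."""
    clock = 0
    for name, execute_cycles in instructions:
        clock += 1                # fetch: read the instruction off the EDB
        clock += 1                # decode: figure out what operation this is
        clock += execute_cycles   # execute: may take many cycles (the stall)
        clock += 1                # write-back: result goes back out over the EDB
    return clock

# One long "complicated spreadsheet" instruction holds up the simple ones behind it.
total = run_pipeline([("add", 1), ("big_math", 10), ("add", 1)])
print(total)  # 21 cycles total; the 10-cycle execute dominates
```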
So newer processors have multiple pipelines. That way, if we have a stall in execute for one instruction, we can start another pipeline for another instruction, so we don't get backed up. And they've even gotten to the point where they specialize some of the pipelines to perform specific functions. They'll say: we know these kinds of functions, like heavy arithmetic, are going to take longer, so we'll send them through their own pipeline and send everything else to a different set of pipelines. We'll send the trucks down the superhighway, because we know they'd wreck the back roads, and we'll divert everything else through the back roads, where it'll be faster.
Another optimization in modern-day CPUs is what's known as cache. When a program opens, say you start Word, that's not what gets sent to the CPU. It's not just Word; there are a lot of other programs and multiple instructions that have to happen in the CPU for Word to run. These instructions are broken up into what are called threads, which are small sequences of instructions for a specific purpose. Now, a lot of programs might perform a similar task, or a single program might be broken into threads that perform similar tasks. At a very basic level (it's not really this simple), say an instruction is "two plus two equals four." We remember we've already done it, so we put it in our cache, which is short-term memory on the CPU itself made of static RAM, and we store that "two plus two is four" result there. So when something else comes down the pipeline saying "hey, two plus two equals four," we don't need to waste time executing that; we grab it from cache and send it right back down the pipeline. We don't waste extra cycles doing that math; we've already cached it.
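That "remember the answer so we don't redo the work" idea can be sketched as a simple memoization table in Python; the dictionary-based cache and the `execute` function here are just an analogy, not how CPU cache hardware actually works:

```python
# Sketch of the caching idea: remember results of work we've already done,
# so a repeated instruction skips the expensive "execute" step entirely.
cache = {}

def execute(a, op, b):
    key = (a, op, b)
    if key in cache:              # cache hit: no cycles spent redoing the math
        return cache[key]
    if op == "+":                 # cache miss: do the work, then remember it
        result = a + b
    else:
        raise ValueError("unsupported op")
    cache[key] = result
    return result

print(execute(2, "+", 2))  # 4 (computed the slow way)
print(execute(2, "+", 2))  # 4 (served from cache this time)
```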
So now, we have different levels of cache; here we're talking at the CPU level. Going from the very basics: you've got hard drives, which store data all the time, whether the computer is on or off. From the hard drive, data goes to RAM, which holds it while the computer is running but doesn't keep it after the computer is turned off. And from RAM, we go to the CPU. At the CPU, we have cache, and that's the fastest: the closer we are to the CPU, the faster it is for the CPU to grab that information and use it. L1 cache is actually stored on the CPU core itself, so that's the fastest. That's the first place it looks for a stored instruction to see if it's already been done. It's the fastest, but it's also usually the smallest, because it's expensive and CPUs are tiny; we don't have a lot of room to put static memory on there. But it's also fast because it's a short trip.
L2 is the next place it looks. That's usually stored not on the actual core, but it's still part of the CPU package, so it's a little further away and usually a lot larger. These are stats you'll commonly see when looking at CPUs: they'll list the size of the L1 cache and the size of the L2 cache. It's the second place it looks; it's bigger and it's slower.
So we've been talking about the L1 and L2 cache, the ones on the CPU. The communication between the CPU and the L1 and L2 cache happens on what's called the backside bus; that all stays behind the scenes. Then we have what's called the frontside bus, which is everything else coming out: that's the CPU talking to the MCC and talking to the RAM. So the backside bus is where the communication with the L2 cache goes on, and the frontside bus is where we go out looking at RAM.
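The lookup order described above (L1 first, then L2, then out to RAM over the frontside bus) can be sketched like this; the keys and the relative "cost" numbers are made up purely for illustration and aren't real hardware latencies:

```python
# Toy lookup order: check L1 first, then L2, then fall back to RAM.
# Contents and costs are illustrative assumptions, not real figures.
l1 = {"a": 1}                  # tiny, on the core, fastest
l2 = {"a": 1, "b": 2}          # bigger, on the CPU package, slower
ram = {"a": 1, "b": 2, "c": 3} # biggest, out over the frontside bus, slowest

def load(key):
    for level, store, cost in (("L1", l1, 1), ("L2", l2, 10), ("RAM", ram, 100)):
        if key in store:
            return store[key], level, cost
    raise KeyError(key)

print(load("a"))  # (1, 'L1', 1): the short trip
print(load("c"))  # (3, 'RAM', 100): the long trip
```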
Then we also have multithreading, or hyper-threading, which is virtually creating another processor to increase efficiency. If you virtually create another processor, you're virtually creating more pipelines. But since we're only virtually creating it, we're not really getting more speed; we might be more efficient, but we're still under the limitations of the original processor. We're just making it capable of doing more work, though probably at a little slower speed per task. The advantage comes if the software is written to take advantage of multiple processors and can split its work between them; then the program will run more efficiently. But the software and the operating system need to support hyper-threading. An early version of hyper-threading was only supported by Windows and wouldn't work in Linux, because the operating system wasn't yet able to support it.
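As a loose analogy only (hyper-threading is a hardware feature; this is just OS-level threading in Python), here two "logical" workers share one underlying machine: both finish their work, but the CPU-bound math is interleaved rather than doubled in speed:

```python
# Loose analogy for hyper-threading: two threads share one physical worker.
# You keep more work "in flight", but you don't get more raw compute.
import threading

done_lock = threading.Lock()
done = []

def task(name):
    # Both logical "processors" run on the same underlying hardware,
    # so this CPU-bound sum is interleaved, not truly run twice as fast.
    total = sum(range(100_000))
    with done_lock:
        done.append((name, total))

threads = [threading.Thread(target=task, args=(f"logical-{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(done))  # 2: both finished, sharing the same silicon
```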
Now, multi-core. Back in the early days of servers, and even some high-end desktops, when we wanted more processor power, we couldn't make processors small enough, so you'd actually have multiple processors on the motherboard. That's called SMP, symmetric multiprocessing. Now the technology has gotten good enough that we can take the core, which is the actual processing part of the CPU, and instead of lots of CPU chips, we can put multiple cores on one CPU chip. So instead of hyper-threading, where we were simulating it, we're actually putting multiple processors on one CPU chip, and those are called cores. Two cores are referred to as dual core, four cores as quad core, six cores as hex core, and for all intents and purposes they are multiple physically separate processing units on one CPU.
So it's going to work like multiple processors, and it's more efficient if the software supports parallel execution; that's a big part of it. Here's a good example of what it means to take advantage of multiple cores. A program like a word processor is probably not written to support multiple processors, so it's going to send all its work to one CPU. Say this word processor program goes to core one, and it's a huge document, so it takes up all of that core's processing power. Then you want to start up Excel, or we'll say a spreadsheet editor. You start that up, and the operating system sees that the first core is full; it doesn't have time to do anything else. So the OS sends that task to the next core instead of making it wait. You're not making a single program faster, but you're able to run more things at once, because the system can distribute the load.
Now, some programs, especially graphics editors, video rendering programs, and highly computational programs, are written to support multiple processors, and when such a program runs, it will distribute itself across all the available cores. But this is hard to program for, because you need to distribute the work across multiple cores, gather the results, and piece them back together to make them usable. So there aren't a lot of programs, outside of usually high-end software, that actually support multiple cores. Where you really see the benefit is when you're running a lot of programs at once, especially CPU-intensive ones, because each core will get its own list of programs to run, instead of everything maxing out one core.
So the two major competitors in the desktop consumer PC area are Intel and AMD. Intel is probably one of the biggest manufacturers of desktop PC processors. They created the modern x86 architecture that's used today. When we say x86, that's the instruction set for how a processor works, and it's the most common one right now.
AMD, Advanced Micro Devices, started off as a clone maker for Intel. Intel couldn't make enough processors, so they came up with a licensing agreement with AMD to help them make chips, and then they eventually parted ways and became competitors. And competition is good: AMD started releasing processors that were a little faster, and they were the first to come out with desktop-available x86 64-bit capable processors, which spurred Intel to get moving on 64-bit. So competition is good, because Intel had had the market for a while, and it really spurred more growth. They're both still very active.
Now, just like with cars, we have model names for our different kinds of CPUs. Intel and AMD don't make just one processor and sell it to everyone; they have various kinds of processors, and to keep the different kinds straight, they give them names. So we'll have a class, and then we'll have specific model numbers, and we'll go into that a little bit.
But they're usually targeted at a certain audience. The most common targets for their names and processor lineups are budget computers, desktop computers, mobile devices, and servers. Budget computers are where we're looking at something like a $300 machine that we just want to use to browse the web. Desktop computers are more like what you'll see in a corporate environment, or if you're an everyday user who does a lot of other stuff; this can also include gaming systems to a certain degree, although those can be even more high-end. Those will be a little more expensive, and the processors will be capable of doing more. Mobile devices focus on low power usage and low heat, because when we're talking mobile, it's all about battery life. And then server systems: servers have totally different requirements than any of these other ones, because they're processing large numbers of instructions from multiple users. They usually have large amounts of RAM and large hard drives, and they usually do a lot of specialized functions, so servers use a processor line of their own, and it's usually the most expensive line also.
Now let's talk about the current lineups that are out on the market. In the mainstream area, meaning our desktop PCs, you're going to see the Core i5, Core i7, and Core i3. The numbers indicate pretty much the power levels: the i7 is the best one out there right now, and the i3 is a lot cheaper. It really depends on what you're going to use it for, but basically you get what you pay for. If you want a lot of performance, you're going to pay more; it's like everything in life.
For AMD, they have quite a few more names: there's the Phenom II, the A-Series, and the Athlon II.
At the budget level, we have the Intel Pentiums and the Celerons; this also includes the i3 and some of the lower models of the i5s. For AMD, we have the Sempron and the Athlon II.
Now, the mobile area is interesting. We have the Turion for AMD, and for Intel you'll see the i7, i5, and i3: they made mobile versions of the x86 chips, and you're starting to see those in tablets, like some of the new Surface tablets coming out from Microsoft, which use the mobile versions of these chips. So they're using the same x86 instruction set, but using a lot less power and energy; usually slower, but the same instruction set. There's also the Atom processor, which is very common in a lot of tablets nowadays.
On the server side, they've been using pretty much the same names for the last five to six years. The Xeon and the Itanium have been around for a long time; they're not the same Xeon and Itanium that were around five or six years ago, or even ten years ago, but updated models, of course, and they'll match different socket types, which we'll talk about in the next slides. And then for AMD, there's the Opteron.