Hello. This is Dr. Miller, and this is Episode 2.2 of Assembly.
Today we're going to talk about logical operators and the memory hierarchy.
So we have four logical operators that we're going to talk about today, and they all operate on binary data.
These are how a computer performs the different kinds of bit-level operations it needs.
So first is AND: if both bits are one, then the result is one, and zero otherwise.
Then OR: if either bit is a one, then the result is one, and zero otherwise. So if they're both zero, then you get a zero.
NOT: every zero becomes a one, and every one becomes a zero.
And then XOR: if the bits are different, the result is a one, and zero otherwise.
So let's do an example for each one of these. You should try to figure out what the result is before I go through it.
First is bitwise AND. If both bits are one, our result is a one.
If both of them are zero, or one of them is a zero,
then our result is going to be a zero. So we can see that these two should be ones, and the rest of these should all be zeros.
Next is bitwise OR.
Again, try it on your own.
With a bitwise OR, if either bit is a one, then our result is a one. So we have a one, a one, a one, a one, a one,
a zero, a one, and a zero.
For NOT, every zero becomes a one and every one becomes a zero. So here is our number, and we just have the one value that we're NOTing.
And so everything has been flipped, just like we did when we talked about two's complement.
And then XOR. With XOR, if the bits are different, then we get a one. So these are not different, these are not different, these are different right
there. See, these two are different, these are not different, these are different, and these are not different.
And so there's our result.
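Each of the four operators can be checked quickly in Python, which uses the same bitwise semantics. This is a small sketch; the two byte values here are my own illustrative picks, not the ones from the lecture slides.

```python
# The four bitwise logical operators on two example bytes.
# Values are made up for illustration.
a = 0b11001010
b = 0b10011001

print(format(a & b, "08b"))      # AND: 1 only where both bits are 1
print(format(a | b, "08b"))      # OR:  1 where either bit is 1
print(format(~a & 0xFF, "08b"))  # NOT: flip every bit (masked to 8 bits)
print(format(a ^ b, "08b"))      # XOR: 1 where the bits differ
```

Note the `& 0xFF` mask on NOT: Python integers are unbounded, so without the mask `~a` would be a negative number rather than an 8-bit flipped pattern.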
One of the interesting things with XOR is that if you apply it in reverse, you end up with the original value. So if I take
A and XOR it with the result,
I end up with B:
a one and a zero become a one, a one and a zero become a one, and a zero and a one become a one. So these are different,
and then these are all the same.
And the reason why we talk about XOR is that XOR is used in encryption.
So if A is our key and B is the text that we want to encrypt, we XOR them together and we end up with our encrypted result.
Then, to reverse the operation, we take our key and our encrypted result, we XOR them together, and we end up with our plaintext data.
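That round trip is easy to verify yourself. Here is a minimal Python sketch of the idea; the key and plaintext bytes are made-up values, not anything from the lecture.

```python
# XOR encryption round trip: XORing with the same key twice
# recovers the plaintext. Values are illustrative.
key = 0b10110100
plaintext = 0b01101100

ciphertext = plaintext ^ key   # encrypt: plaintext XOR key
recovered = ciphertext ^ key   # decrypt: ciphertext XOR key

print(format(ciphertext, "08b"))
print(recovered == plaintext)  # True
```

This works because XORing with the same bit twice always gets you back where you started: each key bit either flips the data bit twice or leaves it alone twice.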
So let's go ahead and try an example of this.
I'll go ahead and build a project,
and we'll just move some data into some of these registers.
Well, let's try this for our example: the A value we had before,
and then we can try some zeros.
Then we can go ahead and do a logical operator on it. Let's, for example, do XOR.
So this will do an XOR of EAX and EBX and store the result in EAX.
Then I can print out what my registers are, so let's go ahead and print them before.
I'll go ahead and write and quit my file.
Oh, I made a mistake.
Because these start with letters, the assembler thinks that they are identifiers. So if you have a number that starts with letters,
you need to put a zero in front of it.
All right. And then I had to put an
h at the end to tell the assembler that these were hexadecimal numbers. It was getting confused, thinking they were
variable names.
So we've built our project. If we do an ls,
we can see that
we have our resulting binary.
And so we can see what A looked like, right? It has the value that we moved into it.
We can see what B looked like.
And then it did the XOR, and we ended up with the result in EAX.
And so we got
sixes and then Bs, right? XORing something with zero, you end up with that thing again. But if we XOR
A and C,
we get the number six.
And so you should go ahead and try that on your own. Get out a pen and paper,
try a different value, then put it into your program, build it, run it, and you should see your result.
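The identity the demo relies on is worth checking for yourself. Here is a quick Python sketch; the 32-bit constant is made up for illustration, not the value typed in the demo.

```python
# Two handy XOR identities behind the register demo.
# The constant is an arbitrary 32-bit example value.
a = 0xDEADBEEF

assert a ^ 0 == a   # XOR with zero leaves the value unchanged
assert a ^ a == 0   # XOR with itself gives zero

print(hex(a ^ 0))
```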
All right, the memory hierarchy.
So with x86 processors, we've talked about how there is an arithmetic logic unit, or ALU, that can do mathematical operations. We also have our floating point unit,
and then we have things like the data bus. So this is how data gets moved from one location to another.
It is a shared communication medium, meaning that
multiple processors might have to use the same medium.
We have registers, which we've talked about; these are small and very fast. They run at the clock rate of the system, and the clock
has what are called cycles. A cycle is the smallest amount of time in which you can execute a single instruction.
Now, there are some instructions that take more time than others. A repeat instruction might go over and over again, and so it might take more than one cycle, and some things like multiplication
take longer than, for example, shifting.
And then recently it's come to light how
branch prediction has been used in different attacks. Branch prediction means that
the processor is basically going to guess which branch, when we talk about branching, is the most likely to
be taken, and then it will speculatively execute it before it actually knows the result of the previous operation.
And so we have a hierarchy of access times for our memory. Again, registers are in the one to two nanosecond range, about as fast as we can go.
And then we have these caches in here. These caches are faster than main memory, but they're slower than the registers, and so they act as a buffer:
you can read data into the cache and then re-read it over and over again without having to go to memory.
So one of the things to note here is that if, for example, a register access is two nanoseconds and main memory is 90 nanoseconds,
then main memory is much, much slower than a register,
and so loading data from main memory is very slow in comparison to loading it from the registers.
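To put those two figures in perspective, here is the back-of-the-envelope arithmetic, using the 2 ns and 90 ns numbers quoted above:

```python
# Relative cost of a main-memory access vs. a register access,
# using the latency figures from the lecture (2 ns vs. 90 ns).
register_ns = 2
main_memory_ns = 90

slowdown = main_memory_ns / register_ns
print(slowdown)  # 45.0: one memory access costs ~45 register accesses
```

In other words, in the time it takes to fetch one value from main memory, the processor could have read a register roughly 45 times, which is exactly why the caches in between matter so much.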
And then you start getting into things like hard drives, which are quite slow, or tape backup, which isn't even listed here.
It could even be on the network, right? So when you're pulling code from the network, it has to go all the way out to the Internet and come all the way back.
And so, in order to make our processors as efficient as possible, we want to use the fastest memory that we can at any time.
It's fundamental to try and keep these caches coherent, with the right data in them, so that we can do as much processing as possible as fast as possible.
So again, in this cycle we're going to fetch, but it actually has to load the instruction from somewhere.
Your executable might be on the Internet, so you download it to your hard disk, and your hard disk is where you cache it for a little while. Then when you want to run it, you load it into RAM, or main memory. And then, as it's executing on the processor, different parts get loaded into these different caches: level one, level two, and level three.
That way, if it just keeps going back to that level-one cache, it doesn't have to go all the way to main memory or all the way to the hard disk, which are extremely slow.
And then, after you've fetched it, you go ahead and decode it and execute it, and then you store the result. That storage might be in cache, it might be in RAM, it might be on your hard disk or the Internet,
right? And so we have to go through all of these different hierarchy levels when we're executing instructions on the front end and then on the back end; in the middle, it's all just running on the processor,
and then when you have a program like we did in our example, right, that's actually stored on disk,
and so the operating system is going to search what's called the path, and the path tells it where to look for programs. So, for example, most Linux systems have /bin or /usr/bin.
And so the OS will look in those places for an executable. So when you say ls, it's going to look in /bin and see: is there an ls command in the bin
folder? If it exists, then what it does is it takes that program, loads it into RAM, and creates a process for it. The process has a PID, or process identifier,
that says which process that is. So if your program goes haywire and is not working, then you can use things like kill in order to stop that process.
So that way you know which process it is. And there might be 20 people running bash, right?
And so if they're all running bash, I need to know which process identifier of bash it is that I want to kill.
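Both ideas, the PATH search and the PID, can be poked at from Python's standard library. This sketch assumes a Unix-like system where ls is somewhere on the PATH:

```python
import os
import shutil

# shutil.which searches the PATH the same way the OS does
# when you type a command name like "ls".
print(shutil.which("ls"))  # e.g. /bin/ls or /usr/bin/ls

# Every running process has a unique process identifier (PID);
# this prints the PID of the Python process itself.
print(os.getpid())
```

The PID printed here is the same number you would hand to kill if this process went haywire.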
And so once it's loaded and the process is created, it starts executing,
and every process thinks that it is unique. So each one thinks it's the only one running on the processor.
So they think they have access to all the memory and all the instructions. Now that's not really true. but that's the way we conceptualize it.
And then, generally, the operating system will handle the resources, like disk I/O (reading and writing data from disk), reading data from the keyboard, and moving graphics onto the display, or at least onto the graphics card so they can be put on the display,
and then switching between different tasks that are running.
So when we switch between tasks, the OS does this sort of automatically; it switches between processes really fast. Back in the day there were only single-core processors, and so in order for you to be watching a video and typing in a word processor at the same time,
the OS would switch rapidly back and forth, like ping-pong, right back and forth,
and then processes might be waiting, and so other processes can run. So, for example, a process might try to read data from a disk, or write data to a network, or wait for the user to type something in.
And so if those are waiting, then it goes ahead and executes some other process that has work that needs to be done,
and then the OS will generally have a priority list saying which processes are high priority and which processes are low priority.
And so if something high priority comes in, it will automatically execute that and pause low priority.
So, for example, if you're watching a video, you probably want that to be high priority, but if your computer is doing an AV scan, you might want that to be low priority. So when I'm not using my computer, great, run an AV scan. But if I'm watching a 4K video
on Netflix, I want that to be very high priority.
And when it switches these tasks, it saves what's called the context. The context is basically the state of all of the different registers.
So we've talked about registers like EAX, EBX, ECX. It saves those values, loads a different process's
register values, and then it starts running that new process.
And then there's been a lot of talk in the past about RISC versus CISC. RISC is the reduced instruction set computer, and CISC is the complex instruction set computer.
So generally we see an Intel processor as the complex kind, right? We have larger opcodes and lots of complicated instructions, but we need fewer instructions
to do things. For example, there are lots of multimedia instructions, or multiple-data instructions, and so we can run one instruction instead of running 20.
But with reduced, we have smaller opcodes and we need more instructions. This tends to be ARM.
And so a lot of our desktop computers are Intel, and they use a lot more power than these RISC ARM processors, which have smaller opcodes, which means they have fewer transistors, which means that they actually use less power.
And so that's why cell phones and iPads have really good battery life: they're running a reduced instruction set
that consumes less power per instruction it has to execute.
So in summary today, we talked about logical operators and then the memory hierarchy.
Looking forward, we're going to talk about segments and functions and calling functions that are built into our libraries.
If you have questions, you can contact me at millermj at unk dot edu,
and on Twitter I'm @milhouse30.