Hello. This is Dr. Miller, and this is Episode 1.2 of Assembly.
Today, what we're going to learn about is assembly architectures and data representation, including binary, hexadecimal, and two's complement.
So what is assembly?
Assembly is the second-lowest-level programming language that we have for computers.
It is written for a specific architecture, including x86, ARM, SPARC, or even the Cell processor.
The basic assembly commands are turned into binary code by an assembler, and so it will create the executable, the raw binary code, that will get run on one of these computers.
There are several different assemblers that we could use. There is the Microsoft assembler, or MASM,
and NASM, the Netwide Assembler. In this course, we will learn about the NASM assembler, and we'll also use the GNU assembler for ARM assembly.
So as far as the architectures go, at the lowest level we have the digital binary logic
that each of the processor makers creates, and they do this by
creating OR gates and AND gates using transistors. On top of that, the creators of the instruction set will create an instruction set architecture, or ISA.
This is a specification that any one of those processors would use in order to, for example, move data from main memory into a register or do an addition.
And so, from the instruction set architecture, we create assembly language.
Assembly language is going to give us
commands that are somewhat generic, so you can say add one register to another,
in a language that is understandable by humans, as opposed to understandable by computers.
The instruction set architecture is defined by these processor makers, and so it tends to be
opcodes, which are binary in nature.
Then high-level programming languages like C, Java, and Perl are built on top of assembly language, and so these languages will end up generating assembly or building interpreters that themselves run as assembly.
So computers store data, as we should know, using ones and zeros.
But reading ones and zeros is a difficult process, and so people who do a lot of low-level programming will use different representations for the data, because they're easier for us to read. Some of these representations are hexadecimal, ASCII, and two's complement.
So with binary data, each one of these digits
represents a bit. We have a zero bit and a one bit, and they sit in a particular order, meaning that one position represents the number one, and the bit tells us whether we have it or not, and the next represents the number two, and whether we have that or not.
Each one of these bits represents whether that value is on or off.
And so we can see here that I have eight bits, and eight bits are known as a byte.
If I have four bits, that's called a nibble, or half of a byte,
and you can use powers of two, as you can see: 1, 2, 4, 8, doubling each time. That is probably the easiest way for most students to learn how to convert something from binary into decimal.
So, for example, if we have the 2 bit on, the 32 bit on, and the 64 bit on, and we add all those together, this binary number represents the decimal number 98,
and you can play with it by changing some of these bits and then seeing what the resulting number is.
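You can check the powers-of-two method for yourself. Here is a small Python sketch (Python is just an illustration here, since this is quicker to try than assembly) that adds up the place values of the bits that are on:

```python
# Each position in a binary string is a power of two: 1, 2, 4, 8, 16, ...
# Summing the powers whose bits are on converts binary to decimal.
bits = "1100010"  # the 64, 32, and 2 bits are on
value = sum(2 ** i for i, b in enumerate(reversed(bits)) if b == "1")
print(value)               # 98, the lecture's example: 64 + 32 + 2
print(int("1100010", 2))   # Python's built-in conversion agrees: 98
```

Changing any of the characters in `bits` and re-running is the same experiment as flipping bits in the lecture demo.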
Now, hexadecimal is a way to take each nibble and represent it by using one digit.
The digits go from 0 to 9 and then A through F.
And so if you have the letter D, what that represents in binary is 1101; it also represents the number 13 in decimal.
And so this is a way where you can write one byte by using just two hex digits.
And reading two hex digits, for example AF, is a lot easier than reading 10101111.
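As a quick sanity check of the one-digit-per-nibble idea, this Python sketch prints the binary form of the hex values used as examples above:

```python
# One hex digit covers one nibble (4 bits); two hex digits cover a byte.
print(format(0xD, "04b"))    # 1101 -- hex D is binary 1101
print(0xD == 13)             # True -- and decimal 13
print(format(0xAF, "08b"))   # 10101111 -- AF is far easier to read
```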
And so a lot of times very low-level programmers will use hexadecimal so that they can
understand what is happening but not get lost in all of the ones and zeros going across the screen.
A lot of tools, like assemblers and disassemblers, will use hexadecimal to represent data.
Next is two's complement. This is the way that computers store data that is either positive or negative, so we can have both positive and negative numbers.
In two's complement, it's the most significant bit that matters. So if I have eight bits and the top bit is a zero, then we have a positive number,
and if it's a one, we have a negative number,
and in a later lecture we'll learn how to convert a number into two's complement.
If the uppermost bit is a one, then we flip all of the bits, so all the zeros become ones and all the ones become zeros,
and then we add one to the result that we have.
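The flip-and-add-one rule can be sketched in Python for an 8-bit value (an illustration only; the assembly versions come in the later lecture mentioned above):

```python
def negate_8bit(value):
    """Two's-complement negation: flip all 8 bits, then add one."""
    flipped = value ^ 0b11111111   # flip: zeros become ones, ones become zeros
    return (flipped + 1) & 0xFF    # add one, keep only the low 8 bits

print(format(negate_8bit(2), "08b"))  # 11111110, i.e. -2 in 8-bit two's complement
print(negate_8bit(negate_8bit(2)))    # 2 -- negating twice gets the number back
```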
We also have characters. Characters can be ASCII, the American Standard Code for Information Interchange.
This was an American standard created a long time ago that used just eight bits for characters, and it just represented the American characters: letters, you know, A through Z, and
punctuation, anything that you would generally see on an American keyboard.
For the letters, and the things that you would be able to see, the most significant bit is going to be a zero.
Assembly can very easily convert from characters to bytes, because in the end we have to have bytes in order to
save our data inside of our files.
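In Python the character-to-byte mapping is directly visible, and you can see that the top bit of a printable ASCII letter is zero, as described above:

```python
# ord() gives the byte value that ASCII assigns to a character.
print(ord("A"))                  # 65
print(format(ord("A"), "08b"))   # 01000001 -- most significant bit is 0
print("AZ!".encode("ascii"))     # b'AZ!' -- one byte per character
```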
But since ASCII was introduced, we've also added the Unicode standards, one standard for all of the world.
Some of these are UTF-8, UTF-16, and UTF-32, and that number basically means how many bits we're going to use for each character.
UTF-32 has the ability to store every single character that we could want, emoji as well. And so this standard allows anybody, even users of some of the pictographic languages, to have characters representing all the different words that they have inside of their languages.
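A short Python sketch shows how the byte counts differ between these encodings (the `utf-32-le` variant is used here only to skip the byte-order mark Python otherwise prepends):

```python
# An ASCII letter fits in one UTF-8 byte, while an emoji needs four;
# UTF-32 always spends four bytes per character.
for ch in ("A", "\U0001F600"):   # "A" and the grinning-face emoji
    print(repr(ch),
          len(ch.encode("utf-8")),
          len(ch.encode("utf-32-le")))
```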
So in summary, we briefly discussed the assembly architecture. We discussed the different binary data representations, including binary, hexadecimal, and two's complement,
and even the different methods for storing characters.
Looking forward, we'll look at different architectures, registers, and then protected mode.
If you have any questions, you can contact me at millermj at unk dot edu, or on Twitter at milhouse30.