Desirable Qualities of Algorithms and Keys


Time: 7 hours 50 minutes | Difficulty: Beginner | CEU/CPE: 8
Video Transcription
Hello. We've got an initialization vector, we've got an algorithm, and we've got our keys, but it's probably worth mentioning some desirable qualities of these things. I don't have any desirable qualities of an initialization vector written here, but when we talked about that earlier, we said we want our initialization vectors to be long enough. That's true of keys as well. Long enough means that, based on the data, we provide adequate security without trading away too much performance or incurring other costs. That decision is based on risk management.
For our algorithms, we want the following. First of all, confusion, which means good, strong, complex math in the substitution step. That's what confusion is about. If you look at my math functions, they are not the most sophisticated functions, so we need confusion.
Next is diffusion. This means that the plaintext is interspersed throughout the ciphertext so that it's more difficult to reverse the process and get an easy result. It just adds to the complexity.
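To make confusion and diffusion a little more concrete, here is a minimal Python sketch. It is my own illustration, not anything from the course, and it is not a real cipher: the substitution table stands in for confusion, the bit rotation stands in for diffusion, and the toy 32-bit state and function names are all assumptions made for the example.

```python
# Toy illustration of confusion (substitution) and diffusion (mixing).
# This is NOT a real or secure cipher; it only demonstrates the ideas.
import random

random.seed(1)
SBOX = list(range(256))
random.shuffle(SBOX)          # a fixed random substitution table (confusion)

def rotate_left(value: int, bits: int, width: int = 32) -> int:
    """Rotate a width-bit integer left by the given number of bits."""
    mask = (1 << width) - 1
    return ((value << bits) | (value >> (width - bits))) & mask

def toy_round(state: int) -> int:
    """Substitute each byte (confusion), then rotate the whole word (diffusion)."""
    substituted = 0
    for i in range(4):                       # 32-bit state = 4 bytes
        byte = (state >> (8 * i)) & 0xFF
        substituted |= SBOX[byte] << (8 * i)
    return rotate_left(substituted, 5)

def toy_permute(state: int, rounds: int = 4) -> int:
    for _ in range(rounds):
        state = toy_round(state)
    return state

a = 0x12345678
b = a ^ 0x00000001                           # flip a single input bit
diff = toy_permute(a) ^ toy_permute(b)
print(f"output bits changed by a 1-bit input change: {bin(diff).count('1')}")
```

Even in this toy, a one-bit change in the input flips many output bits after a few rounds, which is the effect confusion and diffusion are meant to produce.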
There's also something called chaining; sometimes you'll hear about cipher block chaining. Chaining simply means the output from one function is used as input to the next. But keep in mind that with a chaining algorithm, you can have errors that cascade or propagate throughout, because each output from one function affects the next.
If we go back to our formula, plaintext plus initialization vector plus algorithm plus key equals ciphertext. Let's say that I take the data of the plaintext and I chunk the first portion into a 256-bit block, and the ciphertext that gets produced is the ciphertext for block 1. Now we chunk data into block 2. What's interesting is that block 2 is about to go through this same process, but the initialization vector that is used on block 2 is actually the ciphertext that was produced from block 1. The output from the first block is used as input for the second block. That produces ciphertext from block 2, which is used as input for block 3. That is the chaining process.
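As a rough sketch of that chaining process (my addition, not the lesson's own code), the Python below feeds each ciphertext block back in as the "initialization vector" for the next block. The block "cipher" here is a deliberately trivial stand-in (XOR with the key), and the function names and 8-byte block size are assumptions made for readability, not anything secure or standard.

```python
# Sketch of cipher block chaining (CBC) with a toy "block cipher".
# The toy cipher is just XOR with the key -- insecure, but it keeps the
# chaining structure easy to see: each ciphertext block becomes the
# "initialization vector" for the next block.
BLOCK_SIZE = 8  # bytes, kept small for readability

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt_block(block: bytes, key: bytes) -> bytes:
    return xor_bytes(block, key)              # stand-in for DES/AES on one block

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> list[bytes]:
    # pad with zero bytes so the data splits evenly into blocks (toy padding)
    if len(plaintext) % BLOCK_SIZE:
        plaintext += b"\x00" * (BLOCK_SIZE - len(plaintext) % BLOCK_SIZE)
    ciphertext_blocks = []
    previous = iv                              # block 1 is chained with the real IV
    for i in range(0, len(plaintext), BLOCK_SIZE):
        block = plaintext[i:i + BLOCK_SIZE]
        encrypted = toy_encrypt_block(xor_bytes(block, previous), key)
        ciphertext_blocks.append(encrypted)
        previous = encrypted                   # this ciphertext chains into the next block
    return ciphertext_blocks

key = b"K" * BLOCK_SIZE
iv = b"\x07" * BLOCK_SIZE
for i, c in enumerate(cbc_encrypt(b"attack at dawn, attack at dusk", key, iv), start=1):
    print(f"block {i}: {c.hex()}")
```

Because each block depends on the one before it, a corrupted ciphertext block also garbles the decryption of the block that follows it, which is the cascading-error behavior mentioned above.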
Another desirable characteristic that helps with the complexity is permutations. This is where you put the data through multiple permutation rounds. DES, which we'll talk about later, stands for Data Encryption Standard. It would chunk data into 64-bit blocks, and each block would go through the encryption process 16 times. That's why we say there are 16 permutations with DES. Now, DES was broken; it actually used a 56-bit key for its encryption process. When that happened, we went to Triple DES. With Triple DES, instead of 16 permutations, each block went through 48 permutations before it was encrypted and we moved on to the next block. As a result, Triple DES was a processing hog. You wouldn't really want to use Triple DES normally because of the sheer overhead of the processing. That's why AES, which is much less resource intensive, became much more popular. But the number of permutations adds to the complexity. You have to ask, how much is enough? The answer is just enough.
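If you want to see that overhead difference for yourself, here is a hedged sketch (my addition, and it assumes the third-party pyca/cryptography package is installed) that times encrypting the same buffer with Triple DES and with AES. ECB mode is used only so the timing isolates the block cipher itself; it is not something you would use on real data.

```python
# Rough timing comparison: Triple DES vs AES on the same data.
# Requires the third-party "cryptography" package (pip install cryptography).
import os
import time

try:  # newer releases keep TripleDES in the "decrepit" module
    from cryptography.hazmat.decrepit.ciphers.algorithms import TripleDES
except ImportError:
    from cryptography.hazmat.primitives.ciphers.algorithms import TripleDES
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

data = os.urandom(1024 * 1024)          # 1 MiB of random "plaintext"

def time_cipher(name, cipher):
    encryptor = cipher.encryptor()
    start = time.perf_counter()
    encryptor.update(data)
    encryptor.finalize()
    print(f"{name}: {time.perf_counter() - start:.4f} s for 1 MiB")

time_cipher("Triple DES", Cipher(TripleDES(os.urandom(24)), modes.ECB()))
time_cipher("AES-256   ", Cipher(algorithms.AES(os.urandom(32)), modes.ECB()))
```

On typical hardware the Triple DES pass takes many times longer than the AES pass, which is the "processing hog" point from the lesson.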
The last bullet point here under desirable qualities of an algorithm is that we want our algorithms to be open source and publicly known. There's a big discussion in the privacy and software development community about what is better, open source or closed source. Let's look at Unix versus Windows. Unix is open-source code: you can look at the code, you can manipulate it to work for your environment, and you can test it. There are no secrets. Why? Well, this way the community can examine it, tear it apart, and put it back together stronger. The more people working to create a better product, the better the product will be. If you look at Microsoft, their philosophy is that if other people can't see the code, then they can't break the code. But obviously people have been able to break the Microsoft code, so it hasn't worked as well as they probably intended. Their philosophy is one of security through obscurity: if you can't see it, you can't break it. But it doesn't work. Our preference on this exam, and most other exams you might take, is openness. You should have a publicly known, tried-and-true algorithm as your choice. That is better than a proprietary secret algorithm, and it's based on a principle called Kerckhoffs's principle. I may not have his name spelled exactly right on the slide, but that's what the principle is called. The principle says the algorithm should be open. The cryptography community will examine your algorithm and make it better.
Now, the desirable qualities for keys are as follows. We want them to be long, and that just means long enough. We want them to be random, and we want our keys to be secret. If our algorithm is going to be open, then the key has to be secret. Once we have all this, it's the basis upon which we'll build everything that we'll do in the rest of the class.
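For the "long, random, secret" point, here is a minimal sketch (my addition, not the lesson's) using Python's standard secrets module, which draws from the operating system's cryptographically strong random source.

```python
# Generating a key that is long enough, random, and easy to keep secret.
# The secrets module uses the OS's cryptographically strong RNG.
import secrets

key = secrets.token_bytes(32)             # 32 bytes = 256 bits, e.g. for AES-256
print(f"key length: {len(key) * 8} bits")
print(f"key (hex) : {key.hex()}")         # in practice, store the key in a vault or HSM, not in logs
```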