Video Description

DES, HMAC, RSA and CHAP. Our next lesson explores a series of additional encryption standards, including DES, HMAC, RSA and CHAP. In this lesson, we discuss what each standard is and how it works within the encryption process. For example, we discuss how the HMAC keyed system works and demonstrate how the shared key enables a successful authentication of the message. In another example, you'll learn why the more rounds of computation your encryption algorithm has, the harder your network is to breach and the more secure your environment is. [toggle_content title="Transcript"] The next algorithm we look at now is the Data Encryption Standard, DES. This standard is a deprecated block cipher with a 56-bit key. It's no longer in use; it's been deemed obsolete because there have been multiple successful attacks against DES. It's been compromised, and as a result Triple DES was created as a follow-up. With Triple DES, we take DES through three rounds of computation to improve on its security. This is one factor we are going to see going forward: in terms of the rounds of computation, the more rounds of computation, the stronger the algorithm is. With Triple DES, the algorithm is strengthened by increasing the number of rounds of computation, ensuring that the data goes through three rounds of encryption. DES uses a 56-bit key, but Triple DES applies a 56-bit key in three places, which gives you a 168-bit key. We also have the hash-based message authentication code, which is HMAC. This uses a secret key with a hashing algorithm to generate a MAC value. The MAC value is then applied to the message and sent with the message. We should know that the key is never sent with the message, nor is it applied to the message. On the receiving end, the receiver would also know the key.
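The HMAC flow described above can be sketched with Python's standard-library `hmac` module. This is a minimal illustration, not a full protocol: the key and message values here are hypothetical, and in practice the shared secret would be exchanged out of band.

```python
import hmac
import hashlib

# Hypothetical shared secret: known to both sender and receiver,
# never transmitted with the message itself.
key = b"shared-secret-key"
message = b"transfer $100 to account 42"

# Sender computes the MAC value from the key and the message...
mac = hmac.new(key, message, hashlib.sha256).hexdigest()

# ...and sends (message, mac). The receiver, who also knows the key,
# recomputes the MAC and compares the two in constant time.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(mac, expected))  # True -> message is authentic
```

Note that the comparison uses `hmac.compare_digest` rather than `==`, which avoids leaking timing information to an attacker.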
They need knowledge of the key so that they can verify the message that has been sent. RSA is another algorithm that employs a padding scheme. With this strategy, we can take information that has nothing to do with the data we want to encrypt and add it to further increase the confusion, should the information fall into the wrong hands. By adding information that has no bearing on what you want to encrypt, you increase the complexity for anyone trying to decipher the messages once they've been encrypted. RC4 is a stream cipher that is vulnerable to what we call the initialization vector attack. RC4 was widely used with WEP; as a result of the vulnerability to the initialization vector attack, we don't use RC4 anymore. The key vulnerability in RC4's use here was that it was limited in its key generation, so the likelihood that keys would be reused was very high. This allowed RC4 to be easily compromised, and as a result we moved away from WEP, which depends largely on RC4, to WPA, which depends on TKIP. Even that was easily compromised, and we moved away again to WPA2, which relies on CCMP. The one-time pad encryption algorithm combines plaintext input with a random key, or pad, of equal length that is used only once. The one-time pad provides perfect secrecy whenever the generated key is truly random, is kept secret, and is never reused. These are the conditions under which we best use the one-time pad: the key must be truly random; the key should be used only once; there must be only one instance of the key at any point in time; and the key should never be reused. The Challenge Handshake Authentication Protocol, CHAP, is a protocol that allows users to securely connect to a system and is typically used with point-to-point connections. CHAP was created to replace PAP, the Password Authentication Protocol, due to PAP's vulnerability to eavesdropping, since it passes authentication credentials in clear text.
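The one-time pad conditions above can be demonstrated in a few lines of Python. This is a minimal sketch: the message is hypothetical, and `secrets.token_bytes` stands in for a truly random key source of the same length as the plaintext.

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """XOR each data byte with the matching key byte.

    XOR is its own inverse, so the same function both
    encrypts and decrypts."""
    assert len(key) == len(data), "pad must match message length"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # random, equal length, used once

ciphertext = otp(message, key)
recovered = otp(ciphertext, key)  # applying the pad again decrypts
print(recovered == message)  # True
```

If the key were reused for a second message, XORing the two ciphertexts together would cancel the key out entirely, which is exactly why reuse breaks the perfect-secrecy guarantee.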
This is the problem we have with the Password Authentication Protocol. It is a protocol with which our passwords are validated on the server. The problem with PAP is that it moves the passwords in clear text, and this is very vulnerable to eavesdropping: malicious persons looking in on our network communications can see what our passwords are. The next item we look at now is the comparative strength of algorithms. There are two things to review when we discuss the comparative strength of algorithms. The first is the key space. The size of the key space determines the number of keys that can be generated when you are producing the keys with which you do your cryptography. If you are using algorithms that have a large key space, you can generate many keys. This is very good, because you have many keys to use before you run out. However, if you use algorithms that have a smaller key space, the likelihood of repeating keys is very high because you only have a few keys. By the time you've sent a couple of messages, you start reusing your keys, and malicious persons are able to easily detect that keys are being reused. Once they recover two of your keys, it is possible that they will decipher every other message that has been encrypted with those keys. The next item we consider is the number of rounds of computation. I typically give an example here. Say you want to paint your car. The garage on the left-hand side says, "For $50 you get one coat of paint." The garage on the right-hand side says, "For $50 you get three coats of paint." We see that for the same amount of money they give a different number of coats.
Three coats of paint give better protection, and the same idea applies to our encryption algorithms: the more rounds of computation your algorithm takes the data through, the more complex, and the higher the work value, it is to compromise this information if malicious persons try to decipher it. The number of rounds of computation increases the complexity, or what we call the work value, of deciphering the information. Finally, we have key stretching. Key stretching techniques strengthen a weak key, usually a password, against brute-force attacks by increasing the time needed to test each potential key. Malicious persons have to spend so much time testing these keys that it greatly increases their frustration. The strategy employs techniques that create enhanced keys or run the block ciphers being used in a loop. By expanding the keys this way, cracking them becomes vastly more expensive. There are two common functions that use key stretching, both password-based key derivation functions: PBKDF2 and bcrypt. [/toggle_content]
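The key stretching described above can be sketched with PBKDF2 from Python's standard-library `hashlib`. The password and salt here are hypothetical, and the iteration count of 600,000 is an illustrative choice; the point is that each added iteration multiplies the attacker's cost per password guess, just as each extra round of computation raises the work value.

```python
import hashlib
import os

password = b"correct horse battery staple"  # the weak secret being stretched
salt = os.urandom(16)                       # random salt, stored with the hash

# PBKDF2 runs HMAC-SHA256 in a loop 600,000 times; an attacker must
# pay that full cost for every single password candidate they test.
derived = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

print(len(derived))  # 32 -> a 32-byte derived key
```

The same inputs always yield the same derived key, so a server stores only the salt and the result, and verifies a login by re-running the derivation and comparing.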

CompTIA Security+



