Applications, Security Controls and Techniques

Welcome to Chapter 4 on Application Security Controls and Techniques. In this lesson we explore and explain several key terms and how they help you determine the types of security controls you deploy and when. For example, we'll introduce you to fuzzing, error and exception handling, and input validation, and explain what you learn from the results of those testing tools. You'll observe a demonstration of cross-site scripting, the relationship between the programmer and the end user, and learn how the attack redirects otherwise reliable resources on the network or website.

[toggle_content title="Transcript"] This lesson deals with explaining the importance of application security controls and techniques. The first item we'll look at is fuzzing. Fuzzing is the practice of testing your servers to see how they respond to errors: you use a fuzzing tool to throw random information at your servers and observe how they respond. What you observe might be the error messages given out by the systems, or the behavior of the application or the server operating system. You can then ask your programmers to fix the flaws or suppress the messages that are given out, because malicious persons could use those error messages to deduce what needs to be done to exploit the system. By fuzzing, you throw scripts, code, and random data at the system to see how it responds. That way you can better protect yourself against attacks like SQL injection, buffer overflows, and XML attacks, because you already know how the system responds, and your programmers and system architects can address these error messages or server behaviors before malicious persons gain knowledge of how to exploit any flaws that exist on your systems.
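The fuzzing idea above can be sketched in a few lines of Python. This is a minimal illustration, not a real fuzzing tool: the `parse_age` function is a hypothetical stand-in for server-side code under test, and a real fuzzer would log each failing payload and traceback for the programmers to fix.

```python
import random
import string

def random_payload(max_len=64):
    """Build a random string mixing printable characters and raw control bytes."""
    length = random.randint(1, max_len)
    pool = string.printable + "".join(chr(i) for i in range(32))
    return "".join(random.choice(pool) for _ in range(length))

def parse_age(field):
    """Hypothetical server-side code under test: crashes on non-numeric input."""
    return int(field)

def fuzz(target, iterations=1000):
    """Throw random input at `target` and record one sample payload per failure type."""
    failures = {}
    for _ in range(iterations):
        payload = random_payload()
        try:
            target(payload)
        except Exception as exc:
            failures.setdefault(type(exc).__name__, payload)
    return failures
```

Running `fuzz(parse_age)` quickly surfaces the unhandled `ValueError`, which tells the programmers exactly where an error message (or crash) would leak information to an attacker.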
We should also follow secure coding concepts. These involve error and exception handling and input validation. When we follow secure coding concepts, we ensure that security is built into the code from the beginning: security considerations are given to the application while it is being designed by the programmers. We don't want to bolt security on at the end; that way it is very obvious and very easy to compromise or bypass. If security is built into the application from the start, it becomes transparent to anybody trying to use it.

We should employ error and exception handling. You want to suppress certain error messages; you don't need the system disclosing the full reason access was not granted. Say, for example, a malicious person is trying to gain access to your server and keys in a user ID and a password. If the ID happens to be correct, you don't want the system telling them the password is wrong. Rather, you want the system to say either the user ID or the password is wrong. That way the malicious person still does not know which part is wrong. But where the error message reveals that only one item is wrong, they now know that the guessed ID is correct, which lets them focus on the information that is still unknown. If we practice error and exception handling, we are able to suppress such errors. Some systems may even be programmed to show nothing at all on a failed attempt, just a blank page. What you don't want is a system that reveals details the malicious person can use to take further action to compromise the system.

We also need to do input validation. We need to ensure that we understand what can be keyed into every field, because malicious persons would like to inject code into these fields so that they can carry out malicious activities on the servers.
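The login example above can be sketched as follows. This is a simplified illustration (real systems compare salted password hashes, never plaintext); the point is that both failure paths return the same generic message.

```python
def check_login(user_id, password, accounts):
    """Return one generic message so an attacker cannot tell which
    half of the credential pair was wrong."""
    stored = accounts.get(user_id)
    # Bad practice would be two messages: "unknown user" vs. "wrong password".
    # Either one confirms that the other half of the guess was correct.
    if stored is None or stored != password:
        return "Invalid user ID or password."
    return "Welcome."
```

With this handling, a wrong password for a real account and a completely unknown account look identical from the outside.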
If our programmers ensure input validation, everything keyed into the fields is properly validated to see that it meets the requirements of the organization putting the server on the Internet. Once those requirements are enforced, no code, scripts, or commands can be keyed into these fields. And even if they are keyed in, the entries are validated before the system executes them on the server, so that unwanted scripts are thrown out. That way we nullify the code being pushed to the server.

Cross-site scripting prevention is another method with which we can mitigate the attempts of malicious persons to attack our servers. A cross-site scripting attack is carried out by the attacker injecting code into the pages viewed by unsuspecting victims. The attack can provide a platform for further attacks such as phishing or browser exploits; redirection and misdirection are major components of these attacks. They can be used on active web sessions, and they can be used to snoop on private postings. Prevention happens on two sides: that of the programmer and that of the end user. End users can implement security controls on their workstations to detect and prevent cross-site scripting attacks, such as running anti-malware and anti-spyware solutions and regularly updating their signatures so that intrusions can be detected. Programmers play an even larger role: they can validate input and address vulnerabilities by releasing security patches in a timely manner. Cross-site request forgery prevention can also be carried out.
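The two programmer-side defenses just described, validating input against an allow-list and neutralizing script injection on output, can be sketched like this. The field name and rules are hypothetical examples, not from the course.

```python
import html
import re

# Allow-list for a hypothetical username field: letters, digits, underscore only.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def validate_username(value):
    """Reject anything outside the allow-list instead of trying to strip bad parts."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

def render_comment(comment):
    """Escape user-supplied text before echoing it into a page, so injected
    <script> tags become inert text instead of executable code."""
    return "<p>{}</p>".format(html.escape(comment))
```

A payload like `<script>alert(1)</script>` comes back as `&lt;script&gt;alert(1)&lt;/script&gt;`, which the browser displays as plain text rather than executing.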
Cross-site request forgery attacks can be very difficult to prevent, but our end users can install add-ons for their web browsers, empty their temporary Internet files periodically, and keep their browsers patched up to date. Browsers, be it Internet Explorer, Mozilla Firefox, Google Chrome, or any other browser out there, are applications that get updates. Our end users should apply these updates in a timely fashion to ensure that no known flaws are present within these applications. Our administrators can also help prevent cross-site scripting and request forgery attacks by following best practices and deploying web application firewalls. These web application firewalls filter the traffic being pushed at the servers by inspecting the packets moving on the network.

Application configuration baselines: these are the proper settings for applications. The administrators on our networks should ensure that applications are properly configured so that malicious persons cannot use them in ways they are not meant to be used. By locking down applications to meet the specific roles of the users on the network, the portions of the applications that could be used maliciously are not made available. This way, we keep the applications to a specific baseline.

Application patch management: our software and the applications we use are not perfect. Our administrators should periodically check for the patches released by the manufacturers. These patches have to be validated; we need to authenticate that yes, these patches were released by the vendors. Then the patches should be tested in a test environment: are they robust? Do they do what the vendor says they do?
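One common way to authenticate that a downloaded patch really came from the vendor is to compare its cryptographic digest against the one the vendor publishes, a minimal sketch of the idea (real vendors may also use code signing, which this example does not show):

```python
import hashlib

def verify_patch(patch_bytes, expected_sha256):
    """Return True only if the download's SHA-256 digest matches the
    digest published by the vendor on their official site."""
    digest = hashlib.sha256(patch_bytes).hexdigest()
    return digest == expected_sha256
```

If the digest does not match, the file was corrupted or tampered with in transit and must not be installed.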
Once these patches have been properly tested, we migrate them to our systems and apply them appropriately so that our systems are kept up to date. By doing proper patch management, we have the assurance that our systems are working as we desire. Careful testing has to be done on the patches; you don't just download patches from the Internet and install them on your computers. Malicious persons might craft their malicious payloads to look like patches, so you want to validate the source of the patch, test the patch on your test servers, then migrate it to your production environment. The testing and validation should be done by the administrators. You don't leave testing to your end users; they don't know what to look for, and they might be careless with their testing procedures. So our systems administrators should take care of application patch management.

Server-side versus client-side validation: there are some entries we leave for the server side to validate, and some entries we prefer to validate on the client side because it is much faster. The client can easily detect that there is an error and quickly correct it, rather than waiting for the error to be detected by the server, which puts extra load on the network and the servers. So if some input is improperly entered, it can be validated at the client side; the client sees "oh, I made a mistake" or "that was a typo" and can quickly fix it. However, the server side should also be configured so that, should a mistake be missed or not detected by the client, the server can still detect the mistake and validate the input. [/toggle_content]
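The server-side versus client-side split described in the transcript can be sketched as two checks on the same hypothetical quantity field: a fast client-side check that catches typos before a network round trip, and a server-side check that never trusts the client and enforces the full rules.

```python
def client_validate(quantity_text):
    """Client-side analogue: a quick check that catches typos immediately,
    without a round trip to the server."""
    return quantity_text.strip().isdigit()

def server_validate(quantity_text):
    """Server-side check: repeats the client's check (the client can be
    bypassed) and enforces the business rules as well."""
    text = quantity_text.strip()
    if not text.isdigit():
        raise ValueError("quantity must be a whole number")
    quantity = int(text)
    if not 1 <= quantity <= 100:  # hypothetical business limit
        raise ValueError("quantity out of range")
    return quantity
```

The duplication is deliberate: client-side validation improves responsiveness, but only the server-side check is a security control, since an attacker can submit requests without ever running the client's code.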