A lot of effort goes toward securing networks and the resources they host, but when it comes to the seven layers of the ISO OSI stack, it's the application layer where much of the trouble begins and ends. Gartner places 90% of the blame for security vulnerabilities on the tippy-top layer, layer 7. This highly vulnerable top layer provides the low-hanging fruit that criminals and other bad actors love to target. The onus is now on developers, and on everyone responsible for building, testing, and releasing software, to step up their game when it comes to creating secure code.

To help developers write more secure code, processes and tools have been added to the software development life cycle (SDLC). These break out into manual and automated processes, and they essentially amount to scanning code for vulnerabilities. The push to make this process more efficient has led to the development of automated code scanning tools. Let's examine both the manual and automated varieties and see how they stack up against one another.

Manual code scanning takes several forms. Regardless of whether the scanning is manual or automated, the earlier any type of defect is discovered, whether a functional bug or a security vulnerability, the easier and less costly it is to fix. Defects of either kind discovered late in the development process, especially after release, are expensive to remedy. It would be ideal if developers caught security vulnerabilities while they were coding, but this doesn't typically happen: the pressure to code and release quickly often preempts such caution. This is why most organizations that create software institute design and code reviews.

Design reviews take place before a single line of code is written. The development team gathers and reviews a high-level design plan presented by the developer whose code design is under review.
This is an initial manual scan of the architecture of the code, covering the basic UI (if present), the class design (if using an object-oriented programming language), and the basic control and logic flow along with inputs and outputs. Any issues with security or coding best practices are flagged at this point and corrected by the developer before moving on to the coding phase.

Another manual review occurs after coding is complete. Again the development team convenes, but this time it examines the source code line by line, looking for security issues as well as areas of inefficiency or methods and classes that need refactoring. Manual reviews are vital to any software team, and they provide a great forum for collaboration and for mentoring junior developers, but it's been my experience that these reviews focus on efficiency and coding best practices at the expense of hunting for security vulnerabilities. At this point a more discerning eye is required, and that often takes the form of automated code scanning.

Automated code scanning breaks out into two forms: static and dynamic. Static code analysis functions much like anti-virus software in that it relies on a set of rules defining known vulnerabilities. These rules are often based on those set forth by the Open Web Application Security Project (OWASP). A popular static code analysis tool is Fortify from HP.

In addition to static scanning there is dynamic code analysis, which scans executing code. This is essentially pentesting: it can uncover vulnerabilities that static scanning is blind to by launching attacks and observing the results. Static analysis falls under "white box" testing, where the testing entity has knowledge of the code under test. Dynamic code analysis takes the "black box" approach: the only thing the testing entity knows about the code under test is the programming language in which the application is written.
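To make this concrete, here is a minimal Python sketch (my own illustration, not taken from any particular tool's rule set) of the kind of flaw an OWASP-based rule flags: SQL built by string concatenation, next to the parameterized form that passes the same rule.

```python
import sqlite3

# Throwaway in-memory database just for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # FLAGGED by static analysis: user input concatenated into SQL
    # (OWASP: SQL injection). A name like "' OR '1'='1" matches every row.
    query = "SELECT name, role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Passes the same rule: the driver binds the value as a parameter,
    # so the input can never change the structure of the query.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

# The injection payload dumps the table through the unsafe version...
print(find_user_unsafe("' OR '1'='1"))   # [('alice', 'admin')]
# ...but matches nothing when bound as a plain parameter.
print(find_user_safe("' OR '1'='1"))     # []
```

A static scanner never runs this code; it matches the concatenation pattern in the source, which is exactly why a rule catalog like OWASP's is enough to catch it before release.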
Dynamic code scanners require some sort of instrumentation of the code under test in order to gain access to its inputs and outputs, similar to the instrumentation used in automated unit testing of code under development. Tools such as Veracode offer both static and dynamic code analysis.

As handy as automated code scanning tools are, they aren't without their shortcomings. Common problems are false negatives (missed vulnerabilities) and false positives (flagging code that isn't actually a problem). False negatives are the more serious drawback and underscore the need for supplemental manual testing in the form of some degree of pentesting. Ironically, developers often view false positives as the bigger issue because they are annoying and waste time, rather like the obnoxious kid who always cried "wolf!" Tools such as Fortify provide rules and filters that can suppress false positives, but in my experience with code analysis, a lot of time is still wasted suppressing false-positive notifications.

To answer the question of which is better, you've probably already figured out that the answer points to using both. Manual code analysis is a must for any software development team worth its salt, for the reasons mentioned previously. Automated tools are a tougher call and sometimes have to be traded off against cost; smaller organizations can approximate their functionality with enhanced manual testing contributed by the QA team. Ideally, though, both should be employed to achieve maximum coverage as well as efficiency. Integrating automated scanning tools into the build process, as part of what's become known as "continuous integration," is a smart approach and the sign of a mature development organization.

If you're building code today, you cannot afford to be lazy or succumb to management pressure to just get it out the door. There's too much at stake.
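To see where those false positives come from, here is a toy rule-based checker in Python; this is emphatically not how Fortify or Veracode are implemented, just a sketch of the general idea: walk the program's syntax tree and flag any call whose name is on a deny-list, with no understanding of context.

```python
import ast

# Toy rule set: call names that known-vulnerability rules (in the
# spirit of OWASP guidance) treat as dangerous sinks.
RISKY_CALLS = {"eval", "exec", "system"}

def scan(source: str) -> list[str]:
    """Flag every call to a deny-listed name, with its line number."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            # Handles both bare calls (eval) and attribute calls (os.system).
            name = getattr(node.func, "id", getattr(node.func, "attr", ""))
            if name in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {name}()")
    return findings

# A true positive: eval() on user input is a classic injection sink.
print(scan("user = input()\neval(user)"))   # ['line 2: call to eval()']
# A false positive: this eval() only ever sees a hard-coded literal,
# but a purely syntactic rule cannot tell the difference.
print(scan("result = eval('2 + 2')"))       # ['line 1: call to eval()']
```

The second finding is exactly the "crying wolf" problem: the rule matches the pattern, not the risk, so a human (or a suppression filter) has to triage it.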
A great place to start exploring this vital subject is with the courses on OWASP, Pentesting, and Secure Coding Practices right here on Cybrary.it.