Application security is frequently found next to web application security in conversations, policies, news reports and annual cyber security trend articles. This word association can be attributed to our increasing reliance on mobile computing and use of Java-enabled content. Taking a page out of the web site content playbook, software applications are being built as needed, a technique known as “dynamic coding” or “disposable designing.” Either way you look at it, the idea is quick and compartmentalized. That is one area where security falls through the cracks. The United States-based research organization Ponemon Institute stated in a recent report that over “90% of all software code is reused.” Since efficiency is the hallmark of any large business, software component reuse isn’t anything new. Whether that reused code is ever cleaned, debugged or reviewed is another question, though. Web applications can be tested rather quickly using methodologies such as the Open Source Security Testing Methodology Manual (OSSTMM) or the OWASP Testing Guide. If the Hong Kong-based VTech company had just conducted a simple SQL injection
test on its own Learning Lodge database, it should have been able to identify how easily outsiders could gain access to customers’ data. Many of those customers were children, since VTech sells Internet-connected educational products. Software vulnerabilities can be a bit more difficult to locate. Some organizations offer bounties or rewards to those who notify the company about a new security flaw. Web vulnerability rewards are not as easy to cash in on, because code can be changed on the server far more quickly than it can be on each consumer’s computer. One metric rarely tracked outside the security community is how long a company takes to eliminate a publicly known security or privacy flaw. That metric usually sits next to another: the number of days the company takes to notify the public about a potential data breach. An example of missing metrics is Security magazine’s “Security 500” corporate rock star list dated November 2015. The end-of-year list includes Sony Pictures Entertainment as a proclaimed rising security star under the heading Information Technology/Communications/Media. This could be missing metrics, a huge memory lapse, or the fact that the massive data breach occurred on 22 through 24 November 2014 rather than 2015. What’s slightly bothersome is that the same list includes 500 names of Security magazine members, one for each rock star; of the 500 names, only 26 are female. Luckily, software applications have been around much longer than web applications. One can only imagine how many punch cards it would take to load up a typical web page like that of the nonprofit, www.nfl.com
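The kind of SQL injection probe mentioned above can be sketched in a few lines. This is a minimal illustration, not VTech’s actual schema: the `customers` table, its columns and the login functions are invented for the example; only the technique mirrors a real test.

```python
# Hypothetical sketch of a basic SQL injection test.
# Table and column names are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (email TEXT, password TEXT)")
conn.execute("INSERT INTO customers VALUES ('parent@example.com', 's3cret')")

def login_vulnerable(email, password):
    # Builds the query by string concatenation -- the classic mistake.
    query = ("SELECT * FROM customers WHERE email = '" + email +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchall()

def login_safe(email, password):
    # Parameterized query: user input can never change the SQL structure.
    query = "SELECT * FROM customers WHERE email = ? AND password = ?"
    return conn.execute(query, (email, password)).fetchall()

# A tester supplies a crafted "password" that is really SQL.
payload = "' OR '1'='1"
print(len(login_vulnerable("parent@example.com", payload)))  # 1 row leaks without the real password
print(len(login_safe("parent@example.com", payload)))        # 0 rows
```

If a crafted input like `' OR '1'='1` returns data the tester should not see, the application is injectable, and that result takes minutes to obtain with free tools.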
. An often-overlooked security strategy is the most basic countermeasure: separation of the asset from the threat. The reason this simple technique is so easily forgotten is that so few security professionals actually know what their assets are. So few CEOs know what their assets are. These assets go far beyond the tangible entries on the company tax records. Software is both an intangible and a material asset, because it falls under intellectual property as well as products. This leads us to the inevitable question: if software is such an important asset to an organization, why doesn’t it do a better job of protecting and refining that asset? The Y2K scare at the turn of the century was one of the only times effort was ever put into scrubbing software for one particular vulnerability. Since the end of the world did not come, maybe we just got lazy and decided all usable programs should be beta tested at the user level instead of the developer level. A business-oriented person would see this idea as efficient, because it reduces the development costs of the software. A security-minded individual, however, would see it as the disaster it really is. We see the effects of this production concept almost every day in the news headlines. You can also see it every second Tuesday of the month with “Patch Tuesday.” No other product in the world is deliberately shipped to a customer with known defects, except software. What would happen to the automobile industry if the vehicle braking system needed to be updated once a month, or simply didn’t work at all on gravel curves? How well would a garment company hold up if the buttons on its products fell off in the rain? Consumer protection exists for a reason, even though there doesn’t seem to be much enforcement when it comes to software products. We’re perfectly adjusted to the idea that the software we buy has problems or vulnerabilities in it.
We would not tolerate this for a moment with any other product or service. As security professionals, we do have some say in how much this impacts our business. If we are producing software products, we can ensure our code is carefully reexamined before being reused or sent out the front door. We can lean on quality assurance or the legal department to show the additional costs associated with lawsuits over bad code, data breaches caused by our products, or the loss of customers that follows a damaged reputation. If your organization doesn’t build software and is a consumer itself, you can demand independent testing for all products that sit near your highly sensitive assets. You can research the background of the software manufacturer to determine where, and by whom, the software was written. This includes subcontractors as well (just ask Target). Have you ever doxed a product manufacturer before? You might want to make this standard practice, based on the level of trust required to protect that asset. If questioned, tell them that you are conducting a test drive.