By: Declan O’Riordan, Head of Security Testing, T&VS
Prologue: It was the best of security, it was the worst of security, and it is based on true events…
Project A had a team that learned how to design, code, and test security into their application from start to finish. The secure application provided all the functionality customers wanted, and none of the vulnerabilities hackers aim to exploit.
Project B hoped for the best. The designers assumed users would only submit data that could be trusted, and anyone using the system was a trusted user. The developers decided there was no point in trying to build self-defending applications – “because hackers will always get in anyway”…
The testers didn’t know how to test for security vulnerabilities and assumed the firewall provided all the protection they needed at the network perimeter. The project manager didn’t like spending time or money on anything except functionality the customer could see. The auditors didn’t understand security issues in detail; nevertheless, ‘IT security’ was a compliance box that needed ticking at the end of development. To achieve audit compliance, the project manager told an external penetration tester to “find out if hackers can get in” and to complete a report by Friday, because users wanted the system to go live on Monday. The schedule was set without any security threat assessment.
The Project B penetration tester wasn’t told anything about the application, forcing ‘black box’ security testing to be adopted. This approach made it impossible to know which data elements used by the system needed encryption, so testing sensitive data exposure was skipped.
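One concrete form of sensitive data exposure the skipped testing might have caught is plaintext credential storage. The sketch below is illustrative only (the function names are hypothetical, not from either project) and shows the standard-library approach of storing a salted PBKDF2 hash instead of the password itself:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Store a random salt plus a slow, salted hash -- never the plaintext.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Recompute the hash and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

A black-box tester with no knowledge of the schema cannot tell whether fields like these are hashed, encrypted, or stored in the clear, which is exactly why that check was skipped.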
Because the time allocated to penetration testing was very small, the security testing had to rely heavily on automated tools. As usual, the static and dynamic analysis tools found some obvious Cross Site Scripting (XSS) problems automatically, but every application builds output pages differently and many vulnerabilities were missed. A manual code review and manual white-box penetration testing could have provided complete coverage, but those options were placed out-of-scope by the project manager’s time constraint.
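The XSS flaws the tools did find follow a common pattern: user input interpolated into an output page without context-aware encoding. A minimal sketch (hypothetical helper names, Python standard library only):

```python
import html

def render_greeting_unsafe(name: str) -> str:
    # Vulnerable: user input is concatenated into HTML unescaped, so a
    # payload such as <script>...</script> executes in the victim's browser.
    return "<p>Hello, " + name + "!</p>"

def render_greeting_safe(name: str) -> str:
    # Output encoding turns the payload into inert text.
    return "<p>Hello, " + html.escape(name) + "!</p>"
```

Automated scanners catch the first version when the tainted path is obvious, but because every application assembles its output pages differently, less direct paths from input to output are routinely missed without manual review.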
Without an understanding of Project B, the automated tools could not determine which resources needed protection, nor which mappings between objects and references were safe. This meant the tools could not effectively detect insecure direct object references when the application failed to verify that users were authorized to access the target objects. A code review could have determined whether direct and indirect references were implemented safely, but this option was already out-of-scope due to the last-moment testing schedule.
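An insecure direct object reference boils down to fetching a record by the identifier the client supplies, without checking ownership. A minimal sketch under assumed data (the invoice store and function names are invented for illustration):

```python
# Hypothetical in-memory "database" of invoices keyed by id.
INVOICES = {
    101: {"owner": "alice", "amount": 250},
    102: {"owner": "bob", "amount": 990},
}

def get_invoice_insecure(invoice_id: int) -> dict:
    # Vulnerable: any authenticated user can read any invoice simply by
    # changing the id in the request -- the reference is trusted blindly.
    return INVOICES[invoice_id]

def get_invoice_secure(invoice_id: int, current_user: str) -> dict:
    invoice = INVOICES.get(invoice_id)
    # Verify the requester is authorised for this specific object.
    if invoice is None or invoice["owner"] != current_user:
        raise PermissionError("not authorised for this invoice")
    return invoice
```

A scanner that does not know which user owns which invoice cannot distinguish these two behaviours; a reviewer reading the code can.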
Project B’s vulnerability scanning was unable to determine which hidden pages should be allowed for each user, and the static code analysis tool was unable to link the customised presentation layer with the business logic. Unlike Project A, code reviews and security testing of the access control mechanism were skipped, leaving missing function-level access control vulnerabilities in place.
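Function-level access control fails when a privileged page is merely hidden from the navigation rather than enforced on the server. One common fix is a server-side check applied to every privileged entry point; a minimal sketch (the decorator and handler names are hypothetical):

```python
from functools import wraps

def require_role(role: str):
    # Enforce authorisation on every call, not just by hiding the link
    # to the admin page in the presentation layer.
    def decorator(func):
        @wraps(func)
        def wrapper(user: dict, *args, **kwargs):
            if user.get("role") != role:
                raise PermissionError("forbidden")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_account(user: dict, account_id: int) -> str:
    # Privileged operation: reachable only by users with the admin role.
    return f"account {account_id} deleted"
```

A scanner that cannot map the presentation layer to the business logic has no way to know whether such a check exists, which is why this class of flaw needs code review or white-box testing.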
A few days after the projects went live, the differences became more apparent. Both systems were scanned and analysed by hackers, who gave up attacking Project A when they tired of finding defences in place for every attack vector. Hackers who analysed Project B quickly sold their findings on to the next level in the criminal supply chain: the attackers.
According to MITRE, there are potentially 700 types of coding error developers can make that lead to vulnerabilities, and according to the IEEE Center for Secure Design, 50% of software security issues are due to design flaws. Without built-in security, Project B was vulnerable by default, and most attacks took only seconds or minutes to gain access once they began. Since the applications were not self-defending, the attackers only had to circumvent the network defences with obfuscated attacks, then escalate privileges and install malicious code.
After six months of data exfiltration, the law enforcement agencies started to notice Project B’s organization was a common factor in thousands of fraudulent transactions reported by the public. Forced by external pressure, Project B had to start finding and resolving vulnerabilities in the live system, testing and applying fixes, then removing malicious code and reconciling data. It was hard and expensive work, especially while under scrutiny by the national press and compliance authorities. The manager of Project B was sacked, then the CIO was sacked by the shareholders, but it was too late. The business had suffered critical damage to its finances and brand reputation.
Like so many projects that failed to build security into their applications from the start, it was impossible to add security as an afterthought because the company had been destroyed. Fortunately, Project A’s organization was there to receive the defecting customers!
Epilogue: T&VS would like to thank Code Spaces, Mt. Gox, Flexcoin, Poloniex, and Bitcoinica for providing material useful to this story, and our commiserations on being destroyed by hackers in 2014. If only security had been built in…
T&VS has already described how to start building and testing secure applications – download the free white papers now.