Boy Baukema
March 21, 2013
"If a tree falls in a forest and no one is around to hear it, does it make a sound?"
Likewise if a software project is delivered and no one has looked at security, can it be said to be secure?
When a customer commissions Ibuildings for a new application, he usually has plenty of functional demands (I need it to do X and also Y and Z… oh and can I get A?). And maybe some thoughts have been given to performance metrics, but security? Well… it “needs to be secure”.
What is ‘secure’ anyway?
It is said, conveniently enough mostly by software engineers, that building software is perhaps one of the most complex activities that humans have ever undertaken. Looking at some of the questions one might ask an engineer on secure coding, you get the idea that these software engineers may have a point:
- Have you encoded all output? What is output anyway? Is what we send to our own database output? And Redis? (See the first sketch after this list.)
- Have you filtered all input? What is input anyway? Is what we get from our own database considered input? And what about a third party news feed or SOAP service?
- Is every unsafe action protected from Cross-Site Request Forgery? (See the second sketch after this list.)
- Is every AJAX call that requires authorisation protected from access by unauthorised users?
- Are all your expensive operations rate limited to prevent Denial of Service attacks?
- Are passwords stored ‘securely’? ‘Securely’ meaning salted (with per-user salts) Blowfish/bcrypt hashes with a significant number of rounds? (See the third sketch after this list.)
- How safe is the third party software you use? Who applies security patches?
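To make the output question concrete, here is a minimal sketch of HTML output encoding in PHP. It assumes an HTML body context; attributes, JavaScript and URLs each need their own encoding rules, and the helper name is just illustrative:

```php
<?php
// Encode user-controlled data before printing it into an HTML page,
// even when it comes from our own database: on the way out, it is output.
function e($value)
{
    return htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
}

$comment = $row['comment']; // e.g. fetched from the database earlier
echo '<p>' . e($comment) . '</p>';
```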
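For the CSRF question, a minimal sketch of a per-session token, assuming PHP sessions are already configured securely (the function and field names are illustrative):

```php
<?php
session_start();

// Generate one token per session; openssl_random_pseudo_bytes() is
// available from PHP 5.3.
function csrfToken()
{
    if (empty($_SESSION['csrf_token'])) {
        $_SESSION['csrf_token'] = bin2hex(openssl_random_pseudo_bytes(32));
    }
    return $_SESSION['csrf_token'];
}

// Echo csrfToken() into a hidden 'csrf_token' field in every form,
// then verify it on every unsafe (state-changing) request.
function verifyCsrfToken($token)
{
    // A constant-time comparison (hash_equals() on PHP >= 5.6) is preferable.
    return isset($_SESSION['csrf_token'])
        && $_SESSION['csrf_token'] === (string) $token;
}

if ($_SERVER['REQUEST_METHOD'] === 'POST'
    && !verifyCsrfToken(isset($_POST['csrf_token']) ? $_POST['csrf_token'] : '')) {
    header('HTTP/1.1 403 Forbidden');
    exit('Invalid CSRF token');
}
```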
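And for password storage, a minimal sketch using bcrypt (the Blowfish-based hash), assuming PHP 5.5’s password_hash() or the password_compat library on older versions; the cost factor of 12 is an assumption you should tune to your hardware:

```php
<?php
$plainPassword = $_POST['password'];

// On registration: store only the hash, never the plain password.
// password_hash() generates a per-user salt and embeds it in the result.
$hash = password_hash($plainPassword, PASSWORD_BCRYPT, array('cost' => 12));

// On login: password_verify() re-hashes with the salt embedded in the
// stored hash ($storedHash, fetched from the database) and compares
// in constant time.
if (password_verify($plainPassword, $storedHash)) {
    // credentials OK
}
```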
I would say ‘etcetera’ here, but it doesn’t quite cover how many questions one can ask of engineers when they build an application. To help our engineers we currently do the following:
- Hold regular Web Application Security workshops where we remind old staff and inform new staff of application threats.
- Hold frequent code reviews, and we will be working with the GitHub Pull Request system to make sure no unverified code makes it to master.
- Have a Secure Software Specialist dedicate time to helping the company release secure software.
But even with that, I have a saying:
Repeat after me: “If it hasn’t been tested, it doesn’t work.” And: “If it hasn’t been documented it doesn’t exist” — relaxnow (@relaxnow) February 2, 2010
Security isn’t a checkbox, it’s a dropdown
Remember all those questions we asked the engineer on secure coding? Turns out engineers don’t have a fixed set of questions they ask themselves. Team Leads don’t have a specific set of questions they ask their engineers. Customers hire security auditors (like us) and they all have their own set of questions. In order to help organisations come to an agreement on what should be the ‘minimal’ required set of questions to ask, several people got together and created the Application Security Verification Standard (ASVS) project and donated it to the Open Web Application Security Project (OWASP). From the introduction:
The standard provides a basis for testing application technical security controls, as well as any technical security controls in the environment, that are relied on to protect against vulnerabilities such as Cross-Site Scripting (XSS) and SQL injection. This standard can be used to establish a level of confidence in the security of Web applications.
ASVS gives the customer, and us, a negotiable ‘Security Minimum’ that can be verified. It defines 4 levels of rigour:
- Automated verification, a ‘quick’ automated (dynamic and static) check of only the custom code.
- Manual verification, a ‘normal’ check: verify all custom code and security relevant third party code manually.
- Design verification, a ‘business-critical’ check: verify all custom and all third party code.
- Internal verification, a ‘life-or-death-critical’ check: verify every piece of code that touches the application (including build tools).
And 14 chapters:
2 Authentication, it’s okay to cheat with the OWASP Cheat Sheet.
3 Session Management, session timeouts are a good thing.
4 Access Control, if I upload a private picture, can only I access it? (See the sketch after this list.)
5 Input Validation, rule number one.
6 Output Encoding, rule number two.
7 Cryptography, “Applications should always use FIPS 140-2 validated cryptographic modules, or cryptographic modules validated against an equivalent standard”
8 Error handling and logging, should someone get through your defenses, can you track what they did?
9 Data protection, how do you protect sensitive information?
10 Communication Security, have you properly implemented SSL?
11 HTTP Security, “a set of requirements that can be used to verify security related to HTTP requests, responses, sessions, cookies, headers, and logging”
12 Security Configuration, have you stored your configuration in a safe place?
13 Malicious Code Search, was anyone able to inject malicious code?
14 Internal Security, it should be hard to mess up.
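As a concrete illustration of the Access Control chapter, here is a minimal sketch of an ownership check before serving a private picture; the $pdo connection, table and column names are illustrative:

```php
<?php
session_start();

// A direct object reference like /picture.php?id=123 must not be enough
// to fetch somebody else's private picture: check ownership server-side.
$pictureId = (int) $_GET['id'];
$userId    = (int) $_SESSION['user_id']; // set at login

$stmt = $pdo->prepare(
    'SELECT path FROM pictures WHERE id = :id AND owner_id = :owner'
);
$stmt->execute(array('id' => $pictureId, 'owner' => $userId));
$picture = $stmt->fetch();

if ($picture === false) {
    // Do not reveal whether the picture exists at all.
    header('HTTP/1.1 404 Not Found');
    exit;
}

header('Content-Type: image/jpeg');
readfile($picture['path']);
```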
Every chapter defines its own set of rules and the levels pick and choose from these (level 1 will only pick the first, simple requirements from each chapter, while level 4 will require them all). At Ibuildings we pick a level with the customer at the start of a project. So far, we haven’t seen too many medical or life-critical projects, so they tend to be level 1 or 2. And we make sure to verify that the project complies before the final go-live. Now let me make this clear: as engineers we don’t look at a requirement like:
V5.7 Verify that all input validation failures are logged.
and say “Well, that’s a 2B requirement that is. This project is Level 1 only, so no logging for this app!”. Where feasible, engineers will always hold themselves to the highest standards. But we certainly won’t rewrite a third party application to support this rule, unless the customer explicitly wants us to.
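For what a rule like V5.7 can look like in code, here is a minimal sketch, assuming a PSR-3 style $logger is available; the field names are illustrative:

```php
<?php
// Log every input validation failure: an attacker probing the
// application should leave a trace in the logs.
$email = isset($_POST['email']) ? $_POST['email'] : '';

if (filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
    $logger->warning('Input validation failure', array(
        'field'  => 'email',
        'value'  => $email, // consider truncating or masking logged values
        'ip'     => $_SERVER['REMOTE_ADDR'],
        'script' => $_SERVER['SCRIPT_NAME'],
    ));

    // Reject the request; never silently "repair" invalid input.
    header('HTTP/1.1 400 Bad Request');
    exit('Invalid e-mail address');
}
```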
Security still is not a solved problem
While ASVS is a wonderful addition, it has its issues:
- Verification and reporting, done to its fullest extent, can take a significant amount of time.
- Verification rules are not specific enough in use of tools and techniques.
- It is slightly outdated (2009).
The first and second issues are rather big ones for us. We’re a commercial company making software for other (mostly) commercial companies, and budget is always an issue. Fortunately this is where automation and a bit of templating can come in quite handy! It’s still early, but you can find a sneak peek of what we’re doing with ASVS at our GitHub repo: ibuildingsnl/ibuildings-owasp-asvs-report-generator (warning: still very alpha). More on that later! As for the third issue, ASVS 2.0 is in progress and looking for volunteers!
Tl;dr
Security is hard. We’re constantly improving; we already:
- Do code reviews
- Have a senior engineer spend R&D time on software security
And now we’ve added OWASP’s ASVS, which is pretty cool, check it out! Let us know how you’ve embedded Software Security in your Software Development Lifecycle in the comments below or on the PHP Security Technical Group.