https://securityprofession.blog.gov.uk/2014/06/11/what-heartbleed-can-teach-us-about-software-security-2/

What ‘Heartbleed’ can teach us about software security

Categories: Hot security topics

Even people who’d never heard of Open Source before now probably know all about it, thanks to the Heartbleed bug.

But in case you don’t, Open Source software is usually developed collaboratively, often in public, and its source code is available for anyone to inspect, change and distribute. Mozilla Firefox, Linux and the Android OS are examples of widely used software developed in an Open Source manner and distributed under an Open Source licence.

One of the most debated questions at the moment is whether the Heartbleed bug might be the end of Open Source. Heartbleed is a vulnerability in the OpenSSL library (which, rather ironically, is intended to provide security to internet protocols), allowing pretty much anyone who knows what they are doing to read the memory of systems protected by it and steal data such as user names, passwords and all sorts of other useful things.
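To make that concrete: the bug was a missing bounds check in OpenSSL’s ‘heartbeat’ echo feature, which trusted a length field supplied by the other side of the connection. Below is a deliberately simplified C sketch of that class of bug and its fix; the struct and function names are invented for illustration, and this is not the real OpenSSL code. In the real attack, the over-read ran past the record into adjacent memory, up to 64KB at a time.

    /* Simplified, hypothetical sketch of the Heartbleed class of bug:
       trusting a peer-supplied length field. Not the real OpenSSL code. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* A heartbeat-style record: a claimed payload length plus the data. */
    struct record {
        size_t claimed_len;      /* length the sender claims */
        size_t actual_len;       /* bytes actually received */
        unsigned char data[64];
    };

    /* Vulnerable: echoes claimed_len bytes even though only actual_len
       arrived, so the reply includes memory the peer never sent. */
    unsigned char *echo_vulnerable(const struct record *r) {
        unsigned char *reply = malloc(r->claimed_len);
        if (reply == NULL) return NULL;
        memcpy(reply, r->data, r->claimed_len);
        return reply;
    }

    /* Fixed: the bounds check, comparing the claim against what was
       actually received, and silently dropping bogus requests. */
    unsigned char *echo_fixed(const struct record *r) {
        if (r->claimed_len > r->actual_len) return NULL;
        unsigned char *reply = malloc(r->claimed_len);
        if (reply == NULL) return NULL;
        memcpy(reply, r->data, r->claimed_len);
        return reply;
    }

    int main(void) {
        struct record r = { .claimed_len = 64, .actual_len = 4 };
        memcpy(r.data, "ping", 4);

        unsigned char *leaky = echo_vulnerable(&r);
        unsigned char *safe  = echo_fixed(&r);

        printf("vulnerable echo: %s\n",
               leaky ? "64 bytes returned for a 4-byte payload" : "failed");
        printf("fixed echo: %s\n", safe ? "sent" : "request rejected");

        free(leaky);
        free(safe);
        return 0;
    }

The entire fix amounts to one comparison: never trust a length the other side claims without checking it against what actually arrived.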

Analysis, punditry and assignments of blame will continue to surround the Heartbleed bug for some time, but one of the most amazing things to come to light is how few people looked at the piece of code in question before incorporating it into their own software. It seems everyone is keen to use Open Source material, but not very many people are prepared to put in the effort required to review it (visit The Register for more information).

The big question is: does Heartbleed change the game?

Let’s bust some myths about the relative security of open and ‘closed’ (i.e. commercial) software.

Myth no. 1: ‘Closed’ software is more secure because large organisations invest a lot of money in testing it.

Serious security flaws are still frequently discovered in commercially developed software (visit The Register for a recent example).

Myth no. 2: Open software is more secure because ‘given enough eyeballs, all bugs are shallow’, as Eric Raymond’s ‘Linus’s Law’ has it (more information).

Just because Open Source code is open for everyone to see doesn’t necessarily mean that many people look at it closely (as we’ve seen with Heartbleed).

Myth no. 3: The best way of protecting your in-house developed software is to keep the code a closely guarded secret.

Hiding your code so that others cannot see its flaws (‘security through obscurity’) only works until someone finds them by other means, and if you’re unlucky, that might happen very quickly indeed (more information).

So what does this mean for software security?

Mostly, it means that security is not about choosing between ‘open’ or ‘closed’.

Every organisation needs a software approval process that doesn’t just cover business and service requirements, infrastructure compatibility, performance etc., but also includes a thorough security review. The approval process also needs to apply to any software that is going to be deployed on the estate, whether it’s closed, open or developed in-house.

If you do your own development, the only reliable way of protecting your code is not hiding it but building in security from the start:

- Implement secure development lifecycle practices in your development processes, whether you are going agile, waterfall, or something in between.
- Be clear up front about what you are prepared to share as Open Source and what you want to keep to yourself, so that your programmers structure their code accordingly and your approval mechanisms are geared up for it.
- Put in place good version control and release management, so that you know what’s what, and where it is.
- Invest in the wider developer community by keeping your code fresh and releasing patches quickly, so you will get the ‘many eyeballs’ return.
- Act quickly if someone finds a problem with your code.
- Do not rely on a single layer of protection; implement layered security, so that if something goes wrong you have a fallback (see the sketch below).
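As a small illustration of that last point, here is a hypothetical C sketch of layered validation; all names are invented, not taken from any particular codebase. The inner function re-checks its input rather than trusting that the outer layer already did, so a single missed check doesn’t become a breach on its own.

    /* Hypothetical sketch of layered ('defence in depth') validation. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_FIELD 32

    /* Outer layer: the boundary check a gateway or front end might apply. */
    static bool gateway_accepts(const char *input, size_t len) {
        return input != NULL && len > 0 && len <= MAX_FIELD;
    }

    /* Inner layer: re-checks before use instead of trusting the gateway,
       so one missed check cannot overflow the buffer on its own. */
    static bool store_field(char *dest, size_t dest_size,
                            const char *input, size_t len) {
        if (input == NULL || len >= dest_size)  /* independent second check */
            return false;
        memcpy(dest, input, len);
        dest[len] = '\0';
        return true;
    }

    int main(void) {
        char buffer[MAX_FIELD + 1];
        const char *name = "alice";

        if (gateway_accepts(name, strlen(name)) &&
            store_field(buffer, sizeof buffer, name, strlen(name)))
            printf("stored: %s\n", buffer);
        return 0;
    }

Either check alone would catch this particular bad input; the point of keeping both is that the system survives when one of them is missing, wrong or bypassed.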

And finally, have an incident response plan for when something goes wrong, because it will.

So what does this mean for security in government?

It means that if we want secure software, we have to take responsibility for our own security by putting into place, maintaining and continuously improving professional, robust development and approval processes.
