Security software…or insecure software?

By Henry Harrison

“Protect against even the most advanced threats. Protect against even the most advanced attacks. Protect against even the most advanced APTs”.

Sounds good, doesn’t it? Who wouldn’t want a security product that could do all that?

And yet, on 21st March, a British pen testing company (www.pentestpartners.com) published a blog entry (https://www.pentestpartners.com/security-blog/remote-command-injection-through-an-endpoint-security-product/) explaining how the very same product described in those terms had, for about a year, presented an almost trivial vulnerability through which an attacker could remotely inject commands onto any machine running the product.

In other words, at least for the course of that year, machines with the product deployed were probably overall less secure than they would have been without the product. How’s that for security ROI?

This is just the latest incident in what’s starting to become a bit of a trend. Back in June 2018, some fairly hairy vulnerabilities were discovered in Sophos SafeGuard (see for example https://www.theregister.co.uk/2018/06/26/sophos_safeguard_flaws/). And of course, there’s a long list of CVEs for other security vendors (e.g. https://www.cvedetails.com/vulnerability-list/vendor_id-76/cvssscoremin-7/cvssscoremax-7.99/Symantec.html).

The Heimdal example does seem to be particularly embarrassing. But I actually have a lot of sympathy for all these vendors. Their business is developing software, and the simple fact is that developing vulnerability-free software is astonishingly difficult – if not downright impossible.

Here at Garrison, we develop security products. We have a very serious internal culture about security. We develop lots of software. And I can absolutely guarantee you that our software contains vulnerabilities. So, what is to be done?

Our approach has been twofold. Firstly, to put as much of our product’s security as possible into its hardware design, so that the security is maintained even if the software gets thoroughly compromised. In addition to physical hardware layouts, we use “soft” hardware approaches based on FPGAs (I’ve published recently about this at https://www.sciencedirect.com/science/article/pii/S1361372319300272 and I’ll be writing a blog post about it soon).

For some of our deployments, all the critical security controls are in hardware. But we also have customers who want to use our product in the “cloud” (i.e. remotely, as a service), and that means relying on software for cryptography. Our approach here is, I think, best described as a mixture of honesty and paranoia.

Firstly, honesty: security that’s implemented in software will never be as strong as security that’s implemented in hardware. There will be a vulnerability in there somewhere. But secondly, paranoia: given that that’s the case, what can we do to minimise the risk?

Some of the answers are standard: use well-tested libraries, use code analysis tools, get independent pen testing done, respond rapidly to reported vulnerabilities… But the other part of our paranoia is architectural: design the hardware and software together so that the number of security-sensitive lines of code is as small as possible, allowing all our attention to be focused there. In our case, that means using crypto to establish a tunnel to a service that’s secured by hardware. That way, only the crypto and networking software is security-sensitive: as long as we can trust the crypto tunnel, we can trust the rest of the service.
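To make that architectural idea concrete, here’s a minimal sketch in Python of what a deliberately small security-sensitive surface might look like. This is purely illustrative – it is not Garrison’s code, and the function name and the specific TLS settings are my assumptions – but it shows the principle: one short, auditable function owns all the crypto decisions, and everything else in the client just moves bytes over the resulting tunnel.

```python
import ssl

def make_tunnel_context(ca_path=None):
    """Hypothetical example: the ONLY security-sensitive code in the client.

    Everything crypto-related is decided here, in a handful of lines that
    can be reviewed exhaustively. ca_path optionally pins a private CA.
    """
    ctx = ssl.create_default_context(cafile=ca_path)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse legacy protocol versions
    ctx.check_hostname = True                     # verify the server's identity
    ctx.verify_mode = ssl.CERT_REQUIRED           # never talk to an unauthenticated peer
    return ctx

# The rest of the client would simply wrap its socket with this context
# (ctx.wrap_socket(sock, server_hostname=...)) and read/write application
# data; it never touches keys, certificates, or protocol state.
```

The design point is that a reviewer – or a pen tester – can concentrate on these few lines rather than the whole codebase: if the tunnel configuration is sound, the bulk of the client code sits outside the trust boundary.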

Some really security-conscious organisations will never trust software for any security: national security organisations typically rely on hardware crypto rather than software crypto. But for the moment – until high-quality hardware crypto is available in mainstream devices – that can be extremely constraining, and so the risk is usually worth taking.

For those who do rely on software for security, remember how hard it is to do right – unless there’s a very good reason to believe otherwise, you can probably assume that it hasn’t been done right. Any security software you deploy is going to bring new vulnerabilities with it. You had better be sure that the benefits outweigh that downside: at the very least, try to avoid installing software that’s designed to improve your security but actually ends up making it worse.