2016/12/31

Soft Guarantees

In "The Need for Strategic Security" Martyn Thomas considers some of the risks today in the the way systems are designed and built and some potential solutions to address the concerns raised. One of the solutions proposed is for software to come with a guarantee; or at least some warranty, around it's security.

Firstly, I am (thankfully) not a lawyer, but I can imagine the mind-bogglingly twisted legalese that will be wrapped around such guarantees. So much so as to make them next to useless (bar giving some lawyer the satisfaction of adding another pointless 20 paragraphs of drivel to the already bloated terms and conditions...). However, putting this aside, I would welcome the introduction of such guarantees if it is at all possible.

For many years now we've convinced ourselves that it is not possible to write a program which is bug-free. Even the simple program:
echo "Hello World"

has dependencies on libraries and the operating system (along with the millions of lines of code therein), all the way down to the BIOS, which means we cannot be 100% sure even this simple program will always work. We can never be sure it will run correctly for every nanosecond of every hour of every day of every year... forever! It is untestable, and absolute certainty is not possible.

At a more practical level, however, we can bound our guarantees and accept some risks: "compatible with RHEL 7.2", "...will work until year-end 2020...", "...needs software dependencies x, y, z..." and so on. Humm, it's beginning to sound much like a software license and system requirements checklist... Yuck! On the whole, we're pretty poor at providing any assurances over the quality, reliability and security of our products.

Martyn's point, though, is that more rigorous methods and tools will enable us to be more certain (perhaps not absolutely certain) about the security of the software we develop and rely on, allowing us to be more explicit about the guarantees we can offer.

Today we have tools such as SonarQube, which helps to improve the quality of code, or IBM Security AppScan for automated penetration testing. Ensuring such tools are used can help, but they need to be used "right" if they are used at all. All too often a quick scan is done and only the few top (and typically obvious) issues are addressed. The variation in report output I have seen for scans on the same thing, using the same tools but performed by different testers, is quite ridiculous. A poor workman blames his tools.

Such tools also tend to be run once on release and rarely thereafter. The ecosystem in which our software is used evolves rapidly, so continual review is needed to detect issues as and when new vulnerabilities are discovered.

In addition to tools we also need industry standards and certifications to qualify our products and practitioners against. In the security space we do have some standards, such as CAPS, and certification programmes, such as CCP. Unfortunately, few products go through the certification process unless they are specifically intended for government use, and certified professionals are few and in demand. Ultimately it comes down to time and money.

However, as our software is used in environments never originally intended for it, and as devices become increasingly connected and more critical to our way of life (or our convenience), it will be increasingly important that all software comes with some form of compliance assurance over its security, for which more accessible standards will be needed. Imagine if, in 10 years' time when we all have "smart" fridges, some rogue state-sponsored hack manages to cycle them through a cold-warm-cold cycle on Christmas Eve... Would we notice on Christmas Day? Would anyone be around to address such a vulnerability? Roast potatoes and E. coli turkey, anyone? Not such a merry Christmas... (though the alcohol may help kill some of the bugs).

In addition, the software development community today is largely made up of enthusiastic and (sometimes) well-meaning amateurs. Many have no formal qualification or are self-taught. Many are cut'n'pasters who frankly don't get-IT and just go through the motions. Don't get me wrong, there are lots of really good developers out there; it's just that there are a lot more cowboys. As a consequence, our reliance on security-through-obscurity is deeper than perhaps many would openly admit.

It's getting better though, and the quality of recent graduates I work with today has improved significantly - especially as we seem to have stopped trying to turn arts graduates into software engineers.

Improved and proven tools and standards help but at the heart of the concern is the need for a more rigorous scientific method.

As architects and engineers we need more evidence, transparency and traceability before we can provide the assurances and stamp of quality that a guarantee deserves. Evidence of the stresses components can handle and the constraints that contain them. Transparency in design, and in test coverage and outcomes. Traceability from requirement through design and development into delivery. And boundaries within which we can guarantee the operation of software components.
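
To make that last point concrete, here is a small sketch of my own (not something Martyn proposes, and with purely illustrative limits and names) of what an explicitly stated, checkable boundary might look like in code:

# Sketch: stating a component's guaranteed operating boundary explicitly.
# MAX_BATCH_SIZE and SUPPORTED_SCHEMA_VERSIONS are illustrative limits;
# the point is that they are declared and checked, and so can be tested.
MAX_BATCH_SIZE = 10_000
SUPPORTED_SCHEMA_VERSIONS = {1, 2}

def process_batch(records, schema_version):
    # Preconditions: the boundary within which behaviour is guaranteed.
    if schema_version not in SUPPORTED_SCHEMA_VERSIONS:
        raise ValueError("schema version outside the supported set")
    if len(records) > MAX_BATCH_SIZE:
        raise ValueError("batch exceeds the guaranteed operating limit")
    processed = sum(1 for r in records if r.get("valid", True))
    # Postcondition: some evidence the component did what it claims.
    assert 0 <= processed <= len(records)
    return processed

Declaring the envelope up front gives testers something to verify against and gives any guarantee a defensible edge: inside these limits we stand behind the behaviour, outside them we explicitly do not.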

We may not be able to write bug-free code, but we can do well enough, and provide reasonable enough boundaries, to make guarantees workable - to do so, though, we need a more scientific approach. In the meantime we need to keep those damned lawyers in check and stop them running amok with the drivel they revel in.

2016/12/24

Not all encryption is equal

Shit happens, data is stolen (or leaked) and your account details, passwords and bank account are available online to any criminal who wants them (or at least is prepared to buy them).

But don't panic, the data was encrypted so you're ok. Sit back, relax in front of the fire and have another mince pie (or six).

We see this time and again in the press. Claims that the data was encrypted... they did everything they could... blah blah blah. Humm, I think we need more detail.

It's common practice across many large organisations today to encrypt data using full-disk encryption with tools such as BitLocker or Becrypt. This is good practice and should be encouraged, but it is only the first line of defence, as it only really helps when the disk is spun down and the machine powered off. If the machine is running (or even sleeping) then all you need is the user's password and you're in. And who today really wants to shut down a laptop when they head home... and perhaps stop for a pint on the way?

In the data centre the practice is less common because the risk of disks being taken out of servers and smuggled out of the building is lower. On top of this, the disks are almost always spinning, so any user or administrator who has access to the server can get access to the data.

So, moving up a level, we can use database encryption tools such as Transparent Data Encryption to encrypt the database files on the server. Great! Now normal OS users can't access the data and need to go through the data access channel to get it. Problem is, lots of people have access to databases, including DBAs who probably shouldn't be able to see the raw data itself but who generally can. On top of this, service accounts are often used for application components to connect, and if those credentials are available to some wayward employee... your data could be pissing out of an open window.

To protect against these attack vectors we need to use application-level encryption. This isn't transparent: developers need to build data encryption and decryption routines as close to the exposed interfaces as practical. Now having access to the OS, the files or the database isn't enough to expose the data. An attacker also needs to get hold of the encryption keys, which should be held on separate infrastructure such as an HSM. All of which costs time and money.
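
As a rough sketch of what that can look like (my own illustration, assuming the widely used Python cryptography package; fetch_key_from_key_store() is a hypothetical stand-in for retrieving the key from an HSM or key management service):

# Sketch: application-level encryption of a single sensitive field with AES-GCM.
# Assumes the third-party 'cryptography' package; the key lives on separate
# infrastructure (HSM / key service) and is never stored with the data.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def fetch_key_from_key_store():
    # Hypothetical placeholder - in reality this would call the HSM / KMS.
    return os.urandom(32)  # 256-bit key

def encrypt_field(plaintext, key):
    nonce = os.urandom(12)                           # unique per encryption
    ciphertext = AESGCM(key).encrypt(nonce, plaintext.encode("utf-8"), None)
    return nonce + ciphertext                        # nonce stored alongside

def decrypt_field(blob, key):
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode("utf-8")

key = fetch_key_from_key_store()
stored = encrypt_field("12-34-56 01234567", key)     # e.g. sort code + account
print(decrypt_field(stored, key))

With this in place the database, its backups and its administrators only ever see ciphertext; decryption happens in the application, close to the interfaces that legitimately need the plaintext.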

Nothing's perfect, and there's still the risk that a wayward developer siphons off data as it passes through the system, or that some users have too broad access rights and can reach data, keys and code. These can be mitigated through secure development practices, change management and good access management... to a degree.

Furthermore, encrypting everything impacts functionality: searching encrypted data becomes practically impossible, or may not be as secure as you'd expect, since a little statistical analysis on some fields can expose the underlying data without actually needing to decrypt it, thanks to a lack of sufficient variance in the raw data. Some risks need to be accepted.
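
To illustrate that last point with a sketch of my own: if a low-variance field is encrypted deterministically (so it can still be searched for equality), the same value always produces the same ciphertext, and simple frequency counting maps ciphertexts back to likely plaintexts without ever touching a key. The keyed hash below is just a stand-in for any deterministic scheme:

# Sketch: frequency analysis against a deterministically "encrypted" column.
# The weakness exploited is the determinism and the low variance of the data,
# not the strength of the underlying primitive.
import hashlib, hmac
from collections import Counter

column_key = b"illustrative-key"

def deterministic_token(value):
    return hmac.new(column_key, value.encode(), hashlib.sha256).hexdigest()[:16]

# A column holding a field with very little variance (e.g. title).
column = [deterministic_token(v) for v in ["Mr"] * 70 + ["Mrs"] * 20 + ["Dr"] * 10]

# Knowing only rough population statistics, an attacker can pair the most
# common token with the most common plaintext value - no key required.
for token, count in Counter(column).most_common():
    print(token, count)    # the 70/20/10 split gives the game away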

We can then start to talk about the sort of ciphers used, how they are used and whether these and the keys are sufficiently strong and protected.
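
Even the mode of operation matters. As a quick sketch (again my own, assuming the Python cryptography package): AES in ECB mode encrypts identical plaintext blocks to identical ciphertext blocks, leaking structure that a mode such as GCM with a fresh nonce would not:

# Sketch: the same cipher (AES) used badly (ECB mode) leaks patterns.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
plaintext = b"ATTACK AT DAWN!!" * 2        # two identical 16-byte blocks

encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ecb = encryptor.update(plaintext) + encryptor.finalize()
print(ecb[:16] == ecb[16:32])              # True - the repetition is visible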

So when we hear in the press that leaked data was encrypted, we need to know more about how it was encrypted before deciding whether we need to change our passwords before tucking into the next mince pie.

Merry Christmas!

Voyaging dwarves riding phantom eagles

It's been said before... the only two difficult things in computing are naming things and cache invalidation... or naming things and som...