Equifax Data Breach Due to Failure to Install Patches

“the Equifax data compromise was due to their failure to install the security updates provided in a timely manner.”

Source: MEDIA ALERT: The Apache Software Foundation Confirms Equifax Data Breach Due to Failure to Install Patches Provided for Apache® Struts™ Exploit: The Apache Software Foundation Blog

As simple as that apparently. Keep up to date with patching.

It’s an older code..

(let’s assume we’re talking about encryption keys here rather than pass codes, though it really makes little difference… and note that your passwords are a slightly different concern)

Is it incompetence to use an old code? No.

For synchronous requests (e.g. those over HTTPS) there’s a handshake you go through every few minutes to agree a new key. Client and server then continue to use this key until it expires, at which point they agree a new one. If the underlying certificate changes you simply go through the handshake again.

For asynchronous requests things aren’t as easy. You could encrypt and queue a request one minute and change the key the next, but the message may sit on the queue for another hour before it gets processed. In these cases you can either reject the message (usually unacceptable) or try the older key and accept it for a while longer.
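As a minimal sketch of that second option – assuming Python and the cryptography package, whose MultiFernet tries a list of keys in order – a consumer can prefer the current key but fall back to the previous one for a grace period:

```python
# Sketch: accept messages encrypted with the current *or* the previous key.
# Assumes the 'cryptography' package; MultiFernet tries each key in turn.
from cryptography.fernet import Fernet, MultiFernet, InvalidToken

current_key = Fernet(Fernet.generate_key())   # key agreed "this minute"
previous_key = Fernet(Fernet.generate_key())  # key in force when the message was queued

# The producer encrypted the message an hour ago with what was then the current key.
queued_message = previous_key.encrypt(b"transfer 100 credits")

# Consumer: encrypt with the newest key, but decrypt new-then-old.
codec = MultiFernet([current_key, previous_key])
try:
    payload = codec.decrypt(queued_message)   # falls back to the older key
    print(payload)
except InvalidToken:
    # Only reject once the message predates every key we're still willing to accept.
    print("reject: encrypted with a key outside the grace window")
```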

Equally, with persistent storage you could change the key every month, but you can’t go round decrypting and re-encrypting all historic content and accepting the outage this causes every time. Well, not if you’ve billions of records and an availability SLA worth speaking of. So again, you’ve got to let the old codes work..
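One way round the mass re-encryption problem – a sketch only, with a made-up in-memory key store standing in for whatever HSM/KMS you’d really use – is to tag each record with the key version it was written under, so old records keep decrypting and only migrate to the new key when they’re next written:

```python
# Sketch: versioned data keys so historic records never need a big-bang re-encrypt.
from cryptography.fernet import Fernet

keys = {1: Fernet(Fernet.generate_key())}     # v1 was current last month
record = {"key_version": 1,
          "ciphertext": keys[1].encrypt(b"billing record from last year")}

# Monthly rotation just adds a new version; nothing already stored is touched.
keys[2] = Fernet(Fernet.generate_key())
CURRENT_VERSION = 2

def read(rec):
    """Decrypt with whichever key version the record was written under."""
    return keys[rec["key_version"]].decrypt(rec["ciphertext"])

def write(plaintext):
    """New and re-saved records always pick up the current key."""
    return {"key_version": CURRENT_VERSION,
            "ciphertext": keys[CURRENT_VERSION].encrypt(plaintext)}

print(read(record))            # the old code still works...
record = write(read(record))   # ...and the record migrates lazily on its next write
```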

You could use full disk/database encryption but that’s got other issues – like it’s next to useless once the disks are spinning… And besides, when you change the disk password you’re not actually changing the key and re-encrypting the data, you’re just changing the password used to obtain the key.
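That last point is just key wrapping: the bulk data is encrypted once under a data key, and the passphrase merely protects (wraps) that key. A rough sketch – the KDF and parameters here are illustrative, not what any particular disk encryption product actually uses:

```python
# Sketch: changing the disk passphrase re-wraps the data key; the data isn't re-encrypted.
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def kek_from_passphrase(passphrase: bytes, salt: bytes) -> Fernet:
    """Derive a key-encrypting key (KEK) from the user's passphrase."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return Fernet(base64.urlsafe_b64encode(kdf.derive(passphrase)))

salt = os.urandom(16)
dek = Fernet.generate_key()                                 # data-encrypting key, never changes here
disk_contents = Fernet(dek).encrypt(b"terabytes of data")   # encrypted once, with the DEK

wrapped_dek = kek_from_passphrase(b"old password", salt).encrypt(dek)

# "Changing the password" = unwrap with the old KEK, re-wrap with the new one.
dek_again = kek_from_passphrase(b"old password", salt).decrypt(wrapped_dek)
wrapped_dek = kek_from_passphrase(b"new password", salt).encrypt(dek_again)

# The bulk data is untouched and still decrypts with the same DEK.
assert Fernet(dek_again).decrypt(disk_contents) == b"terabytes of data"
```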

So it is ok to accept old codes. For a while at least.

An empire spread throughout the galaxy isn’t going to be able to distribute new codes to every stormtrooper instantaneously. Even if they do have the dark side on their side…

Bearer v MAC

I’ve been struggling recently to get my head around OAuth2 access tokens – bearer and MAC tokens specifically…

Bearer tokens are essentially just static tokens, valid for some predefined period before they need to be refreshed. They can be passed around willy-nilly and will be accepted by a resource server so long as they can be validated. If a 3rd party manages to hijack one then they can use it to do whatever the token is authorised to do, just by submitting it in the correct manner. Consequently these tokens need to be looked after carefully: shuffled over encrypted channels and protected by the client. They’re arguably even less secure than session cookies since there’s no “HttpOnly” option on an access token, so preventing malicious access to tokens from dodgy code on clients is something the developer needs to manage. And given the number of clients around and the quality of code out there, we can pretty much assume a good chunk will be piss poor at this.
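To labour the point – a sketch only, with a made-up endpoint and token value – anyone holding the string can present it and the resource server is none the wiser:

```python
# Sketch: a bearer token is just a string in a header; whoever holds it can replay it.
# The URL and token value below are made up for illustration.
import requests

stolen_token = "2YotnFZFEjr1zCsicMWpAA"   # hijacked from a careless client

resp = requests.get(
    "https://api.example.com/v1/accounts",               # hypothetical resource server
    headers={"Authorization": f"Bearer {stolen_token}"},
)
# If the token is still within its validity window, the server has no way to
# tell this request from one made by the legitimate client.
print(resp.status_code)
```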

So Bearer tokens. Not so great.

MAC tokens aren’t just static strings. A client secret and a nonce are combined with some request data to sign each request. So if you hijack a request in flight you can’t replay the token – it’s valid only for the original request. This is good, but it really only protects against snooping over the wire, which SSL/TLS does a pretty good job of managing without all the additional complexity. Beyond this a MAC token seems to make very little difference. The client needs to know the secret in the same way it would need a Bearer token. If someone manages to snatch this we’re done for regardless, and the false sense of security MAC tokens give isn’t worth a damn.
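The signing idea looks roughly like this – a simplified sketch of the MAC-over-request approach rather than the exact OAuth2 MAC draft wire format:

```python
# Sketch: sign the request, not just present a static string.
# Simplified - the real OAuth2 MAC draft defines a specific normalised string format.
import hashlib, hmac, secrets, time

client_secret = b"shared-mac-key"          # issued alongside the token id

def sign_request(method: str, uri: str, host: str) -> dict:
    nonce = secrets.token_hex(8)
    timestamp = str(int(time.time()))
    # The MAC binds secret + nonce + timestamp to *this* method/URI/host.
    base_string = "\n".join([timestamp, nonce, method, uri, host])
    mac = hmac.new(client_secret, base_string.encode(), hashlib.sha256).hexdigest()
    return {"ts": timestamp, "nonce": nonce, "mac": mac}

params = sign_request("GET", "/v1/accounts", "api.example.com")

# A hijacker replaying these params against a *different* request fails the check:
forged = "\n".join([params["ts"], params["nonce"], "DELETE", "/v1/accounts", "api.example.com"])
expected = hmac.new(client_secret, forged.encode(), hashlib.sha256).hexdigest()
print(hmac.compare_digest(expected, params["mac"]))   # False - the MAC doesn't transfer
```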

The client application is often the weak point in OAuth since it’s frequently an untrusted device – mobile phones, web browsers (single page applications) etc. If the “client” is a downstream server (and this poor terminology, by the way, has caused way too much confusion and argument) then we’ve a reasonable chance to secure the server and data, but ultimately we’re still going to have a client ID and secret stuffed in memory just like we would with a Bearer token. OK, so we’re adding some hoops to jump through, but really it’s no more secure.

So if we don’t have transport level encryption (SSL/TLS) then MAC tokens offer some reasonable value over Bearer tokens. But if we do have transport encryption then MACs just add complexity and a false sense of security. Which I’d argue is a bad thing since increased complexity is likely to lead to increased defects… which in security is a very bad thing!

Besides, neither option allows me to identify the user or ensure the client calling the service is authorised to do so… Just that they appear to be in possession of a token saying the user has granted them authority (which may have been hijacked as per above).

p.s. One of my many failed new year’s resolutions was to post a new article every week of the year. This being the first, four weeks in, isn’t a good start…

eDNA – The next step in the obliteration of privacy online?

How your electronic DNA could be the secure login of the future (but let’s hope not).

More big-brother stuff over at The Guardian with eDNA (Electronically Defined Natural Attributes). I suspect that the NoMoreCaptchas product isn’t terribly strong, as it feels like it could easily be subverted by introducing more natural delays into bots, but it should help slow them down, which can often make it not worth the bother.
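To illustrate why I suspect it’s weak – purely a sketch of the idea, with made-up timing parameters, and no comment on how NoMoreCaptchas actually works – a bot only has to stop typing at machine speed and sample its inter-keystroke delays from something vaguely human-shaped:

```python
# Sketch: a bot that samples human-ish inter-keystroke delays instead of typing instantly.
# Purely illustrative - the distribution parameters are made up.
import random, time

def type_like_a_human(text: str, send_key) -> None:
    for ch in text:
        send_key(ch)
        # Humans average very roughly 150-250 ms between keystrokes, with jitter.
        delay = max(0.04, random.gauss(0.2, 0.07))
        time.sleep(delay)

# Usage with a stand-in for whatever actually drives the browser/form:
type_like_a_human("not a robot, honest", send_key=lambda ch: print(ch, end="", flush=True))
```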

More worrying, though, is that it’s all too easy for a website to capture this information without you knowing. Many sites already capture key presses, mouse movements, hover-overs etc., some for legitimate functional use, but many just to help the marketing guys spy on you. What if the next time you google something they can tell if you’re drunk, stoned, have just had sex or are just plain tired… Do you really think they wouldn’t want to use that information to push more targeted ads at you?

Drunk => Show ads for porn sites.

Stoned => Show ads for local pizza companies.

Had sex => A combination of baby clothing, pharmacies and the most direct escape route… And of course Facebook would just auto-post “Jamie just shagged Sally” in some pseudo-scientific experiment on social behaviour.

As I said before, if you want your privacy back then lie! To subvert eDNA we’ll need something to inject noise between fingers and keyboard. Joy.

UK’s security branch says Ubuntu most secure end-user OS (maybe)

Kind of late I know, but I’ve recently completed a Windows 7 desktop rollout project for a UK gov department and found it interesting that CESG supposedly (see below) think that Ubuntu 12.04 is the most secure end-user OS. There was much discussion on that project around security features and CESG compliance, so this is a topic close to home.

They didn’t look at a wide range of client devices, so other Linux distributions may prove just as secure, as could OS X – which seems a notable omission to me considering they included Chromebooks in the list. It was also pointed out that the disk encryption and VPN solutions haven’t been independently verified, and they’re certainly not CAPS approved; but then again, neither is Microsoft’s BitLocker solution.

The original page under gov.uk seems to have disappeared (likely as a result of all the recent change going on there) but there’s a lot on that site covering end-user device security, including articles on Ubuntu 12.04 and Windows 7.

However, reading these two articles you don’t get the view that Ubuntu is more secure than Windows – in fact, quite the opposite. There’s a raft of significant risks associated with Ubuntu (well, seven) whilst only one significant risk is associated with Windows (VPN). Some of the Ubuntu issues look a little odd to me – such as “users can ignore cert warnings”, which is more a browser issue than an OS one, unless I’ve misunderstood, as the context isn’t very clear – but the basic features are there, just not certified to any significant degree. This is an easy argument for the proprietary solution providers to make, and a deal clincher for anyone in government not looking to take risks (most of them). I doubt open-source solutions are really any less secure than these, but they do need to get things verified if they’re to stand up to these challenges. Governments around the world can have a huge impact on the market and on the use of open standards and solutions, so helping them make the right decisions seems a no-brainer to me. JFDI guys…

Otherwise, the article does have a good list of the sort of requirements to look out for in end-user devices with respect to security which I reproduce here for my own future use:

  • Virtual Private Network (VPN)
  • Disk Encryption
  • Authentication
  • Secure Boot
  • Platform Integrity and Application Sandboxing
  • Application Whitelisting
  • Malicious Code Detection and Prevention
  • Security Policy Enforcement
  • External Interface Protection
  • Device Update Policy
  • Event Collection for Enterprise Analysis
  • Incident Response

IE AppContainers and LocalStorage

IE’s EPM (Enhanced Protected Mode) provides separate containers for web storage between desktop and Metro mode when using the Internet Zone. There’s a page which discusses the detail but never really states why it behaves like this. It seems to me that this is unnecessarily complex and will lead to user confusion and angst – “why does switching to desktop mode lose my session/cookies/storage?” or, more simply, “why do I have to log in again?”. It’s also arguably a security risk: users will have multiple sessions/cookies active, so they could inadvertently leave themselves logged in, or end up with duplicate transactions because items were placed in the basket in separate containers, etc. It would be less of a concern if users couldn’t easily switch, but of course they can because MS has kindly put a menu item on the Metro page to “View in the Desktop”!? It all seems to be related to giving enterprise users the ability to maintain and configure a setup that provides greater access/functionality to intranet sites than you would want for untrusted Internet sites (enabling various plugins and the like).

To a degree, fair enough, but it’s mostly a result of intranet sites adopting features that weren’t standardised or hardened sufficiently in the first place (ActiveX, Java etc.). These need to be got rid of, though this will cost companies dearly – replacing existing functionality with something else that offers no significant added value to the business, bar adherence to standards and security compliance, is a hard sell.

So MS is, from one viewpoint, forced into this approach. The problem is it just adds more weight to my view that MS is so dependent on the enterprise customer, and on supporting the legacy of cruft they (MS & corporate intranets) have spawned over so many years, that MS are no longer able to provide a clean, consistent and usable system (some would say they never were…).

Violation of rule #1 – Keep it Simple!


Is lying the solution to a lack of privacy online?

I do wish social networking sites like G+ and FB would stop advertising people’s birthdays. Your birth date is one of those “known facts” used by many organisations (banks, government departments etc.) to verify your identity. Providing this data to social networking sites can result in information leakage and contribute to identity theft and security incidents. Combine this with all the other bits of information they capture and it would be quite easy for someone to bypass those security questions every call centre asks as a facade of security – they only need to glean a little info from many sources.

This morning G+ asked me if I wanted to say happy birthday to Peter. I know Peter slightly, but not well enough to be privy to such information, and I have no idea whether it really is his (or your) birthday today. If it is… Happy Birthday! If it’s not, then congratulations on lying to Google and Facebook – it’s good practice (so long as you can remember the lies you tell).

In a world where privacy is becoming impossible, lying may be our saviour. What a topsy-turvy world we’re living in…


Entropy

Entropy is a measure of disorder in a system. A number of years ago I was flicking through an old book on software engineering from the 1970s. Beyond being a right riveting read, it expounded the view that software does not suffer from decay: that once written, a program would follow the same rules over and over and produce the same results time and again, ad infinitum. In effect, software was free from decay.

I would like to challenge this view.

We design a system, spend many weeks and months considering every edge case, crafting the code so we’ve handled every possible issue nature can throw at us, including those “this exception can never happen but just in case it does…” scenarios. We test it till we can test no more without actually going live, and then release our latest, most wondrous creation on the unsuspecting public. It works, and for a fleeting moment all is well with the universe… From this moment on, decay eats away at our precious creation like rats gnawing on the discarded carcass of the Sunday roast.

Change is pervasive, and whilst it seems reasonable enough that were we able to precisely reproduce the starting conditions the program would run time and again as it did the first time, this isn’t strictly correct for reasons of quantum mechanics and our inability to time travel (at least so far as we know today). However, I’ll ignore the effects of quantum mechanics and time travel for now and focus on the more practical reasons for change and how this causes decay and increasing entropy in computer systems.

Firstly there’s the general use of the system. Most systems have some sort of data store, if only for logging, and data is collected in increasing quantities and in a greater variety of combinations over time. This can lead to permutations which were never anticipated, exposing functional defects, or to volumes beyond the planned capacity of the system. The code may remain the same, but when we look at a system and consider it as an atomic unit in its entirety, it is continuously changing. Its subsequent behaviour becomes increasingly unpredictable.

Secondly there’s the environment the system exists within – most of which is totally beyond any control. Patches for a whole stack of components are continually released from the hardware up. The first response from most first-line support organisations is “patch to latest level” (which is much easier said than done) but if you do manage to keep up with the game then these patches will affect how the system runs.

Conversely, if you don’t patch then you leave yourself vulnerable to the defects that the patches were designed to resolve. The knowledge that the defect itself exists changes the environment in which the system runs because now the probability that someone will try to leverage the defect is significantly increased – which again increases the uncertainty over how the system will operate. You cannot win and the cost of doing nothing may be more than the cost of addressing the issue.

Then there’s change that we inflict ourselves.

If you’re lucky and the system has been a success then new functional requirements will arise – this is a good thing, perhaps a topic for a later post, but a system which does not functionally evolve is a dead end and essentially a failure – call it a “panda” if you wish. The business will invent new and better ways to get the best out of the system, new use cases which can be fulfilled become apparent, and a flurry of activity follows. All of which changes the original system.

There’s also non-functional change. The look and feel needs a refresh every 18 months or so, security defects need to be fixed (really, they do!), performance and capacity improvements may be needed, and the whole physical infrastructure needs to be refreshed periodically. The simple act of converting a physical server to a virtual one (aka a P2V conversion), which strives to keep the existing system as close to current as possible, detritus and all, will typically provide more compute, RAM and disk than was ever considered possible. Normally this makes the old application run so much faster than before, but occasionally that speed increase can have devastating effects on the function of time-sensitive applications. Legislative requirements, keeping compliant with the latest browsers etc. all bring more change…

Don’t get me wrong, change is normally a good thing and the last thing we want is a world devoid of change. The problem is that all this change increases the disorder (entropy) of the system. Take the simple case of a clean OS install. Day 1, the system is clean and well ordered. Examining the disk and logs shows a tidy registry and clean log and temporary directories. Day 2 brings a few patches, which add registry entries, some logs, a few downloads etc., but it’s still good enough. By Day n, though, you’ve a few hundred patches installed, several thousand log files and a raft of old downloads and temporary files lying around.

The constant state of flux means that IT systems are essentially subject to the same tendency for disorder to increase as stated in the second law of thermodynamics. Change unfortunately brings disorder and complexity. Disorder and complexity makes things harder to maintain and manage, increasing fragility and instability. Increased management effort results in increased costs.

Well, that’s enough for today… next week, what we can do about it.