Equifax Data Breach Due to Failure to Install Patches

“the Equifax data compromise was due to their failure to install the security updates provided in a timely manner.”

Source: MEDIA ALERT: The Apache Software Foundation Confirms Equifax Data Breach Due to Failure to Install Patches Provided for Apache® Struts™ Exploit : The Apache Software Foundation Blog

As simple as that apparently. Keep up to date with patching.

It’s an older code..

(let’s assume we’re talking about encryption keys here rather than pass codes, though it really makes little difference… and note that your passwords are a slightly different concern)

Is it incompetence to use an old code? No.

For synchronous requests (e.g. those over HTTPS) there’s a handshake process you go through periodically to agree a new key. Client and server then continue to use this key until it expires, at which point they agree a new one. If the underlying certificate changes you simply go through the handshake again.
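As a concrete (if minimal) sketch, Python’s ssl module will show you what a given handshake agreed – the hostname below is just an example:

import socket, ssl

# Open a TLS connection; the handshake negotiates the protocol version,
# cipher suite and the session keys behind them.
context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # The symmetric keys agreed here live only for this session and are
        # negotiated afresh on the next connection (or on renegotiation).
        print(tls.version(), tls.cipher())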

For asynchronous requests things aren’t as easy. You could encrypt and queue a request one minute and change the key the next, but the message may sit on the queue for another hour before it gets processed. In these cases you can either reject the message (usually unacceptable) or try the older key and accept it for a while longer.

Equally, with persistent storage you could change the key every month, but you can’t go round decrypting and re-encrypting all historic content and accepting the outage this causes every time. Well, not if you’ve got billions of records and an availability SLA worth more than a few percent. So again, you’ve got to let the old codes work…
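For illustration, here’s roughly what “let the old codes work” looks like with key versioning – a minimal sketch using the Python cryptography library’s MultiFernet, where decryption falls back through older keys (the keys and payload are made up):

from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet(Fernet.generate_key())      # the key in force last month
current_key = Fernet(Fernet.generate_key())  # the key in force now

# New data is encrypted with the current key only...
crypto = MultiFernet([current_key, old_key])
token = crypto.encrypt(b"queued payment instruction")

# ...but decryption tries each key in turn, so messages or records encrypted
# under the old key keep working until they expire or are re-encrypted in
# the background.
plaintext = crypto.decrypt(token)

# When the old key is finally retired, rotate() re-encrypts a token under
# the current key.
migrated = crypto.rotate(token)

Same idea for the queue and the database: accept the old code for a bounded period, then retire it.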

You could use full disk/database encryption but that’s got other issues – like it’s next to useless once the disks are spinning… And besides, when you change the disk password you’re not actually changing the key and re-encrypting the data, you’re just changing the password used to obtain the key.

So it is ok to accept old codes. For a while at least.

An empire spread throughout the galaxy isn’t going to be able to distribute new codes to every stormtrooper instantaneously. Even if they do have the dark side on their side…

Soft Guarantees

In “The Need for Strategic Security”, Martyn Thomas considers some of the risks in the way systems are designed and built today, and some potential solutions to address the concerns raised. One of the solutions proposed is for software to come with a guarantee, or at least some warranty, around its security.

Firstly, I am (thankfully) not a lawyer, but I can imagine the mind-bogglingly twisted legalese that would be wrapped around such guarantees. So much so as to make them next to useless (bar giving some lawyer the satisfaction of adding another 20 pointless paragraphs of drivel to the already bloated terms and conditions…). However, putting this aside, I would welcome the introduction of such guarantees if they are at all possible.

For many years now we’ve convinced ourselves that it is not possible to write a program which is bug-free. Even the simple program:

echo "Hello World"

has dependencies on libraries and the operating system (along with the millions of lines of code therein), all the way down to the BIOS, which means we cannot be 100% sure even this simple program will always work. We can never be sure it will run correctly for every nanosecond of every hour of every day of every year… forever! It is untestable and absolute certainty is not possible.

At a more practical level, however, we can bound our guarantees and accept some risks: “compatible with RHEL 7.2”, “will work until year-end 2020”, “needs software dependencies x, y, z” and so on. Humm, it’s beginning to sound much like a software license and system requirements checklist… Yuck! On the whole, we’re pretty poor at providing any assurances over the quality, reliability and security of our products.

Martyn’s point, though, is that more rigorous methods and tools will enable us to be more certain (perhaps not absolutely certain) about the security of the software we develop and rely on, allowing us to be more explicit about the guarantees we can offer.

Today we have tools such as SonarQube, which helps to improve the quality of code, or IBM Security AppScan for automated penetration testing. Ensuring such tools are used can help, but they need to be used right if they’re used at all. All too often a quick scan is done and only the few top (and typically obvious) issues are addressed. The variation in report output I have seen for scans of the same thing, using the same tools but performed by different testers, is quite ridiculous. A poor workman blames his tools.

Such tools also tend to be run once on release and rarely thereafter. The ecosystem in which our software is used evolves rapidly so continual review is needed to detect issues as and when new vulnerabilities are discovered.

In addition to tools we also need industry standards and certifications to qualify our products and practitioners against. In the security space we do have some standards, such as CAPS, and certification programmes such as CCP. Unfortunately few products go through the certification process unless they are specifically intended for government use, and certified professionals are few and in demand. Ultimately it comes down to time and money.

However, as our software is used in environments never originally intended for it, and as devices become increasingly connected and more critical to our way of life (or our convenience), it will be increasingly important that all software comes with some form of compliance assurance over its security. For that, more accessible standards will be needed. Imagine if, in 10 years’ time when we all have “smart” fridges, some rogue state-sponsored hack manages to cycle them through a cold-warm-cold cycle on Christmas Eve… Would we notice on Christmas Day? Would anyone be around to address such a vulnerability? Roast potatoes and E. coli turkey anyone? Not such a merry Christmas… (though the alcohol may help kill some of the bugs).

In addition, the software development community today is largely made up of enthusiastic and (sometimes) well-meaning amateurs. Many have no formal qualification or are self-taught. Many are cut’n’pasters who frankly don’t get-IT and just go through the motions. Don’t get me wrong, there are lots of really good developers out there. It’s just there are a lot more cowboys. As a consequence our reliance on security-through-obscurity is deeper than perhaps many would openly admit.

It’s getting better though and the quality of recent graduates I work with today has improved significantly – especially as we seem to have stopped trying to turn arts graduates into software engineers.

Improved and proven tools and standards help but at the heart of the concern is the need for a more rigorous scientific method.

As architects and engineers we need more evidence, transparency and traceability before we can provide the assurances and stamp of quality that a guarantee deserves. Evidence of the stresses components can handle and constraints that contain this. Transparency in design and in test coverage and outcome. Traceability from requirement through design and development into delivery. Boundaries within which we can guarantee the operation of software components.

We may not be able to write bug-free code but we can do it well enough and provide reasonable enough boundaries as to make guarantees workable – but to do so we need a more scientific approach. In the meantime we need to keep those damned lawyers in check and stop them running amok with the drivel they revel in.

Not all encryption is equal

Shit happens, data is stolen (or leaked) and your account details, passwords and bank account are available online to any criminal who wants them (or at least is prepared to buy them).

But don’t panic, the data was encrypted so you’re ok. Sit back, relax in front of the fire and have another mince pie (or six).

We see this time and again in the press. Claims that the data was encrypted… they did everything they could… blah blah blah. Humm, I think we need more detail.

It’s common practice across many large organisations today to encrypt data using full-disk encryption with tools such as BitLocker or Becrypt. This is good practice and should be encouraged, but it’s only the first line of defence as it only really helps when the disk is spun down and the machine powered off. If the machine is running (or even sleeping) then all you need is the user’s password and you’re in. And who today really wants to shut down a laptop before heading home… perhaps stopping for a pint on the way?

In the data centre the practice is less common because the risk of disks being taken out of servers and smuggled out of the building is lower. On top of this the disks are almost always spinning, so any user or administrator who has access to the server can get access to the data.

So, moving up a level, we can use database encryption tools such as Transparent Data Encryption to encrypt the database files on the server. Great! Now normal OS users can’t access the data and need to go through the data access channel to get it. Problem is, lots of people have access to databases, including DBAs who probably shouldn’t be able to see the raw data itself but generally can. On top of this, service accounts are often used for application components to connect, and if those credentials are available to some wayward employee… your data could be pissing out of an open window.

To protect against these attack vectors we need to use application-level encryption. This isn’t transparent: developers need to build data encryption and decryption routines in as close to the exposed interfaces as practical. Now having access to the OS, files or database isn’t enough to expose the data. An attacker also needs to get hold of the encryption keys, which should be held on separate infrastructure such as an HSM. All of which costs time and money.
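Something like the sketch below – with Fernet standing in for whatever cipher and key management (HSM, KMS or otherwise) you’d actually use, and a hypothetical data-access layer for illustration:

from cryptography.fernet import Fernet

# In practice the key comes from an HSM or key management service,
# not generated inline like this.
field_key = Fernet(Fernet.generate_key())

def save_customer(record: dict, db) -> None:
    # Encrypt sensitive attributes just before persistence.
    stored = dict(record)
    stored["card_number"] = field_key.encrypt(record["card_number"].encode())
    db.insert(stored)  # hypothetical data-access call

def load_customer(customer_id, db) -> dict:
    # Decrypt on the way back out, as close to the interface as practical.
    stored = db.fetch(customer_id)  # hypothetical data-access call
    stored["card_number"] = field_key.decrypt(stored["card_number"]).decode()
    return stored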

Nothing’s perfect and there’s still the risk that a wayward developer siphons off data as it passes through the system, or that some users have too broad access rights and can get at data, keys and code. These can be mitigated through secure development practices, change management and good access management… to a degree.

Furthermore, encrypting everything impacts functionality – searching encrypted data becomes practically impossible – or may not be as secure as you’d expect: a little statistical analysis on some fields can expose the underlying data without any need to decrypt it, simply because there isn’t enough variance in the raw values. Some risks need to be accepted.
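To see why, assume a low-variance field encrypted deterministically – equal plaintexts giving equal ciphertexts, as you’d need for joins or exact-match lookups. An HMAC stands in for the scheme here and the data is made up:

import hmac, hashlib
from collections import Counter

key = b"illustrative-key"

def det_encrypt(value: str) -> str:
    # Deterministic: the same input always yields the same output.
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()

titles = ["Mr", "Mr", "Mrs", "Mr", "Ms", "Mr", "Mrs", "Mr"]
ciphertexts = [det_encrypt(t) for t in titles]

# Without decrypting anything, the frequency distribution gives the game
# away: the most common ciphertext is almost certainly "Mr".
print(Counter(ciphertexts).most_common())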

We can then start to talk about the sort of ciphers used, how they are used and whether these and the keys are sufficiently strong and protected.

So when we hear in the press that leaked data was encrypted, we need to know more about how it was encrypted before deciding whether we need to change our passwords before tucking into the next mince pie.

Merry Christmas!

MzE1NjE5NTI0ODYyfjIxMTAxNjI4NzMxNn5VMkZzZEdWa1gxOHorQVFYL2NQRFkwUVgyRm82anNJRGJHOW0vV1ZjdVEvVUR0cjZRZ0t3Y21WYWdCalo0dEZW

The title of this post is encrypted.

This page is also encrypted, via TLS (aka the new name for SSL).

Anyone sniffing traffic on the wire must first decrypt the TLS traffic and then decrypt the content to work out what the message says.

But why bother with two layers of encryption?

Ok, so forgive the fact that this page is publicly accessible and TLS is decrypted before your eyes. It’s possibly a poor example and in any case I’d like to talk about the server side of this traffic.

In many organisations, TLS is considered sufficient to secure data in transit. The problem is that TLS typically terminates on a load-balancer or a web server, and traffic is forwarded from there to downstream servers. Once this initial decryption takes place, data often flows over the organisation’s internal network in plain text. Many organisations consider this fine practice since the internal network is locked down with firewalls, intrusion detection devices and so on. Some even think it’s good practice because it lets them monitor internal traffic more easily.

However, there is obvious concern over insider attacks, with system admins or disgruntled employees in a good position to skim off the data easily (and clean up any trace after themselves). Additionally, requests are often logged (think access logs and other server logs) and these can record some of the data submitted. Such data-exhaust is often available in volume to internal employees.

It’s possible to re-wrap traffic between each node to avoid network sniffing, but this doesn’t help with data-exhaust, and the constant unwrap-re-wrap becomes increasingly expensive – if not in CPU and IO then in the effort to manage all the necessary certificates. Still, if you’re concerned then do this, or terminate TLS on the application server.

But we can add another layer of encryption to programmatically protect sensitive data we’re sending over the wire, in addition to TLS. Application components will need to decrypt this for use, and when that happens the data will be in plain text in memory, but right now that’s about as good as we can get.
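As a rough sketch (the endpoint, field names and key handling are all made up for illustration), the sensitive attribute gets wrapped before it ever reaches the TLS socket:

import json
import urllib.request
from cryptography.fernet import Fernet

# Key shared out-of-band with the consuming component; illustrative only.
payload_key = Fernet(Fernet.generate_key())

body = {
    "customer_id": "12345",
    # Only the receiving application component can read this field, even
    # after TLS has been terminated on a load-balancer or web server.
    "national_insurance_no": payload_key.encrypt(b"QQ123456C").decode(),
}

request = urllib.request.Request(
    "https://api.example.internal/customers",  # illustrative endpoint
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib would perform the TLS handshake; the sensitive field is wrapped
# again inside it. Left commented out as the endpoint doesn't exist.
# urllib.request.urlopen(request)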

The same applies to data at rest – in fact this is arguably far worse. You can’t rely on full database encryption or file-system encryption. Once the machine is up and running, anyone with access to the database or server can easily have full access to the raw data in all its glory. These sorts of practices only really protect against devices being lifted out of your data centre – in which case you’ve got bigger problems…

The safest thing here is to encrypt the attributes you’re concerned about before you store them and decrypt them on retrieval. This causes all sorts of problems in terms of searching, but then should you really be searching passwords or credit card details? PII – names, addresses and the like – is the main issue here, and careful thought about what really needs to be searched for, along with some constructive data-modelling, may be needed to make this workable. Trivial it is not, and compromises abound.
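One common compromise – sketched below with illustrative keys and names – is to store the ciphertext for retrieval alongside a keyed hash (a “blind index”) of the normalised value, which supports exact-match searches without decrypting anything:

import hmac, hashlib
from cryptography.fernet import Fernet

enc_key = Fernet(Fernet.generate_key())
index_key = b"separate-index-key"  # keep distinct from the encryption key

def storable(email: str) -> dict:
    normalised = email.strip().lower()
    return {
        "email_ciphertext": enc_key.encrypt(email.encode()),
        "email_index": hmac.new(index_key, normalised.encode(),
                                hashlib.sha256).hexdigest(),
    }

def search_token(email: str) -> str:
    # Compute the same blind index to query against; no decryption needed.
    return hmac.new(index_key, email.strip().lower().encode(),
                    hashlib.sha256).hexdigest()

It’s an equality search only – no wildcards, no ranges – which is exactly the sort of compromise mentioned above.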

All this encryption creates headaches around certificate and key management, but such is life and this is just another issue we need to deal with. Be paranoid!

p.s. If you really want to know what the title says you can try the password over here.

Channel 4 in France

A slight obsession, some would say, but I enjoy F1… not so much that I’m prepared to pay Sky whatever extortionate fee they’ve come up with today though, so I tend to watch only the highlights on C4. Nice coverage btw guys – shame to lose you next year.

Anyway, I have a VPN (OpenVPN) running off a Synology DiskStation to allow me to tunnel back through home when I’m abroad. Works a treat… normally. Channel 4 does not.

Initially I thought it was DNS leakage giving away that name resolution was coming from French servers. You can see this by visiting www.dnsleaktest.com and running the “standard test”. Even though I’m reported as being in the UK, all my DNS servers are in France… Humm, I smell a fish…

[Screenshots: dnsleaktest.com results showing a UK IP address but French DNS servers]

Am I in the UK or France?

To work around this I set up a proxy server on the DiskStation, and the same test now reports UK DNS servers since everything goes through the proxy.

[Screenshot: dnsleaktest.com results via the proxy, showing UK DNS servers]

Definitely looks like I’m in the UK… But still no luck on C4…

Finally, I set the timezone to UK rather than France and this seemed to do the trick. Note that you need to change the timezone on the laptop, not the time itself, or you’ll have all sorts of trouble connecting securely to websites, including C4.

In the end the proxy doesn’t seem necessary, so they don’t appear to be picking up on DNS resolution yet, though it’s the sort of thing they could look at adding (that, and device geolocation using the HTML5 geolocation API, though for that there are numerous browser plugins to report fake locations).

Incidentally, BBC iPlayer works fine and does so without fiddling with timezone.

The net wasn’t really designed to expose your physical location, and IP-to-location lookups such as MaxMind are more of a workaround than a true identification of where you are. Using Tor as a more elaborate tunnel makes you appear to be all over the place as your IP address jumps around, and corporate proxies, especially for large organisations, can make you appear to be in all sorts of weird places. Makes you wonder… All these attempts to limit your access based on an IP address to prop up digital rights management just don’t work. It’s all too easy to work around.

p.s. Turns out that whilst France doesn’t have free-to-air F1 coverage, most places have some form of satellite TV via CanalSat or TNT which includes the German RTL channel. It’ll do nothing to improve my French but at least I get to watch the race on the big screen…

Bearer v MAC

I’ve been struggling recently to get my head around OAuth2 access tokens – bearer and MAC tokens specifically…

Bearer tokens are essentially just static tokens, valid for some predefined period before they need to be refreshed. They can be passed around willy-nilly and will be accepted by a resource server so long as they can be validated. If a third party manages to hijack one, they can use it to do whatever the token is authorised to do just by submitting it in the correct manner. Consequently these tokens need to be looked after carefully: shuffled over encrypted channels and protected by the client. They’re arguably even less secure than session cookies since there’s no “HTTP Only” option on an access token, so preventing malicious access to tokens from dodgy code on clients is something the developer needs to manage. And given the number of clients around and the quality of code out there, we can pretty much assume a good chunk will be piss poor at this.
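To labour the point, a minimal sketch (token and endpoint made up) – possession of the string is all that matters:

import urllib.request

token = "2YotnFZFEjr1zCsicMWpAA"  # hijacked or legitimate: the server can't tell

request = urllib.request.Request(
    "https://api.example.com/v1/accounts",
    headers={"Authorization": f"Bearer {token}"},
)
# Any client able to attach this header gets whatever the token is
# authorised for, until it expires or is revoked.
# urllib.request.urlopen(request)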

So Bearer tokens. Not so great.

MAC tokens aren’t just static strings. A client secret and a nonce are combined with some request data to sign each request. So if you hijack a request in flight you can’t replay the token – it’s valid only for the original request. This is good, but it really only protects against snooping over the wire, which SSL/TLS does a pretty good job of managing without all the additional complexity. Beyond this a MAC token seems to make very little difference. The client needs to know the secret in the same way it would need a Bearer token. If someone manages to snatch that, we’re done for regardless, and the false sense of security MAC tokens give isn’t worth a damn.
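For illustration, the general shape of a MAC-signed request looks something like this – a simplified sketch based on the (long-expired) OAuth MAC draft, with made-up key id and secret:

import base64, hashlib, hmac, os, time

key_id = "h480djs93hd8"       # tells the server which secret to verify with
mac_secret = b"489dks293j39"  # shared out-of-band; illustrative value

def sign_request(method: str, uri: str, host: str, port: int) -> dict:
    timestamp = str(int(time.time()))
    nonce = base64.b64encode(os.urandom(8)).decode()
    # Normalised request string: change any element and the MAC no longer verifies.
    normalised = "\n".join([timestamp, nonce, method, uri, host, str(port), ""]) + "\n"
    mac = base64.b64encode(
        hmac.new(mac_secret, normalised.encode(), hashlib.sha256).digest()
    ).decode()
    return {"id": key_id, "ts": timestamp, "nonce": nonce, "mac": mac}

# The server recomputes the HMAC from the same request data, so a captured
# value can't be replayed against a different method or URI.
print(sign_request("GET", "/v1/accounts", "api.example.com", 443))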

The client application is often the weak point in OAuth since it’s frequently an untrusted device – a mobile phone or a web browser (single-page applications) etc. If the “client” is a downstream server (and this poor terminology, by the way, has caused way too much confusion and argument) then we’ve a reasonable chance of securing the server and data, but ultimately we’re still going to have a client ID and secret stuffed in memory, just like we would with a Bearer token. Ok, so we’re adding some hoops to jump through, but really it’s no more secure.

So if we don’t have transport level encryption (SSL/TLS) then MAC tokens offer some reasonable value over Bearer tokens. But if we do have transport encryption then MACs just add complexity and a false sense of security. Which I’d argue is a bad thing since increased complexity is likely to lead to increased defects… which in security is a very bad thing!

Besides, neither option allows me to identify the user or ensure the client calling the service is authorised to do so… Just that they appear to be in possession of a token saying the user has granted them authority (which may have been hijacked as per above).

p.s. One of my many failed new year’s resolutions was to post a new article every week of the year. This being the first, four weeks in, isn’t a good start…

Letsencrypt on Openshift

If you really wanted to know you’d have found it, but for what it’s worth, this site now runs on Red Hat’s OpenShift platform. For a while I’ve been thinking I should get an SSL cert for the site. Not because of any security concern but because Google and the like rank sites higher if they are HTTPS and, well, this is nonfunctionalarchitect.com and security is a kind of ‘thing’, if you know what I mean. But certs cost £s (or $s or €s or whatever). Not real pricey, but still, I can think of other things to spend £50 on.

But hello! Along comes letsencrypt.org – a service allowing you to create SSL certs for free! Now in public beta. Whoo hooo!

It isn’t particularly pretty at the moment and certs only last 90 days, but it seems to work ok. For OpenShift’s WordPress gear you can’t really do much customisation (and probably don’t want to), so installing letsencrypt on that looks messier than I’d like. Fortunately you can create a cert offline with letsencrypt and upload it to WordPress. Steps in a nutshell:

  1. Install letsencrypt locally. Use a Linux server or VM preferably.
  2. Request a new manual cert.
  3. Upload the specified file to your site.
  4. Complete cert request.
  5. Upload certificate to openshift.

Commands:

  1. Install letsencrypt:
    1. git clone https://github.com/letsencrypt/letsencrypt
    2. cd letsencrypt
  2. Request a new manual cert:
    1. ./letsencrypt-auto --agree-dev-preview -d <your-full-site-name> --server https://acme-v01.api.letsencrypt.org/directory -a manual auth -v --debug
  3. This command will pause to allow you to create a file and upload it to your website. The file needs to be placed in the /.well-known/acme-challenge folder and has a nice random/cryptic base-64 encoded name (and what appears to be a JWT token as contents). This is provided on screen; mine was called something like KfMsKDV_keq4qa5gkjmOsMaeKN4d1C8zB3W8CnwYaUI with contents something like KfMsKDV_keq4qa5gkjmOsMaeKN4d1C8zB3W8CnwYaUI.6Ga6-vVZqcFb83jWx7pprzJuL09TQxU2bwgclQFe39w (except that’s not the real one…). To upload this to an OpenShift WordPress gear site:
    1. SSH to the container. The address can be found on the application page on OpenShift.
    2. Make a .well-known/acme-challenge folder in the web root, which can be done on the WordPress gear after SSHing in:
      1. cd app-root/data/current
      2. mkdir .well-known
      3. mkdir .well-known/acme-challenge
      4. cd .well-known/acme-challenge
    3. Create the file with the required name/content in this location (e.g. see vi).
      1. vi KfMsKDV_keq4qa5gkjmOsMaeKN4d1C8zB3W8CnwYaUI
    4. Once uploaded and you’re happy to continue, press ENTER back on the letsencrypt command as requested. Assuming this completes and manages to download the file you just created, you’ll get a response that all is well and the certificates and key will have been created.
    5. To upload these certs to your site (from /etc/letsencrypt/live/<your-site-name>/ locally), go to the OpenShift console > Applications > <your-app> > Aliases and click edit. This will allow you to upload the cert, chain and private key files. Note that no passphrase is required. You need to use fullchain.pem as the SSL cert on OpenShift and leave the cert chain blank. If you don’t do this then some browsers will work but others, such as Firefox, will complain bitterly…
    6. Save this and after a few mins you’ll be done.

Once done, you can access the site via a secure HTTPS connection and you should see a nice secure icon showing that the site is now protected with a valid cert 🙂
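If you want to sanity-check the result, a quick sketch with Python’s standard library (hostname as per this site) will show the issuer and expiry:

import socket, ssl

context = ssl.create_default_context()
with socket.create_connection(("nonfunctionalarchitect.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="nonfunctionalarchitect.com") as tls:
        cert = tls.getpeercert()
        print(cert["issuer"])    # should mention Let's Encrypt
        print(cert["notAfter"])  # roughly 90 days out, so diarise the renewal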


Details of the browsers letsencrypt.org supports are on their website.

Good luck!

 

ERROR-BT_SPORT:VC040-NFR:0xFF-FAIL

BT Sport’s online player – which, being polite, is piss poor, with UX design provided by a six-year-old – is a fine example of how not to deal with errors in user interfaces. “User” being the key word here…

Rather than accepting that users are human beings in need of meaningful error messages, and perhaps in-situ advice about what to do about them, they insist on providing cryptic codes with no explanation, whose meaning you need to google (ok, I admit I ignored the FAQ!). This will lead you to an appalling page – clearly knocked up by the six-year-old’s senior sibling on day one of code club, littered with links you need to eyeball – which finally leads you to something useful telling you how to deal with the idiocy of BT’s design decisions.

In this particular case it’s the decision some halfwit architect made to require users to downgrade their security settings (e.g. VC002 or VC040)! So a national telecom provider and ISP is insisting that users weaken their online security settings so they can access a half-arsed service?…

Half-arsed because when you do get it working it performs abysmally, with the video quality of a 1996 RealAudio stream over a 28.8 kbps connection. This is likely because some mindless exec has decided they don’t want people watching on their laptops; they’d rather they use the “oh-god-please-don’t-get-me-started-on-how-bad-this-device-is” BT Vision Box – I feel sick just thinking about it…

In non-functional terms:

  • Form – Fail
  • Performance – Fail
  • Security – Fail
  • Operability – Well, all I know is that it failed on day one of launch and I suspect it’s as solid as a house of cards behind the scenes. Let’s see what happens if the service falls over at Champions League final kick-off!

Success with non-functionals alone doesn’t guarantee success – you need a decent functional product and, let’s face it, Champions League football is pretty decent – but no matter how good the function, if it’s unusable, if it makes your eyes bleed, if it performs like a dog, if it’s insecure and if it’s not reliable then you’re going to fail! It’s actually pretty impressive that BT have managed to fail on (almost) every count! Right, off now to watch something completely different not supplied by BT… oh, except they’re still my ISP because – quelle surprise – that bit actually works!

Password Trash

We know passwords aren’t great and that people choose crappy short ones that are easily remembered, given half the chance. The solution, it seems, is to ask for at least one number, one upper-case character, one symbol and a minimum of 8 characters…

However, the majority of sites aren’t trustworthy*, so it’s foolish to use the same password on all of them. The result is an ever-mounting litter of passwords that you can’t remember, so you either end up writing them down (which likely violates terms of service and makes you liable in the event of abuse) or rely on “forgotten password” mechanisms to log in as and when needed (the main frustrations here being the turnaround time and the need to come up with yet another bloody password each time).

Yet using three or more words as a passphrase is more secure than a short forgettable password and would make a website a damn sight easier to use – you still shouldn’t use the same passphrase everywhere though. It’s about time we made the minimum password length 16 characters and dropped the cryptic garbage rules that don’t help anyone.
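The rough arithmetic behind that claim, assuming words are picked at random from the 7,776-word Diceware list and characters at random from a 72-symbol set (human-chosen “P@ssw0rd1” efforts have far less effective entropy than the random figure shown):

import math

print(f"8 random complex chars : ~{8 * math.log2(72):.0f} bits")    # ~49 bits
print(f"3 random Diceware words: ~{3 * math.log2(7776):.0f} bits")  # ~39 bits
print(f"4 random Diceware words: ~{4 * math.log2(7776):.0f} bits")  # ~52 bits

A four-word phrase matches or beats the fully random 8-character case and, unlike it, stands a chance of being remembered.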

Facebook, Google and the like would have you use their OpenID Connect services – this way they effectively own your online identity – and, if you do use them, the multi-factor authentication (MFA) options are well worth adopting. Personally I don’t want these guys to be in charge of my online identity (and most organisations won’t want that either), so whilst it’s ok to provide the option, you can’t force it on people.

We need to continue to support passwords but we need to stop with these daft counterproductive restrictions.
* Ok, none of them are but some I trust more than others. I certainly won’t trust some hokey website just because the developers provide the illusion of security by making my life more complicated than is necessary.