
MzE1NjE5NTI0ODYyfjIxMTAxNjI4NzMxNn5VMkZzZEdWa1gxOHorQVFYL2NQRFkwUVgyRm82anNJRGJHOW0vV1ZjdVEvVUR0cjZRZ0t3Y21WYWdCalo0dEZW

The title of this post is encrypted.

This page is also encrypted in transit, via TLS (the new name for SSL).

Anyone sniffing traffic on the wire must first decrypt the TLS traffic and then decrypt the content to work out what the message says.

But why bother with two layers of encryption?

Ok, so forgive the fact that this page is publicly accessible and the TLS is decrypted before your eyes - it's possibly a poor example. In any case, what I really want to talk about is the server side of this traffic.

In many organisations, TLS is considered sufficient to secure data in transit. The problem is that TLS typically terminates on a load-balancer or a web-server and traffic is forwarded from there to other downstream servers. Once this initial decryption takes place, data often flows over the organisation's internal network in plain text. Many organisations consider this fine practice since the internal network is locked down with firewalls, intrusion-detection devices and so on. Some even think it's good practice, because it makes internal traffic easier to monitor.

However, there is an obvious concern over insider attacks, with system admins or disgruntled employees well placed to skim off the data (and clean up any trace after themselves). Additionally, requests are often logged (think access logs and other server logs) and these can record some of the data submitted. Such data-exhaust is often available in volume to internal employees.

It's possible to re-wrap traffic between each node to avoid network sniffing, but this doesn't help with data-exhaust, and the constant unwrap-and-re-wrap becomes increasingly expensive - if not in CPU and IO then in the effort to manage all the necessary certificates. Still, if you're concerned then do this, or terminate TLS on the application server.

But we can add another layer of encryption to programmatically protect sensitive data we're sending over the wire, in addition to TLS. Application components will need to decrypt this for use, and when that happens the data will be in plain text in memory, but right now that's about as good as we can get.
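As a concrete illustration, here's a minimal sketch in Java of that extra layer, assuming AES-GCM and a pre-shared key from whatever key-management you have (the class name `FieldCipher` is invented for this post, not any particular library). The sensitive attribute is encrypted before the payload is serialised and sent over TLS, so intermediaries that terminate TLS only ever see an opaque Base64 blob:

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

// Hypothetical helper: encrypts individual attributes with AES-GCM so they
// stay opaque to load-balancers, web-servers and logs downstream of TLS.
public class FieldCipher {

    private static final int GCM_TAG_BITS = 128;
    private static final int IV_BYTES = 12;

    private final SecretKey key;              // assumed to come from your key-management system
    private final SecureRandom random = new SecureRandom();

    public FieldCipher(SecretKey key) {
        this.key = key;
    }

    public String encrypt(String plaintext) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        random.nextBytes(iv);                 // fresh IV for every message
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        byte[] ciphertext = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        // prepend the IV so the receiving component can decrypt: IV || ciphertext
        byte[] wire = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, wire, 0, iv.length);
        System.arraycopy(ciphertext, 0, wire, iv.length, ciphertext.length);
        return Base64.getEncoder().encodeToString(wire);   // safe to drop into JSON/XML
    }

    public String decrypt(String encoded) throws Exception {
        byte[] wire = Base64.getDecoder().decode(encoded);
        GCMParameterSpec spec = new GCMParameterSpec(GCM_TAG_BITS, wire, 0, IV_BYTES);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, spec);
        byte[] plaintext = cipher.doFinal(wire, IV_BYTES, wire.length - IV_BYTES);
        return new String(plaintext, StandardCharsets.UTF_8);
    }
}
```

Only the components that hold the key can turn the value back into plain text; anything in between - proxies, access logs, curious admins on intermediate boxes - just sees ciphertext.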

The same applies to data at rest - in fact this is arguably far worse. You can't rely on full database encryption or file-system encryption: once the machine is up and running, anyone with access to the database or server has full access to the raw data in all its glory. These sorts of practices only really protect against devices being lifted out of your data-centre - in which case you've got bigger problems...

The safest thing here is to encrypt the attributes you're concerned about before you store them and decrypt them on retrieval. This sort of practice causes all sorts of problems for searching, but then should you really be searching passwords or credit card details? PII (names, addresses and the like) is the main issue here, and careful thought about what really needs to be searched for, plus some constructive data-modelling, may be needed to make this workable. Trivial it is not and compromises abound.
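One of those compromises, sketched below rather than prescribed (the name `BlindIndex` and the column names are made up for illustration), is to persist the ciphertext produced as above and, alongside it, an HMAC of the value under a separate key - a "blind index". Exact-match lookups can then run against the HMAC column without the plain text ever being stored or indexed:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical sketch: a keyed blind index allows equality searches over
// encrypted attributes without storing or indexing the plain text itself.
public class BlindIndex {

    private final byte[] indexKey;   // a separate key from the one used for encryption

    public BlindIndex(byte[] indexKey) {
        this.indexKey = indexKey;
    }

    public String of(String value) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(indexKey, "HmacSHA256"));
        // normalise before hashing so lookups aren't case-sensitive
        byte[] tag = mac.doFinal(value.toLowerCase().getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(tag);
    }
}

// Storing:   INSERT ... (email_ciphertext, email_idx) VALUES (fieldCipher.encrypt(email), blindIndex.of(email))
// Searching: SELECT ... WHERE email_idx = blindIndex.of(searchTerm)
```

This only buys you exact matches, it leaks which rows share a value, and range or partial-match queries still need other data-modelling - which is exactly why the compromises abound.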

All this encryption creates headaches around certificate and key management, but such is life and this is just another issue we need to deal with. Be paranoid!

p.s. If you really want to know what the title says you can try the password over here.
