Good high-level explanation of how Spectre and Meltdown work… and WHY RASPBERRY PI ISN’T VULNERABLE TO SPECTRE OR MELTDOWN
I was/am interested in the whole Equifax hack and how this happened. To this end I posted a brief link yesterday to the Struts team’s response. A simple case of failing to patch! Case closed.
But then I’ve been thinking that’s not really very fair.
This was (apparently) caused by a defect that’s been around for a long time. The developers had reacted very quickly when the problem was identified (within 24 hrs) but Equifax – by all accounts – had failed to patch for a further 6 months.
What did we expect? That they’d patch it the next day? No chance. Within a month? Maybe. But if the issue is embedded in some third-party product then they’re dependent upon a fix being provided, and if it’s in some in-house developed tool then they need to be able to rebuild the app and test it before they can deploy. Struts was/is extremely popular. It was the Spring of its day and is still deeply embedded in all sorts of corporate applications and off-the-shelf products. Fixing everything isn’t going to happen overnight.
Companies like Equifax will also have hundreds, even thousands, of applications and each application will have dozens of dependencies any one of which could have suffered a similar issue. On top of this, most of these applications will be minor, non-critical tools which have been around for many years and which frankly few will care about. Running a programme to track all of these dependencies, patch applications and test them all before rolling them into production would take an army of developers, testers, sys-ops and administrators working around the clock just to tread water. New features? Forget it. Zero-day? Shuffles shoes… Mind you, it’d be amusing to see how change management would handle this…
So we focus on the priority applications and the low hanging fruit of patching (OS etc.) and hope that’s good enough? Humm… anything else we can do?
Well, we’re getting better with build, test and deployment automation but we’re a long way from perfection. So do some of that, it’ll make dealing with change all the easier but it’s no silver bullet. And again, good luck with change management…
Ultimately though we have to assume we’re not going to have perfect code (there’s no such thing!)… that we’re not able to patch against every vulnerability… and that zero day exploits are a real risk.
Other measures are required regardless of your patching strategy. Reverse proxies, security filters, firewalls, intrusion detection, n-tier architectures, heterogeneous software stacks, encryption, pen-testing etc. Security is like layers of swiss-cheese – no single layer will ever be perfect, you just hope the holes don’t line up when you stack them all together. Add to this some decent monitoring of traffic and an understanding of norms and patterns – at least something which you actually have people looking at continually rather than after the event – and you stand a chance of protecting yourself against such issues, or be able to identify potential attacks before they become actual breaches.
Equifax may have failed to patch some Struts defect for six months but that’s not the worst of it. That they were vulnerable to such a defect in the first place smells like.. well, like they didn’t have enough swiss-cheese. That an employee tool was also accessible online and provided access to customer information with admin/admin credentials goes on to suggest a real lack of competency and recklessness at senior levels.
Adding insult to injury, to blame an open-source project (for the wrong defect!) which heroically responded and addressed the real issue within 24 hrs of it being identified six months earlier (!?) makes Equifax look like an irresponsible child, always blaming someone else for its own reckless actions.
They claim to be “an innovative global information solutions company”. So innovative they’re bleeding edge and giving their, no our!, data away. I’m just not sure who’s the bigger criminal… the hackers or Equifarce!
“the Equifax data compromise was due to their failure to install the security updates provided in a timely manner.”
As simple as that apparently. Keep up to date with patching.
(let’s assume we’re talking about encryption keys here rather than pass codes though it really makes little difference… and note that your passwords are a slightly different concern)
Is it incompetence to use an old code? No.
For synchronous requests (e.g. like those over HTTPS) there’s a handshake process you go through every few minutes to agree a new key. Client and server then continue to use this key until it expires then they agree a new one. If the underlying certificate changes you simply go through the handshake again.
For asynchronous requests things aren’t as easy. You could encrypt and queue a request one minute and change the key the next but the message remains on the queue for another hour before it gets processed. In these cases you can either reject the message (usually unacceptable) or try the older key and accept that for a while longer.
Equally with persistent storage you could change the key every month but you can’t go round decrypting and re-encrypting all historic content and accept the outage this causes every time. Well, not if you’ve billions of records and an availability SLA of greater than a few percent. So again, you’ve got to let the old codes work..
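The pattern here can be sketched as key versioning: every message or record carries the version of the key that sealed it, and the reader keeps a short ring of recent keys, accepting older versions for a grace period rather than rejecting outright. A minimal sketch (all names illustrative; a toy HMAC “seal” stands in for real authenticated encryption):

```python
import hashlib
import hmac

# Key ring: version -> secret. In practice these live in an HSM or key store.
keys = {1: b"old-key", 2: b"current-key"}
current_version = 2
GRACE_VERSIONS = 1  # how many retired key versions we still accept

def seal(payload: str) -> dict:
    # Seal with the current key, recording which version was used.
    tag = hmac.new(keys[current_version], payload.encode(), hashlib.sha256).hexdigest()
    return {"v": current_version, "payload": payload, "tag": tag}

def verify(msg: dict) -> bool:
    # Too old: that key has been fully retired, so reject.
    if msg["v"] < current_version - GRACE_VERSIONS:
        return False
    key = keys.get(msg["v"])
    if key is None:
        return False
    expected = hmac.new(key, msg["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["tag"])

# A message sealed an hour ago under the previous key still verifies...
old_msg = {"v": 1, "payload": "queued last hour",
           "tag": hmac.new(keys[1], b"queued last hour", hashlib.sha256).hexdigest()}
print(verify(seal("fresh message")), verify(old_msg))  # True True
```

The grace window is the knob: widen it and availability improves at the cost of a longer exposure for a compromised key.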
You could use full disk/database encryption but that’s got other issues – like it’s next to useless once the disks are spinning… And besides, when you change the disk password you’re not actually changing the key and re-encrypting the data, you’re just changing the password used to obtain the key.
So it is ok to accept old codes. For a while at least.
An empire spread throughout the galaxy isn’t going to be able to distribute new codes to every stormtrooper instantaneously. Even if they do have the dark-side on their side…
In “The Need for Strategic Security” Martyn Thomas considers some of the risks today in the way systems are designed and built and some potential solutions to address the concerns raised. One of the solutions proposed is for software to come with a guarantee; or at least some warranty, around its security.
Firstly, I am (thankfully) not a lawyer but I can imagine the mind bogglingly twisted legalese that will be wrapped around such guarantees. So much so as to make them next to useless (bar giving some lawyer the satisfaction of adding another pointless 20 paragraphs of drivel to the already bloated terms and conditions..). However, putting this aside, I would welcome the introduction of such guarantees if it is at all possible.
For many years now we’ve convinced ourselves that it is not possible to write a program which is bug-free. Even the simple program:
echo "Hello World"
has dependencies on libraries and the operating system – along with the millions of lines of code therein – all the way down to the BIOS, which means we cannot be 100% sure even this simple program will always work. We can never be sure it will run correctly for every nanosecond of every hour of every day of every year.. for ever! It is untestable and absolute certainty is not possible.
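You can see a hint of this dependency iceberg even in Python – before your one line runs, the interpreter has already dragged in a pile of machinery (the exact count varies by version and platform):

```python
import sys

# Our entire "program" is one line...
print("Hello World")

# ...yet dozens of modules are already loaded beneath it, and below those
# sit the C runtime, the OS kernel and the firmware.
print(len(sys.modules), "modules already loaded")
```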
At a more practical level however we can bound our guarantees and accept some risks “compatible with RHEL 7.2”, “… will work until year-end 2020…”, “.. needs s/w dependencies x, y, z…” etc. Humm, it’s beginning to sound much like a software license and system requirements checklist… Yuck! On the whole, we’re pretty poor at providing any assurances over the quality, reliability and security of our products.
Martyn’s point though is that more rigorous methods and tools will enable us to be more certain (perhaps not absolutely) about the security of the software we develop and rely on, allowing us to be more explicit about the guarantees we can offer.
Today we have tools such as SonarQube which helps to improve the quality of code or IBM Security AppScan for automated penetration testing. Ensuring such tools are used can help but these tools need to be used “right” if used at all. All too often a quick scan is done and only the few top (and typically obvious) issues are addressed. The variation of report output I have seen for scans on the same thing using the same tools but performed by different testers is quite ridiculous. A poor workman blames his tools.
Such tools also tend to be run once on release and rarely thereafter. The ecosystem in which our software is used evolves rapidly so continual review is needed to detect issues as and when new vulnerabilities are discovered.
In addition to tools we also need industry standards and certifications to qualify our products and practitioners against. In the security space we do have some standards such as CAPS and certification programmes such as CCP. Unfortunately few products go through the certification process unless they are specifically intended for government use and certified professionals are few and in-demand. Ultimately it comes down to time-and-money.
However, as our software is used in environments never originally intended for it and as devices become increasingly connected and more critical to our way of life (or our convenience), it will be increasingly important that all software comes with some form of compliance assurance over its security. For which more accessible standards will be needed. Imagine if in 10 years’ time when we all have “smart” fridges some rogue state sponsored hack manages to cycle them through a cold-warm-cold cycle on Christmas eve.. Would we notice on Christmas day? Would anyone be around to address such a vulnerability? Roast potatoes and E. coli turkey anyone? Not such a merry Christmas… (though the alcohol may help kill some of the bugs).
In addition, the software development community today is largely made up of enthusiastic and (sometimes) well-meaning amateurs. Many have no formal qualification or are self-taught. Many are cut’n’pasters who frankly don’t get-IT and just go through the motions. Don’t get me wrong, there are lots of really good developers out there. It’s just there are a lot more cowboys. As a consequence our reliance on security-through-obscurity is deeper than perhaps many would openly admit.
It’s getting better though and the quality of recent graduates I work with today has improved significantly – especially as we seem to have stopped trying to turn arts graduates into software engineers.
Improved and proven tools and standards help but at the heart of the concern is the need for a more rigorous scientific method.
As architects and engineers we need more evidence, transparency and traceability before we can provide the assurances and stamp of quality that a guarantee deserves. Evidence of the stresses components can handle and constraints that contain this. Transparency in design and in test coverage and outcome. Traceability from requirement through design and development into delivery. Boundaries within which we can guarantee the operation of software components.
We may not be able to write bug-free code but we can do it well enough and provide reasonable enough boundaries as to make guarantees workable – but to do so we need a more scientific approach. In the meantime we need to keep those damned lawyers in check and stop them running amok with the drivel they revel in.
Shit happens, data is stolen (or leaked) and your account details, passwords and bank-account are available online to any criminal who wants it (or at least is prepared to buy it).
But don’t panic, the data was encrypted so you’re ok. Sit back, relax in front of the fire and have another mince pie (or six).
We see this time and again in the press. Claims that the data was encrypted… they did everything they could… blah blah blah. Humm, I think we need more detail.
It’s common practice across many large organisations today to encrypt data using full-disk encryption with tools such as BitLocker or Becrypt. This is good practice and should be encouraged but is only the first line of defence as this only really helps when the disk is spun down and the machine powered off. If the machine is running (or even sleeping) then all you need is the user’s password and you’re in. And who today really wants to shut down a laptop when you head home… and perhaps stop for a pint on the way?
In the data-center the practice is less common because the risk of disks being taken out of servers and smuggled out of the building is lower. On top of this the disks are almost always spinning so any user/administrator who has access to the server can get access to the data.
So, moving up a level, we can use database encryption tools such as Transparent Data Encryption to encrypt the database files on the server. Great! So now normal OS users can’t access the data and need to go through the data access channel to get it. Problem is, lots of people have access to databases including DBAs who probably shouldn’t be able to see the raw data itself but who generally can. On top of this, service accounts are often used for application components to connect and if these credentials are available to some wayward employee… your data could be pissing out an open window.
To protect against these attack vectors we need to use application level encryption. This isn’t transparent and developers need to build in data encryption and decryption routines as close to exposed interfaces as practical. Now having access to the OS, files or database isn’t enough to expose the data. An attacker also needs to get hold of the encryption keys which should be held on separate infrastructure such as an HSM. All of which costs time and money.
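As a rough sketch of what this looks like in practice – here using the third-party Python cryptography package’s Fernet recipe as a stand-in for whatever your platform provides, and with the key generated inline purely for illustration (in reality it would be fetched from an HSM or key-management service):

```python
from cryptography.fernet import Fernet

# Illustration only: a real key comes from an HSM/KMS, never from code or config.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt sensitive fields as close to the exposed interface as practical...
record = {"id": 42, "card_number": f.encrypt(b"4111111111111111")}

# ...so anyone with raw OS, file or database access sees only ciphertext.
# Decryption happens only at the point the plaintext is genuinely needed.
plain = f.decrypt(record["card_number"])
print(plain.decode())
```

Note the non-sensitive `id` stays in the clear – encrypting only what needs protecting keeps the search and indexing pain to a minimum.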
Nothing’s perfect and there’s still the risk that a wayward developer siphons off data as it passes through the system or that some users have too broad access rights and can access data, keys and code. These can be mitigated against through secure development practices, change management and good access management… to a degree.
Furthermore, encrypting everything impacts functionality – searching encrypted data becomes practically impossible – or may not be as secure as you’d expect – a little statistical analysis on some fields can expose the underlying data without actually needing to decrypt it due to a lack of sufficient variance in the raw data. Some risks need to be accepted.
We can then start to talk about the sort of ciphers used, how they are used and whether these and the keys are sufficiently strong and protected.
So when we hear in the press that leaked data was encrypted, we need to know more about how it was encrypted before deciding whether we need to change our passwords before tucking into the next mince pie.
The title of this post is encrypted.
This page is also encrypted (via TLS (aka the new name for SSL)).
Anyone sniffing traffic on the wire must first decrypt the TLS traffic and then decrypt the content to work out what the message says.
But why bother with two layers of encryption?
Ok, so forgive the fact that this page is publicly accessible and TLS is decrypted before your eyes. It’s possibly a poor example and in any case I’d like to talk about the server side of this traffic.
In many organisations, TLS is considered sufficient to provide security for data in-transit. The problem is TLS typically terminates on a load-balancer or on a web-server and is forwarded from there to another downstream server. Once this initial decryption takes place data often flows over the internal network of organisations in plain text. Many organisations consider this to be fine practice since the internal network is locked down with firewalls and intrusion detection devices etc. Some organisations even think it’s good practice so that they can monitor internal traffic more easily.
However, there is obvious concern over insider-attacks with system-admins or disgruntled employees being in a good position to skim off the data easily (and clean-up any trace after themselves). Additionally requests are often logged (think access logs and other server logs) and these can record some of the data submitted. Such data-exhaust is often available in volume to internal employees.
It’s possible to re-wrap traffic between each node to avoid network sniffing but this doesn’t help data-exhaust and the constant un-wrap-re-wrap becomes increasingly expensive if not in CPU and IO then in effort to manage all the necessary certificates. Still, if you’re concerned then do this or terminate TLS on the application-server.
But we can add another layer of encryption to programmatically protect sensitive data we’re sending over the wire in addition to TLS. Application components will need to decrypt this for use and when this happens the data will be in plain text in memory but right now that’s about as good as we can get.
The same applies for data at-rest – in fact this is arguably far worse. You can’t rely on full database encryption or file-system encryption. Once the machine is up and running anyone with access to the database or server can easily have full access to the raw data in all its glory. These sort of practices only really protect against devices being lifted out of your data-centre – in which case you’ve got bigger problems…
The safest thing here is to encrypt the attributes you’re concerned about before you store them and decrypt on retrieval. This sort of practice causes all sorts of problems in terms of searching but then should you really be searching passwords or credit card details? PII details; names, addresses etc, are the main issue here and careful thought about what really needs to be searched for; and some constructive data-modelling, may be needed to make this workable. Trivial it is not and compromises abound.
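One common compromise for the search problem is a “blind index”: store a keyed hash of the plaintext alongside the ciphertext so exact-match lookups still work without decrypting anything. A sketch (keys and names illustrative; the ciphertext column is elided – and note the index is deterministic, so it leaks which rows share a value, echoing the variance problem above):

```python
import hashlib
import hmac

# A separate key just for indexing, distinct from the encryption key.
index_key = b"separate-index-key"

def blind_index(value: str) -> str:
    # Normalise then HMAC so equal values always produce the same index.
    return hmac.new(index_key, value.lower().encode(), hashlib.sha256).hexdigest()

# "Database" rows: real ciphertext elided, only the searchable index shown.
rows = [
    {"name_idx": blind_index("Alice Smith"), "ciphertext": "..."},
    {"name_idx": blind_index("Bob Jones"), "ciphertext": "..."},
]

# Exact-match search: index the query the same way and compare.
query = blind_index("alice smith")
matches = [r for r in rows if r["name_idx"] == query]
print(len(matches))  # 1
```

Prefix and fuzzy searches remain off the table – hence the careful thought about what genuinely needs to be searchable.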
All this encryption creates headaches around certificate and key management but such is life and this is just another issue we need to deal with. Be paranoid!
p.s. If you really want to know what the title says you can try the password over here.
Slight obsession some would say, but I enjoy F1… though not so much that I’m prepared to pay Sky whatever extortionate fee they’ve come up with today, so I tend to watch the highlights only on C4. Nice coverage btw guys – shame to lose you next year.
Initially I thought it was DNS leakage picking up that name resolution is from French servers. You can see this by visiting www.dnsleaktest.com and running the “standard test”. Even though I’m reported as being in the UK, all my DNS servers are in France… Humm, I smell a fish…
Am I in the UK or France?
To work around this I set up a proxy server on the DiskStation and the same test now reports UK DNS servers as everything goes through the proxy.
Definitely looks like I’m in the UK… But still no luck on C4…
Finally, I set the timezone I was in to UK rather than France and this seemed to do the trick. Note that you need to change the timezone on the laptop, not the time itself or you’ll have all sorts of trouble connecting securely to websites including C4.
In the end, the proxy doesn’t seem necessary so they don’t appear to be picking up on DNS resolution yet though it’s the sort of thing that they could look at adding (that, and device geolocation using HTML5 geo API though for this there are numerous plugins for browsers to report fake locations).
Incidentally, BBC iPlayer works fine and does so without fiddling with timezone.
The net wasn’t really designed to expose your physical location and IP-to-location lookups such as MaxMind are more of a workaround than a true identification of your location. Using TOR as a more elaborate tunnel makes you appear to be all over the place as your IP address jumps around and corporate proxies; especially for large organisations, can make you appear to be in all sorts of weird places. Makes you wonder.. All these attempts to limit your access based on an IP address to prop up digital rights management just don’t work. It’s all too easy to work around.
p.s. Turns out that whilst France doesn’t have free-to-air F1 coverage, most places have some form of satellite TV via CanalSat or TNT which includes the German RTL channel. It’ll do nothing to improve my French but at least I get to watch the race on the big screen…
I’ve been struggling recently to get my head around OAuth2 access tokens – bearer and MAC tokens specifically…
Bearer tokens are essentially just static tokens valid for some predefined period before they need to be refreshed. They can be passed around willy-nilly and will be accepted by a resource server so long as they can be validated. If a 3rd party manages to hijack one then they can use it to perform whatever the token is authorised to do just by submitting it in the correct manner. Consequently these tokens need to be looked after carefully. Shuffled over encrypted channels and protected by the client. They’re arguably even less secure than session cookies since there’s no “HTTP Only” option on an access token so preventing malicious access to tokens from dodgy code on clients is something the developer needs to manage. And given the number of clients around and quality of code out there we can pretty much assume a good chunk will be piss poor at this.
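A sketch of just how “willy-nilly” this is – whoever holds the string can construct a perfectly valid request, and the resource server has no way to tell them apart from the legitimate client (URL and token here are made up):

```python
import urllib.request

# A bearer token is just a static string: possession is everything.
token = "hijacked-or-legitimate-it-makes-no-difference"

# Anyone holding that string can build an identical, valid request.
req = urllib.request.Request(
    "https://api.example.com/accounts",  # hypothetical resource server
    headers={"Authorization": f"Bearer {token}"},
)
print(req.get_header("Authorization"))
```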
So Bearer tokens. Not so great.
MAC tokens aren’t just static strings. A client secret and nonce is combined with some request data to essentially sign tokens. So if you hijack a request in flight you can’t replay the token – it’s valid only for the original request. This is good but really only protects against snooping over the wire which SSL/TLS does a pretty good job of managing without all the additional complexity. Beyond this a MAC token seems to make very little difference. The client needs to know the secret in the same way it would need a Bearer token. If someone manages to snatch this we’re done for regardless and the false sense of security MAC tokens give isn’t worth a damn.
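Roughly, the signing looks like this – a loose stdlib-HMAC sketch in the spirit of the draft OAuth2 MAC scheme, with all names illustrative. The secret never crosses the wire; only the key id, timestamp, nonce and a signature bound to the specific request do, which is why a captured signature can’t be replayed against a different request:

```python
import hashlib
import hmac
import time
import uuid

mac_key = b"client-side-secret"  # shared secret known to client and server
key_id = "illustrative-key-id"   # tells the server which secret to look up

def sign_request(method: str, uri: str, host: str, port: int) -> dict:
    # Bind the signature to this request's details plus a timestamp and nonce.
    ts = str(int(time.time()))
    nonce = uuid.uuid4().hex
    base = "\n".join([ts, nonce, method, uri, host, str(port)]) + "\n"
    sig = hmac.new(mac_key, base.encode(), hashlib.sha256).hexdigest()
    return {"id": key_id, "ts": ts, "nonce": nonce, "mac": sig}

params = sign_request("GET", "/resource", "api.example.com", 443)

# The server recomputes the HMAC with its own copy of the secret and compares.
base = "\n".join([params["ts"], params["nonce"], "GET", "/resource",
                  "api.example.com", "443"]) + "\n"
expected = hmac.new(mac_key, base.encode(), hashlib.sha256).hexdigest()
print(hmac.compare_digest(params["mac"], expected))  # True
```

Which neatly illustrates the point above: the scheme only helps if the wire is snoopable, and the client still has to hold `mac_key` – steal that and the whole thing collapses just like a stolen bearer token.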
The client application is often the weak point in OAuth since it’s often an untrusted device – mobile phones and web-browsers (single page applications) etc. If the “client” is a downstream server (and this poor terminology by the way has caused way too much confusion and argument) then we’ve a reasonable chance to secure the server and data but ultimately we’re still going to have a client ID and secret stuffed in memory just like we would have with a Bearer token. Ok, so we’re adding some hoops to jump through but really it’s no more secure.
So if we don’t have transport level encryption (SSL/TLS) then MAC tokens offer some reasonable value over Bearer tokens. But if we do have transport encryption then MACs just add complexity and a false sense of security. Which I’d argue is a bad thing since increased complexity is likely to lead to increased defects… which in security is a very bad thing!
Besides, neither option allows me to identify the user or ensure the client calling the service is authorised to do so… Just that they appear to be in possession of a token saying the user has granted them authority (which may have been hijacked as per above).
p.s. One of my many failed new years resolutions was to post a new article every week of the year. This being the first post, four weeks in, isn’t a good start…
If you really wanted to know you’d have found it but for what it’s worth, this site now runs on Red Hat’s OpenShift platform. For a while I’ve been thinking I should get an SSL cert for the site. Not because of any security concern but because Google and the like rank sites higher up if they are https and; well, this is nonfunctionalarchitect.com and security is a kind of ‘thing’ if you know what I mean. But certs cost £’s (or $’s or €’s or whatever’s). Not real pricey, but still I can think of other things to spend £50 on.
But hello!, along comes letsencrypt.org. A service allowing you to create SSL certs for free! Now in public beta. Whoo hooo!
It isn’t particularly pretty at the moment and certs only last 90 days but it seems to work ok. For OpenShift’s WordPress gear you can’t really do much customization (and probably don’t want to) so installing letsencrypt on that looks messier than I’d like. Fortunately you can create a cert offline with letsencrypt and upload it to WordPress. Steps in a nutshell:
- Install letsencrypt locally. Use a Linux server or VM preferably.
- Request a new manual cert.
- Upload the specified file to your site.
- Complete cert request.
- Upload certificate to openshift.
- Install letsencrypt:
git clone https://github.com/letsencrypt/letsencrypt
- Request a new manual cert:
./letsencrypt-auto --agree-dev-preview -d <your-full-site-name> --server https://acme-v01.api.letsencrypt.org/directory -a manual auth -v --debug
- This command will pause to allow you to create a file and upload it to your website. The file needs to be placed in the /.well-known/acme-challenge folder and has a nice random/cryptic base-64 encoded name (and what appears to be a JWT token as contents). This is provided on screen and mine was called something like KfMsKDV_keq4qa5gkjmOsMaeKN4d1C8zB3W8CnwYaUI with the contents something like KfMsKDV_keq4qa5gkjmOsMaeKN4d1C8zB3W8CnwYaUI.6Ga6-vVZqcFb83jWx7pprzJuL09TQxU2bwgclQFe39w (except that’s not the real one…). To upload this to an openshift wordpress gear site:
- SSH to the container. The address can be found on the application page on Openshift.
- Make a .well-known/acme-challenge folder in the webroot, which can be done on the WordPress gear once you’ve SSH’d in.
- Create the file with the required name/content in this location (e.g. see vi).
- Once uploaded and you’re happy to continue, press ENTER back on the letsencrypt command as requested. Assuming this completes and manages to download the file you just created you’ll get a response that all is well and the certificates and key will have been created.
- To upload these certs to your site (from /etc/letsencrypt/live/<your-site-name>/ locally), go to the Openshift console > Applications > <your-app> > Aliases and click edit. This will allow you to upload the cert, chain and private key files. Note that no passphrase is required. You need to use fullchain.pem as the SSL cert on Openshift.com and leave the cert chain blank. If you don’t do this then some browsers will work but others such as Firefox will complain bitterly…
- Save this and after a few mins you’ll be done.
Once done, you can access the site via a secure HTTPS connection and you should see a nice secure icon showing that the site is now protected with a valid cert 🙂
Details of letsencrypt.org supported browsers are on their website..