
Equifarce!

I was/am interested in the whole Equifax hack and how it happened. To that end I posted a brief link yesterday to the Struts team's response. A simple case of failing to patch! Case closed...

But then I've been thinking that's not really very fair.

This was (apparently) caused by a defect that had been around for a long time. The Struts developers reacted very quickly once the problem was identified (within 24 hours), but Equifax - by all accounts - then failed to patch for a further six months.

What did we expect? That they'd patch it the next day? No chance. Within a month? Maybe. But if the issue is embedded in some third-party product then they're dependent on a fix being provided, and if it's in some in-house developed tool then they need to rebuild the app and test it before they can deploy. Struts was/is extremely popular. It was the Spring of its day and is still deeply embedded in all sorts of corporate applications and off-the-shelf products. Fixing everything isn't going to happen overnight.

Companies like Equifax will also have hundreds, even thousands, of applications, and each application will have dozens of dependencies, any one of which could suffer a similar issue. On top of this, most of these applications will be minor, non-critical tools which have been around for many years and which, frankly, few will care about. Running a programme to track all of these dependencies, patch the applications and test them all before rolling them into production would take an army of developers, testers, sysops and administrators working around the clock just to tread water. New features? Forget it. Zero-day? Shuffles shoes... Mind you, it'd be amusing to see how change management would handle this...
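For a flavour of what such a programme has to automate, here's a minimal, purely illustrative sketch in Python. The application inventory and advisory data are hypothetical, and in reality you'd lean on something like OWASP Dependency-Check or your build tool's audit reports rather than roll your own; the point is simply that every application's dependencies need cross-referencing against known-vulnerable versions, continually.

    # Toy dependency audit: cross-reference application dependencies against
    # known-vulnerable versions. Illustrative only - the inventory and advisory
    # data below are made up; real scanners work from CVE feeds.

    # Hypothetical inventory: application -> {library: version}
    inventory = {
        "customer-portal": {"struts2-core": "2.3.5", "commons-io": "2.5"},
        "internal-reporting": {"struts2-core": "2.5.13", "log4j": "1.2.17"},
        "legacy-batch-tool": {"struts2-core": "2.3.20", "commons-io": "2.2"},
    }

    # Hypothetical advisories: library -> versions known to be vulnerable
    advisories = {
        "struts2-core": {"2.3.5", "2.3.20", "2.3.31"},
        "commons-io": {"2.2"},
    }

    def audit(inventory, advisories):
        """Yield (application, library, version) for every vulnerable dependency."""
        for app, deps in inventory.items():
            for lib, version in deps.items():
                if version in advisories.get(lib, set()):
                    yield app, lib, version

    for app, lib, version in audit(inventory, advisories):
        print(f"{app}: {lib} {version} has a known vulnerability - needs patching")

Trivial for three applications; rather less so for three thousand, each of which then needs rebuilding, testing and shepherding through change management.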

So we focus on the priority applications and the low-hanging fruit of patching (the OS etc.) and hope that's good enough? Hmm... anything else we can do?

Well, we're getting better with build, test and deployment automation, but we're a long way from perfection. So do some of that; it'll make dealing with change all the easier, but it's no silver bullet. And again, good luck with change management...

Ultimately, though, we have to assume we're not going to have perfect code (there's no such thing!)... that we're not able to patch against every vulnerability... and that zero-day exploits are a real risk.

Other measures are required regardless of your patching strategy. Reverse proxies, security filters, firewalls, intrusion detection, n-tier architectures, heterogeneous software stacks, encryption, pen-testing and so on. Security is like layers of Swiss cheese - no single layer will ever be perfect; you just hope the holes don't line up when you stack them all together. Add to this some decent monitoring of traffic and an understanding of norms and patterns - something people actually look at continually rather than after the event - and you stand a chance of protecting yourself against such issues, or at least of identifying potential attacks before they become actual breaches.
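To illustrate the "norms and patterns" point crudely, here's a minimal sketch (hypothetical numbers, standard library only) of the underlying idea: learn what a normal request rate looks like and flag anything that deviates wildly. Real monitoring stacks do vastly more than this, but the principle - know what normal looks like before the event, not after - is the same.

    # Toy traffic monitor: flag periods whose request rate deviates wildly from
    # a learned baseline. Purely illustrative - the figures are made up and real
    # tooling (SIEM, IDS, metrics platforms) is far more sophisticated.
    from statistics import mean, stdev

    # Hypothetical requests-per-minute observed during a "normal" week
    baseline = [120, 131, 118, 125, 140, 122, 128, 135, 119, 127]

    # Hypothetical requests-per-minute for the period being watched now
    current = [124, 130, 910, 126, 880, 133]

    avg, sd = mean(baseline), stdev(baseline)
    threshold = avg + 3 * sd  # anything this far above normal deserves a look

    for minute, rate in enumerate(current, start=1):
        if rate > threshold:
            print(f"minute {minute}: {rate} req/min vs baseline ~{avg:.0f} - investigate")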

Equifax may have failed to patch some Struts defect for six months, but that's not the worst of it. That they were vulnerable to such a defect in the first place smells like... well, like they didn't have enough Swiss cheese. That an employee tool was also accessible online and provided access to customer information with admin/admin credentials suggests real incompetence and recklessness at senior levels.

Adding insult to injury, blaming an open-source project (for the wrong defect!) - a project which heroically responded and addressed the real issue within 24 hours of it being identified six months earlier (!?) - makes Equifax look like an irresponsible child, always blaming someone else for its reckless actions.

They claim to be "an innovative global information solutions company". So innovative they're bleeding edge and giving their - no, our! - data away. I'm just not sure who's the bigger criminal... the hackers or Equifarce!
