Security Vulnerability Review

Secunia have released an interesting report on the number of vulnerabilities found across various products. There's much talk about MS versus non-MS products, which makes sense from a Windows perspective, but the report also contains a list of the "20 core products" with the most vulnerabilities identified in 2014.

Trouble is, the report doesn't specify where or how this "core" list was compiled - unlike the list of 50 portfolio products, which is based on data from the Secunia client running on Windows devices.

If it's the same Secunia client - and I suspect it is, since the vulnerability counts match for products appearing in both lists, where I'd expect some variance (Chrome on Windows v OSX, for example) - then it's rather odd that OSs such as Oracle Solaris and Gentoo Linux crop up... Hypervisors perhaps...

Another shocking thing for me was the number of IBM products in that top 20: 8 out of 20! Really? 40% of the products with the most vulnerabilities are from IBM? Of course, few people use these things (IBM Tivoli Composite Application Manager for Transactions, anyone?), so it's possible the stats are skewed by too little data, but it doesn't sing well for Big Blue... other than perhaps noting that they seem to find 'em and fix 'em.
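For what it's worth, the 40% figure is just the straight ratio of the two counts quoted above; a trivial sketch confirms it:

```python
# Share of IBM products in the report's top-20 "core products" list.
# Counts are as quoted in the post; nothing else is assumed.
ibm_products = 8
total_products = 20

ibm_share = ibm_products / total_products
print(f"IBM share of top 20: {ibm_share:.0%}")  # IBM share of top 20: 40%
```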

... and that Java was #17 on the list is more worrying. That beastie is everywhere and riddled with problems...
