
Power to the People

Yesterday I received my usual gas and electricity bill from my supplier with the not so usual increase to my monthly direct debit of a nice round 100%! 100% on top of what is already more than I care for... joy!

What followed was the all too familiar vent-spleen / spit-feathers etc. before the situation was resolved by a very nice customer services representative who had clearly seen this before... hmm...

So, as I do, I ponder darkly on how such a situation could have arisen. And as an IT guy, I ponder darkly about how said situation came about through IT (oh what a wicked web we weave)... Ok, so pure conjecture, but this lot have previous...

100%! What on earth convinced them to add 100%? Better still, what convinced them to add 100% when I was in fact in credit and they had just reimbursed me £20 as a result?...

Customer service rep: It's the computers you see sir.

Me: The computers?

CSR: Well because they reimbursed you they altered your direct-debit amount.

Me: Yeah, ok, so they work out that I'm paying too much and reduce my monthly which would kind of make sense but it's gone up! Up by 100%!

CSR: Well er yes. But I can fix that and change it back to what it was before...

Me: Yes please do!

CSR: Can I close the complaint now?

Me: Well, you can't do much else can you? But really, you need to speak to your IT guys because this is just idiotic...

(more was said but you get the gist).

So, theories for how this came about:

  1. Some clever-dick specified a requirement that if they refund some money then claw it back ASAP by increasing the monthly DD by 100%!..

  2. A convoluted matrix exists (virtually impossible to comprehend, a bit like their pricing structure) which details how and when to apply various degrees of adjustment to DD amounts, with near-infinite paths that cannot be proven via any currently known mathematics on the face of this good earth.

  3. A defect exists somewhere in the code.


If 1 or 2 then sack the idiots who came up with such a complete mess - probably the same lot who dream up energy pricing models so a "win-win" as they say!

If 3 then, well, shit happens. Bugs happen. Defects exist; god knows I've dug down to the root cause of many...

It's just that this isn't the first time, or the second, or the third...

(start wavy dreamy lines)

The last time, they threatened to take me to court because I supposedly wouldn't let their meter maid in, when in fact they'd already been and hadn't even tried again... and since they couldn't reconcile two different "computer systems" properly, it kept on bitching until it ratcheted up to that "sue the bastards" level. Nice.

(end wavy dreamy lines)

... and this is such a simple thing to test for. You've just got to pump in data for a bunch of test scenarios and look at the result - the same applies to that beastly matrix! ... or you've got to do code reviews and look for that "if increase < 1.0 then surely it's wrong and add 1.0 to make it right" or "increase by current/current + (1.0*current/current)".
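To be clear about the kind of test I mean: here's a throwaway sketch. The function name, the "spread credit over 12 months" rule and the figures are all invented (I obviously have no idea what their real code or rules look like), but a handful of table-driven scenarios like this would flag a "refund doubles the direct debit" defect long before it reached a customer:

    # Hypothetical sketch only - not the supplier's actual code or rules.
    # A few table-driven scenarios are enough to catch a "refund doubles the DD" defect.

    def adjust_direct_debit(current_dd: float, credit_balance: float) -> float:
        """Assumed rule: spread any credit over the next 12 months.

        The suspected defect would be something like `return current_dd * 2.0`
        whenever a refund has just been issued - exactly what the check below catches.
        """
        return max(current_dd - credit_balance / 12.0, 0.0)

    def test_dd_never_increases_when_in_credit():
        scenarios = [
            # (current monthly DD, credit balance after refund)
            (100.0, 20.0),   # in credit after a £20 refund
            (100.0, 0.0),    # exactly on track
            (100.0, 240.0),  # heavily in credit
        ]
        for current_dd, credit in scenarios:
            new_dd = adjust_direct_debit(current_dd, credit)
            assert new_dd <= current_dd, (
                f"DD rose from {current_dd} to {new_dd} while the account was in credit"
            )

    if __name__ == "__main__":
        test_dd_never_increases_when_in_credit()
        print("all scenarios pass")

Swap the doubling behaviour in and the assertion fails immediately - which is the whole point of pumping scenarios through and looking at the result.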

So: bad test practices and/or bad coding practices. Or, I could also accept, really bad architecture which can't be implemented, or really bad project management which skips anything resembling best practice, or really bad leadership which doesn't give a toss about the customer - and, regardless, really bad operations and maintenance, because they know they've got some pretty basic problems and yet they clearly can't get them fixed.

It all comes down to money, and with practices like this they'll be losing customers hand over fist (they do, apparently, have the worst customer satisfaction ratings).

They could look at agile, continuous integration, test automation, DevOps, TDD and BDD practices, as is all the rage, but they need to brush away the fairy dust that often accompanies these concepts (concepts I generally support, incidentally) and realise this does not mean you can abandon all sanity and give up on the basic principles of testing and coding!

If anything these concepts weigh even more heavily on such fundamentals - more detailed tracking of delivery progress and performance, pair-programming, reviewing test coverage, using tests to drive development, automating build and test as much as possible to improve consistency and quality, getting feedback from operational environments so you detect and resolve issues faster, continuous improvement and so on.
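To pick one of those out: "feedback from operational environments" can be as simple as monitoring the same invariant in live data. Another purely illustrative sketch (the field names and figures are made up, nothing to do with their actual system):

    # Purely illustrative - invented field names, not the supplier's system.
    # Monitors the same invariant in production: a DD should never rise while
    # the account is in credit, so any defect that slips past testing surfaces
    # in a nightly report rather than in a customer's bank statement.

    def flag_suspicious_adjustments(adjustments):
        """Yield the account id of any adjustment that raised the monthly DD
        while the account was in credit."""
        for adj in adjustments:
            in_credit = adj["credit_balance"] >= 0
            increased = adj["new_dd"] > adj["old_dd"]
            if in_credit and increased:
                yield adj["account_id"]

    # e.g. fed nightly from the billing system's adjustment log:
    sample = [
        {"account_id": "A123", "old_dd": 100.0, "new_dd": 200.0, "credit_balance": 20.0},
        {"account_id": "B456", "old_dd": 80.0, "new_dd": 90.0, "credit_balance": -150.0},
    ]
    print(list(flag_suspicious_adjustments(sample)))  # -> ['A123']

That way even a defect that ships gets detected and resolved from operational feedback, faster than an irate phone call.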

Computer systems are more complex and more depended on by society than ever before; they change at an ever-increasing rate, interact with other systems in a constantly changing environment, and are managed by disparate teams spread across the globe who come and go with the prevailing technological wind. Your customers and your business rely on them 100%: at best to get by, at worst to exist. You cannot afford not to do it right!

I'm sure the issues are more complex than this and there are probably some institutionalised problems preventing efficient resolution, more's the pity. But hey, off to search for a new energy provider...

Comments

  1. I can recommend a new energy supplier. Check out my LinkedIn page for details.....

