ERROR-BT_SPORT:VC040-NFR:0xFF-FAIL

BT Sport’s online player – which, being polite, is piss poor, with UX design seemingly provided by a six-year-old – is a fine example of how not to deal with errors in user interfaces. “User” being the key word here…

Rather than accepting that users are human beings in need of meaningful error messages – and perhaps some in-situ advice on what to do next – they insist on providing cryptic codes with no explanation, which you then need to google the meaning of (ok, I admit I ignored the FAQ!). This leads you to an appalling page, clearly knocked up by the six-year-old’s senior sibling on day one of code-club, littered with links you need to eyeball before you finally reach something useful telling you how to deal with the idiocy of BT’s design decisions.

In this particular case it’s the decision some halfwit architect made to require users to downgrade their security settings (e.g. VC002 or VC040)! So a national telecom provider and ISP is insisting that users weaken their online security settings just so they can access a half-arsed service?…

Half-arsed because when you do get it working it performs abysmally, with the video quality of a 1996 RealAudio stream over a 28.8 kbps connection. This is likely because some mindless exec has decided they don’t want people watching on their laptops; they’d rather people used the “oh-god-please-don’t-get-me-started-on-how-bad-this-device-is” BT Vision Box – I feel sick just thinking about it…

In non-functional terms:

  • Form – Fail
  • Performance – Fail
  • Security – Fail
  • Operability – Well, all I know is that it failed on day one of launch and I suspect it’s as solid as a house of cards behind the scenes. Let’s see what happens if the service falls over at Champions League final kick-off!

Success with non-functionals alone doesn’t guarantee success – you need a decent functional product and, let’s face it, Champions League football is pretty decent – but no matter how good the function, if it’s unusable, if it makes your eyes bleed, if it performs like a dog, if it’s insecure and if it’s not reliable, then you’re going to fail! It’s actually pretty impressive that BT have managed to fail on (almost) every count! Right, off now to watch something completely different not supplied by BT… oh, except they’re still my ISP because – quelle surprise – that bit actually works!

A little chit-chat

It’s been a while since I posted anything so I thought I’d write an article on the pros and cons of chatty interfaces…

When you’re hosting a sleep-over for the kids and struggling to get them all to sleep you’ll no doubt hear that plaintive cry of “I’m thirsty!”, usually followed in quick succession by 20 other “Me too!”s…

What you don’t do is walk down the stairs, fetch a glass from the cupboard, fill it with milk from the fridge and take it back upstairs… then repeat 20 more times – you may achieve your steps-goal for the day but it’ll take far too long. Instead you collect all the orders, go downstairs once, fill 21 glasses and take them all back upstairs on a tray (or more likely just give them the bottle and let them fight it out).

And so it is with a client-server interface. If you’ve a collection of objects on a client (say customer records) and you want to get some more info on each one (such as the address), you’d be better off asking the server the question “can you give me the addresses for customers x, y and z?” in one go rather than doing it three times. The repeated latency, transit time and processing by the server may appear fine for a few records but will very quickly deteriorate as the number of records rises. And if you’ve many clients doing this sort of thing you can get contention for resources server-side, which breaks things for all users – even those not having a wee natter…
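By way of illustration, here’s a minimal sketch of the two approaches – the endpoint paths and the Address shape are invented for the example:

```typescript
// Hypothetical API client – endpoint names and types are illustrative only.
interface Address {
  customerId: string;
  line1: string;
  postcode: string;
}

// Chatty: one round-trip per customer – latency and server processing paid N times.
async function getAddressesChatty(ids: string[]): Promise<Address[]> {
  const results: Address[] = [];
  for (const id of ids) {
    const res = await fetch(`/api/customers/${id}/address`);
    results.push(await res.json());
  }
  return results;
}

// Batched: one round-trip for the lot – latency paid once.
async function getAddressesBatched(ids: string[]): Promise<Address[]> {
  const res = await fetch(`/api/customers/addresses?ids=${ids.join(",")}`);
  return res.json();
}
```

The chatty version pays the round-trip cost once per customer; the batched version pays it once, full stop.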

Is chatty always bad?

It comes down to the atomicity of requests, responsibility and feedback.

Atomicity…

If it’s meaningful to ask a question like “can you give me the address for customer x, account-balance for customer y and shoe-size for customer z?” then by all means do so, but most likely that’s really three separate questions and you’ll get better reuse and maintainability if you split them out. Define services to address meaningful questions at the appropriate granularity – what is “appropriate” is for you to work out…
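A rough sketch of what that split might look like – the service paths are made up for illustration:

```typescript
// One mixed-up question – hard to reuse or maintain (paths are placeholders).
async function getCustomerOddments(x: string, y: string, z: string) {
  const res = await fetch(`/api/oddments?address=${x}&balance=${y}&shoeSize=${z}`);
  return res.json();
}

// Three meaningful questions at the right granularity – each reusable on its own.
const getAddress  = (id: string) => fetch(`/api/customers/${id}/address`).then(r => r.json());
const getBalance  = (id: string) => fetch(`/api/accounts/${id}/balance`).then(r => r.json());
const getShoeSize = (id: string) => fetch(`/api/customers/${id}/shoe-size`).then(r => r.json());
```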

Responsibility…

You could break the requests out server-side and have a single request which collects lots of data and returns it all at once rather than having lots of small requests go back-and-forth. This model is common for web pages which contain data from multiple sources – one request for the page, the server collects data from the sources and combines it in one response back to the user. Problem is, if any one of the sources is performing badly (or is down) the whole page suffers and your server chokes in sympathy as part of a cascading failure. So is it the responsibility of the server to collate the data and provide a unified view (often it is) or can that responsibility be pushed to the client (equally often it can be)? If it can be pushed to the client then you can actually improve performance and reliability through more chatty round-trips. Be lazy, do-one-thing-well, and delegate responsibility as much as possible…
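Here’s a hedged sketch of pushing that responsibility to the client – the panel ids and source URLs are placeholders. Each fragment succeeds or fails on its own, so one misbehaving source degrades its own corner of the page rather than the whole thing:

```typescript
// Client-side fan-out – each source is fetched independently (URLs are placeholders).
// A slow or dead source degrades its own panel rather than the whole page.
async function loadPanel(elementId: string, url: string): Promise<void> {
  const panel = document.getElementById(elementId);
  if (!panel) return;
  try {
    const res = await fetch(url);
    panel.textContent = await res.text();
  } catch {
    panel.textContent = "Sorry, this bit is having a lie down.";
  }
}

// Kick all the panels off in parallel; no single source can block the others.
Promise.all([
  loadPanel("news", "/api/news"),
  loadPanel("weather", "/api/weather"),
  loadPanel("sport", "/api/sport"),
]);
```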

Feedback…

Your end-user is a grumpy and impatient piece of wetware, incapable of acting repetitiously, who seeks instant gratification with minimal effort and is liable to throw the tantrums of a two-year-old when asked to “Please wait…”. God help us all if you send them an actual error… To improve usability you can trigger little round-trip requests to validate data asynchronously as they go and keep the user informed. This feedback is often enough to keep the user happy – all children like attention – and avoids wasted effort. Filling in a 20-page form only to have it rejected because of some cryptic checkbox on page 3 isn’t going to win you any friends. And yet it is still oh so common… I generally work server-side and it’s important to remember why we’re here – computers exist for the benefit of mankind. User interface development is so much more complicated than server-side and yet simplicity for the user has to be the goal.
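Something like the following sketch – the validation endpoint and field names are invented for the example:

```typescript
// Asynchronous field validation – a hedged sketch; the endpoint is made up.
// Validate as the user goes so they aren't told on page 20 about a mistake on page 3.
async function validateField(input: HTMLInputElement, hint: HTMLElement): Promise<void> {
  const res = await fetch(`/api/validate/${input.name}?value=${encodeURIComponent(input.value)}`);
  const { valid, message } = await res.json();
  hint.textContent = valid ? "✓" : message; // small, frequent feedback keeps the child quiet
}

const postcode = document.querySelector<HTMLInputElement>("#postcode")!;
const postcodeHint = document.getElementById("postcode-hint")!;
postcode.addEventListener("blur", () => validateField(postcode, postcodeHint));
```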

So chatty is generally bad for performance and reliability, though it can be good for UX and, depending on where the responsibility sits, can improve things overall.

However, as we move towards a microservices-based environment the chatter is going to get louder and more disruptive. Understanding the nature and volumetrics of the questions that are asked of services, and ensuring they are designed to address these at the correct granularity, will help keep the chatter down whilst making the code more maintainable and reusable. As always, it’s never black-and-white…

Heavy Handed?

Is it really heavy-handed to give users a slightly second-rate experience because they use an out-of-date browser?

Methinks not really… effort spent should be proportional to the size of the user base.

Just a pity they didn’t go further and send any user of IE off to the 1999 edition and throttle their download to the 28 kbps they deserve… 80% of the effort for 20% of the users.

A slight case of overbombing

Darcula + Flux @ midnight is not a productive combination!

Seemed like a good idea at the time – Darcula, easy on the eye, good for long term work at the screen; Flux, nicely adjusts screen temperature to suit the time of day and help you get to sleep…

The two together? Not good. Feeling tired, can’t read the screen properly but the drive keeps you pressing on… Grrrrr.

Computation, Data and Connections

My ageing brain sees things in what feels like an overly simplistic and reductionist way. Not exactly the truth so much as a simplified version of reality which I can understand well enough to allow me to function (to a degree).

So in computing I see everything as being composed of three fundamental (overly simplified) elements: computation, data and connections.

Computation refers to those components which do stuff, essentially acting on data to transform it from one form to another. This may be as simple as taking data from a database and returning it as a web page or as complex as processing video data to identify and track movements of individuals.

Data is the information itself, which critically has no capability to do anything. Like Newton’s first law, data at rest will remain at rest until an external computation acts upon it. Data may be in a database (SQL or NoSQL), file-system, portable USB drive, memory, etc. (memory is really just fast temporary data storage).

And connectivity hooks things together. Without some means of connectivity, computation can’t get a handle on the data and nothing much happens regardless. This may be a physical Ethernet cable between devices, a logical database driver, a file-system handle, etc.

So in every solution I can define each component (physical or logical) as fulfilling one of these three core functions.

The next step from a non-functional perspective is to consider a matrix of these elements and my four classifications for non-functional requirements (security, performance & capacity, operability and form) to discuss the key considerations that are required for each.

For example, backup and recovery is primarily a consideration for data. Computational elements hold no data as such (log files, for example, would be data output from compute components) and so a different strategy can be adopted to recover these (e.g. redundant/resilient instances, configuration management, automated-build/dev-ops capabilities, etc.). Likewise, connectivity recovery tends to be fairly manual since connections can traverse many components, any one of which could fail, so more effort is required for PD (problem determination) to identify the root cause. A key requirement to aid this is a full understanding of the connections between components and active monitoring of each so you can pinpoint problems more easily – unfortunately it’s naive to think you understand all the connections between components in your systems.

The matrix thus looks something like this:

|                        | Computation                                             | Data                          | Connections                     |
|------------------------|---------------------------------------------------------|-------------------------------|---------------------------------|
| Security               | Computation v Security                                  |                               |                                 |
| Performance & Capacity |                                                         |                               |                                 |
| Operability            | Redundancy, configuration management, build automation. | Backup, recovery, resiliency. | Connectivity model, monitoring. |
| Form                   |                                                         |                               |                                 |

Now I just need to fill the rest of it in so when I work on a new solution I can classify each component and refer to the matrix to identify non-functional aspects to consider. At least that’s the theory…

Data Currency and Exploding Bunnies

There is such a thing as data currency – i.e. how current and up to date the data is. On the web, stale data is a social disease which deservingly leads to isolation and the irritating and distant tut-tutting that goes with it – much like zits on a greasy teenager. We’re all culpable (me in particular, given the last time I updated the mighty stellarmap.com) but I expect more from The Guardian. So… tut-tut to The Guardian for pressing UK Gov data from 2010 on their Data home-page today. There was me thinking I’d happen across some interesting nuggets, only to find old, stale and consequently misleading data.

Of course I’m not helping by providing a bunch of links to stale content myself but it’s time more attention was paid to data currency by net publishers. Perhaps then this wouldn’t be such a big problem.

Screen grab below for the hell of it…

[Screen grab: The Guardian Data home-page]

From a non-functional perspective, an important criterion for data is how current it needs to be. Various caching strategies may, or may not, be viable depending on how critical this is. If you’re buying stock then you want the correct price now; if you’re browsing the news then perhaps today is sufficient. This affects how deep into your system transactional tentacles reach and the resources that need to be committed to address this. However, the issue on The Guardian site likely relates instead to the algorithms that promote data, and how these are either insensitive to the element of time, assume the data is of low velocity (which it isn’t in this case), or are sensitive – and who wouldn’t be – to the internet-scale viral effect causing excessive temporary popularity. Perhaps content providers need to start saying, “ok, we know it’s a funny video of a cat stuck in a washing machine but that was so 2005 and welcome to 2015, so here’s a fully interactive 3d experience of a bunny playing with grenades instead”.
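As a rough sketch of that trade-off – the cache class and TTL values below are purely illustrative:

```typescript
// Hypothetical cache with a time-to-live chosen per data type (values are illustrative).
// Stock prices need to be current; news headlines can tolerate being a little stale.
interface CacheEntry<T> {
  value: T;
  expiresAt: number;
}

class TtlCache<T> {
  private entries = new Map<string, CacheEntry<T>>();

  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry || Date.now() > entry.expiresAt) return undefined; // stale data is a social disease
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

const stockPrices = new TtlCache<number>(1_000);          // seconds at most
const newsHeadlines = new TtlCache<string>(60 * 60_000);  // an hour is probably fine
```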

Excremental Form

We often think we know what good design is – whether it be system, code or graphic design – and it’s a good thing that we strive for perfection.

Perfection, though, is subjective, comes at a cost and is ultimately unachievable. We must embrace the kludges, hacks, work-arounds and other compromises and, like the Greek idiom “whoever is not Greek is barbarian”, we should be damn proud of being that little bit barbaric even if we continue to admire the Greeks.

The question is not whether the design is good but whether the compromises are justified, sound and fit for purpose. Even shit can have good and bad form.

Google Maps Usability

Is it me or is the recently revamped Google Maps not as usable as the old one? They’ve maxed out the map – which is nice – but the menu items are now so obscure you’d not know where they are without flipping endlessly around the map (it’s worse on the Android version IMHO).

For example, I search for a location, find it, right-click, select “directions to here” and it provides directions from that location to itself!? Well ok, I can kind of understand why, but it’s obviously not what I meant… So I right-click on home and it says “What’s here?”, not “directions from here”. To do that I need to type (wtf!) my start location into the drop-down on the left of the screen. Not unusable, just irritating…

Plus the search menu overlaps the map, which can be irritating on smaller screens, and it seems to slide up and down at will as you drag around the map – again, irritating… (but still pretty much the best map service out there…).

IE AppContainers and LocalStorage

IE’s EPM (Enhanced Protected Mode) provides separate containers for web storage between desktop and Metro mode when using the Internet Zone. There’s a page which discusses the detail but never really states why it behaves like this. It seems to me that this is unnecessarily complex and will lead to user confusion and angst – “why does switching to desktop mode lose my session/cookies/storage?” or, more simply, “why do I have to login again?”. It’s also arguably a security risk: users will have multiple sessions/cookies active, so they could inadvertently leave themselves logged in, or end up with duplicate transactions because items get placed in the basket in separate containers. It would be less of a concern if users couldn’t easily switch, but of course they can because MS has kindly put a menu item on the Metro page to “View in the Desktop”!?

It all seems to be related to providing enterprise users with the ability to maintain and configure a setup that gives greater access/functionality to intranet sites than you would want for untrusted Internet sites (enabling various plugins and the like).

To a degree, fair enough, but it’s mostly a result of intranet sites adopting features that weren’t standardised or hardened sufficiently in the first place (ActiveX, Java, etc.). These need to be got rid of, though this will cost companies dearly – replacing existing functionality with something else that offers no significant added value to the business, bar adherence to standards and security compliance, is a hard sell.

So MS is, from one viewpoint, forced into this approach. The problem is it just adds more weight to my view that MS is so dependent on the enterprise customer, and on supporting the legacy of cruft they (MS & corporate intranets) have spawned over so many years, that MS are no longer able to provide a clean, consistent and usable system (some would say they never were…).

Violation of rule #1 – Keep it Simple!