Come back Lotus Notes, all is forgiven!

** Warning: Functional rant! **

I’ve spent a significant part of the past few years swearing at Lotus Notes. So much so, in fact, that this post may be slightly biased, as I struggle on a daily basis with the absence of my old adversary. Oh, how I would prefer to face my old foe each morning…

Instead I face the miserable and shallow facade of functionality that is Microsoft Outlook and Lync. Yuck!

Meetings are regularly missed as the pathetic reminder slides into view in the bottom right corner, cunningly and silently disguising itself behind another window (it must have been designed for the NSA!). Scheduling a meeting is nigh on an impossibility; if you don’t get the time correct first time, you start all over again. Viewing participants in a meeting? Clearly, why would anyone want to… Delegations are poorly communicated, attachments are obscured next to the subject… And so on…

I admit I don’t know Outlook very well and am a long-term Notes user. Hands up! However, it seems none of my new colleagues, who have been suffering this Barbie-esque piece of plastic crap for some time, enjoy it either. Many of them have abandoned the thick (dumb) client in favour of the less abrasive web-based version, and next to no-one uses Lync – it’s like shouting into the void! There’s nobody here… Partly because it spends most of its time trying to connect, partly because it’s just shit! So shit I can feel the pulse in my neck swelling at the mere thought of it. Sametime, on the other hand, was the backbone of instant communication and critical to daily life. Now I have to walk across the office floor… damn it, it’s not humane, it’s barbaric, it’s just so 1995!

I was looking forward to not facing Notes in the morning; I now realise the alternative is worse. That is, except for teamrooms; no teamrooms in Outlook, yay! For those we’ve moved to wikis, which are much, much better (and no, it’s not SharePoint thankfully, it’s Confluence).

You’d have thought the dominance that Outlook and Lync enjoy would be justified to some degree; it’s not. At the end of the day they’re just another email client and another instant-messaging client, and not particularly good ones at that. You can pick whichever one you’re familiar with, but that’s about it as a differentiator. Personally, I think I’ll switch to Mutt and IRC.

A slight case of overbombing

Darcula + f.lux @ midnight is not a productive combination!

It seemed like a good idea at the time – Darcula: easy on the eye, good for long-term work at the screen; f.lux: nicely adjusts the screen temperature to suit the time of day and helps you get to sleep…

The two together? Not good. You’re feeling tired and can’t read the screen properly, but the drive keeps you pressing on… Grrrrr.

eDNA – The next step in the obliteration of privacy online?

How your electronic DNA could be the secure login of the future (but let’s hope not).

More big-brother stuff over at The Guardian with eDNA (Electronically Defined Natural Attributes). I suspect the NoMoreCaptchas product isn’t terribly strong, as it feels like it could easily be subverted by introducing more natural delays into bots, but it should help slow them down, which can often make it not worth the bother.

More worrying, though, is that it’s all too easy for a website to capture this information without you knowing. Many sites already capture key presses, mouse movements, hover-overs etc., some for legitimate functional use, but many just to help the marketing guys spy on you. What if, the next time you google something, they can tell whether you’re drunk, stoned, have just had sex or are just plain tired… Do you really think they wouldn’t want to use that information to push more targeted ads at you?

Drunk => Show ads for porn sites.

Stoned => Show ads for local pizza companies.

Had sex => A combination of baby clothes, pharmacies and the most direct escape route… And of course Facebook would just auto-post “Jamie just shagged Sally” in some pseudo-scientific experiment on social behaviour.

As I said before, if you want your privacy back then lie! To subvert eDNA we’ll need something to inject noise between fingers and keyboard. Joy.
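As a rough illustration of what “injecting noise between fingers and keyboard” might look like, here’s a minimal Python sketch. It assumes eDNA-style profiling keys off inter-keystroke timings, and simply jitters those delays; the function name and parameters are my own invention, not any real tool.

```python
import random

def add_typing_noise(delays_ms, jitter_sd=40, min_delay=20):
    """Add Gaussian jitter to inter-keystroke delays (milliseconds) so that
    timing-based keystroke profiling sees a noisier, less personal signal.
    Delays are clamped to min_delay so the result still looks like typing."""
    noisy = []
    for d in delays_ms:
        jittered = d + random.gauss(0, jitter_sd)
        noisy.append(max(min_delay, jittered))
    return noisy

# A typist's natural rhythm, then the disguised version.
natural = [120, 95, 210, 80, 150]
disguised = add_typing_noise(natural)
```

In practice this would need to sit between the keyboard driver and the browser (a userspace input shim, say), which is exactly the “joy” part.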

Computation, Data and Connections

My ageing brain sees things in what feels like an overly simplistic and reductionist way. Not exactly the truth so much as a simplified version of reality which I can understand well enough to allow me to function (to a degree).

So in computing I see everything as being composed of three fundamental (overly simplified) elements: computation, data and connections.

Computation refers to those components which do stuff, essentially acting on data to transform it from one form to another. This may be as simple as taking data from a database and returning it as a web page or as complex as processing video data to identify and track movements of individuals.

Data is the information itself, which critically has no capability to do anything. Like Newton’s first law, data at rest will remain at rest until an external computation acts upon it. Data may be in a database (SQL or NoSQL), a file-system, a portable USB drive, memory etc. (memory is really just fast temporary data storage).

And connectivity hooks things together. Without some means of connectivity, computation can’t get a handle on the data and nothing much happens regardless. This may be a physical ethernet cable between devices, a logical database driver, a file-system handle etc.

So in every solution I can define each component (physical or logical) as fulfilling one of these three core functions.

The next step from a non-functional perspective is to consider a matrix of these elements and my four classifications for non-functional requirements (security, performance & capacity, operability and form) to discuss the key considerations that are required for each.

For example, backup and recovery is primarily a consideration for data. Computational elements have no data as such (log files, for example, are data output from compute components), so a different strategy can be adopted to recover these (e.g. redundant/resilient instances, configuration management, automated-build/dev-ops capabilities etc.). Likewise, connectivity recovery tends to be fairly manual: connections can traverse many components, any one of which could fail, so more effort is required for PD (problem determination) to identify root cause. A key requirement to aid this is a full understanding of the connections between components, and active monitoring of each, so you can pinpoint problems more easily – unfortunately it’s naive to think you understand all the connections between components in your systems.

The matrix thus looks something like this:

|                        | Computation                                             | Data                          | Connections                     |
| Security               | Computation v Security                                  |                               |                                 |
| Performance & Capacity |                                                         |                               |                                 |
| Operability            | Redundancy, configuration management, build automation. | Backup, recovery, resiliency. | Connectivity model, monitoring. |
| Form                   |                                                         |                               |                                 |


Now I just need to fill the rest of it in so when I work on a new solution I can classify each component and refer to the matrix to identify non-functional aspects to consider. At least that’s the theory…
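The classify-then-look-up step could be sketched as a pair of dictionaries. This is a hypothetical illustration of the idea, not a real tool; the component names and the wording of the cells (taken from the operability row above) are just examples.

```python
# Non-functional considerations keyed by (NFR classification, element type).
# Only the operability row is filled in so far; the rest is TBD.
NFR_MATRIX = {
    ("operability", "computation"): "redundancy, configuration management, build automation",
    ("operability", "data"): "backup, recovery, resiliency",
    ("operability", "connections"): "connectivity model, monitoring",
}

# Each solution component is classified as one of the three core elements.
COMPONENTS = {
    "web server": "computation",
    "database": "data",
    "db driver": "connections",
}

def considerations(component, nfr="operability"):
    """Classify a component, then look up the considerations for a given NFR."""
    kind = COMPONENTS[component]
    return NFR_MATRIX.get((nfr, kind), "TBD")
```

So `considerations("database")` returns the backup/recovery/resiliency entry, while anything in an unfilled row comes back as “TBD” – which is an honest summary of where the matrix stands.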

Interview Tales I – The Bathtub Curve

I’ve been to a few interviews recently, most of which have been bizarrely enjoyable affairs where I’ve had the opportunity to discover much about how things work elsewhere. However, I recently went to an interview with an organisation which has suffered some pretty high-profile system failures recently, which I timidly pointed out, hoping not to offend. The response was, in my view, both arrogant and ignorant – perhaps I did offend…

I was informed, rather snootily, that this incident was a one-off, having occurred just once in the 15+ years of the interviewer’s experience, and couldn’t happen again. Hmm… I raised the point having worked on a number of technology-refresh projects and being familiar with the Bathtub Curve (shown below – image courtesy of the Engineering Statistics Handbook).

[Figure: the Bathtub Curve]

What this shows is how, during the early life of a system, failures are common (new build, many defects etc.). Things then settle down to a fairly stable, but persistent, level of failures for the main life of the system, before things start to wear out and the number of incidents increases again – ultimately becoming catastrophic.

This is kind of obvious for mechanical devices (cars and the like) but perhaps not so much for software. I still have an old ’80s book on software engineering which states that “software doesn’t decay!”. However, as pointed out previously, software is subject to change from a variety of sources; change brings decay, and decay increases failure rates. The Bathtub Curve applies.
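For the curious, the curve itself is commonly modelled as the sum of three hazard rates: a decreasing one (early-life defects), a constant one (random failures) and an increasing one (wear-out). Here’s a small sketch using Weibull hazards; the parameter values are purely illustrative.

```python
def bathtub_hazard(t, infant=(0.5, 1.0), wearout=(5.0, 10.0), base=0.02):
    """Failure rate at time t (t > 0) as the sum of:
    - a decreasing Weibull hazard (shape < 1): early-life defects,
    - a constant baseline hazard: random failures,
    - an increasing Weibull hazard (shape > 1): wear-out.
    Weibull hazard: h(t) = (k / lam) * (t / lam) ** (k - 1)."""
    k1, lam1 = infant
    k2, lam2 = wearout
    early = (k1 / lam1) * (t / lam1) ** (k1 - 1)
    late = (k2 / lam2) * (t / lam2) ** (k2 - 1)
    return early + base + late
```

Plot this over time and you get the characteristic U shape: high at the start, flat through mid-life, climbing again towards end of life.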

Now the reason I mentioned the failure in the first place was because the press I had read pointed towards a combination of ageing systems and complex integration solutions holding things together. I was therefore expecting an answer along the lines of “yes, we need to work to make sure it doesn’t happen again” and “that’s why we’re hiring because we need to address these issues”. This could then lead on and I could relate my experiences on refresh projects, hurrah!… It didn’t work out like that even though it did seem that the raison d’être behind the role itself was precisely because they didn’t have a good enough grip on the existing overall IT environment.

It’s entirely possible that the interviewer is correct (or gets lucky). However, given there have actually been a couple of such incidents at the same organisation recently – two individually unique issues, of course – I’m kind of suspicious that what they’re actually seeing is the start of the ramp-up in incidents typified by the Bathtub Curve. Time will tell.

I wasn’t offered the job, but then again, I didn’t want it either so I think we’re happy going our separate ways.

Data Currency and Exploding Bunnies

There is such a thing as data currency – i.e. how current and up to date the data is. On the web, stale data is a social disease which deservingly leads to isolation and the irritating and distant tut-tut‘ing that goes with it – much like zits on a greasy teenager. We’re all culpable (me in particular, given the last time I updated the mighty stellarmap.com), but I expect more from The Guardian. So… tut-tut to The Guardian for pushing UK Gov data from 2010 on their Data home-page today. There was me thinking I’d happen across some interesting nuggets, only to find old, stale and consequently misleading data.

Of course I’m not helping by providing a bunch of links to stale content myself, but it’s time more attention was paid to data currency by net publishers. Perhaps then this wouldn’t be such a big problem.

Screen grab below for the hell of it…

[Screen grab: The Guardian Data home-page]


From a non-functional perspective, an important criterion for data is how current it needs to be. Various caching strategies may, or may not, be viable depending on how critical this is. If you’re buying stock then you want the correct price now; if you’re browsing the news then perhaps today is sufficient. This affects how deep into your system transactional tentacles reach, and the resources that need to be committed to address this. However, the issue on The Guardian site likely relates instead to the algorithms that promote data, and how these are either insensitive to the element of time, assume data of low velocity (which this isn’t), or are sensitive (and who wouldn’t be?) to the internet-scale viral effect causing excessive temporary popularity. Perhaps content providers need to start saying, “OK, we know it’s a funny video of a cat stuck in a washing machine, but that was so 2005 and welcome to 2015, so here’s a fully interactive 3D experience of a bunny playing with grenades instead”.
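The caching trade-off above is essentially a time-to-live decision: the currency requirement becomes the maximum age you’re willing to serve. A minimal sketch, assuming nothing beyond the Python standard library (the class and its API are my own illustration):

```python
import time

class TTLCache:
    """Minimal time-to-live cache: how 'current' the data needs to be
    becomes the max_age you are willing to serve before forcing a
    fresh fetch upstream. The clock is injectable for testing."""

    def __init__(self, max_age_seconds, clock=time.monotonic):
        self.max_age = max_age_seconds
        self.clock = clock
        self._store = {}  # key -> (value, stored_at)

    def put(self, key, value):
        self._store[key] = (value, self.clock())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if self.clock() - stored_at > self.max_age:
            del self._store[key]  # stale: caller must fetch afresh
            return None
        return value
```

A stock price would want `max_age` near zero (or no cache at all, with the transaction reaching right down to the source), while a news home-page might tolerate minutes – which is precisely the knob that decides how deep the transactional tentacles reach.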