Excremental Form

We often think we know what good design is, whether it be system, code or graphic design, and it’s a good thing that we strive for perfection.

Perfection, though, is subjective, comes at a cost and is ultimately unachievable. We must embrace the kludges, hacks, work-arounds and other compromises, and, like the Greek idiom “whoever is not Greek is a barbarian”, we should be damn proud of being that little bit barbaric even if we continue to admire the Greeks.

The question is not whether the design is good but whether the compromises are justified, sound and fit for purpose. Even shit can have good and bad form.

Scaling on a budget

Pre-cloud era. You have a decision to make. Do you define your capacity and performance requirements in the belief that you’ll build the next top 1000 web-site in the world or start out with the view that you’ll likely build a dud which will be lucky to get more than a handful of visits each day?

If the former then you’ll need to build your own data-centres (redundant globally distributed data-centres). If the latter then you may as well climb into your grave before you start. But most likely you’ll go for something in the middle, or rather at the lower end, something which you can afford.

The problem comes when your site becomes popular. Worse still, when that popularity is temporary. In most cases you’ll suffer something like the Slashdot effect for a day or so, which will knock you out temporarily but could trash your image permanently. If you started at the higher end then your problems have probably already become terminal (at least financially).

It’s a dilemma that every new web-site needs to address.

Post-cloud era. You have a choice – IaaS or PaaS? If you go with infrastructure then you can possibly scale out horizontally by adding more servers when needed. This though is relatively slow to provision* since you need to spin up a new server, install your applications and components, add it to the cluster, configure load-balancing, DNS resiliency and so on. Vertical scaling may be quicker but provides limited additional headroom. And this assumes you designed the application to scale in the first place – if you didn’t, the chances you’ll get lucky are maybe 1 in 10. On the up side, the IaaS solution gives you the flexibility to do-your-own-thing, and there’s a good chance your existing legacy applications can be made to run in the cloud this way (everything is relative of course).
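
To make the point concrete, here’s a sketch of that scale-out sequence in Python. The provisioning calls are entirely hypothetical stubs (real cloud APIs differ); what matters is the number of sequential steps involved.

    import time

    def provision_server(image: str) -> str:
        # Stand-in for a hypothetical cloud provisioning call.
        print(f"provisioning new server from image '{image}'...")
        time.sleep(1)  # in reality this step alone can take minutes
        return "server-3"

    def install_stack(server_id: str) -> None:
        # Install the application and its components on the new node.
        print(f"installing application stack on {server_id}")

    def join_cluster(server_id: str, cluster: list) -> None:
        # Add the node to the cluster and reconfigure load-balancing.
        cluster.append(server_id)
        print(f"{server_id} joined; load-balancer now fronts {len(cluster)} nodes")

    cluster = ["server-1", "server-2"]
    node = provision_server("my-app-image")
    install_stack(node)
    join_cluster(node, cluster)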

If you go with PaaS then you’re leveraging (in theory) a platform which has been designed to scale but which constrains your solution design in return. There’s little chance your existing applications will run off-the-shelf (actually, no chance at all really), though if you’re lucky some of your libraries may (may!) work depending on compatibility (Google App Engine for Java, Microsoft Azure for .NET, for example). The transition is more painful with PaaS, but where you gain is in highly elastic scalability at low cost because it’s designed into the framework.

IaaS is great (this site runs on it): it’s flexible with minimal constraints, low in cost, and can be provisioned quickly (compared to the pre-cloud world).

PaaS provides a more limited set of capabilities at a low price point and constrains how applications can be built so that they scale and co-host with other users’ applications (introducing multi-tenancy issues).

A mix of these options probably provides the best solution overall depending on individual component requirements and other NFRs (security for example).

Anyway, it traverses the rat’s maze of my mind today due to its relevance in the news… Many Government web-sites have pitiful visitor numbers until they get slashdotted or are placed at #1 on the BBC website – something which happens quite regularly, though most of the time the sites get very little traffic – the load is peaky. Today’s victim is the Get Safe Online site, which collapsed under load – probably as a result of the BBC advertising it. For such sites perhaps PaaS is the way forward.

* I can’t really believe I’m calling IaaS “slow” given provisioning can be measured in minutes and hours when previously you’d be talking days, weeks and likely months…

Linux! Champion of Big Data

Big data solutions based on distributed databases such as MongoDB (and Hadoop and others) rely on having very many nodes running in parallel to provide resiliency, performance and scalability.

This is a step up from the “2-node cluster” model (primary & failover) used for many legacy SQL installations. Such a cluster is simply not big enough to support resiliency with the sort of distributed database model NoSQL solutions provide (even if it could scale). For example, you’ll need a minimum of three nodes just to allow the election of a primary to work in a replicated cluster, and more than that for sharding with MongoDB.
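
For illustration, here’s a minimal sketch of initiating that three-node replica set using pymongo – the host names and set name are assumptions of mine, not anything prescribed:

    from pymongo import MongoClient

    # Connect to the node that will seed the replica set (hosts are illustrative;
    # newer pymongo versions may need directConnection=True for an uninitialised member).
    client = MongoClient("node1.example.com", 27017)

    config = {
        "_id": "rs0",
        "members": [
            {"_id": 0, "host": "node1.example.com:27017"},
            {"_id": 1, "host": "node2.example.com:27017"},
            # The third member is what makes a majority election possible.
            {"_id": 2, "host": "node3.example.com:27017"},
        ],
    }

    # replSetInitiate is the server command behind the rs.initiate() shell helper.
    client.admin.command("replSetInitiate", config)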

Of course there’s a reason why you’ve chosen a NoSQL solution in the first place – scale – and the choice of horizontal over vertical scaling at these sizes makes sense. This is all good news for Linux, since an increase in the number of nodes has costs associated with it which will likely dictate that Linux becomes the OS of choice for such solutions instead of Windows or other UNIX OSs. Commodity hardware will likely be the same whichever OS you run (bar the proprietary UNIXes), so the differentiator will be the OS (on price at least).
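
The arithmetic is crude but telling. A sketch with made-up per-node figures – substitute real quotes before drawing any conclusions:

    nodes = 50

    # Assumed per-node costs -- illustrative only, not real pricing.
    windows_license_per_node = 800.0
    linux_per_node = 0.0  # community support; a paid subscription would raise this

    print(f"Windows estate: {nodes * windows_license_per_node:,.0f}")
    print(f"Linux estate:   {nodes * linux_per_node:,.0f}")
    # On a 2-node cluster the difference is noise; across 50 nodes
    # it's a budget line all of its own.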

Of course, if your volumes are low then you can always stick with a SQL database – tried, tested, actually pretty damn good and well suited to most problems out there. In many cases SQL should be the default; choose NoSQL only if capacity requirements force you to…

Entropy – Part 2

A week or so ago I wrote a piece on entropy and how IT systems have a tendency for disorder to increase in a similar manner to the second law of thermodynamics. This article aims to identify what we can do about it…

It would be nice if there were some silver bullet, but the fact of the matter is that, like the second law, the only real way to minimise disorder is to put some work in.

1. Housekeeping

As the debris of life slowly turns your pristine home into something more akin to the local dump, so the daily churn of changes gradually slows and destabilises your previously spotless new IT system. The solution in both cases is to crack on with the weekly chore of housekeeping (or possibly daily if you’ve kids, cats, dogs etc.). It’s often overlooked and forgotten, but a lack of housekeeping is frequently the cause of unnecessary outages.

Keeping logs clean and cycling them on a regular basis (e.g. hoovering), monitoring disk usage (e.g. checking you’ve enough milk), cleaning up temporary files (e.g. discarding those out-of-date tins of sardines), refactoring code (e.g. a spring clean) etc. is not difficult and there’s little excuse for not doing it. Reviewing the content of logs and gathering metrics on usage and performance can also help you anticipate how frequently housekeeping is required to ensure smooth running of the system (e.g. you could measure the amount of fluff hoovered up each week and use this as the basis to decide which days and how frequently the hoovering needs doing – good luck with that one!). This can also lead to additional work to introduce archiving capabilities (e.g. self-storage) or purging of redundant data (e.g. taking the rubbish down the dump). But like your home, a little housekeeping done frequently is less effort (cost) than waiting till you can’t get into the house because the door’s jammed and the men in white suits and masks are threatening to come in and burn everything.
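
A minimal housekeeping sketch in Python – the temp directory, retention period and disk threshold are assumptions of mine, but this is the hoovering and the sardine tins in code:

    import shutil
    import time
    from pathlib import Path

    TEMP_DIR = Path("/var/tmp/myapp")  # assumed application scratch area
    RETENTION_DAYS = 7                 # assumed retention policy
    DISK_ALERT = 0.90                  # warn when the disk is 90% full

    def purge_old_files(directory: Path, days: int) -> None:
        # Discard the out-of-date tins of sardines.
        cutoff = time.time() - days * 86400
        for f in directory.rglob("*"):
            if f.is_file() and f.stat().st_mtime < cutoff:
                f.unlink()

    def check_disk(path: str = "/") -> None:
        # Check you've enough milk.
        usage = shutil.disk_usage(path)
        if usage.used / usage.total > DISK_ALERT:
            print(f"WARNING: {path} is more than {DISK_ALERT:.0%} full")

    if TEMP_DIR.exists():
        purge_old_files(TEMP_DIR, RETENTION_DAYS)
    check_disk()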

2. Standards Compliance

By following industry standards you stand a significantly better chance of being able to patch/upgrade/enhance without pain in the future than if you decide to do your own thing.

That should be enough said on the matter, but the number of times I see teams misusing APIs or writing their own solutions to what are common problems is frankly staggering. We all (and me especially) like to build our own palaces. Unfortunately we lack sufficient exposure to the space of a problem to be able to produce designs which combine elegance with the flexibility to address the full range of use cases, or the authority and foresight to predict the future and influence it in a meaningful way. In short, standards are generally thought out by better people than you or me.

Once a standard is established then any future work will usually try to build on this or provide a roadmap of how to move from the old standard to the new.

3. Automation

The ability to repeatedly and reliably build the system decreases effort (cost) and improves quality and reliability. Any manual step in the build process will eventually lead to some degree of variance, with potentially unquantifiable consequences. There are numerous tools available to help with this (e.g. Jenkins), though unfortunately usage of such tools is not as widespread as you would hope.
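
The essence of the point in a toy Python sketch: a build that is a script runs the same steps in the same order every time; a wiki page of manual steps does not. The commands are placeholders for whatever your toolchain actually runs:

    import subprocess
    import sys

    # Placeholder steps -- swap in your real compile/test/package commands.
    BUILD_STEPS = [
        ["echo", "compiling..."],
        ["echo", "running unit tests..."],
        ["echo", "packaging..."],
    ]

    for step in BUILD_STEPS:
        result = subprocess.run(step)
        if result.returncode != 0:
            sys.exit(f"build failed at: {' '.join(step)}")

    print("build complete -- same steps, same order, every time")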

But perhaps the real killer feature is test automation, which enables you to continuously execute tests against the system at comparatively negligible cost (when compared to maintaining a 24×7 human test team). With this in place (and getting the right test coverage is always an issue) you can exercise the system in any number of hypothetical scenarios to identify issues, both functional and non-functional, in a test environment before the production environment becomes compromised.

Computers are very good at doing repetitive tasks consistently. Humans are very good at coming up with new and creative test cases. Use each appropriately.

Much like housekeeping, frequent testing yields benefits at lower cost than simply waiting till the next major release, when all sorts of issues will be uncovered and need to be addressed – many of which may have been around a while though no-one noticed… because no-one tested. Regular penetration testing and review of security procedures will help you proactively avoid vulnerabilities as they are uncovered in the wild, and regular testing of new browsers will help identify compatibility issues before your end-users do. There are some tools to help automate in this space (e.g. Security AppScan and WebDriver), though clearly it does cost to run and maintain such a continuous integration and testing regime. However, so long as the focus is correct and pragmatic, the cost benefits should be realised.
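
As a flavour of the WebDriver end of this, a minimal browser smoke test in Python – the URL and the title check are illustrative, and you’d swap the driver per target browser:

    from selenium import webdriver

    driver = webdriver.Firefox()  # or Chrome(), etc., per browser under test
    try:
        driver.get("https://www.example.com/")
        # A trivial check; real suites assert on behaviour, not just titles.
        assert "Example" in driver.title, f"unexpected title: {driver.title}"
        print("smoke test passed")
    finally:
        driver.quit()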

4. Design Patterns

Much like standards compliance, use of design patterns and good practices such as abstraction, isolation and dependency injection can help to ensure changes in the future can be accommodated with minimal effort. I mention this separately though since the two should not be confused. Standards may (or may not) adopt good design patterns, and equally non-standard solutions may (or may not) adopt good design patterns – there are no guarantees either way.
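
A small sketch of the dependency-injection point, with invented class names: the report logic depends on an abstraction, so the storage implementation can change without touching it.

    from typing import Protocol

    class Store(Protocol):
        def save(self, data: str) -> None: ...

    class FileStore:
        def save(self, data: str) -> None:
            print(f"writing '{data}' to local disk")

    class CloudStore:
        def save(self, data: str) -> None:
            print(f"writing '{data}' to cloud storage")

    class ReportService:
        # The store is injected rather than constructed internally, so the
        # service never needs to change when the storage does.
        def __init__(self, store: Store) -> None:
            self._store = store

        def run(self) -> None:
            self._store.save("monthly report")

    ReportService(FileStore()).run()   # today
    ReportService(CloudStore()).run()  # tomorrow, unchanged service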

Using design patterns also increases the likelihood that the next developer to come along will be able to pick up the code with greater ease than if it’s some weird hare-brained spaghetti bowl of nonsense made up after a rather excessive liquid lunch. Dealing with the daily churn of changes becomes easier, maintenance costs come down and incidents are reduced.

So in summary, entropy should be considered a BAU (Business as Usual) issue and practices should be put in place to deal with it. Housekeeping, standards-compliance, automation through continuous integration and use of design patterns all help to keep the impact of change minimised and keep the level of disorder down.

Next time, some thoughts on how to measure entropy in the enterprise…

Entropy

Entropy is a measure of disorder in a system. A number of years ago I was flicking through an old book on software engineering from the 1970s. Beyond being a right riveting read, it expounded the view that software does not suffer from decay: that once set, software programs would follow the same rules over and over and produce the same results time and again, ad infinitum. In effect, software was free from decay.

I would like to challenge this view.

We design a system, spend many weeks and months considering every edge case, crafting the code so we’ve handled every possible issue nature can throw at us, including those “this exception can never happen but just in case it does…” scenarios. We test it till we can test no more without actually going live, and then release our latest most wondrous creation on the unsuspecting public. It works, and for a fleeting moment all is well with the universe… from this moment on, decay eats away at our precious creation like rats gnawing away on the discarded carcass of the Sunday roast.

Change is pervasive, and whilst it seems reasonable enough that were we able to precisely reproduce the starting conditions the program would run time and again as it did the first time, this isn’t correct for reasons of quantum mechanics and our inability to time travel (at least so far as we know today). However, I’ll ignore the effects of quantum mechanics and time-travel for now and focus on the more practical reasons for change and how this causes decay and increasing entropy in computer systems.

Firstly there’s the general use of the system. Most systems have some sort of data-store, if only for logging, and data is collected in increasing quantities and in a greater variety of combinations over time. This can lead to permutations which were never anticipated, which in turn exposes functional defects or pushes volumes beyond the planned capacity of the system. The code may remain the same, but when we look at a system and consider it as an atomic unit in its entirety, it is continuously changing. Subsequent behaviour becomes increasingly unpredictable.

Secondly there’s the environment the system exists within – most of which is totally beyond any control. Patches for a whole stack of components are continually released, from the hardware up. The first response from most first-line support organisations is “patch to latest level” (which is much easier said than done), but if you do manage to keep up with the game then these patches will affect how the system runs.

Conversely, if you don’t patch then you leave yourself vulnerable to the defects that the patches were designed to resolve. The knowledge that the defect itself exists changes the environment in which the system runs because now the probability that someone will try to leverage the defect is significantly increased – which again increases the uncertainty over how the system will operate. You cannot win and the cost of doing nothing may be more than the cost of addressing the issue.

Then there’s change that we inflict ourselves.

If you’re lucky and the system has been a success then new functional requirements will arise – this is a good thing (perhaps a topic for a later post), because a system which does not functionally evolve is a dead-end and essentially a failure – call it a “panda” if you wish. The business will invent new and better ways to get the best out of the system, new use cases which can be fulfilled become apparent, and a flourish of activity follows. All of which changes the original system.

Then there are the non-functional requirement changes. Form needs a refresh every 18 months or so, security defects need to be fixed (really, they do!), performance and capacity improvements may be needed, and the whole physical infrastructure needs to be refreshed periodically. The simple act of converting a physical server to virtual (aka P2V conversion), which strives to keep the existing system as close to current as possible, detritus and all, will typically provide more compute, RAM and disk than was ever considered possible. Normally this makes the old application run so much faster than before, but occasionally that speed increase can have devastating effects on the function of time-sensitive applications. Legislative requirements, keeping compliant with the latest browsers etc. all bring more change…

Don’t get me wrong, change is normally a good thing and the last thing we want is a world devoid of change. The problem is that all this change increases the disorder (entropy) of the system. Take the simple case of a clean OS install. Day 1, the system is clean and well ordered. Examining the disk and logs shows a tidy registry and clean log and temporary directories. Day 2 brings a few patches, which add registry entries, some logs, a few downloads etc., but it’s still good enough. But by day n you’ve a few hundred patches installed, several thousand log files and a raft of old downloads and temporary files lying around.

The constant state of flux means that IT systems are essentially subject to the same tendency for disorder to increase as stated in the second law of thermodynamics. Change unfortunately brings disorder and complexity. Disorder and complexity make things harder to maintain and manage, increasing fragility and instability. Increased management effort results in increased costs.

Well, that’s enough for today… next week, what we can do about it.

Reuse

Reuse! My favourite subject. Now, are you sitting comfortably? Then I’ll begin…

Once upon a time in a land far far away, the king of computer-land was worried. Very worried indeed. His silly prime minister had borrowed lots and lots of money to build lots of new computer systems and programs and now they couldn’t pay the interest on the debt. Worse still, none of the systems worked together and the land was becoming a confusing mess and there were lots of traffic jams and angry people. No-one knew how it worked and everything kept breaking. It was very expensive and it didn’t work. The villagers were not happy and were threatening to chop the heads off the king and queen because they were French.

Then one day, a strange young prince claiming to be from a neighbouring country arrived, bringing promises to sort out all the mess and clean the country up. And what’s more, he’d do it cheaply and the country would have more money and better computer systems. The king and queen were very happy and the villagers were pleased as well – although they still wanted to chop the heads off the king and queen, because they were French and it would be fun.

So they listened to the prince and liked his ideas and gave him all the money they had left. The prince was going to build even more computer systems but they would all be based on the same design so would be cheap and quick to build. This meant he could spend more money on the design so it would be very good as well as cheap to build.

Then the prince said that he could also make a large hotel and everyone could live under the same roof. This would save on roofs because there would only be one and would be cheaper to run because there would only be one electricity bill. The villagers liked this because they liked going on holiday. The king and queen liked this because they had decided to go on holiday and so the villagers could not chop off their heads even though they were French.

Then the prince started to design the computer systems. He decided to start with the post-box because everyone sent letters. So he spoke to Granny Smith and Mrs Chatterbox about what they needed. They liked his design. It was round and red and pretty – it looked a bit like the old post-boxes.

Then he spoke to the bookshop keeper who didn’t like his design because it was too small for him to post a book. So the prince made it bigger, much bigger.

Then he spoke to the postman who didn’t like it because it was too big and would give him too many parcels to carry but the prince decided to ignore the postman because he was clearly an idiot.

So two of the postboxes were built – one in case the other was full – and the villagers liked them a lot even though the postman did not.

Next the prince decided to build the hotel, so he asked the villagers how they would like their room to look, because there could only be one design. Some wanted it round, some square, some with a balcony, some with stairs… and everyone wanted an en-suite with bidet even if they did not know how to use it. So the prince designed a flexible framework consisting of transformable panels which could be positioned wherever the villager chose. No-one liked the tents and the bidet was missing. The villagers were very angry and started to build a guillotine, because they were French.

Then some of the villagers started to place their tents at the entrance to the hotel so they could get out quickly. But this stopped other villagers from coming in, which made them angry. Then another villager blocked the toilet, and all the villagers were angry, and the hotel staff decided to go on strike because they were French and they hadn’t had a strike yet.

So the villagers decided to summon the king and queen back from holiday to sort out the mess. So they each sent a letter, recorded delivery. But the postbox didn’t understand what “recorded delivery” meant because it was just a big round red box, and the postman didn’t want to pick up all the letters anyway because there were too many to carry and they hadn’t paid the postage. So the king and queen didn’t return to sort out the mess and the villagers were apoplectic with rage.

So the villagers burnt all the tents and drowned the postman and the prince in the river. Then the king and queen returned from holiday to find the city on fire and lots of angry villagers carrying pitchforks and pointing to a guillotine. But the king and queen were fat and so couldn’t run away. So the villagers decided to form a republic and elected the prime minister to become the president. The president chopped off the heads of the king and queen, and the villagers were happy, so they gave a Gallic shrug – because they were French – and lived happily for the next week or so…

All of which begs the question… what’s this got to do with reuse?

Well, two things.

  1. Design reuse requires good design to be successful. And for the design to be good there must be lots of consistent requirements driving it. All too often reuse programs are based on the notion of “build it and they will come”, where a solution is built for a hypothetical problem which, it’s believed, many projects face. Often the requirements don’t align, and a lot of money is spent designing a multi-functional beast which tries, and often fails, to do too much – which increases complexity, which increases cost. The additional effort needed to consider multiple requirements from disparate systems significantly increases design, build and maintenance costs. To make this worse, true cases of reuse are often common problems in the wider industry, and so industry-standard solutions and design patterns may exist which have been thought out by smarter people than you or me. To tackle these in-house is tantamount to reinventing the wheel… generally badly.
  2. Instance reuse sounds like a great idea – you can save on licenses, on servers and other resources – but it creates undesirable dependencies which are costly to resolve and act to slow delivery and reduce ease of maintenance. Furthermore, resource savings are often limited, as you’ll only save on a narrow portion of the overall requirements – you’ll still need more compute, more memory and more storage. Getting many parties to agree to changes is also time-consuming, and consequently costly, and makes management of the sum more of a headache than it need be.

Personally I believe that if you’re going to progress a reusable asset program you need to validate that multiple candidate usage scenarios really exist (essentially, the cost of designing and building a reusable asset must be less than the cost of designing and building n assets individually), that requirements are consistent, and that you’re not reinventing the wheel. If this is the case, then go for it. Alternatively you may find that an asset-harvesting program to review and harvest “good” assets yields better results – technically, as well as being more efficient and cost-effective. Then there’s the view that all reuse is opportunistic, in so much as using something designed to be “re”used is really just “use”, and not “reuse” – as I once noted, “wearing clean underpants is ‘use’, turning them inside out and back to front is ‘reuse’”.
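
That validation can start as a back-of-envelope sum. A sketch – the multiplier reflecting the extra cost of designing for reuse is an assumption to vary, not a measured figure:

    def reuse_pays_off(cost_single: float, n_uses: int,
                       reuse_multiplier: float = 2.0) -> bool:
        # A reusable asset typically costs some multiple of a single-use
        # one to design and build (the multiplier here is illustrative).
        cost_reusable = cost_single * reuse_multiplier
        return cost_reusable < cost_single * n_uses

    print(reuse_pays_off(100_000, n_uses=2))  # False -- 200k vs 200k, no saving
    print(reuse_pays_off(100_000, n_uses=4))  # True  -- 200k vs 400k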

In terms of instance reuse, in my view it’s often not worth saving a few licenses given the headaches that result from increased dependencies between what should be independent components. The problem is complicated by hardware, rack space and power consumption, so the answer is often not clear and some compromise is needed. However, the silver bullet here is virtualisation, where a hypervisor can allocate and share resources out dynamically, allowing you to squeeze many virtual machines onto one physical machine. License agreements may allow licensing at the physical CPU level instead of the virtual CPU level, which can then be over-allocated so you can have many guest instances running on fewer host processors. This isn’t always the case of course, and the opposite may be cheaper, so it needs careful review of licensing and other solution costs.
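
The licensing sum is worth doing explicitly too. An illustrative sketch, all figures assumed:

    physical_cpus = 4        # host processors actually licensed
    vcpus_per_physical = 8   # over-allocation ratio the hypervisor permits
    guests = 16
    vcpus_per_guest = 2

    vcpus_needed = guests * vcpus_per_guest               # 32 vCPUs
    vcpus_available = physical_cpus * vcpus_per_physical  # 32 vCPUs

    if vcpus_needed <= vcpus_available:
        print(f"{guests} guests fit: license {physical_cpus} physical CPUs "
              f"rather than {vcpus_needed} virtual ones")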

Reuse isn’t always a good idea, and the complexities needed in design and build, and the additional dependencies that result, may outweigh the costs of just doing it n times in the first place. Use of standards, design patterns and harvesting of good assets should be the first front in trying to improve quality and reduce costs. Any justification for creating reusable assets should include comparative estimates of the costs involved, including the ongoing cost to operations.