Instrumentation as a 1st Class Citizen

I wrote previously that we are moving into an era of instrumentation and things are indeed improving, just not as fast as I’d like. There’s a lot of good stuff out there to support instrumentation and monitoring, including the ELK (Elasticsearch, Logstash, Kibana) and TIG (Telegraf, InfluxDB, Grafana) stacks as well as more commercial offerings such as TICK (Telegraf, InfluxDB, Chronograf, Kapacitor), Splunk, DataDog, AppDynamics and others. The problem is that few teams really treat instrumentation as a first-class concern… until it’s too late.

Your customers love this stuff! Really, they do! There’s nothing quite as sexy as an interactive graph showing how your application is performing as the load increases – transactions, visitors, response-times, server utilisation, queue-depths etc. When things are going well it gives everyone a warm fuzzy feeling that all is right with the universe. When things are going wrong it helps to quickly focus you in on where the problem is.

However, this stuff needs to be built into everything we do and not be an afterthought when the pressure’s on to ship it and you can’t afford the time and effort to retrofit it. By then it’s too late.

As architects we need to put in place the infrastructure and services needed to support instrumentation, monitoring and alerting. At a minimum this means establishing standards for logging, data-retention policies, a data collection solution, a repository for the data and some tooling to allow us to search that data and visualise what’s going on. Better still, we can add alerting when thresholds are breached and use richer analytics to allow us to scale up and down to meet demand.

As developers we need to consider what metrics we want to capture from the components we build as we’re working on them. Am I interested in how long this function call is taking? Do I want to know how many messages a service is handling? How many threads are being spawned? What exceptions are being thrown? Where from? What the queue depths are? Almost certainly… YES! And this means putting in place strategies for logging these things. Perhaps you can find the data in existing log files. Perhaps you need better tooling for detailed monitoring. Perhaps you need to write some code yourself to track how things are going…
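
By way of illustration, here’s a minimal sketch of the sort of thing I mean, using nothing but the JDK – the names, the log line and the units are placeholders for whatever your own logging standard dictates, and a proper metrics library would do the job better:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import java.util.function.Supplier;

// Minimal, hypothetical sketch: count events and time calls with plain JDK classes.
public class SimpleMetrics {

    private final ConcurrentHashMap<String, LongAdder> counters = new ConcurrentHashMap<>();

    // Increment a named counter, e.g. messages handled or exceptions thrown.
    public void increment(String name) {
        counters.computeIfAbsent(name, k -> new LongAdder()).increment();
    }

    // Time a call, log the duration and count the invocation; returns the call's result.
    public <T> T time(String name, Supplier<T> call) {
        long start = System.nanoTime();
        try {
            return call.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(name + " took " + elapsedMs + "ms"); // swap for your logging standard
            increment(name + ".calls");
        }
    }
}
```

A call site might then look like `metrics.time("orderService.lookup", () -> lookupOrder(id))` – again, names invented.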

Doing this from the start will give you a much better feel for how things are working before you launch – including a reasonable view of performance and infrastructure demands, which will allow you to focus your efforts better later when you do get into sizing and performance testing. It’ll mean you’re not scrambling around looking for log files to help you root-cause issues as your latest release goes into meltdown. And it’ll mean your customer won’t be chewing your ear off asking what’s going on every five minutes – they’ll be able to see it for themselves…

So please, get it in front of your customer, your product owner, your sponsor, your architects, your developers, your testers and make instrumentation a 1st class citizen in the backlog.

Performance Testing is Easy

Performance testing is easy. We just throw as many requests at the system as we can as quickly as we want and measure the result. Job done, right?

tl;dr? Short form…

  1. Understand the user scenarios and define tests. Review the mix of scenarios per test and the type of tests to be executed (peak, stress, soak, flood).
  2. Size and prepare the test environment and data. Consider the location of injectors and servers and mock peripheral services and systems where necessary.
  3. Test the tests!
  4. Execute and monitor everything. Start small and ramp up.
  5. Analyse results, tune, rinse and repeat until happy.
  6. Report the results.
  7. And question to what level of depth performance testing is really required…

Assuming we’ve got the tools and the environments, the execution of performance tests should be fairly simple. The first hurdle though is in preparing for testing.

User Scenarios and Test Definitions

In order to test we first need to understand the sort of user scenarios that we’re going to encounter in production which warrant testing. For existing systems we can usually do some analysis on web-logs and the like to figure out what users are actually doing and try to model these scenarios. For this we may need a year or more of data to see if there are any seasonal variations and to understand what the growth trend looks like. For new systems we don’t have this data so we need to make some assumptions and estimates as to what’s really going to happen. We also need to determine which of the scenarios we’re going to model and the transaction rates we want them to achieve.

When we’ve got system users calling APIs or running batch jobs the variability is likely to be low. Human users are a different beast though and can wander off all over the place doing weird things. Modelling all scenarios can be a lot of effort (which equals a lot of cost) and a risk-based approach is usually required. Considerations here include:

  • Picking the top few scenarios that account for the majority of activity. It depends on the system, but I’d suggest keeping these scenarios down to <5 – the fewer the better so long as it’s reasonably realistic.
  • Picking the “heavy” scenarios which we suspect are most intensive for the system (often batch jobs and the like).
  • Introducing noise to tests to force the system into doing things it wouldn’t normally be doing. This sort of thing can be disruptive (e.g. a forced load of a library not otherwise used may be just enough to push the whole system over the edge in a catastrophic manner).

We next need to consider the relative mix of user scenarios for our tests (60% of users executing scenario A, 30% doing scenario B, 10% doing scenario C etc.) and the combinations of scenarios we want to consider (running scenarios A, B and C vs. A, B and C plus batch job Y).

Some of these tests may be executed not for performance reasons but for operability – e.g. what happens if my backup runs when I’m at peak load? Or what happens when a node in a cluster fails?

We also need test data.

For each scenario we should be able to define the test data requirements. This is stuff like user-logins, account numbers, search terms etc.

Just getting 500 test user logins set up can be a nightmare. The associated test authentication system may not have the capacity to handle the number of logins or accounts and we may need to mock it out. It’s all too common for peripheral systems not to be in a position to support performance testing as we’d like, and in any case we may want something more reliable when testing. For any mock services we do decide to build we need to work out how they should respond and what their performance should look like (it’s no good having a mock service return in 0.001 seconds when the real thing takes 1.8 seconds).

Account numbers have security implications and we may need to create dummy data. Search terms, especially from humans, can be wild and wonderful – returning millions or zero records in place of the expected handful.

In all cases, we need to prepare the environment based on the test data we’re going to use and size it correctly. Size it? Well, if production is going to have 10 million records it’s not much good testing with 100! Copies of production data, possibly obfuscated, can be useful for existing systems. For new systems though we need to create the data. Here be dragons. The distribution of randomly generated data almost certainly won’t match that of real data – there are far more instances of surnames like Smith, Jones, Taylor, Williams or Brown than there are like Zebedee. If the distribution isn’t correct then the test may be invalid (e.g. we may hit one shard or tablespace and its associated nodes and disks too little or too much).
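
As a sketch of what I mean by getting the distribution right (the weights here are entirely made up for illustration – use real census or production frequencies if you can), generated surnames can be drawn from a skewed distribution rather than a uniform one:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Generate test surnames with a skewed (illustrative) distribution rather than a uniform one,
// so that index and shard hot-spots look more like production.
public class SurnameGenerator {

    private final List<String> names = new ArrayList<>();
    private final List<Integer> cumulative = new ArrayList<>();
    private final Random random = new Random();
    private int total = 0;

    public SurnameGenerator add(String name, int weight) {
        total += weight;
        names.add(name);
        cumulative.add(total);
        return this;
    }

    public String next() {
        int pick = random.nextInt(total);
        for (int i = 0; i < cumulative.size(); i++) {
            if (pick < cumulative.get(i)) {
                return names.get(i);
            }
        }
        return names.get(names.size() - 1); // unreachable in practice
    }

    public static void main(String[] args) {
        // Weights per 10,000 records - invented for illustration only.
        SurnameGenerator surnames = new SurnameGenerator()
                .add("Smith", 120).add("Jones", 100).add("Taylor", 70)
                .add("Williams", 65).add("Brown", 60).add("Zebedee", 1)
                .add("Other", 9584);
        for (int i = 0; i < 5; i++) {
            System.out.println(surnames.next());
        }
    }
}
```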

I should point out here that there’s a shortcut for some situations. For existing systems with little in the way of stringent security requirements, no real functional changes and idempotent requests – think application upgrades or hardware migrations of primarily read-only websites – replaying the legacy web-logs may be a valid way to test. It’s cheap, quick and simple – if it’s viable.

We should also consider the profile and type of tests we want to run. Each test profile has three parts: the ramp-up time (how long it takes to get to the target volume), the steady-state time (how long the test runs at that level), and the ramp-down time (how quickly we close the test – we usually care little for this and can close the test down quickly, but in some cases we want a nice clean shutdown). In terms of test types there are:

  • Peak load test – Typically a 1 to 2 hr test at peak target volumes. e.g. Ramp-up 30 minutes, steady-state 2hrs, ramp-down 5 mins.
  • Stress test – A longer test continually adding load beyond peak volumes to see how the system performs under excessive load and potentially where the break point is. e.g. Ramp-up 8 hrs, steady-state 0hrs, ramp-down 5 mins.
  • Soak test – A really long test running for 24hrs or more to identify memory leaks and the impact of peripheral/scheduled tasks. e.g. Ramp-up 30 mins, steady-state 24hrs, ramp-down 5 mins.
  • Flood test (aka Thundering Herd) – A short test where all users arrive in a very short period. In this scenario we can often see chaos ensue initially but the environment settling down after a short period. e.g. Ramp-up 0mins, steady-state 2hrs, ramp-down 5 mins

So we’re now ready to script our tests. We have the scenarios, we know the transaction volumes, we have test data, our environment is prep’d and we’ve mocked out any peripheral services and systems.

Scripting

There are many test tools available, from the free Apache JMeter and Microsoft web stress tools, to commercial products such as HP LoadRunner and Rational Performance Tester, to cloud-based solutions such as Soasta or Blitz. Which tool we choose depends on the nature of the application and our budget. Cloud tools are great if we’re hosting in the public cloud, not so good if we’re an internal service.

The location of the load injectors (the servers which run the actual tests) is also important. If these are sitting next to the test server we’ll get different results than if the injector is running on someone’s laptop connected via a VPN tunnel over a 256kbit ADSL line somewhere in the Scottish Highlands. Which case is more appropriate will depend on what we’re trying to test and where we consider the edge of our responsibility to lie. We have no control over the sort of devices and connectivity internet users have, so perhaps our responsibility stops at the point of ingress into our network? Or perhaps it’s a corporate network and we’re only concerned with the point of ingress into our servers? We do need to design and work within these constraints, so measuring and managing page weight and latency is always a concern, but we don’t want the complexity of all that “stuff” out there which isn’t our responsibility weighing us down.

Whichever tool we choose, we can now complete the scripting and get on with testing.

Testing

Firstly, check everything is working. Run the scripts with a single user for 20 minutes or so to ensure things are responding as expected and that the transaction load is correct. This will ensure that as we add more users we’re scaling as we want and that the scripts aren’t themselves defective. We then quite quickly ramp the tests up: 1 user, 10 users, 100 users etc. This helps to identify any concurrency problems early on with fewer users than expected (more users can add too much noise and make it hard to see what’s really going on).

If we’ve an existing system, once we know the scripts work we will want to get a baseline from the legacy system to compare against. This means running the tests on the legacy system. What? Hang on! This means we need another instance of the system available, running the old codebase with similar test data and similar – but possibly not identical – scripts! Yup. That it does.

If we’ve got time-taken logging enabled (%D for Apache mod_log_config) then we could get away with comparing the old production response times with the new system so long as we’re happy the environments are comparable (same OS, same types of nodes, same spec, same topology, NOT necessarily the same scale in terms of numbers of servers) and that the results are showing the same thing (which depends on what upstream network connectivity is being used). But really, a direct comparison of test results is better – comparing apples with apples.
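
For reference, a minimal Apache sketch of what that looks like – the format nickname and log path are placeholders, and %D records the time taken to serve each request in microseconds:

```
# httpd.conf / apache2.conf - combined log format plus time-taken in microseconds (%D)
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D" combined_with_time
CustomLog "logs/access_log" combined_with_time
```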

We also need to consider what to measure and monitor. We are probably interested in:

  • For the test responses:
    • Average, max, min and 95th percentile response time per request type (see the percentile sketch after this list).
    • Average, max, min size for page weight.
    • Response codes – 2xx/3xx probably good; lots of 4xx/5xx suggests the test or servers are broken.
    • Network load and latency.
  • For the test servers:
    • CPU, memory, disk and network utilisation throughout the test run.
    • Key metrics from middleware: queue depths, cache-hit rates, JVM garbage collection (note that JVM memory will look flat at the server level so needs some JVM monitoring tools). These will vary depending on the middleware, and for databases we’ll want a DBA to advise on what to monitor.
    • Number of sessions.
    • Web-logs and other log files.
  • For the load injectors:
    • CPU, memory, disk and network utilisation throughout the test run. Just to make sure it’s not the injectors that are overstretched.
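
On the 95th percentile mentioned above: averages hide the outliers that percentiles expose, and it’s cheap to compute from raw response times. A rough sketch using the nearest-rank method (test tools vary slightly in how they interpolate, and the sample data is invented):

```java
import java.util.Arrays;

// Naive 95th percentile of response times (ms): sort and pick the value below which
// 95% of samples fall.
public class Percentile {

    public static double percentile(double[] samplesMs, double percentile) {
        double[] sorted = samplesMs.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil((percentile / 100.0) * sorted.length) - 1;
        return sorted[Math.max(0, index)];
    }

    public static void main(String[] args) {
        double[] responseTimesMs = {120, 95, 110, 130, 2200, 105, 98, 115, 125, 101};
        System.out.printf("95th percentile: %.0fms%n", percentile(responseTimesMs, 95));
        System.out.printf("average: %.0fms%n",
                Arrays.stream(responseTimesMs).average().orElse(0));
    }
}
```

With the sample data above the average comes out at around 320ms while the 95th percentile is 2200ms – exactly the sort of tail an average hides.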

And finally we can test.

Analysis and Tuning

It’s important to verify that the test achieved the expected transaction rates and usage profiles. Reviewing log files to ensure there were no errors, and web-logs to confirm transaction rates and request types, helps verify that all was correct before we start to review response times and server utilisation.

We can then go through the process of correlating test activity with utilisation, identifying problems and limits near capacity (JVM memory for example), and extrapolating for production – for which some detailed understanding of how the system scales is required.

It’s worth noting that whilst tests rarely succeed first time, in my experience it’s just as likely to be an issue with the test as it is with the system itself. It’s therefore necessary to plan to execute tests multiple times. A couple of days is normally not sufficient for proper performance testing.

All performance test results should be documented for reporting and future needs. Having a record of why certain changes were made, and a baseline to compare against the next time the tests are run, is invaluable. It’s not war-and-peace, just a few pages of findings in a document or wiki. Most test tools will also export the results to a PDF which can be attached to keep track of the detail.

Conclusion?

This post is already too long, but one thing to question is… is it worth the effort?

Something like a Zipf distribution applies across systems: a few carry very significant load while most handle just a few transactions a second, if that. I wouldn’t suggest “no performance testing” but I would suggest sizing the effort depending on the criticality and expected load. Getting a few guys in the office to hit F5 whilst we eyeball the CPU usage may well be enough. In code we can also include timing metrics in unit tests and execute these a few thousand times in a loop to see if there’s any cause for concern. Getting the engineering team to consider and monitor performance early on can help avoid issues later and reduce the need for multiple performance test iterations.
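
A rough sketch of that unit-test idea – the function under test and the budget are stand-ins, and this is a smoke check rather than a proper benchmark:

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Crude micro-benchmark inside a unit test: call the function a few thousand times and
// fail if the average cost drifts above a (made-up) budget.
public class TimingSmokeTest {

    private static final int ITERATIONS = 5_000;
    private static final double BUDGET_MS_PER_CALL = 2.0; // hypothetical budget

    // Stand-in for the real function under test.
    private static double expensiveCalculation(int i) {
        double result = 0;
        for (int j = 1; j <= 1_000; j++) {
            result += Math.sqrt(i * j);
        }
        return result;
    }

    @Test
    public void averageCallTimeStaysWithinBudget() {
        // Warm up so JIT compilation doesn't skew the measurement.
        for (int i = 0; i < 1_000; i++) {
            expensiveCalculation(i);
        }

        long start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            expensiveCalculation(i);
        }
        double avgMs = (System.nanoTime() - start) / 1_000_000.0 / ITERATIONS;

        assertTrue("average " + avgMs + "ms exceeds budget", avgMs < BUDGET_MS_PER_CALL);
    }
}
```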

Critical systems with complex transactions or an expected high load (which as a rough guide I would say is anything around 10tps or more) should be tested more thoroughly. Combining capacity needs with operational needs informs the decision – four 9s and 2k tps is the high end in my experience – and a risk-based approach should always be used when considering performance testing.

MongoDB Write Concern Performance

MongoDB is a popular NoSQL database which scales to very significant volumes through sharding and can provide resiliency through replica sets. MongoDB doesn’t support the sort of transaction isolation that you might get with a more traditional database (read committed, dirty reads etc.) and works at the document level as an atomic transaction (it’s either inserted/updated, or it’s not) – you cannot have a transaction spanning multiple documents.

What MongoDB does provide is called “Write-Concern”, which gives some assurance over whether the write was safely stored or not.

You can store a document and request “acknowledgement” (or not), whether the document was replicated to any replica-set members (for resiliency), whether the document was written to the journal etc. There’s a very good article on the details of Write-Concern over on the MongoDB site. Clearly the performance will vary depending on the options chosen, and the Java driver supports a wide range of these (a usage sketch follows the list):

  • ACKNOWLEDGED

  • ERRORS_IGNORED

  • FSYNCED

  • FSYNC_SAFE

  • JOURNALED

  • JOURNAL_SAFE

  • MAJORITY

  • NONE

  • NORMAL

  • REPLICAS_SAFE

  • REPLICA_ACKNOWLEDGED

  • SAFE

  • UNACKNOWLEDGED
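
For what it’s worth, the test script used an older library of my own; with a current MongoDB Java driver the choice of write concern looks something like the sketch below – connection string, database and collection names are placeholders:

```java
import com.mongodb.WriteConcern;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

// Store documents under different write concerns (modern driver API; the older 2.x
// driver exposed the constants listed above). Names and URI are placeholders.
public class WriteConcernExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> collection = client
                    .getDatabase("perftest")
                    .getCollection("documents");

            // Fire-and-forget: fastest, no confirmation the write was applied.
            collection.withWriteConcern(WriteConcern.UNACKNOWLEDGED)
                      .insertOne(new Document("run", 1).append("payload", "a"));

            // Wait until a majority of replica-set members have the write.
            collection.withWriteConcern(WriteConcern.MAJORITY)
                      .insertOne(new Document("run", 2).append("payload", "b"));

            // Wait until the write has been committed to the on-disk journal.
            collection.withWriteConcern(WriteConcern.JOURNALED)
                      .insertOne(new Document("run", 3).append("payload", "c"));
        }
    }
}
```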

So for a performance comparison I fired up a small 3-node MongoDB cluster (2 database servers, 1 arbiter) and ran a script to store 100 documents in the database using each of the available modes to see what the difference is. The database was cleaned down each time (to zero – so overall it is very small).

**WARNING: Performance testing is highly dependent upon the environment in which it is run. These results are based on a dev/test environment running x3 guests on the same host node; they may not be representative for you and exist only to provide a comparison.**

The results for all modes are shown below and show three relatively distinct clusters.

Write Concern – All Modes

Note: The initial run in all cases incurs a start-up cost and hence appears slower than normal. This dissipates quickly though and performance can be seen to improve after this first run.

The slowest of these are FSYNCED, FSYNC_SAFE, JOURNALED and JOURNAL_SAFE (with JOURNAL_SAFE being the slowest).

Write-Concern Cluster 3 – Slowest

These options all require the data to be written to disk, which explains why they are significantly slower than the other options, though the contended nature of the test environment likely makes the results appear worse than they would in a production environment. The FSYNC modes are mainly useful for backups and the like so shouldn’t be used in code. The JOURNALED modes depend on the journal commit interval (default 30 or 100ms) as well as the performance of your disks. Interestingly, JOURNAL_SAFE is supposedly the same as JOURNALED, so it seems a little odd that I consistently see a relatively significant reduction in performance.

The second cluster improves performance significantly (from 3.5s overall to 500ms). This group covers the MAJORITY, REPLICAS_SAFE and REPLICA_ACKNOWLEDGED options.

Write-Concern Cluster 2 – Mid

These options are all related to data replication to secondary nodes. REPLICA_ACKNOWLEDGED waits for x2 servers to have stored the data whilst MAJORITY waits for the majority to have stored it, and in this test, since there are only x2 database servers, it’s unsurprising that the results are similar. As the number of database servers increases, MAJORITY may be safer than REPLICA_ACKNOWLEDGED but will suffer some performance degradation. This isn’t a linearly scaled performance drop though, since replication will generally occur in parallel. REPLICAS_SAFE is supposedly the same as REPLICA_ACKNOWLEDGED and in this instance the results seem to back this up.

The fastest options cover everything else: ACKNOWLEDGED, SAFE, NORMAL, NONE, ERRORS_IGNORED and UNACKNOWLEDGED.

Write-Concern Cluster 1 – Fastest

In theory I was expecting SAFE and ACKNOWLEDGED to be similar, with NORMAL, NONE, ERRORS_IGNORED and UNACKNOWLEDGED quicker still, since this last set shouldn’t wait for any acknowledgement from the server – once written to the socket, assume all is ok. However, the code I used was an older library I developed some time back which returns the object ID once stored. Since this has to read some data back, some sort of acknowledgement is implicit and so, unsurprisingly, they all perform similarly.

ERRORS_IGNORED and NONE are deprecated and shouldn’t be used anymore, whilst NORMAL seems an odd name given the default for MongoDB itself is ACKNOWLEDGED!?

In summary: for raw speed ACKNOWLEDGED should do, though if you want fire-and-forget then UNACKNOWLEDGED (with code that doesn’t read anything back) should be faster still. A performance drop occurs if you want the assurance that the data has been replicated to another server via REPLICA_ACKNOWLEDGED, and this will depend on your network performance and locations so is worth testing for your specific needs. Finally, if you want to know it’s gone to disk then it’s slower still with the JOURNALED option, especially if you’ve contention on the disks as I do. For the truly paranoid there should be a REPLICA_JOURNALED option which would confirm both replicated and journaled.

Finally, if you insist on a replica acknowledging as well then that replica needs to be online, and your code may hang if one is not available. If you’ve lots of replicas then this may be acceptable, but if you’ve only 1 (as in this test case) then a single replica outage is enough to bring the application down immediately.


Mad Memoization (or how to make computers make mistakes)

Memoization is a technique used to cache the results of computationally expensive functions to improve performance and throughput on subsequent executions. It can be implemented in a variety of languages but is perhaps best suited to functional programming languages where the response to a function should be consistent for a given set of input values. It’s a nice idea and has some uses, but perhaps isn’t all that common since we tend to design programs so that we only call such functions once, when needed, in any case.
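
For reference, the standard technique looks something like this minimal sketch (the expensive function here is just a stand-in):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Classic memoization: cache the result of an expensive, side-effect-free function
// so repeat calls with the same input return immediately.
public class Memoizer<K, V> {

    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> function;

    public Memoizer(Function<K, V> function) {
        this.function = function;
    }

    public V apply(K input) {
        return cache.computeIfAbsent(input, function);
    }

    public static void main(String[] args) {
        Memoizer<Integer, Double> slowSqrt = new Memoizer<>(n -> {
            try { Thread.sleep(1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            return Math.sqrt(n); // pretend this took real effort
        });
        System.out.println(slowSqrt.apply(42)); // slow the first time
        System.out.println(slowSqrt.apply(42)); // instant the second time
    }
}
```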

I have a twist on this. Rather than remembering the response to a function with a particular set of values, remember the responses to a function and just make a guess at the response next time.

A guess could be made based on the entropy of the input and/or output values. For example, where the response is a boolean value (true or false) and you find that 99% of the time the response is “true” but it takes 5 seconds to work this out, then… to hell with it, just return “true” and don’t bother with the computation. Lazy I know.

Of course some of the time the response would be wrong, but that’s the price you pay for improving performance and throughput.

There would be some (possibly significant) cost to determining the entropy of inputs/outputs, and any function which modifies the internal state of the system (non-idempotent) should be excluded from such treatment for obvious reasons. You’d also only really want to rely on such behaviour when the system is busy and nearly overloaded already and you need a way to quickly get through the backlog – think of it like the exit gates of a rock concert when a fire breaks out: you quickly want to ditch the “check-every-ticket” protocol in favour of a “let-everyone-out-asap” solution.
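
A very rough sketch of the idea, with all thresholds invented, the load signal left to whatever monitoring you have, and inputs ignored entirely (the decision-tree variant below would factor those in):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import java.util.function.Supplier;

// Sketch of "mad memoization": when under load, skip the expensive call and return the
// answer it nearly always gives. Only sensible for idempotent calls; wrong answers are
// the explicit trade-off.
public class GuessingWrapper<T> {

    private final Supplier<T> expensiveCall;          // must be side-effect free
    private final double dominanceThreshold;          // e.g. 0.99 - how lopsided the answers must be
    private final Map<T, LongAdder> counts = new ConcurrentHashMap<>();

    private volatile boolean systemOverloaded = false; // fed from your load monitoring

    public GuessingWrapper(Supplier<T> expensiveCall, double dominanceThreshold) {
        this.expensiveCall = expensiveCall;
        this.dominanceThreshold = dominanceThreshold;
    }

    public void setOverloaded(boolean overloaded) {
        this.systemOverloaded = overloaded;
    }

    public T get() {
        if (systemOverloaded) {
            T guess = dominantAnswer();
            if (guess != null) {
                return guess;                          // a guess - may be wrong, but it's fast
            }
        }
        T answer = expensiveCall.get();                // do the real (slow) work
        counts.computeIfAbsent(answer, k -> new LongAdder()).increment();
        return answer;
    }

    private T dominantAnswer() {
        long total = counts.values().stream().mapToLong(LongAdder::sum).sum();
        if (total < 100) {
            return null;                               // not enough history to guess from
        }
        for (Map.Entry<T, LongAdder> entry : counts.entrySet()) {
            if (entry.getValue().sum() / (double) total >= dominanceThreshold) {
                return entry.getKey();
            }
        }
        return null;                                   // no single answer dominates - do the work
    }
}
```

Wiring setOverloaded() to real load metrics, and deciding how long a guess stays trustworthy, is the hard (and dynamic) part.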

You could even complicate the process a little further and employ a decision tree (based on information gain for example) when trying to determine the response to a particular set of inputs.

So, you need to identify expensive idempotent functions, calculate the entropy of inputs and outputs, build associated decision trees, get some feedback on the performance and load on the system and work out at which point to abandon reason and open the floodgates – all dynamically! Piece of piss… (humm, maybe not).

Anyway, your program would make mistakes when under load but should improve performance and throughput overall. Wtf! Like when would this ever be useful?

  • DoS attacks? Requests could be turned away at the front door to protect services deeper in the system?
  • The Slashdot effect? You may not give the users what they want but you’ll at least not collapse under the load.
  • Resiliency? If you’re dependent on some downstream component which is not responding (you could be getting timeouts after way too many seconds) then these requests will look expensive and you fall back to some default response (which may or may not be correct!?).

Ok, perhaps not my best idea to date but I like the idea of computers making mistakes by design rather than through incompetence of the developer (sorry, harsh I know, bugs happen, competent or otherwise).

Right, off to take the dog for a walk, or just step outside then come back in again if she’s feeling tired…


Entropy – Part 2

A week or so ago I wrote a piece on entropy and how IT systems have a tendency for disorder to increase in a similar manner to the second law of thermodynamics. This article aims to identify what we can do about it…

It would be nice if there was some silver bullet, but the fact of the matter is that, like the second law, the only real way to minimise disorder is to put some work in.

1. Housekeeping

As the debris of life slowly turns your pristine home into something more akin to the local dump, so the daily churn of changes gradually slows and destabilises your previously spotless new IT system. The solution in both cases is to crack on with the weekly chore of housekeeping (or possibly daily if you’ve kids, cats, dogs etc.). It’s often overlooked and forgotten, but a lack of housekeeping is frequently the cause of unnecessary outages.

Keeping logs clean and cycled on a regular basis (e.g. hoovering), monitoring disk usage (e.g. checking you’ve enough milk), cleaning up temporary files (e.g. discarding those out-of-date tins of sardines), refactoring code (e.g. a spring clean) etc. is not difficult and there’s little excuse for not doing it. Reviewing the content of logs and gathering metrics on usage and performance can also help anticipate how frequently housekeeping is required to ensure smooth running of the system (e.g. you could measure the amount of fluff hoovered up each week and use this as the basis to decide which days and how frequently the hoovering needs doing – good luck with that one!). This can also lead to additional work to introduce archiving capabilities (e.g. self storage) or purging of redundant data (e.g. taking the rubbish down the dump). But like your home, a little housekeeping done frequently is less effort (cost) than waiting till you can’t get into the house because the door’s jammed and the men in white suits and masks are threatening to come in and burn everything.

2. Standards Compliance

By following industry standards you stand a significantly better chance of being able to patch/upgrade/enhance without pain in the future than if you decide to do your own thing.

That should be enough said on the matter, but the number of times I see teams misusing APIs or writing their own solutions to what are common problems is frankly staggering. We (and me especially) all like to build our own palaces. Unfortunately we lack sufficient exposure to the problem space to be able to produce designs which combine elegance with the flexibility to address the full range of use cases, or the authority and foresight to predict the future and influence it in a meaningful way. In short, standards are generally thought out by better people than you or me.

Once a standard is established then any future work will usually try to build on this or provide a roadmap of how to move from the old standard to the new.

3. Automation

The ability to repeatedly and reliably build the system decreases effort (cost) and improves quality and reliability. Any manual step in the build process will eventually lead to some degree of variance with potentially unquantifiable consequences. There are numerous tools available to help with this (e.g. Jenkins) though unfortunately usage of such tools is not as widespread as you would hope.

But perhaps the real killer feature is test automation, which enables you to continuously execute tests against the system at comparatively negligible cost (when compared to maintaining a 24×7 human test team). With this in place (and getting the right test coverage is always an issue) you can exercise the system in any number of hypothetical scenarios to identify issues, both functional and non-functional, in a test environment before the production environment becomes compromised.

Computers are very good at doing repetitive tasks consistently. Humans are very good at coming up with new and creative test cases. Use each appropriately.

Much like housekeeping, frequent testing yields benefits at lower cost than simply waiting till the next major release, when all sorts of issues will be uncovered and need to be addressed – many of which may have been around a while though no-one noticed… because no-one tested. Regular penetration testing and review of security procedures will help to proactively avoid vulnerabilities as they are uncovered in the wild, and regular testing of new browsers will help identify compatibility issues before your end-users do. There are some tools to help automate in this space (e.g. Security AppScan and WebDriver), though clearly it does cost to run and maintain such a continuous integration and testing regime. However, so long as the focus is correct and pragmatic, the cost benefits should be realised.
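
As a flavour of the browser-check end of this, a minimal Selenium WebDriver sketch of the sort of smoke test that can run on every build – the URL and the checks are placeholders for your own application:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Minimal WebDriver smoke check suitable for a nightly or continuous build.
public class SmokeCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();   // needs chromedriver on the PATH
        try {
            driver.get("https://example.com/");  // placeholder URL
            String title = driver.getTitle();
            boolean headingPresent = !driver.findElements(By.tagName("h1")).isEmpty();
            if (!title.toLowerCase().contains("example") || !headingPresent) {
                throw new AssertionError("smoke check failed: title=" + title);
            }
            System.out.println("smoke check passed: " + title);
        } finally {
            driver.quit();
        }
    }
}
```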

4. Design Patterns

Much like standards compliance, the use of design patterns and good practices such as abstraction, isolation and dependency injection can help to ensure changes in the future can be accommodated with minimal effort. I mention this separately though since the two should not be confused. Standards may (or may not) adopt good design patterns and equally non-standard solutions may (or may not) adopt good design patterns – there are no guarantees either way.
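
As a small sketch of what constructor injection buys you (names invented): the service depends on an abstraction, so the implementation can be swapped, or mocked in tests, without touching the service itself.

```java
// Constructor injection: the service depends on an interface, not a concrete store.
interface CustomerStore {
    String findName(int customerId);
}

class DatabaseCustomerStore implements CustomerStore {
    @Override
    public String findName(int customerId) {
        // a real implementation would query the database here
        return "customer-" + customerId;
    }
}

class GreetingService {
    private final CustomerStore store;

    GreetingService(CustomerStore store) {   // dependency injected, not constructed inside
        this.store = store;
    }

    String greet(int customerId) {
        return "Hello, " + store.findName(customerId);
    }
}

public class Main {
    public static void main(String[] args) {
        GreetingService service = new GreetingService(new DatabaseCustomerStore());
        System.out.println(service.greet(42));
    }
}
```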

Using design patterns also increases the likelihood that the next developer to come along will be able to pick up the code with greater ease than if it’s some weird hare-brained spaghetti bowl of nonsense made up after a rather excessive liquid lunch. Dealing with the daily churn of changes becomes easier, maintenance costs come down and incidents are reduced.

So in summary, entropy should be considered a BAU (Business as Usual) issue and practices should be put in place to deal with it. Housekeeping, standards compliance, automation through continuous integration and the use of design patterns all help to minimise the impact of change and keep the level of disorder down.

Next time, some thoughts on how to measure entropy in the enterprise…

Entropy

Entropy is a measure of disorder in a system. A number of years ago I was flicking through an old book on software engineering from the 1970s. Beyond being a right riveting read, it expounded the view that software does not suffer from decay: that once set, software programs would follow the same rules over and over and produce the same results time and again, ad infinitum. In effect, software was free from decay.

I would like to challenge this view.

We design a system, spend many weeks and months considering every edge case, crafting the code so we’ve handled every possible issue nature can throw at us, including those “this exception can never happen but just in case it does…” scenarios. We test it till we can test no more without actually going live and then release our latest, most wondrous creation on the unsuspecting public. It works, and for a fleeting moment all is well with the universe… from this moment on decay eats away at our precious creation like rats gnawing away on the discarded carcass of the Sunday roast.

Change is pervasive, and whilst it seems reasonable enough that were we able to precisely reproduce the starting conditions the program would run time and again as it did the first time, this isn’t correct for reasons of quantum mechanics and our inability to time travel (at least so far as we know today). However, I’ll ignore the effects of quantum mechanics and time travel for now and focus on the more practical reasons for change and how this causes decay and increasing entropy in computer systems.

Firstly there’s the general use of the system. Most systems have some sort of data store, if only for logging, and data is collected in increasing quantities and in a greater variety of combinations over time. This can lead to permutations which were never anticipated, which in turn exposes functional defects or pushes volumes beyond the planned capacity of the system. The code may remain the same, but when we look at a system and consider it as an atomic unit in its entirety, it is continuously changing. Subsequent behaviour becomes increasingly unpredictable.

Secondly there’s the environment the system exists within – most of which is totally beyond any control. Patches for a whole stack of components are continually released from the hardware up. The first response from most first-line support organisations is “patch to latest level” (which is much easier said than done) but if you do manage to keep up with the game then these patches will affect how the system runs.

Conversely, if you don’t patch then you leave yourself vulnerable to the defects that the patches were designed to resolve. The knowledge that the defect itself exists changes the environment in which the system runs because now the probability that someone will try to leverage the defect is significantly increased – which again increases the uncertainty over how the system will operate. You cannot win and the cost of doing nothing may be more than the cost of addressing the issue.

Then there’s change that we inflict ourselves.

If you’re lucky and the system has been a success then new functional requirements will arise – this is a good thing, perhaps one for later, but a system which does not functionally evolve is a dead-end and essentially a failure – call it a “panda” if you wish. The business will invent new and better ways to get the best out of the system, new use cases which can be fulfilled become apparent, and a flourish of activity follows. All of which changes the original system.

There are also non-functional changes. The form (look and feel) needs a refresh every 18 months or so, security defects need to be fixed (really, they do!), performance and capacity improvements may be needed and the whole physical infrastructure needs to be refreshed periodically. The simple act of converting a physical server to virtual (aka P2V conversion), which strives to keep the existing system as close to current as possible, detritus and all, will typically provide more compute, RAM and disk than was ever considered possible. Normally this makes the old application run so much faster than before, but occasionally that speed increase can have devastating effects on the function of time-sensitive applications. Legislative requirements, keeping compliant with the latest browsers etc. all bring more change…

Don’t get me wrong, change is a good thing normally and the last thing we want is a world devoid of change. The problem is that all this change increases the disorder (entropy) of the system. Take the simple case of a clean OS install. Day 1, the system is clean and well ordered. Examining the disk and logs shows a tidy registry and clean log and temporary directories. Day 2 brings a few patches, which add registry entries, some logs, a few downloads etc., but it’s still good enough. But by Day n you’ve a few hundred patches installed, several thousand log files and a raft of old downloads and temporary files lying around.

The constant state of flux means that IT systems are essentially subject to the same tendency for disorder to increase as stated in the second law of thermodynamics. Change unfortunately brings disorder and complexity. Disorder and complexity makes things harder to maintain and manage, increasing fragility and instability. Increased management effort results in increased costs.

Well, that’s enough for today… next week, what we can do about it.