
Documents v Wikis

I used to spend a significant proportion of my time working with documents. Nasty 100-page beasties where 80% of the content felt like generic copy text designed principally to obscure the 20% of new material you were really interested in. Consequently I developed a severe dislike of any document more than a few pages long.

The agile manifesto came along and suggested we focus on “working software over comprehensive documentation”, which some have unfortunately taken to mean “no documentation”. Let’s just say there’s a significant grey area between the extremes of “comprehensive” and “none at all”.

Personally I’m aware that I fall more into the “comprehensive” camp than the other, though I put this down to the fact that putting things down on paper is a great way of helping me think things through. For me, documentation is a design tool.

On the other hand, wikis…! I used to see wikis as a saviour from the hours/days/weeks spent reviewing documents and trying to keep content consistent across multiple work-products. Finally, a tool where I can update in one place, link to other content and focus on the detail without the endless repetition. Something in support of the agile manifesto which endeavours to provide enough documentation without going overboard. Unfortunately, recent experience has exposed a few issues with this.

The table below compares the two.

Documents

For:
- Good formatting control.
- Easy to provide a historic record.
- Provides a point of contract sign-off.
- Easy to work offline.
- Generally accessible.

Against:
- You have a naïve hope that someone will read it.
- Promotes bloat and duplication.
- Promotes the MS monopoly.
- Poorly maintained.
- Rapidly goes stale.
- Promotes the illusion of quality.

Wikis

For:
- Highly used critical content is maintained.
- Good sharing within the team.
- Hyperlinking – easy to reference other content.
- Promotes short/succinct content.
- Good historic record (one is kept automatically).

Against:
- Poor formatting control.
- Requires online connectivity.
- Low-usage detail rapidly becomes out of date.
- Poor sharing outside of the team.
- Hyperlinking – nothing is ever in one place.
- Poor historic record (too many fine-grain changes make finding a version difficult).


Hyperlinks, like the connections in relational databases, are really cool and to my mind often more important than the content itself. That wikis make such a poor job of maintaining these links is, in my view, a significant flaw in their nature. The web benefits from such loose coupling through competition between content providers (i.e. websites), but wikis – as often maintained by internal departments – don’t have the interested user base or competitive stimulus to make this model work consistently well. Documents just side-step the issue by duplicating content.
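Internal link rot is at least easy to check for mechanically. A minimal sketch, assuming a hypothetical wiki whose pages are plain HTML and whose internal links use a made-up "/wiki/<PageName>" scheme (real wiki engines expose proper APIs and reports for this):

```python
# Minimal internal link-rot checker. Assumes a toy wiki represented as a
# dict of page title -> HTML body, with internal links of the (hypothetical)
# form /wiki/<PageName>. Illustrative only - not a real wiki API.
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag encountered."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def find_broken_links(pages):
    """Return (page, href) pairs for internal links whose target page
    no longer exists in the wiki."""
    known = set(pages)
    broken = []
    for title, html in pages.items():
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            # Only internal links can be checked against the page set.
            if href.startswith("/wiki/") and href[len("/wiki/"):] not in known:
                broken.append((title, href))
    return broken
```

For example, a page linking to a deleted "/wiki/Old" page would be reported as a broken pair, while links between surviving pages pass silently. Run regularly, even something this crude keeps the link graph honest.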

So wikis should work well for maintaining operational documentation, where the user base has a vested interest in keeping the content current – if only so they don’t get called out at 3am on a Sunday morning because no-one knows the location of a database or which script to run to restart the system.

Documents, on the other hand, should work better as design products and contractual documents which all parties need to agree on at a fixed point and which aren’t going to be turned to very often. Ten commandments stuff.

The problem is the bit connecting the two, and the passage of time and decay turning truth into lies. The matrix connecting requirements and solution, architecture and operations, theory and practice, design and build, dogma and pragmatism. Where the “why?” is lost and forgotten, myth becomes currency, and the rain dance needed each time the database fails over originates. Not so much the whole truth and nothing but the truth, as truth in parts, assumptions throughout and lies overall.

The solution may be to spend more effort maintaining the repositories of knowledge – whether it’s documents, wikis or stone tablets sent from above. It’s just a shame that effort costs so much and is seen as being worth so little by most whilst providing the dogmatic few with the zeal to pursue a pointless agenda.


