Hack a Mousetrap

My 10-year-old son and I have been playing with Arduino recently… specifically to build a mousetrap alarm. And clearly we’re not the only ones thinking about this… Some guys over at Microsoft have tried to hack a mousetrap using every piece of technology they can get their hands on (I’m sure I saw the kitchen sink in there somewhere). Nice 🙂

JBoss OpenShift Queue Deployment

Deploying to OpenShift is, in theory, as simple as a git push. But if you’ve made any changes to the app-server environment you’ll find they get blitzed on deployment (this is actually a good thing, since it forces you into the habit of automating deployments).

To deal with this, OpenShift gives you the ability to define a set of action hooks that get called during the various stages of the application’s lifecycle.

These are just shell scripts and are pretty easy to define – just create a file in the .openshift/action_hooks directory in the project root matching the name of the hook you want.

In the case of queue deployment we need a post_start script which uses the JBoss CLI to create the queues.

The script .openshift/action_hooks/post_start looks like this:

#!/bin/bash
echo "Starting JBOSS Queue Configuration..."
${OPENSHIFT_JBOSSAS_DIR}/bin/tools/jboss-cli.sh --connect controller=${OPENSHIFT_JBOSSAS_IP}:${OPENSHIFT_JBOSSAS_MANAGEMENT_NATIVE_PORT} --file=${OPENSHIFT_REPO_DIR}/cli/create-queues.cli
echo "JBOSS configuration complete!"

Make sure the script is executable via chmod +x .openshift/action_hooks/post_start.

The use of the various environment variables ensures the script will work regardless of the configuration the image fires up with, and the “echo” commands ensure some sort of output is dumped to stdout during the push for confirmation.

This script references a CLI script (cli/create-queues.cli) that looks like this:

jms-queue add --queue-address=queueA --entries=queue/QueueA
jms-queue add --queue-address=queueB --entries=queue/QueueB
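If you want to double-check that the queues actually landed, you can query the corresponding management resources from the same CLI. This is a sketch only: it assumes the default HornetQ server name used by the JBoss AS7 cartridge, and the exact resource path can differ between JBoss versions.

```
/subsystem=messaging/hornetq-server=default/jms-queue=queueA:read-resource
/subsystem=messaging/hornetq-server=default/jms-queue=queueB:read-resource
```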

And hey presto! On deployment you’ll see a couple of messages in the push output:

remote: Starting JBOSS Queue Configuration...
remote: JBOSS configuration complete!

And if you tail the logs (rhc tail -a <app-name>) you should see confirmation of the deployment as below:

2016/02/01 16:57:57,641 INFO [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-8) trying to deploy queue jms.queue.queueA
2016/02/01 16:57:57,644 INFO [org.jboss.as.messaging] (MSC service thread 1-8) JBAS011601: Bound messaging object to jndi name java:/queue/QueueA
2016/02/01 16:57:57,738 INFO [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-8) trying to deploy queue jms.queue.queueB
2016/02/01 16:57:57,740 INFO [org.jboss.as.messaging] (MSC service thread 1-8) JBAS011601: Bound messaging object to jndi name java:/queue/QueueB

Finally, the JBoss console will show your queues in all their glory…

JBoss Queues


IBM Bluemix Virtual Machines

I’m experimenting, or trying to, with IBM Bluemix virtual machines. It’s clearly beta, and half the time I get a UI in Italian (!?), but the documentation is woeful.

Simple VM created… What’s the connection string?

Launching Horizon (oddly a separate site, but OK… beta…) says it’s:

ssh -i cloud.key <username>@<instance_ip>

Ok, I know the key, I know the IP, but what’s the username?

(the username might be different depending on the image you launched):

Yeah, ok, but what is it? It’s a standard IBM image (CentOS 7 in this case) so…

Nada (Spanish, so still not sure what’s with the Italian)! No documentation, no advice, a broken link in the VM docs… Stack Overflow has no questions on Bluemix usernames… But thankfully developerWorks Answers does – though not specifically my question; rather, others with sudo issues! Not a great way to find out… Perhaps they hand out that bit of info on the training course… Anyway, two days of whatever free period I get lost, and I can log in. Now I need to remember what I wanted to try out…

For the record it’s:


Session Abolition

I’ve been going through my bookcase, on orders from a higher being, to weed out old, redundant books and make way for… well, I’m not entirely sure what, but anyway, it’s not been very successful.

I came across an old copy of Release It! by Michael T. Nygard and started flicking through, chuckling occasionally as memories (good and bad) surfaced. It’s an excellent book, but it made me stop and think when I came across a note reading:

Serve small cookies
Use cookies for identifiers, not entire objects. Keep session data on the server, where it can't be altered by a malicious client.

There’s nothing fundamentally wrong with this, other than that it chimes with a problem I’m currently facing, and I don’t like any of the usual solutions.

Sessions either reside in some sort of stateful pool (persistent database, session-management server, replicated memory etc.) or, more commonly, exist stand-alone within each node of a cluster. In either case load-balancing is needed to route requests to the home node where the session exists (delays in replication mean you can’t go to just any node, even when a stateful pool is used). Such load-balancing is performed by a network load-balancer, reverse proxy, web server (mod_proxy, the WebSphere plugin etc.) or application server, and can work using numerous different algorithms: IP-based routing, round-robin, least-connections etc.
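As a concrete illustration of the simplest of those algorithms, IP-based sticky routing can be done in a reverse proxy such as nginx in a handful of lines. A sketch only – the upstream name and backend addresses here are made up:

```nginx
upstream app_nodes {
    ip_hash;                # same client IP -> same backend, so the session's home node is always hit
    server 10.0.0.1:8080;   # hypothetical cluster nodes
    server 10.0.0.2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_nodes;
    }
}
```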

So in my solution I now need some sort of load-balancer – more components, joy! But even worse, it’s creating havoc with reliability. Each time a node fails I lose all the sessions on that server (unless I plump for a session-management server, which I need like a hole in the head). And nodes fail all the time… (think cloud, autoscaling and hundreds of nodes).

So now I’m going to kind-of break that treasured piece of advice from Michael and create larger cookies (more likely request parameters) and include in them some ever-so-slightly-sensitive details which I really shouldn’t. I should point out this isn’t as criminal as it sounds.

Firstly, the data really isn’t that sensitive. It’s essentially routing information that needs to be remembered between requests – not my credit card details.

Secondly, it’s still very small – a few bytes or so, and I’d probably not worry too much until it gets to around 2K+ (some profiling required here, I suspect).

Thirdly, there are other ways to protect the data – notably encryption and hashing. If I don’t want the client to be able to read it then I’ll encrypt it. If I don’t mind the client reading the data but want to make sure it hasn’t been tampered with, I’ll use an HMAC instead. A JSON Web Token (JWT)-like format should work well in most cases.

Now I can have no session on the back-end servers at all, but instead need to decrypt (or verify the hash of) and decode a token on each request. If a node fails I don’t care (much), as any other node can handle the same request, and my load-balancing can be as dumb as I wish.
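As a rough sketch of the HMAC variant (the key and payload below are made up for illustration, and openssl does the signing), building such a token and verifying it server-side looks something like:

```shell
#!/bin/sh
# Hypothetical signing key -- in practice this comes from a secure key store.
key='not-a-real-key'
payload='{"route":"node-7","user":"u123"}'

# base64url encode stdin (JWT-style: '+/' -> '-_', padding stripped)
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

# Build the token: base64url(payload) + "." + base64url(HMAC-SHA256(body, key))
body=$(printf '%s' "$payload" | b64url)
sig=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$key" -binary | b64url)
token="$body.$sig"
echo "$token"

# Verification, done on every request by whichever node receives it:
# recompute the HMAC over the body and compare with the appended signature.
check=$(printf '%s' "${token%.*}" | openssl dgst -sha256 -hmac "$key" -binary | b64url)
[ "$check" = "${token##*.}" ] && echo "token verified"
```

Any node holding the key can verify the token, which is exactly what makes the dumb load-balancing possible – and exactly why the key-distribution problem mentioned below appears.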

I’ve sacrificed performance for reliability – both in terms of computational effort server-side and in terms of network payload – and made some simplification to the overall topology to boot. CPU cycles are getting pretty cheap now though, and this pattern should scale both horizontally and vertically – time for some testing… The network penalty isn’t so cheap, but again should be acceptable, and if I avoid using “cookies” for the token then I can at least avoid paying that cost on every single request.

It also means that in a network of micro-services, so long as each service propagates these tokens around, the rather thorny routing problem in this sort of environment virtually disappears.

I do though now have a key management problem. Somewhere, somehow I need to store the keys securely whilst distributing them to every node in the cluster… oh and don’t mention key-rotation…

Cash-haemorrhaging public cloud

Interesting point of view on how cloud service providers are haemorrhaging cash to sustain these models in the hope they’ll win big in the long run.

As data storage and compute costs fall they may well be able to sustain existing pricing, though I suspect ultimately they’ll need to ratchet things up. Cost comparisons are also hard to get right due to the complexity of suppliers’ pricing, and I also believe the difference in architectural patterns used in the cloud versus on-premises further complicates things (something for another day).

What I do know is that there are those in the industry who cannot afford to be left behind in the race to the cloud – IBM, Microsoft and Google notably. They will likely be pumping all they can into the cloud to establish their position in the market – and to maintain their position generally…

Internet Scale Waste

Whilst reading up on internet scale computing I came across a presentation on Slideshare which contains the page below.


23 million domains for 24,000 customers = just under 1,000 domains per customer. Now that seems like a lot, but I strongly suspect it’s more like most customers having one domain and a few having many, many thousands (something akin to a Zipf distribution). Likely someone out there has many hundreds of thousands of domains… I wonder who needs so many…

As an aside, ahem, I get a lot of comments from people purporting to be from something like www.hu12gyd38hasjakdh8102e12e2djklasdagghkagqdncc.com, all of which turn out to be spammers. Hmmm… I wonder how much spam/botware/malware waste resides in the cloud…?


Chaos Monkey

I’ve had a number of discussions in the past about how we should be testing failover and recovery procedures on a regular basis – to make sure they work, and so that everyone knows what to do and you’re not caught out when it happens for real (which will be at the worst possible moment). Scheduling these tests, even in production, is (or should be) possible at some convenient(’ish) time. If you think it isn’t, then you’ve already got a resiliency problem (you’re down when some component fails) as well as a maintenance problem.

I’ve also talked (ok, muttered) about how a healthy injection of randomness can actually improve stability, resilience and flexibility. Something covered by Nassim Taleb in his book Antifragile.

Anyway, beaten to the punch again: Netflix developed a tool called Chaos Monkey (part of the Simian Army) a few years back which randomly kills elements of the infrastructure to help identify weak points. Well worth checking out the write-up on codinghorror.com.

For the record… I’m not advocating that you use Chaos Monkey in production… just that it’s a good way to test the resiliency of your environment and identify potential failure points. You should be testing procedures in production in a more structured manner.

Cloud Computing Patterns

Found a website today on Cloud Computing Patterns. It’s a bit of a teaser for the book really, since there’s not a great deal of detail on the site itself. Still, it’s a useful inventory of cloud patterns, with some nice diagrams showing how things like workload elasticity work and how the service models (IaaS, PaaS and SaaS) fit together.