
Cash-haemorrhaging public cloud

Interesting point of view on how cloud service providers are haemorrhaging cash to sustain these models in the hope they'll win big in the long run.

As data storage and compute costs fall they may well be able to sustain existing pricing, though I suspect they'll ultimately need to ratchet things up. Cost comparisons are also hard to get right given the complexity of suppliers' pricing, and the difference in architectural patterns used in the cloud versus on premise complicates things further (something for another day).
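To illustrate why those comparisons are so slippery, here's a minimal back-of-envelope sketch comparing a cloud bill with on-premise costs over three years. Every figure in it (instance rate, server cost, power, ops overhead) is a made-up assumption purely for illustration, not real supplier pricing.

```python
# Back-of-envelope 3-year cost comparison: cloud vs on-premise.
# Every number below is an illustrative assumption, not real pricing.

HOURS_PER_YEAR = 24 * 365
YEARS = 3

# --- Cloud (assumed figures) ---
instance_hourly_rate = 0.35        # $/hour per VM (assumption)
instances = 10                     # VMs running continuously
annual_price_change = 0.0          # could go down as providers cut prices, or up

cloud_total = 0.0
rate = instance_hourly_rate
for year in range(YEARS):
    cloud_total += rate * HOURS_PER_YEAR * instances
    rate *= (1 + annual_price_change)

# --- On-premise (assumed figures) ---
server_capex = 6000.0              # $ per server, amortised over the 3 years
servers = 10
annual_power_and_cooling = 800.0   # $ per server per year
annual_ops_overhead = 5000.0       # share of sysadmin time across the estate

on_prem_total = (server_capex * servers
                 + annual_power_and_cooling * servers * YEARS
                 + annual_ops_overhead * YEARS)

print(f"Cloud over {YEARS} years:      ${cloud_total:,.0f}")
print(f"On-premise over {YEARS} years: ${on_prem_total:,.0f}")
```

Even a toy model like this shows how sensitive the answer is to utilisation and the easily forgotten people costs; a real comparison also needs reserved-instance discounts, storage, network egress and the architectural differences mentioned above.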

What I do know is that there are those in the industry who cannot afford to be left behind in the race to the cloud; notably IBM, Microsoft and Google. They will likely be pumping all they can into the cloud to establish their position in the market - and maintain their standing generally...

Comments

  1. I also must wonder how many systems transition from "cloud" to dedicated/managed hosting providers once companies do the maths on what it'll cost them to run their apps in the cloud over a period of 3 years or more...

    Next thought: do cloud providers need lots of people to come in and out of their hosting environment to make money, or do they bank on long-term retention to pay the bills... hmm...

  2. In theory the cost savings can be substantial. In practice the cost models are complicated and may be overlooked by architects and developers in the rush to migrate to the cloud. When the bill lands the CFO may have something unpleasant to say... We need to do our own cost models and projections but perhaps we should also have an exit strategy in mind to move off a cloud provider if needs must. Unfortunately industry standards aren't exactly prominent in the nebulous world of cloud so finding an alternative once you've bought into the various capabilities of one provider isn't going to be easy.

    Personally I think cloud is pretty cool but I suspect you're right and there'll be a backlash at some point, with some major outages and/or orgs moving back to their own tin as the charges mount. By then the world will have moved on though, and hopefully the standards and options will have coalesced somewhat... hmm indeed...


