Ah yes, Microsoft minutes… oh no, hang on…!!!
19,498 yrs later…
Now that’s what I call reuse (of the opportunistic kind)!…
… so I’ve just been told the most important thing about a computer is the number of “pouce” it has (thumbs to you and me). Hmm… well, ok. Mine generally has two at most and I get angry if it’s got any more. My father-in-law has 15.6 of them. Wow! That’s one powerful machine (I guess)!?
For the record, I’m not sure where the original came from but there’s a copy on Stack Exchange (“Why shouldn’t I use ECB encryption?”), and the ECB penguin clearly has history, as noted on PyTux…
Is it really heavy-handed to give users a slightly second rate experience because they use an out of date browser?
Me thinks not really… effort spent should be proportional to the size of the user base.
Just a pity they didn’t go further and send any user of IE off to the 1999 edition and throttle their download to the 28kbps they deserve… 80% of the effort for 20% of the users.
Is it me or has there been a radical explosion in the title “Chief” recently? CEO, CFO, CIO, CTO, I can get this (kind of)… But isn’t the point that such a role is, well, the chief? Like the president being commander-in-chief?
So we dilute the office (Executive, Financial, Information, Technology…) and you’re “chief” of your office… But ok, whatever, you need a little viagra to stimulate these guys.
We then have chief architect (period) which, I admit, was in some cases a role I had some respect for. But now I’m seeing chief-architect-of-xxx (where xxx is some random project spawned the morning after a particularly heavy drinking session). You’re not the chief, you’re a muppet for believing the title has any bearing on your status. The only effect that title has is to make the CEO’s feet go cold when he realises his veil of authority is slowly eroding away, and for the minions you supposedly lead to think you’re a bit of a dick.
So I’ve decided to bypass this faux “chief” thing and skip it, going straight for… Master-of-the-Universe!
Now I’m just waiting for someone to title themselves “God of Pocket Calculators!”… and the circle is complete, as Dylan said, everybody “Gotta serve somebody” (and yes, I know it’s a Willie Nelson cover!).
“They may call you Doctor or they may call you Chief … But you’re gonna have to serve somebody”…
Memoization is a technique for caching the results of computationally expensive functions to improve performance and throughput on subsequent executions. It can be implemented in a variety of languages but is perhaps best suited to functional programming languages, where the response of a function should be consistent for a given set of input values. It’s a nice idea and has some uses, but perhaps isn’t all that common since we tend to design programs so that we only call such functions once, when needed, in any case.
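In case it helps, here’s a minimal Python sketch of the classic version (the `memoize` decorator and `slow_square` are just my illustrative names, not from any particular library):

```python
import functools

def memoize(fn):
    """Classic memoization: cache each result keyed on the arguments."""
    cache = {}
    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)  # compute once...
        return cache[args]           # ...and replay thereafter
    return wrapper

call_log = []  # track how often the real computation actually runs

@memoize
def slow_square(n):
    # stand-in for something computationally expensive
    call_log.append(n)
    return n * n
```

Call `slow_square(4)` twice and the body runs only once; the second call is a cache hit.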
I have a twist on this. Rather than remembering the response to a function with a particular set of values, remember the responses to a function and just make a guess at the response next time.
A guess could be made based on the entropy of the input and/or output values. For example, where the response is a boolean value (true or false) and you find that 99% of the time the response is “true” but it takes 5 seconds to work this out, then… to hell with it, just return “true” and don’t bother with the computation. Lazy I know.
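A rough sketch of what that guessing version might look like in Python (the 99% threshold and the minimum sample count are knobs I’ve invented for illustration, not anything standard):

```python
import collections
import functools

def guessing_memoize(fn, threshold=0.99, min_samples=100):
    """Remember the distribution of past results; once one answer
    dominates (>= threshold after min_samples calls), skip the
    computation and just return it. Wrong sometimes, by design."""
    counts = collections.Counter()
    @functools.wraps(fn)
    def wrapper(*args):
        total = sum(counts.values())
        if total >= min_samples:
            answer, seen = counts.most_common(1)[0]
            if seen / total >= threshold:
                return answer  # lazy: guess, don't compute
        result = fn(*args)
        counts[result] += 1
        return result
    return wrapper
```

After a hundred calls that all came back `True`, the wrapper stops bothering the real function at all.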
Of course, some of the time the response would be wrong, but that’s the price you pay for improved performance and throughput.
There would be some (possibly significant) cost to determining the entropy of inputs/outputs, and any function which modifies the internal state of the system (non-idempotent) should be excluded from such treatment for obvious reasons. You’d also only really want to rely on such behaviour when the system is busy and nearly overloaded already, so you need a way to quickly get through the backlog – think of it like the exit gates of a rock concert when a fire breaks out: you quickly want to ditch the “check-every-ticket” protocol in favour of a “let-everyone-out-asap” solution.
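The “open the floodgates” switch could be as dumb as a queue-depth check; something like this (the capacity and the 90% trigger are arbitrary numbers of my choosing):

```python
def should_guess(queue_depth, capacity=1000, trigger=0.9):
    """Fire-at-the-rock-concert test: once the backlog nears capacity,
    stop checking every ticket and start guessing answers instead."""
    return queue_depth / capacity >= trigger
```

Below the trigger you compute honestly; above it, the guessing kicks in.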
You could even complicate the process a little further and employ a decision tree (based on information gain for example) when trying to determine the response to a particular set of inputs.
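For the decision-tree flavour, the core calculation is information gain; a toy version over discrete input features might look like this (function names are mine, and a real tree would recurse rather than stop at one split):

```python
import collections
import math

def entropy(labels):
    """Shannon entropy of a list of outcomes, in bits."""
    counts = collections.Counter(labels)
    total = len(labels)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def best_feature(samples, labels):
    """Pick the feature index with the highest information gain:
    effectively the root split of a decision tree."""
    base = entropy(labels)
    best, best_gain = 0, -1.0
    for f in range(len(samples[0])):
        branches = collections.defaultdict(list)
        for sample, label in zip(samples, labels):
            branches[sample[f]].append(label)
        # expected entropy remaining after splitting on feature f
        remainder = sum(len(b) / len(labels) * entropy(b)
                        for b in branches.values())
        gain = base - remainder
        if gain > best_gain:
            best, best_gain = f, gain
    return best
```

Given past inputs and outputs, you’d split on the most informative feature and guess the majority outcome down each branch.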
So, you need to identify expensive idempotent functions, calculate the entropy of inputs and outputs, build associated decision trees, get feedback on the performance and load of the system, and work out at which point to abandon reason and open the floodgates – all dynamically! Piece of piss… (hmm, maybe not).
Anyway, your program would make mistakes when under load but should improve performance and throughput overall. Wtf! Like when would this ever be useful?
Ok, perhaps not my best idea to date but I like the idea of computers making mistakes by design rather than through incompetence of the developer (sorry, harsh I know, bugs happen, competent or otherwise).
Right, off to take the dog for a walk, or just step outside then come back in again if she’s feeling tired…