The Fog of “Containerized Algorithmic Services”?

The latest IBM jargon here made me chuckle. It disguises a very old story.

Let’s unpick it a little.
Fog is one of the buzzwords for low-hanging / near-at-hand / in-your-face “Cloud”
Cloud being the metaphorical buzzword for all things distributed and linked via the internet.
IoT is simply a reminder that those connected things can (and will be) anything and everything.

The internet is simply the ubiquitous communications network technology connecting pretty much anything and everything these days. Within that, as specific technologies advance, the possibilities for physical distribution of those things change and grow, and the optimum arrangement – in terms of traffic, volume, speed, reliability, security and so on for different kinds of use – continually changes too. That’s the technology, the IT (or ICT).

But, communication is much older than the internet.

What matters is meaning and know-how. As Einstein is famously credited with saying, “The communication problem is the illusion that it has happened.” – true on so many levels here – and “Communication is like a cat: you pull its tail at one end, it miaows at the other.”

Transferring the bits of information is one thing, but what it means, what’s meant to happen on arrival is a whole ‘nother thing. Without the latter – semantics – it’s Einstein’s illusion to call it communication.

Sure, most people believe Sir TBL gave us the Semantic Web – thereby giving our generation the sense that use of the internet is primarily about connecting information with meaning. In fact the term Semantic Web was coined long before internet technology – by Foucault in 1966 – and, if not in so many words, the idea has existed as long as philosophy and epistemology.

The population of the world being very large, the number of ways different people and organisations are working on that is enormous. Old ideas are repackaged – in both knowledge and ignorance – with new metaphors and names, and that’s fine with billions clamouring for attention. That’s marketing.

The problem is if the re-packaging is too fast – an arms race of new jargon – the communication is all Einstein’s illusion. That’s a problem for (at least) two reasons.

  • One is that rolling the whole topic under a single new set of buzzwords also hides the fact that the problem still has distinct technology and content aspects – functional and semantic – to be addressed. Sure, they may all be well understood in their distinct parts, but the packaging doesn’t change the fact that these distinct “workfronts” need to be addressed using different knowledge and resources in any particular application or solution. One size fits all may be true at some conceptual level, but different solutions do have different parts.
  • Two is that the value of parts of the solution already perfectly well understood by previous work – using previous metaphors and terminology – is easy to overlook and forget, and will need to be reinvented again later anyway. The baby is thrown out with the bathwater.

I was prompted to write by David Hodgson’s comment on the original thread. The phrase that tickled me particularly in the piece itself was the one quoted in my title here: “Containerized Algorithmic Services”. Frankly, for reasons Dave states, I’ve not tried to decipher the jargon of the whole article in detail, but this one concept below jumps out as one that has been well worked already, but simply trampled under the endless stream of new jargon as each new technology cycle evolves.

  • Containerised Algorithmic Services at smart endpoints – this is the original vision of our Shorthand or “Signature” Templates (ask Hans, Magne or Onno). Having characterised your “packet” of data semantically – let’s face it, packets have been fundamental to the internet and mark-up languages since day one – and having expressed it algorithmically in its signature, any number of behaviours, services and functions can be driven by such algorithms, anywhere in your extended network.
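To make the idea concrete, here is a minimal, purely illustrative sketch of that pattern: a data packet carries a semantic “signature”, and any endpoint that receives it looks the signature up to decide which behaviour to run. All names here (the `HANDLERS` registry, the `temperature-reading` signature) are my own invention for illustration, not anything from the original Shorthand/Signature Templates work.

```python
import json

# Registry mapping signature identifiers to behaviours (services).
# Any endpoint holding this registry can act on a matching packet.
HANDLERS = {}

def service(signature):
    """Register a behaviour for packets carrying this signature."""
    def register(fn):
        HANDLERS[signature] = fn
        return fn
    return register

@service("temperature-reading")
def handle_temperature(payload):
    # The behaviour lives at the endpoint; the packet only names it.
    return f"{payload['celsius']} degC from {payload['sensor']}"

def dispatch(packet_bytes):
    """An endpoint decodes a packet and lets its signature pick the service."""
    packet = json.loads(packet_bytes)
    handler = HANDLERS[packet["signature"]]
    return handler(packet["payload"])

# A semantically characterised packet, sent from anywhere in the network.
msg = json.dumps({
    "signature": "temperature-reading",
    "payload": {"sensor": "greenhouse-3", "celsius": 21.5},
})
print(dispatch(msg))  # prints: 21.5 degC from greenhouse-3
```

The point of the sketch is only the separation of concerns: the packet says *what it means*, and each endpoint decides *what happens* when such a packet arrives.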

I’ve managed to resist writing on the irony of “Fog” before. I’d guess whoever first coined it knew what they were doing to cloud reality 😉

  1. #1 by Victor Agroskin on March 18, 2016 - 21:20

    You still need to store the desired behaviour in some form beside your semantically marked data (or somewhere else, waiting for it). Signatures form an indispensable part of the service description, but the algorithmic content was underspecified in your original work and still is. Some form of rule language added by David’s recent work cannot help much – all we can rely on there is a logical inference engine, not a Turing-complete language.

  2. #2 by admin on March 24, 2016 - 08:46

    Sure, the algorithmic content was “underspecified”, but the point was having the hooks and structure there (in semantic form) so that those more concerned with programming behaviour could create those components. David’s stuff – the mainstream sem-web angle – assumes that business behaviour is somehow rule-based, driven directly by your model. It’s not. The parts that can be “inferred” can be programmed in packets too. I.e. I don’t care where the business behaviour comes from – some pattern recognition / inference engine, or a human requirement – it can then be captured in a packet associated with the information components and each particular business application. It’s the componentisation I’m emphasising. Does that make sense?
