Missioning your visioning

Just a repost; hardly anything but the footnotes had to change from that post of over two years ago:

I was triggered recently[1] by some very common mix-up, and suddenly saw a spark of insight. Since the seeing concerned me, it was at a distance of course (not), so I’ll share it with you to see whether you recognise the somewhat local phenomenon and its ramifications and cures.

This all being about the top-level structure for any organisation, i.e., vision, mission, strategy.

So often mixed up qua priorities and order.
Where a great many do think one starts with the mission, endlessly wrongly determining the mission first, then … ‘oh, vision, how can we define that?’, to cover the actual second step, and strategy: ‘oh, we will not be able to define that at all.’
Because of the mix-up, you can’t. Because vision comes first. First define what you think the world will look like in some somewhat more distant future. ONLY THEN [skipping the boldface] does one define whether and where one’s (organisation’s) place is thought to be within that; mission. Only then does one define how to get from the paltry Today’s position to where the bright future is, and which path to follow; strategy.
When you begin with the mission, you utterly falsely assume there’s a place for you in the first place – which, when you think that way, there ever more certainly is not.
When you start with the vision (which can be grand; being without you in the picture will help to that end…), you would also need to think about what your raison d’être in that world view would be. You have no right to existence [note that we’re discussing organisational rights, not natural persons’ rights, for which the opposite does hold; ed.] other than making the future better – better than any of your competitors can. If there are competitors that can, in principle given their present capabilities, in fact do better than you, you’ll have to find something else to do, as the world, by Ricardo’s comparative-advantage reasoning [one of the few economists’ reasonings of any value at all, when without the ‘ceteris paribus’ totalitarian destructive lies; ed.], is better off when you leave it to others – the world being better off being your purpose, or you have no place on this planet. Anything related to profitability has no, zero, rien du tout, nul, nada, place in a mission statement, since it’s a derivative requirement (sic) only, towards some specific but not therefore important stakeholders known as shareholders or investors who, if your mission is true and virtuous, should be utterly grateful for the opportunity to be allowed to invest in your strategy and would even have to pay rent, not receive it. For organisations, it is only a requirement in as far as reserves are built for hard times; and even there, one could insert some evolutionary theory: sometimes extremely improving mutations are wiped out by some unfortunate accident (act of nature) before becoming a species-wide improvement – the same for organisations that are set to become most beneficial to the world, already on the road towards fulfilling their mission, but fail halfway due to adverse but insufficiently bendable market conditions. Bad luck. Move over. Machiavelli’s Fortuna again. Read his original to see how sobering he meant that, and that you can be (Aristotelian-)virtuous all the way through.

So, first vision, then mission. And only then, strategy of course. Being the course. You need to take; roughly. Bending and shaping as you go along; sailing to an up-wind buoy. Setting the boundary conditions – including the boundaries of means you’ll allow, as not all means are allowed for just any ends, no end may be enough for some means, and most ends are not worth pursuing in the first place. You get the drift; if the means are not ethical, virtuous, the ends will never be. Excepting only a handful of situations, like war/genocide, large-scale natural disaster, et al., where one may have to shoot the bad guys to prevent them shooting the good guys.[2]

But hey, this was just the spark. About the wrong order and its cause(s). The rest … good for PowerPoint presentations on the subject, I’d say.
[Edited to add: Next try to understand the utter ridiculousness of the Dutch ‘beleid’ as translation of ‘policy’. Meaning (in both languages, creepingly) not what it has meant over the past decades, centuries, but more like ‘petty micromanagement ruleset’. No more general picket post placement in the distance but rails, shackles…]
Leaving you with:

[The lands you defend, into the fuzziness; (from) Salzburg Castle]

[1] Yes, in a conversation to establish whether I’d want to work somewhere, and they’d have me. Both ways, the result was Unsure But Let’s Keep On Talking. They had a Mission on their glass door, and a Vision, and Values – you understand that Mission and Vision were a jumble of Calimero-aspirational buzz, and Values were a big fat Mehhh. If you have to post them on your door in the hall to remind any employee coming to work day in, day out, you’re .. well, not quite living them, are you ..?
[2] Tell-tale, this [1] club had Values there, not Strategy. The latter didn’t even surface in a second conversation at all. As if one bumped into a fluffy ceiling when trying to raise the level, before being pulled back to mundane hire-warm-body work descriptions. Some sparks of wanting to move forward, and some slight claim of track record there, but hardly any self-volunteered methodology hints or so…

Exit 2018 1163

A sad day for all you aficionados of this blog. After some five years and about 1163 posts (you’ll see…; own, mostly with own pics), this is the last of the (work)daily updates. Yes, I’ve managed. But I will turn to more serious, somewhat-more-long-form content, so will stop the drivel. I will not post daily, but when I do … And I’ll intersperse with some margin-notes posts. Per 1/1 these will have no picture; the long-form ones will – just check the link-post’lets and you’ll see. In line with the season: enjoy less frequent but more professional, beautiful fireworks.
Or be safe with your own fireworks. Else, stand candidate for the Darwin Awards, which is also OK with me especially if you’ve not appreciated my blog; excepting the few I care for ;-|

Now then …
[Some room available. Live and die to be worth it, or take a hike; Arlington]

After 2018’s hypes, this

Already you thought you had enough on your plate for 2018, qua predictions, even when most will play out differently than stated? And though these ones are [as in: when you verify/falsify them in the near future, they will have become ‘are’] actually correct…
these will also play a role in 2018.

Yes, yes, in a much more fundamental way, and maybe in the mainstream media only per ill-understood sensational pastiche, but still it will certainly [same] augment the fuzz around quantum computing. That will, in the end, when made operational, not be much of a shocker anymore. Too much dilution in the latter to still make good on its supercalifragilisticexpialidocious claims. Too bad / good, depending on which side of the quantum-crypto-crackability wars you are on – the latter not even mattering, since this and this. And this in particular. What will the above mean in this respect?

[Edited to add: Oh and this just in. Relevant, on a nearer-future scale]

Leaving it there for you, to study and be prepared… plus:
[Fattened over the holiday season, you are ..? Shardless London it was, ‘is’ish]

Predictions 2018+

Hold me to account for the following.
For these are the predictions that for a change, will pan out:

  1. Bitcoin disasters, as in price crashes and partial recoveries. With a plethora of other coins rising in prominence, and price. The room for diversification will abound. And I will devise that Coin Maturity Index you’ve all been waiting for.
    But with blockchain successes in many places. E.g., one will see some e-voting based on it spring up. Not that such solutions will be the thing of the year (yet), but still, in all sorts of unexpected places, ‘chain solutions will prove helpful improvements over whatever there was.
    Maybe DACs but I doubt it.
  2. AI of course. With a seriously increasing rate (sic) of successful point solutions. Bots everywhere, also as consumer bots responding to (and biasing…) sell-side bots. Ever more blue-on-blue… But also many new applications of, e.g., image processing plus autonomous-something plus ‘intelligent’ responses. New car software that starts to behave with signs of something resembling accurate and apt reactions.
    And, like yesterday’s post, a lot of debate and settling of arguments and contentions, about the philosophical, and ethical, aspects of ‘intelligence’.
    Important also will be the growth of ‘tweaked AI’. Like, neural nets having learned, then analysed and pruned. Possibly turned into ‘expert systems’, with tons of Fuzzy Logic in between. Now that will make ‘AI’ systems much, much more useful and easily deployable in the coming year(s), decades, and it also is the avenue for bias correction and prevention (in that order).
    But first, there will be this – when will it show genius, even if recognised after decades?
  3. Augmented Reality. Many applications will surface, and ‘seeing’ someone using it / being helped by it will become less than unusual. This typically is one that comes forward out of the trough of disillusionment and may blossom (short of ‘explode’, I need to add).
  4. IoT disasters. Of course; not so hard to predict. And then I mean really massive ones, blacking out a full major EU country or elsewhere. Also, the budding of serious, wide-reaching security standards in this field.
  5. The surfacing of new ways to compose / manage infrastructure, the latter from the hardware layer all the way up to high in the stack. From containers becoming mainstream (and people now learning about them, beyond the surreptitious being-around of today!), to Low Code systems (check out these for a good idea how far things can go with that!) et al.
    Plus, REST APIs will be in this mix, very clearly. Don’t know how, but do know-in-advance that they will.
  6. Privacy will have become so mundane that it’s not interesting anymore, qua innovation. Yes of course, legal battles will fly all around, with many hits and misses. But next year, I will also release my perennial Paper on privacy measurements, metrics, indices, that will help the world establish better rules and solutions. Just you wait and see. Read. And study.
  7. Oh, and a new wifi protocol. A secure one, that holds out for a couple of years to come. Please.
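The ‘tweaked AI’ idea in point 2 – learn first, then analyse and prune – can be sketched minimally. Magnitude pruning is one common approach; the weights below are hypothetical, purely for illustration:

```python
# Minimal sketch of post-training pruning (one common 'tweak'):
# zero out weights whose magnitude falls below a threshold, keeping
# the network's behaviour roughly intact while making it sparser
# and easier to inspect afterwards.

weights = [0.91, -0.02, 0.47, 0.003, -0.88, 0.01]  # hypothetical learned weights

def prune(ws, threshold=0.05):
    """Zero out weights smaller in magnitude than the threshold."""
    return [w if abs(w) >= threshold else 0.0 for w in ws]

pruned = prune(weights)
kept = sum(1 for w in pruned if w != 0.0)
print(pruned)  # only the three large-magnitude weights survive
print(kept)    # -> 3
```

In a real pipeline one would re-check accuracy after pruning and possibly fine-tune; the point here is only the learn-then-analyse-then-prune order the text describes.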

Now then:

[Where Bermuda-clad waitresses bring the bubbles with strawberries; Cyprus you gathered]

Eternal Life

Remember Castronova’s Synthetic Worlds? You should. If only because this is, still, in The Atlantic [edited to add: with an intelligent comment/reply here] – because that is still around, it allows for comparison of origins, and of the utopian futures as described in the book, with the current state of the world. Where we have both the ‘hardly any change noticeable in current-state affairs when compared with the moonshot promises of yesterday’ and the ‘look what already has changed; the whole Mobile world change wasn’t even in those rosy pictures’ and ‘hey, don’t critique yet; we may only be halfway through any major tectonic humanity shifts’.
Where the latter of course ties in with the revitalised [well, there‘s a question mark attached, like: what do you mean by that; is it ‘life’ and would we even want that?] version in ‘singularity’, the extreme nirvana of uploaded, eternal minds. As if the concept of ‘mind’ or ‘intelligence’ would make any sense in that scenario. And also, since this (pronounced ‘Glick’, one guru told me lately; the importance of continued education), where the distinction between ‘eternal’ and ‘forever in time’ clearly plays up, as in this (same), against …

In circles, or Minkowski‘s / Penrose‘s doodles [wormholing their value], the issue comes back to flatlander (i.e., screen) reality, if there is such a thing…
Oh well; leaving you with:
[Warping perspectives, in ‘meaning’, too; Salzburg]

Aïe! Missing the Point

Yet again, some seem to not understand what they’re talking about when it comes to transparency in AI…
Like, here. Worse, this person seems to be a rapporteur to the European Economic and Social Committee advising the European Commission. If that sounds vague – yes, it does, even for Europeans.

For the ‘worse’ part: the umpteenth Error, to consider that the secrecy of algorithms is the only thing that would need to change to get transparency about the functioning of a complete system.
1. The algorithm is just a part of the system, and the behaviour of the system is not determined in anything close to any majority part by the algorithm – the data fed to it, and the intransparent patterns learned by it, are. The transparency needs to be about the algorithm, but much more about the eventual parameters as learned throughout the training time and the training/tuning after that. [Update before press release: there seems to be an erroneous assumption, by some way too deep into EC affairs, that the parameters are part of the ‘algorithm’, which is Newspeak at its worst, counterproductive certainly here, and hence dangerous.]
2. The algorithm can just be printed out … if anyone would need that. One can just as easily run an AI code analyser (how good would that be? They exist already, exponentially increasing their quality, savviness) over the source or decompiled code.
3. The eventual parameters … not so much; they’re just there in a live system; unsure how well they are written out into any file or so (they should be, for backup purposes – when, not if, AI systems get legal personhood eventually (excepting the latter-day hoaxes re that), will a power switch-off be the same as attempted murder, and/or what would the status of a backup AI ‘person’ be ..?).
4. Bias, etc. will be in the parameters. The algorithms, mostly-almost-exclusively, will be blank slates. No-one yet knows how to tackle that sufficiently robustly, since even if the system is programmed (algorithm..!) to cough up parameters, the cleverer systems will know (?) how to produce innocent-looking parameters instead of the possibly culpable actual ones. Leads into this: trickery by AI systems has been demonstrated to develop (unintentionally) in actual tests.
5. How to trick AI pattern recognition systems … the newest of class breaks have just been demonstrated in practice – their theoretical viability had been proven long before – e.g., in this here piece as pointed out only yesterday [qua release; scheduled last week ;-]. Class break = systemically unrepairable … [ ? | ! ].
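Point 1 can be made concrete with a toy sketch (all names and numbers hypothetical): the ‘algorithm’ below is fully printable and identical in both cases; only the learned parameters differ, and they alone flip the system’s decision.

```python
# Toy illustration: one fully transparent 'algorithm' (a single
# linear scoring rule), two different learned parameter sets.
# The code is identical; the behaviour is decided by the parameters.

def decide(features, weights, threshold):
    """The entire 'algorithm': a weighted sum checked against a threshold."""
    score = sum(f * w for f, w in zip(features, weights))
    return "approve" if score >= threshold else "reject"

applicant = [1.0, 0.5]            # hypothetical feature vector

params_a = ([0.9, 0.1], 0.5)      # parameters from one training run
params_b = ([-0.9, 0.1], 0.5)     # parameters from another run; same algorithm

print(decide(applicant, *params_a))  # approve
print(decide(applicant, *params_b))  # reject
```

Publishing `decide` in full tells you nothing about which of the two outcomes an applicant gets; for that, the learned parameters (and the training data behind them) are what would need scrutiny.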

Let’s hope the EC could get irrelevant just that little less quickly, by providing it with sound advice. Not the bumbling little boys’ and girls’ type of happy-preppy too-dim-wits. [Pejorative, yes, but not degrading: should, maybe not could, have known better ..!]

Oh, and:
[Already at a slight distance, it gets hazy what goes on there; from the Cathédrale]

Loss of memory

Recently, was reminded (huh) that our memories are … maybe still better than we think, compared to the systems of record that we keep outside of our heads. Maybe not in ‘integrity’ of them, but in ‘availability’ terms. Oh, there, too, some unclarity whether availability regards the quick recall, the notice-shortness of ‘at short notice’, or the long-run thing, where the recall is ‘eventually’ – under multivariate optimisation of ‘integrity’ again. How ‘accurate’ is your memory? Have ‘we’ in information management / infosec done enough definition-savvy work to drop the inaccurately (huh) and multi-interpreted ‘integrity’ in favour of ‘accuracy’, which is a thing we actually can achieve with technical means, whereas the other intention oft given, of data being exactly what was intended at the outset (compare ‘correct and complete’) … or do I need to finish this line that has run on for far too long now …?
Or have I used waaay too many ””s ..?

Anyway, Part II of the above is the realisation that integrity is a personal thing, towards one’s web of allegiances as per this, and in infosec we really need to switch to accuracy; and Part I is this XKCD:

Awaiting Asibot

All Are Ardently Awaiting – stop, semantics go over syntactic alliteration – the release of Asibot, as here.
Because we all need such a system. The inverse of Dragon NaturallySpeaking (into Nuance; too little heard of as well!) combined with a ghost writer, as it were / is / will be. When prepped with one’s own set of texts, it should be able to generate useful groundwork for all those books you have been wanting to write for a long time but couldn’t get started on.
Now, would such a system be able to extract hidden layers, stego-type themes, that are in your texts that you aren’t even aware of ..? What kind of services would be interested most? Oh, that one’s answered before I finished typing; the three-letter-abbreviation kind, of course.

Still, would very much want to meddle with the system… Plus:

[If applicable to music, sunny Spring days in Salzburg, too, for …]

Trust ⊻ Verify

You get that. Since Verify → ¬Trust. When you verify, you engender the loss of trust. And since Trust is a two-way street (either both sides trust each other, or one will lose initial trust and both will end up in distrust), verification leads to distrust all around – linked to individualism and experience [we’re on the slope to less-than-formal-logic semantics here], this will result in fear all around. And Michael Porter’s two books, not to mention Ulrich Beck’s important one. So, if you’d still come across any type that hoots ‘Trust, but verify’, you know you’ve met one.
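The logic above can be tabulated in a minimal Boolean sketch (toy logic only; no claim that human trust reduces to truth tables):

```python
# Boolean sketch of the section's claim: Trust XOR Verify, plus the
# implication Verify -> not-Trust, checked over all truth assignments.

def xor(p, q):
    """Exclusive or: exactly one of p, q holds."""
    return p != q

def implies(p, q):
    """Material implication p -> q."""
    return (not p) or q

# Verify -> not-Trust rules out the (trust, verify) = (True, True) row;
# the XOR reading additionally rules out (False, False): you either
# trust or you verify, never both, never neither.
for trust in (False, True):
    for verify in (False, True):
        print(f"trust={trust!s:5} verify={verify!s:5} "
              f"xor={xor(trust, verify)!s:5} "
              f"verify->not-trust holds={implies(verify, not trust)}")
```

Note that the implication alone still permits trusting-without-verifying and neither-nor; it is the XOR reading that forces exactly one of the two, which is the section’s stronger claim.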

Since the above is so dense I’ll lighten up a bit with:
Part of the $10 million I spent on gambling, part on booze and part on women. The rest I spent foolishly. (George Raft)

Which is exactly the sort of response we need against the totalitarian bureaucracy (i.e., the complete dehumanisation of totalitarian control – which approaches a pleonasm) that the world is sliding into. Where previously humanity had escapes, like emigrating to lands far, far away, but that option is no more. Hopefully, ASI will come in time to not be co-opted by the One Superpower; but then, two avenues remain: a. ASI itself is worse, and undoes humanity for its stubborn stupidity and malevolence (the latter two being a fact); b. ASI will bring the eternal Elysium.
Why would it b. ..?
And if it doesn’t arrive in time, a. will prevail, since the inner circle will shrink asymptotically, which is unsustainable biologically.

Anyway, on this bleak note I’ll leave you with:

[Escape from the bureaucrats; you-know-where]

Fog(gy) definitions, mist(y) standards

If you thought that containers were only something to ship wine in, by the pallet, you a. would be right, b. would maybe have overslept on the new concept, c. would not mind me introducing the next thing, being fog computing. I’m not making this up: it is a part, or extension, of low-hanging cloud computing.
You think I’m kidding, right? Or that I should have called it mist computing, which is a thing already, but only a somewhat different thing… You’re still with me?

Then it’s time to read up. And weep. Over this here piece that sets the standard, quite literally.

There. You see ..? Indeed low-hanging, as in the stack … That wasn’t so hard. But implementation will be, if required to be secure. Have fun, will TLS. Or so.

OK, this post was, as it stated, just an introduction to the IoThing – I was serious, though, about the Go Study part. Plus:
[Cloudy top cover, smiley backside of a place of worship; Ronchamp, FR]