Bam! Goner.

No, not fireworks.
Because I’ve joined a reputable consultancy, I’ll be posting much less frequently from now on. This is more about saying Thanks for all your attention [and not for all your not-so-much attention; yes I keep track, which at these slim-to-none figures is easy], so long and thanks for all the fish, and See You Soon.
[Edited to add: After just over six months, looks (and talk) had deceived … ‘reputable’ turns out not to be positive, sometimes. This time, clearly. Over to Secura per 1 Sept]

[To construct more elaborate ideas and stories; Calatrava’s Science museum, Valencia]

Meddle Managers

OK, an integration of very old material with some new stuff:
The things from the past being about how, in frequent swirls of comments and rants against ‘middle’ managers, there were two responses from my side: 1. Yes, they often (I) are of the worst kind, and that qualification has no Dutch connotation involved here; 2. No, they’re valuable and have a serious, very serious role to play, if they play it right, as often (II) happens. This sort of discussion you can read back through my posts here.

Sure, there was, and is, much to improve qua meddling management. Take for example the span of control … When well-balanced, there’s insufficient time/room to micro-manage (or the one doing it will burn out; Darwinian selection) and there’s no blow-out due to overstretch (him again).
Or take the many strands on Leadership on the one hand and Supportive, Facilitating Foremen on the other. Like, this recent piece, in a way [read hard and you’ll find the way by the way], or this even further-reaching vista.

Now, we’re all eagerly awaiting Kristel Thieme’s follow-on posts …

The three together would make quite a case for management done the right way, right ..?

Next up, the Spanish Inquisition. First, still:

[I’ve been told that fat backsides are still in fashion. Why …!?!? as they are ugly as s…; this, in Amsterdam harbour (6th of Europe, still ..?)]

Rules, rules, rules Must Be. …?

A collection again of various viewpoints into one Idea [not to be pronounced like the construction kit company Eye-kja], in a collate-yourself fashion:

Part one, being yesterday’s post as here.
Coupled to (hey, did I already write, that early on, how hopeful I was …) this golden oldie.

And some quotexts:
Eric Hultgren stated in a post that “It’s still amazing to me that most of our organizational rules and practices (e.g. time reporting, hierarchical reporting, performance management, budgeting, vacation approval etc.) are based on that a few people would misbehave if rules and practices were not there. Would it not make more sense to have organizational rules and practices that would fit the absolute majority of the people instead?”
to which Jad Nohra commented: “When something seems amazing, it’s usually because there is a hidden reason which, when understood, makes it seem much less amazing. In this case:
No it’s not amazing, because the rules and practices are not written for the sake of any people in the organization, but sadly, they are written there mostly for the organization to be able to claim they have rules, and forward the blame to individuals in case of misbehavior. This is the main reason for these rules to exist.”

Both of which are to be linked to the respective chapters of Bruce Schneier’s much too much overlooked work in this field.

In the end … most of you will go this or that.
Whatever …?? And:

[Odd, qua style, I’m unsure what to make of it – consistent, different, probably works for the inside and maybe the outside but overall? Utrecht Papendorp]

Group ethics

“Individuals can be ethical, groups not so much.” — as a conclusion of this insightful piece.
At which my Canetti and Ortega y Gasset antennae immediately went off the charts.

And with the note of the ‘can’ in the above — not so much the petty little bureaucrats with their mostly mumbo-jumbo about cold fusion 3LoD farrago [yes just look that up will you]. They are too hopelessly caught up in their view to consider the im- or rather anti-moral ramifications of the wrong understanding of the idea.

Also rather, an onward discussion would involve solving the trolley problem and knowing-plus-beyond-that-understanding one’s own position anyway, as in this nifty test, brutally biased towards utilitarianism. Yes, that’s a pejorative if ever there was one.

Still, I’d like to leave it here now with an even more positive note; hard but not impossible …:
Can we bridge the gap between semantic levels, i.e., grasp the jump from mere knowledge to understanding and insight, the jump from personal data to privacy, etc. ..? Noting that there’s aggregation at work (Ortega y G again) plus an interesting notion of ‘emergent properties‘. Unsure whether e.g., psychology/psychosociology/sociology [my browser spell checker (huh, ‘spell’, but the abbreviation is better when dealing with browser ‘security’, eh?) interjects ‘psychopathology’, appropriately] would have some clear enough takes on that one ..! The jump from psycho to socio, I mean.
If we can get a grip on these kinds of jumps, we’ll for certain be able to build better brains — moving from plain ML to symbolic reasoning and vice versa or rather, in full brain-copying cooperation. Capisce ..? Also, we’d better be able to do something about/with ethics.

After which:

[Time to pond(er); Alhambra Granada]

The No Kill Portfolio

Recently, I floated the idea of using a portfolio approach to risk management’s controls aspects, highlighting (positive or negative) correlations in their (positive or negative) effectiveness, individually and overall. This could be extended to include derivation of an optimal portfolio given a max of e.g., costs, which must include the cost of harassing your users with those controls; not possibly but most probably far outweighing the costs of controls-in-a-narrow-sense plus the benefits … But still, one could imagine some form of ‘efficient frontier‘ or so; including calculating the Greeks. Yes, I did graduate in Finance in the heyday of the Yuppies.

But I had also, a long time ago (in an internet era far, far away; 2013), already discussed the possibilities of better fighting off the kill chain (then, almost avant la lettre) through the use of cleverer controls, here (near the bottom of that one) in a matrix where the above user hindrance could be reduced significantly. Probably through [haven’t digested this idea-that-came-while-typing-it fully yet] improving the portfolio by naturally limiting correlations, much improving inherent hedging, which leads to a. a jump up in effectiveness, b. efficiency gains, c. easier acceptance by end users since security is more natural, easier, to the point of maybe not even noticing one is doing the right thing.
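To make the portfolio idea concrete, here is a minimal, hypothetical sketch in a Markowitz-ish style: control effectiveness plays the role of return, correlated uncertainty the role of risk, and a brute-force search finds the best set under a cost cap. All names, figures and correlations are invented for illustration only, not taken from any real assessment.

```python
from itertools import combinations
import math

# All numbers below are made-up illustration values, not real data.
controls = ["MFA", "EDR", "awareness", "mail-filter"]
effect = [0.30, 0.25, 0.10, 0.20]   # marginal risk reduction per control
cost   = [3.0, 5.0, 1.0, 2.0]       # incl. the cost of hindering users
stdev  = [0.05, 0.10, 0.08, 0.06]   # uncertainty in each control's effect
corr = [                             # correlation between effectiveness figures
    [1.0, 0.2, 0.1, 0.6],
    [0.2, 1.0, 0.0, 0.3],
    [0.1, 0.0, 1.0, 0.1],
    [0.6, 0.3, 0.1, 1.0],
]

def portfolio(idx):
    """Effectiveness, uncertainty (via the covariance) and cost of a control set."""
    eff = sum(effect[i] for i in idx)
    var = sum(stdev[i] * stdev[j] * corr[i][j] for i in idx for j in idx)
    return eff, math.sqrt(var), sum(cost[i] for i in idx)

budget = 7.0
best, best_score = None, -1.0
for r in range(1, len(controls) + 1):
    for combo in combinations(range(len(controls)), r):
        eff, risk, c = portfolio(combo)
        if c <= budget and eff - risk > best_score:  # penalise correlated uncertainty
            best, best_score = combo, eff - risk

print([controls[i] for i in best], round(best_score, 3))
```

Note how the highly correlated pair (MFA and mail-filter in this toy) gets penalised through the covariance term; an uncorrelated mix can beat a nominally stronger but correlated one, which is exactly the inherent-hedging point above.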

Is there any grad student out there who needs to write a thesis / paper? Here’s your subject; I’d love to advise.
Others may also comment, please …

For the time being:

[Winter’s coming, do this it’s fun (and much harder than you think!!) no need to brace yourself (before…); Peterborough Curling Club]

AI and pattern matching — from the business side

As stated earlier, I believe in the power of ‘AI’ – when driven from ‘the business’ side. The latter being so ill-defined it hurts, a lot.
But nevertheless, take a Forbes article. Square that with the Accenture robotics approach. Then cube the lot with McKinsey’s insights.
Oh and a dose of this on organisational readiness to receive, also helps a lot.


Four dimensions of the same. Possibly, some HBR can be thrown into the mix but already, you see that the bottom-up approach, which is valuable too here and there, can in fact be augmented, over-arched, with a more strategy-execution style program…
That should do it.

Oh, also don’t forget: Mid-level, there is some solution in using a pipeline (like here! and here), but note that this, too, still a. requires Translators, b. is quite bottom-up, c. is at most at mid-, project-management level, d. has too little connection with program / project portfolio management to be deployed just like that. However necessary in your projects. Study both sides ..!

Leaving you with …:

[How a jumble still aligns, making the picture interesting; Zuid-As Amsterdam]

DevML Auditing

Since reinforcement learning is too broad a term… we’d better call it continued-in-deployment learning.
Back to the subject, being this CiDL. And how one would audit a system that uses, as one of its many (sic) parts, such a module. The weights change constantly, a little. At which point would one say that the system needs to be re-trained, since the implementation ‘certification’ (huh) of e.g., unbiasedness, determined once a long time ago, doesn’t apply anymore?

A couple of considerations:

  • For sure (sic again), the unbias of yesterday is today’s societally unacceptable bias.
  • [Shall we newspeak-change ‘bias’ to ‘prejudice’ ..? That does legally and practically capture better what’s at stake.]
  • The cert will have to have a clause of validity at delivery time only, or be a fraud.
  • Have we similar issues already, with other ‘algorithms’..? Yes we do. As explained here.
  • Since, between the lines, you read there the connection to ‘Dev(Sec)Ops’… That, similar to scrummy stuff, should be no problem to audit nowadays, or … check on your engagement checklist: You do not have the prerequisite knowledge, let alone understanding. Period.
  • So, how do you audit DevOps developments, for e.g., continued ‘existence’ of controls once devised? How could you not also audit ML performance (vis-à-vis criteria set before training started, in terms of error rates etc. etc. ..?) to see that it remains within a certain bandwidth of accuracy, and that’s enough ..? The bandwidth being the remain-materially-compliant-with(in)-controls of other algo levels.
  • Side note: How do you today ‘audit’ human performance on manual execution of procedures i.e., algorithms ..??
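The bandwidth idea above can be sketched in a few lines. A purely hypothetical check (the function name, the 0.03 band and the accuracy figures are all invented for illustration): compare a rolling window of in-deployment accuracy against the figure ‘certified’ at delivery, and flag when the band is breached.

```python
def needs_recertification(baseline_acc, live_accs, bandwidth=0.03):
    """True when any recent accuracy leaves [baseline - bw, baseline + bw]."""
    return any(abs(a - baseline_acc) > bandwidth for a in live_accs)

certified = 0.92                      # accuracy 'certified' at delivery time
window = [0.91, 0.93, 0.90, 0.88]     # rolling window of in-deployment accuracy
print(needs_recertification(certified, window))   # 0.88 breaches the band
```

The same shape would work for a bias metric instead of accuracy; the audit question is then merely who sets the bandwidth, and whether yesterday’s band still holds today (see the first bullet).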

That’s all for now; details may follow (or not; IP has several meanings…).
Leaving you with:

[Designed for Total Compliance; one of the Big-4’s offices, Zuid-As Amsterdam]

Your Cyberrrr… Portfolio ..?

Tinkering with the ideas for IRM/ORM to tackle the post-heat-map world. The one that emerges once all the ones out there take the truth seriously, which would lead to:

And for the nay-(not…)sayers this post to illuminate the destructive cling-to-failure-mode thinking of many still, in RM.
After which there is still tons of stuff to manage in the IRM/ORM arena [the difference set between those, shrinks fast ..!]. Risk registers cut it as little, slim-to-none, as the heat map fallacies.

Triggered among others by this illuminating post by Graeme Keith that almost matter-of-factly outlines a portfolio approach to operational risks that can be used in so many places, much better than the common list of risks.

Since, e.g., the portfolio approach better suits the complexity of interactions, i.e., correlations, between threats, controls and control weaknesses, and vulnerabilities (including interactions between those; and every control creates its own extra (sic) set of vulnerabilities, etc.), in all sorts of non-linear (sic) ways.
And also, this picks up on the (other) strand of ‘quantitative as far as that goes’ of this, this and this post. The latter being an illustrative example of the vicious circles of ever-complexicating methodological troubles that characterise end-of-times thinking in wrong (dead-end alley) directions.
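A toy Monte Carlo sketch of why the portfolio view beats the risk-register list: two risks that share a common driver (say, one vendor) tail-correlate, so the joint 95th-percentile loss comes out worse than when the same two risks are modelled independently, as a register line-by-line implicitly does. All loss figures and weights below are invented for illustration.

```python
import random

def p95_of_total(correlated, n=50_000, seed=7):
    """95th-percentile total loss of two risks, with/without a shared driver."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        common = rng.gauss(0, 1)                 # shared driver, e.g. one vendor
        z1 = common if correlated else rng.gauss(0, 1)
        z2 = common if correlated else rng.gauss(0, 1)
        loss1 = max(0.0, 10 + 4 * z1 + 2 * rng.gauss(0, 1))  # risk A (toy units)
        loss2 = max(0.0, 8 + 3 * z2 + 2 * rng.gauss(0, 1))   # risk B (toy units)
        totals.append(loss1 + loss2)
    totals.sort()
    return totals[int(0.95 * n)]

print(round(p95_of_total(True), 1), "vs", round(p95_of_total(False), 1))
```

The correlated case prints the larger tail figure; swap the sign of the shared weight and you get the hedging effect instead — which is precisely the correlation structure a flat list of risks cannot express.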

For now, I’ll let you have fun digesting the above. Just remember: Portfolios are the Future ..!

[But it is Art; Tate Modern many years ago]

Old Bis’ had learned …

Bismarck: Any fool can learn from experience. It’s better to learn from the experience of others.
Which aligns nicely with this old post. Tired, it is.

Also, we learn from history …
* That we don’t learn from history
* Because it repeats, but in imperceptible ways
* Until it’s too late / with (unavoidable…) sobering hindsight

Like, what you’ve been taught about Genghis Khan.
But also this, half-misquoted, but half-suggesting to study the Books behind it.
And we could go on. Better first start on these …

Have I reached the repeat apparently necessary for the achievement of Learning ..?

[Enduring success never seems to be with those that seek this as their life goal(s) …; Martinique]

Algorithms pose no problem for humanity; algo-debatebabblers do

So, once we had Turing machines. If you don’t know what those were / are, please recuse yourself from the discussion…

Now that we have only a handful left, globally, the following [my following’s also a handful, ya’know]:
After that, we had hardware (with what we now call firmware; still programmable). Then Operating Systems et al. (remember enterprise buses, message queuing ..?), and other Software that was based on algorithms just like firmware was/is, but now recognisably and readably so. Next came parametrisation: like your SAP software just sits there until you direct the traffic flow through the system based on the 30,000+ parameters in SAP you set, or not, and possibly on the content of the data as well (yes, this goes for regular algorithms too; think if-then) — also, parametrisation on e.g., your ‘software defined’ networks… — semi-static algorithms with changes controlled not only by data but by almost-(relatively-)static params.

Suddenly, when we have the latter but with the parameters badly pin-downable i.e., in neural network weights, we get #algorithmsagainsthumanity — that might seem a step further abstracted but isn’t. Advantage: There’s nothing new, also in e.g., auditing; just some translation and extension to do.
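The “nothing new, just less pin-downable parameters” point can be shown in a handful of lines. A hypothetical toy (threshold, weight and bias all invented): a classic parametrised if-then rule and a one-neuron ‘model’ have exactly the same shape — data in, parameter-steered decision out — only the latter’s parameters are learned and, at realistic scale, far less inspectable.

```python
import math

THRESHOLD = 500.0                      # classic, fully inspectable parameter

def rule_based(amount):
    """Old-school parametrised algorithm: explicit if-then on a set parameter."""
    return "review" if amount > THRESHOLD else "approve"

w, b = 0.01, -5.0                      # 'learned' parameters playing the same role

def neuron(amount):
    """Same decision, but steered by weights instead of a hand-set threshold."""
    p = 1 / (1 + math.exp(-(w * amount + b)))   # sigmoid over the weighted input
    return "review" if p > 0.5 else "approve"

print(rule_based(600), neuron(600))    # both flag the larger amount
print(rule_based(400), neuron(400))    # both wave the smaller one through
```

So the audit translation-and-extension work is real but bounded: the control questions (who set the parameters, on what basis, and are they still valid?) carry over unchanged from SAP params to network weights.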

Major drawback: The discussion about the dangers of ‘algorithms’ is mostly carried among (sic) ‘governance’/legally affected debatebabblers, of the hideous kind. That have zero clue what an ‘algorithm’ is. But hey, they babble so nicely so why don’t you let them? Well, for one, they pretend to understand what they’re babbling about and pretend they have a say in these matters. Which, due to their demonstrated sheer incompetence, they do not. And their conclusions have zero weight anyway because their scope and depth is a joke. The only thing they seem capable of, is debating into eternity without ever reaching a conclusion, but they’ll remain happy they had such a fruitful debate – the only thing they’ll have nailed is the waste of time. Let alone that they themselves for certain (sic) consider themselves ‘moral’, but their mileage may vary widely.

Or is it about ML disregarding all that happened in systems development before, and now the ‘ethicists’ are trying to catch up from the wrong end ..? Like here, they come up with what had been around for over half a century. I suggest that all who think they’re seasoned debaters on the ‘dangers’ of ‘algorithms’ wait another half a century before being allowed / wheeled into the discussion again.

And yes, the previous two paragraphs exclude, to the extreme, the few that do understand their business, you know, like the late Stephen Hawking (salve, eminence), prof. Bengio, et al.; they form the opposite end of the wisdom scale. But they seldom ‘debate’; they discuss. If you don’t know the difference, see the 1st line above.

There’s just no angle that helps, other than offboarding the debatebabblers.
And let them concentrate on:

[If you don’t agree with all the above but recognised Ferranza: you’re triply wrong. It’s Noto (Palazzo Nicolaci di Villadorata) and Ferranza doesn’t even exist.]