
Group ethics

“Individuals can be ethical, groups not so much.” — the conclusion of this insightful piece.
At which my Canetti and Ortega y Gasset antennae immediately went off the charts.

And note the ‘can’ in the above — this is not so much about the petty little bureaucrats with their mostly mumbo-jumbo of the cold-fusion 3LoD farrago [yes, just look that up, will you]. They are too hopelessly caught up in their own view to consider the im-, or rather anti-, moral ramifications of a wrong understanding of the idea.

Rather, any onward discussion would involve solving the trolley problem, and knowing — plus, beyond that, understanding — one’s own position anyway, as in this nifty test, brutally biased towards utilitarianism. Yes, that’s a pejorative if ever there was one.

Still, I’d like to leave it here for now on an even more positive note; hard but not impossible …:
Can we bridge the gap between semantic levels, i.e., grasp the jump from mere knowledge to understanding and insight, the jump from personal data to privacy, etc. ..? Noting that there’s aggregation at work (Ortega y G again), plus an interesting notion of ‘emergent properties’. Unsure whether, e.g., psychology/psychosociology/sociology [my browser’s spell — huh, ‘spelling’, but the short form fits better when dealing with browser ‘security’, eh? — checker suggests ‘psychopathology’, appropriately] would have a clear enough take on that one ..! The jump from psycho to socio, I mean.
If we can get a grip on these kinds of jumps, we will for certain be able to build better brains — moving from plain ML to symbolic reasoning and vice versa, or rather, in full brain-copying cooperation. Capice ..? Also, we’d be better able to do something about/with ethics.

After which:

[Time to pond(er); Alhambra Granada]

The No Kill Portfolio

Recently, I floated the idea of using a portfolio approach to risk management and its controls aspects, highlighting (positive or negative) correlations in their (positive or negative) effectiveness, individually and overall. This could be extended to include derivation of an optimal portfolio given a maximum of, e.g., costs — which must include the cost of harassing your users with those controls, not possibly but most probably far outweighing the costs of controls-in-a-narrow-sense plus the benefits … But still, one could imagine some form of ‘efficient frontier’ or so, including calculating the greeks. Yes, I did graduate in Finance in the heyday of the Yuppies. A minimal sketch of the idea, below.
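By way of illustration only — a minimal sketch, with entirely made-up effectiveness, cost and correlation figures, of what such an optimal controls portfolio could look like as a mean-variance-style optimisation (the ‘efficient frontier’ would follow by sweeping the risk-aversion parameter):

```python
# Minimal sketch of a controls portfolio; all numbers are hypothetical.
# Each control has an expected effectiveness ("return"), a cost that
# includes user hindrance, and correlations in effectiveness.
import numpy as np
from scipy.optimize import minimize

effectiveness = np.array([0.6, 0.4, 0.5])   # made-up per-control effectiveness
cost = np.array([10.0, 4.0, 7.0])           # made-up cost incl. user hindrance
corr = np.array([[1.0, 0.8, 0.1],
                 [0.8, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])          # made-up effectiveness correlations
sigma = np.array([0.20, 0.30, 0.25])        # made-up effectiveness volatility
cov = corr * np.outer(sigma, sigma)
budget = 15.0
risk_aversion = 2.0                          # sweep this for a frontier

def neg_utility(w):
    # Portfolio effectiveness minus a penalty for variance, i.e. for
    # leaning on strongly correlated, unreliable controls.
    return -(w @ effectiveness - risk_aversion * w @ cov @ w)

res = minimize(
    neg_utility,
    x0=np.full(3, 0.5),
    bounds=[(0.0, 1.0)] * 3,                 # deploy each control 0..100%
    constraints=[{"type": "ineq", "fun": lambda w: budget - w @ cost}],
)
print("control weights:", res.x.round(2))
```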

But I had also, a long time ago already (in an internet era far, far away: 2013), discussed the possibilities of better fighting the kill chain (then, almost avant la lettre) through the use of cleverer controls, here (near the bottom of that one), in a matrix where the above user hindrance could be reduced significantly. Probably through [haven’t digested this idea-that-came-while-typing-it fully yet] improving the portfolio by naturally limiting correlations, much improving the inherent hedging, which leads to a. a jump up in effectiveness, b. efficiency gains, c. easier acceptance by end users since security is more natural — easier, to the point of maybe not even noticing one is doing the right thing.

Is there any grad student out there that needs to write a thesis / paper? Here’s your subject; I’d love to advise.
Others may comment too, please…

For the time being:

[Winter’s coming; do this, it’s fun (and much harder than you think!!) — no need to brace yourself (before…); Peterborough Curling Club]

AI and pattern matching — from the business side

As stated earlier, I believe in the power of ‘AI’ — when driven from ‘the business’ side. The latter being so ill-defined it hurts, a lot.
But nevertheless, take a Forbes article. Square that with the Accenture robotics approach. Then cube the lot with McKinsey’s insights.
Oh, and a dose of this, on organisational readiness to receive, also helps a lot.

There.

Four dimensions of the same. Possibly, some HBR can be thrown into the mix, but already you see that the bottom-up approach, which is valuable too here and there, can in fact be augmented, over-arched, with a more strategy-execution-style programme…
That should do it.

Oh, also don’t forget: mid-level, there is some solution in using a pipeline (like here! and here), but note that this, too, still a. requires Translators, b. is quite bottom-up, c. is at most mid-, project-management level, d. has too little connection with programme / project portfolio management to be deployed just like that — however necessary it is in your projects. Study both sides ..! A sketch of the pipeline bit, below.
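For the pipeline bit, a minimal sketch (scikit-learn, on made-up data) of what such a mid-level pipeline is in code — note how it covers only the build step; the Translator work and the portfolio connection named above stay entirely outside it:

```python
# Minimal ML pipeline sketch on invented data. The pipeline bundles
# preprocessing and model fitting reproducibly -- everything around it
# (scoping, business translation, portfolio fit) remains manual work.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))               # hypothetical features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # hypothetical target

pipe = Pipeline([
    ("scale", StandardScaler()),             # reproducible preprocessing step
    ("model", LogisticRegression()),         # swappable model step
])
pipe.fit(X, y)
print("training accuracy:", pipe.score(X, y))
```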

Leaving you with …:

[How a jumble still aligns, making the picture interesting; Zuid-As Amsterdam]

DevML Auditing

Since reinforcement learning is too broad a term… we’d better call it continued-in-deployment learning.
Back to the subject, being this CiDL — and how one would audit a system that uses, as one of its many (sic) parts, such a module. The weights change constantly, a little. At which point would one say that the system needs to be re-trained, since the implementation ‘certification’ (huh) of, e.g., unbiasedness, determined once, a long time ago, doesn’t apply anymore?

A couple of considerations:

  • For sure (sic again), the unbiasedness of yesterday is today’s societally unacceptable bias.
  • [Shall we newspeak-change ‘bias’ to ‘prejudice’ ..? That does legally and practically capture better what’s at stake.]
  • The cert will have to have a clause of validity at delivery time only, or be a fraud.
  • Have we similar issues already, with other ‘algorithms’..? Yes we do. As explained here.
  • Since, between the lines, you read there the connection to ‘Dev(Sec)Ops’… That, similar to scrummy stuff, should be no problem to audit nowadays — or … check on your engagement checklist: you do not have the prerequisite knowledge, let alone understanding. Period.
  • So, how do you audit DevOps developments, for, e.g., continued ‘existence’ of controls once devised? How could you not also audit ML performance (vis-à-vis criteria set before training started, in terms of error rates etc. ..?) to see that it remains within a certain bandwidth of accuracy — and that’s enough ..? The bandwidth being the remain-materially-compliant-with(in)-controls of other algo levels. [A minimal sketch of such a bandwidth check follows after this list.]
  • Side note: how do you, today, ‘audit’ human performance on the manual execution of procedures, i.e., algorithms ..??
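The promised sketch from the bullets above — minimal and hypothetical throughout; the certified rate, tolerance and window are made-up numbers, and a real implementation would persist the evidence trail for the auditor:

```python
# Minimal sketch of an accuracy-bandwidth check: compare the deployed
# model's rolling error rate against the rate 'certified' before go-live,
# and flag for re-certification when it drifts out of band.
# All names and numbers are hypothetical.
from collections import deque

CERTIFIED_ERROR_RATE = 0.05   # error rate signed off at delivery time
TOLERANCE = 0.02              # allowed bandwidth around the certified rate
WINDOW = 1000                 # rolling window of in-production decisions

recent = deque(maxlen=WINDOW)  # 1 = decision later found wrong, 0 = fine

def record_outcome(was_wrong: bool) -> bool:
    """Log one production outcome; return True if re-certification is due."""
    recent.append(1 if was_wrong else 0)
    if len(recent) < WINDOW:
        return False           # not enough evidence yet
    error_rate = sum(recent) / len(recent)
    return abs(error_rate - CERTIFIED_ERROR_RATE) > TOLERANCE
```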

That’s all for now; details may follow (or not; IP has several meanings…).
Leaving you with:

[Designed for Total Compliance; one of the Big-4’s offices, Zuid-As Amsterdam]

Your Cyberrrr… Portfolio ..?

Tinkering with the ideas for IRM/ORM to tackle the post-heat-map world. The one that emerges once all the ones out there take the truth seriously, which would lead to:

And for the nay-(not…)sayers, this post, to illuminate the destructive cling-to-failure-mode thinking of many, still, in RM.
After which there is still tons of stuff to manage in the IRM/ORM arena [the difference set between those two shrinks fast ..!]. Risk registers cut it as little — slim-to-none — as the heat-map fallacies.

Triggered, among others, by this illuminating post by Graeme Keith, which almost matter-of-factly outlines a portfolio approach to operational risks that can be used in so many places — much better than the common list of risks.

Since, e.g., the portfolio approach better suits the complexity of interactions, i.e., correlations, between threats, controls and control weaknesses (including interactions among those — and every control creates its own extra (sic) set of vulnerabilities, etc.), and vulnerabilities, in all sorts of non-linear (sic) ways. A toy demonstration, below.
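A toy demonstration, with entirely made-up frequencies, severities and a Gaussian-copula correlation (none of this is from real loss data), of why those correlations matter for the tail — the thing no flat list of risks ever shows:

```python
# Toy Monte Carlo: two operational risk events whose occurrence is
# correlated via a Gaussian copula. All numbers are invented; the point
# is how the 99% loss tail fattens as correlation rises.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
N = 100_000
p_occur = np.array([0.10, 0.08])      # hypothetical annual event probabilities
severity = np.array([1.0e6, 2.0e6])   # hypothetical loss per event

def tail_loss(rho: float) -> float:
    """99th-percentile annual loss given occurrence correlation rho."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=N)
    # Gaussian copula: event i occurs when its latent normal falls
    # below the quantile matching its marginal probability.
    occurs = z < norm.ppf(p_occur)
    losses = occurs @ severity
    return np.percentile(losses, 99)

for rho in (0.0, 0.5, 0.9):
    print(f"rho={rho}: 99% annual loss = {tail_loss(rho):,.0f}")
```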
And also, this picks up on the (other) strand of ‘quantitative as far as that goes’ of this, this and this post. The latter being an illustrative example of the vicious circles of ever-complexicating methodological troubles that characterise end-of-times thinking in wrong (dead-end-alley) directions.

For now, I’ll let you have fun digesting the above. Just remember: Portfolios are the Future ..!
And:

[But it is Art; Tate Modern many years ago]

Old Bis’ had learned …

Bismarck: Any fool can learn from experience. It’s better to learn from the experience of others.
Which aligns nicely with this old post. Tired, it is.

Also, we learn from history …
* That we don’t learn from history
* Because it repeats, but in imperceptible ways
* Until it’s too late / with (unavoidable…) sobering hindsight

Like, what you’ve been taught about Genghis Khan.
But also this, half-misquoted, but half-suggesting to study the Books behind it.
And we could go on. Better first start on these …

Have I reached the repetition apparently necessary for the achievement of Learning ..?
Anyway:

[Enduring success never seems to be with those that seek this as their life goal(s) …; Martinique]

Algorithms pose no problem for humanity; algo-debatebabblers do

So, once we had Turing machines. If you don’t know what those were / are, please recuse yourself from the discussion…

Now that we have only a handful left, globally, the following [my following’s also a handful, ya’know]:
After, we had hardware (with what we now call firmware; still programmable). Then came Operating Systems et al. (remember enterprise buses, message queuing ..?), and other Software — based on algorithms just like firmware was/is, but now recognisably and readably so. Next came parametrisation: your SAP software just sits there until you direct the traffic flow through the system based on the 30,000+ parameters in SAP that you set, or not, and possibly on the content of the data as well (yes, this goes for regular algorithms too; think if-then) — also, parametrisation of, e.g., your ‘software-defined’ networks… Semi-static algorithms, with changes controlled not only by data but by almost-(relatively-)static params.

Suddenly, when we have the latter but with the parameters badly pin-downable, i.e., hidden in neural-network weights, we get #algorithmsagainsthumanity — that might seem a step further abstracted, but isn’t. Advantage: there’s nothing new, also in, e.g., auditing; just some translation and extension to do. A toy sketch, below.
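To make that continuity concrete, a toy sketch (all numbers invented): the same if-then decision, once with an admin-set parameter à la SAP, once with a ‘weight’ crudely derived from data — structurally identical, only the origin of the number differs:

```python
# Toy illustration: a parametrised if-then rule and a single-weight
# 'model' are the same algorithm; only who (or what) set the parameter
# differs. All numbers are made up.
THRESHOLD = 0.7  # set by an admin, like an SAP parameter

def rule_based(score: float) -> bool:
    return score > THRESHOLD          # classic parametrised if-then

# 'Learned' variant: the threshold comes out of data instead of a manual.
data = [(0.2, False), (0.5, False), (0.8, True), (0.9, True)]
learned_threshold = sum(x for x, label in data if label) / 2  # crude 'training'

def ml_based(score: float) -> bool:
    return score > learned_threshold  # same if-then, weight set by data

print(rule_based(0.75), ml_based(0.75))
```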

Major drawback: the discussion about the dangers of ‘algorithms’ is mostly carried among (sic) ‘governance’/legally affected debatebabblers, of the hideous kind, that have zero clue what an ‘algorithm’ is. But hey, they babble so nicely, so why don’t you let them? Well, for one, they pretend to understand what they’re babbling about and pretend they have a say in these matters. Which, due to their demonstrated sheer incompetence, they do not. And their conclusions have zero weight anyway, because their scope and depth are a joke. The only thing they seem capable of is debating into eternity without ever reaching a conclusion — but they’ll remain happy they had such a fruitful debate; the only thing they’ll have nailed is the waste of time. Let alone that they themselves, for certain (sic), consider themselves ‘moral’ — but their mileage may vary widely.

Or is it about ML disregarding all that happened in systems development before, and now the ‘ethicists’ are trying to catch up from the wrong end ..? Like here, they come up with what had been around for over half a century. I suggest that all that think they’re seasoned debaters on the ‘dangers’ of ‘algorithms’, wait another half a century before being allowed / wheeled into the discussion again.

And yes, the previous two paragraphs exclude, to the extreme, the few that do understand their business — you know, like the late Stephen Hawking (salve, eminence), prof. Bengio, et al.; they form the opposite end of the wisdom scale. But they seldom ‘debate’; they discuss. If you don’t know the difference, see the first line above.

There’s just no angle that helps, other than offboarding the debatebabblers.
And let them concentrate on:

[If you don’t agree with all the above but recognised Ferranza: you’re triply wrong. It’s Noto (Palazzo Nicolaci di Villadorata) and Ferranza doesn’t even exist.]

‘Bird – Museum(s)piece ..?

Hi y’all…

Anyone know in which museums these artful masterpieces are …?
Like, this, and this too, deserve to be.

No…? Oh, the Fail if these weren’t saved for eternity. How many similar artists have been lost to us, and to those after us, by oversight [not ‘governance’, but seeing without looking and registering…].
I’m off now, to enjoy similar Good Things.

Oh, and:

[Would be suitable, there! Gugg’nhm Bilbao. When side clutter enhances the total view]

Your life … goner, by AI’s decision

It was a pleasure to read about AI’s life-predictor service, which is quite the opposite of a predictor test to get into life.
What when you’re supposed to live (or, let’s say, be alive, in a narrower sense) for a long while still, but you encounter a ‘self-driving’ car in Arizona ..? Will there be an AI system that can resurrect you? If so, why not employ/deploy that first ..?
What when you’ve managed to live beyond the predicted age ..? Will AI already be powerful enough to make good on its own predictions ..? E.g., through the steering and commandeering of such a human-disabled car…

Good luck, by the way, with training the system. Survivorship bias in your training set takes on a somewhat sinisterly literal meaning. How to undo this in the training data (a toy sketch below), and/or how to correct for future ‘car’-type accidents where we have no clue about the unknown-unknown new technology that may alter the possibilities of encountering such tech …? Oh yeah, ‘the future is uncertain so we cannot guarantee any validity of predictions’… Nice disclaimer-try, but it makes the whole exercise futile from the word Go. Hence wasteful, qua energy, hence immoral.
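On the ‘how to undo this’ question — a toy sketch of the standard inverse-probability-weighting idea, on wholly hypothetical data and, crucially, with an assumed-known inclusion probability (estimating that probability is the actual hard part):

```python
# Toy inverse-probability-weighting sketch for survivorship bias.
# Data and the 'inclusion probability' are invented; in reality,
# estimating that probability is the whole problem.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
risk_score = rng.uniform(size=n)             # hypothetical risk factor
# Higher risk -> less likely to have survived into the training set.
p_in_sample = 1.0 - 0.8 * risk_score
in_sample = rng.uniform(size=n) < p_in_sample

naive_mean = risk_score[in_sample].mean()    # biased low: survivors only
weights = 1.0 / p_in_sample[in_sample]       # reweight each survivor
ipw_mean = np.average(risk_score[in_sample], weights=weights)

print(f"true mean {risk_score.mean():.3f}, "
      f"naive {naive_mean:.3f}, IPW {ipw_mean:.3f}")
```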

Edited to add, already pre-pub …: this, on how atrociously bad snake oil most AI is …

Now then:

[Maybe AI can predict what sort of monument you will have preferred; London]

Security-in-Scrum and Audit-in-Agile

Recently, there was a brief flurry around whether one can ‘audit’ in agile environments, or not.
Some notes:

  • Agile development methods (which would include organisational changes …!) can be audited, of course. Either from the Soll or the As Is, probably with a wide margin between the two, vis-à-vis the just-delivered To Be — which in itself is rather some vague proxy for the actually, originally wanted To Be. From this flow your findings. Gaps abound.
  • Side-stepping the issue of DevOps/DevSecOps — or is that included in:
  • Security in a narrow sense, by the way, can be done: easily implemented and maintained (if properly tested against possibly lost controls). But one fails when security controls — mind you, a vast majority of controls are about security-in-the-widest-sense of safeguarding (hah! as if your posture is effective beyond mere luck and happenstance!) against loss of value — just don’t fit a sprint or any dev cycle you’d use. Still, when considering the Wagile/Agifall bandwidth, security controls can be of the higher level and still get implemented in parts (sprints ..? or ‘supersprints’, or what?). If, big if, only your sprint deliverables aren’t pushed into production as soon as available. They are not MVPs ..!! First, even for the most minimalistic of minimal deliverables, some security controls larger than a sprint may (will, for sure) have to be in place; how ..? Security in epics doesn’t necessarily help here. Building the sec infra first, maybe — but that goes against about-all of the agile idea, right? (Wrong, by the way: the agile manifesto wasn’t cast in stone the way late, late laggards want to do their sprint’lets.) There’s more to life than picking up only the easiest, nicest post-its on Monday morning.
  • Also, it remains superbly important to remember this, on re-use of code…
  • Supposing you’ve ‘fixed’ security in your sprint teams, e.g., by including a sec officer where appropriate+ (details to be had here), one (given the above: ‘hence’) can indeed audit, e.g., at epic level, if the focus retreats to control objectives, not controls — a dire need for sanity in the audit world already — and flexibility in the detailed implementation is maintained. Given such continuous development, there would be need for more Prove Me than before, even if this runs against my personal grain: by way of showing the completeness and consistency of the controls you had put / maintained in place vis-à-vis [that one again] control objectives. [A minimal sketch of such objective-level evidence follows after this list.] But that’s just the Law of Conservation of Trouble; if you want to be flexible and do the Agily-scrummy-kindergartenfreedom’y thing, then you also do the chores — only the sun rises for free (i.e., you don’t get owt for nowt).
  • If you don’t want to see that, you have no place in the organisational world. Controls are about non-flexibilising business, in their essence; their very (need for!) existence is based on not allowing ad hoc deviations. So, if you want no controls, you want no work.
  • Now, as a homework assignment: outline how this translates back up when the control objectives are just the How of a higher-level control objective’s What.
  • Oh, and there’s another strand, which is about the transformation of the IT function in (larger) organisations. No longer the controller over all things IT, but having to turn into a flex intermediary between business demand (in broad terms! and including security requirements) and delivery-from-anywhere. I see trouble when this has to be squared with agile team work; who does what when parts are elsewhere?
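The promised sketch from the control-objectives bullet above — a minimal, hypothetical example of objective-level evidence as a test, in pytest style; the repo_settings stand-in and all its fields are invented, not any real platform’s API:

```python
# Minimal sketch of control-objective-level evidence in a CI pipeline:
# assert the objective ("no unreviewed code reaches production"), not one
# fixed control. The repo_settings dict stands in for whatever your real
# platform would report -- entirely hypothetical here.
repo_settings = {
    "default_branch_protected": True,
    "required_reviewers": 2,
    "direct_pushes_allowed": False,
}

def test_no_unreviewed_code_reaches_production():
    # The objective can be met by any mix of detailed controls; this
    # test only evidences that some combination currently enforces it.
    assert repo_settings["default_branch_protected"]
    assert repo_settings["required_reviewers"] >= 1
    assert not repo_settings["direct_pushes_allowed"]
```

Run under pytest on every sprint’s pipeline, such checks give the continuous Prove Me trail at objective level while leaving the detailed implementation free to change.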

Never mind. ..?
For the trouble, I’ll give you:

[Not all plain sailing; St. Cat’s Docks London]

Maverisk / Étoiles du Nord