DevML Auditing

Since reinforcement learning is too broad a term… we’d better call it continued-in-deployment learning.
Back to the subject, being this CiDL. And how one would audit a system that uses, as one of its many (sic) parts, such a module. The weights change constantly, a little. At which point would one say that the system needs to be re-trained, since the implementation ‘certification’ (huh) of, e.g., unbiasedness, determined once, a long time ago, doesn’t apply anymore?

A couple of considerations:

  • For sure (sic again), the unbias of yesterday is today’s societally unacceptable bias.
  • [Shall we newspeak-change ‘bias’ to ‘prejudice’ ..? That does legally and practically capture better what’s at stake.]
  • The cert will have to have a clause of validity at delivery time only, or be a fraud.
  • Have we similar issues already, with other ‘algorithms’..? Yes we do. As explained here.
  • Since, between the lines, you read there the connection to ‘Dev(Sec)Ops’… That, similar to scrummy stuff, should be no problem to audit nowadays, or … check on your engagement checklist: you do not have the prerequisite knowledge, let alone understanding. Period.
  • So, how do you audit DevOps developments, for e.g., continued ‘existence’ of controls once devised? How could you not also audit ML performance (vis-à-vis criteria set before training started, in terms of error rates etc. etc. ..?) to see that it remains within a certain bandwidth of accuracy, and that’s enough ..? The bandwidth being the remain-materially-compliant-with(in)-controls of other algo levels.
  • Side note: How do you today ‘audit’ human performance on manual execution of procedures i.e., algorithms ..??
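The bandwidth idea in the bullets above can be sketched in a few lines. This is a minimal, hypothetical illustration only: the function names, the baseline of 5% error and the ±2-percentage-point tolerance are all invented assumptions, not any certification standard.

```python
# Hypothetical sketch: flag when a continually-learning (CiDL) model drifts
# outside the accuracy bandwidth agreed before training started.
# Names and thresholds below are illustrative assumptions.

def within_bandwidth(baseline_error, current_error, tolerance=0.02):
    """True while the deployed model stays materially compliant: its error
    rate may wander, but only within +/- tolerance of the error rate
    signed off at delivery time."""
    return abs(current_error - baseline_error) <= tolerance

def audit_trigger(baseline_error, observed_errors, tolerance=0.02):
    """Return the index of the first observation that breaches the
    bandwidth (the point at which re-certification is due), or None if
    the model never left the corridor."""
    for i, err in enumerate(observed_errors):
        if not within_bandwidth(baseline_error, err, tolerance):
            return i
    return None

# Example: certified at 5% error; weekly observed error rates afterwards.
breach = audit_trigger(0.05, [0.051, 0.055, 0.064, 0.08, 0.079])
```

In this toy run the fourth observation (0.08) is the first to sit more than two percentage points off baseline, so that is where the delivery-time ‘cert’ stops applying and re-training/re-certification is due.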

That’s all for now; details may follow (or not; IP has several meanings…).
Leaving you with:

[Designed for Total Compliance; one of the Big-4’s offices, Zuid-As Amsterdam]

Your Cyberrrr… Portfolio ..?

Tinkering with the ideas for IRM/ORM to tackle the post heat map world. The one that emerges once all the ones out there take the truth seriously, which would lead to:

And for the nay-(not…)sayers this post to illuminate the destructive cling-to-failure-mode thinking of many still, in RM.
After which there is still tons of stuff to manage in the IRM/ORM arena [the difference set between those, shrinks fast ..!]. Risk registers cut it as little, slim-to-none, as the heat map fallacies.

Triggered among others by this illuminating post by Graeme Keith that almost matter-of-factly outlines a portfolio approach to operational risks that can be used in so many places, much better than the common list of risks.

Since, e.g., the portfolio approach better suits the complexity of interactions, i.e., correlations, between threats, controls and control weaknesses (including interactions between them, and every control creates its own extra (sic) set of vulnerabilities, etc.), and vulnerabilities in all sorts of non-linear (sic) ways.
And also, this picks up on the (other) strand of ‘quantitative as far as that goes’ of this, this and this posts. The latter being an illustrative example of the vicious circles of ever complexicating methodological troubles that characterise end-of-times thinking in wrong (dead-end alley) directions.
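To make the correlations point concrete: a tiny Monte Carlo sketch, with all probabilities and loss figures invented for illustration, of why a portfolio view of two operational risks differs from the same two risks as independent lines in a risk register.

```python
import random

# Illustrative sketch (all numbers assumed): two threats correlated through
# a common root cause produce a fatter joint-loss tail than the "risk
# register" view of two independent line items suggests.

def simulate(p_common=0.1, p_each=0.2, n=100_000, seed=42):
    """Estimate P(both threats materialise) when a shared shock can
    trigger both at once."""
    rng = random.Random(seed)
    both = 0
    for _ in range(n):
        shock = rng.random() < p_common           # shared root cause fires
        a = shock or rng.random() < p_each        # threat A materialises
        b = shock or rng.random() < p_each        # threat B materialises
        if a and b:
            both += 1
    return both / n

joint = simulate()
independent = (0.1 + 0.9 * 0.2) ** 2  # what two separate register lines imply
# joint (~0.136) > independent (0.0784): the correlation concentrates
# losses, which a flat list of risks structurally cannot show.
```

The exact figures are toy numbers; the structural point is that the register treats the two lines as independent while the portfolio simulation carries the common-cause correlation through to the joint outcome.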

For now, I’ll let you have fun digesting the above. Just remember: Portfolios are the Future ..!
And:

[But it is Art; Tate Modern many years ago]

Old Bis’ had learned …

Bismarck: Any fool can learn from experience. It’s better to learn from the experience of others.
Which aligns nicely with this old post. Tired, it is.

Also, we learn from history …
* That we don’t learn from history
* Because it repeats, but in imperceptible ways
* Until it’s too late / with (unavoidable…) sobering hindsight

Like, what you’ve been taught about Genghis Khan.
But also this, half-misquoted, but half-suggesting to study the Books behind it.
And we could go on. Better first start on these …

Have I reached the repeat apparently necessary for the achievement of Learning ..?
Anyway:

[Enduring success never seems to be with those that seek this as their life goal(s) …; Martinique]

Algorithms pose no problem for humanity; algo-debatebabblers do

So, once we had Turing machines. If you don’t know what those were / are, please recuse yourself from the discussion…

Now that we have only a handful left, globally, the following [my following’s also a handful, ya’know]:
After, we had hardware (with what we now call firmware; still programmable). Then came Operating Systems et al. (remember enterprise buses, message queuing ..?), and other Software that was based on algorithms just like firmware was/is, but now recognisably and readably so. Next came parametrisation: your SAP software just sits there until you direct the traffic flow through the system based on the 30,000+ parameters in SAP you set, or not, and possibly on the content of the data as well (yes, this goes for regular algorithms too; think if-then). Also, parametrisation on e.g., your ‘software defined’ networks… Semi-static algorithms, with changes controlled not only by data but by almost-(relatively-)static params.
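The point about parametrised, semi-static algorithms fits in five lines. A minimal sketch, with the parameter names invented for the example (think SAP customising tables, or any if-then in a regular program):

```python
# One fixed algorithm whose behaviour is steered by almost-static
# parameters plus the content of the data itself. Parameter names are
# invented for illustration.

PARAMS = {"approval_threshold": 1000, "route_high_value": "manager_queue"}

def route_invoice(amount, params=PARAMS):
    """Same code path every time; which branch fires is decided by
    configuration (the parameter) and by the data (the amount)."""
    if amount > params["approval_threshold"]:
        return params["route_high_value"]
    return "auto_post"
```

Change the parameter, not the code, and the system behaves differently; that is the semi-static layer sitting between pure algorithm and pure data.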

Suddenly, when we have the latter but with the parameters badly pin-downable i.e., in neural network weights, we get #algorithmsagainsthumanity — that might seem a step further abstracted but isn’t. Advantage: There’s nothing new, also in e.g., auditing; just some translation and extension to do.

Major drawback: The discussion about the dangers of ‘algorithms’ is mostly carried among (sic) ‘governance’/legally affected debatebabblers, of the hideous kind. That have zero clue what an ‘algorithm’ is. But hey, they babble so nicely so why don’t you let them? Well, for one: they pretend to understand what they’re babbling about and pretend they have a say in these matters. Which, due to their demonstrated sheer incompetence, they do not. And their conclusions have zero weight anyway because their scope and depth is a joke. The only thing they seem capable of, is debating into eternity without ever reaching a conclusion, but they’ll remain happy they had such a fruitful debate – the only thing they’ll have nailed is the waste of time. Let alone that they for certain (sic) consider themselves ‘moral’; their mileage may vary widely.

Or is it about ML disregarding all that happened in systems development before, and now the ‘ethicists’ are trying to catch up from the wrong end ..? Like here, they come up with what had been around for over half a century. I suggest that all that think they’re seasoned debaters on the ‘dangers’ of ‘algorithms’, wait another half a century before being allowed / wheeled into the discussion again.

And yes, the previous two paragraphs exclude to the extreme, the few that do understand their business, you know, like the late Stephen Hawking (salve, eminence), prof. Bengio, et al.; they form the opposite end of the wisdom scale. But they seldom ‘debate’, they discuss. If you don’t know the difference, see the 1st line above.

There’s just no angle that helps, other than offboarding the debatebabblers.
And let them concentrate on:

[If you don’t agree with all the above but recognised Ferranza: you’re triply wrong. It’s Noto (Palazzo Nicolaci di Villadorata) and Ferranza doesn’t even exist.]

‘Bird – Museum(s)piece ..?

Hi y’all…

Anyone know in which museums these artful masterpieces are …?
Like, this, and this too, deserve to be.

No…? Oh the Fail if these weren’t saved for eternity. How many similar artists have been lost for us, and those after us, by oversight [not ‘governance’ but seeing without looking and registering…].
I’m off now, to enjoy similar Good Things.

Oh, and:

[Would be suitable, there! Gugg’nhm Bilbao. When side clutter enhances the total view]

Your life … goner, by AI’s decision

It was a pleasure to read AI’s life predictor service, which is quite the opposite of a predictor test to get into life.
What when you’re supposed to live (or, let’s say, be alive, in a narrower sense) for a long while still but you encounter a ‘self-driving’ car in Arizona ..? Will there be an AI system that can resurrect you? If so, why not employ/deploy that first ..?
What when you’ve managed to live beyond the predicted age ..? Will AI be powerful enough already to make good on its own predictions ..? E.g., through the steering and commandeering of such a human-disabled car…

Good luck by the way with training the system. Survivorship bias in your training set takes on a somewhat sinisterly literal meaning. How to undo this in the training data, and/or how to correct for future ‘car’-type accidents where we have no clue about the unknown-unknown new technology that may alter the possibilities of encountering such tech …? Oh yeah, ‘the future is uncertain so we cannot guarantee any validity of predictions’… Nice disclaimer-try but it makes the whole exercise futile from the word Go. Hence wasteful, qua energy hence immoral.
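The survivorship-bias point can be shown with toy numbers (all lifespans below are invented): if the training set only contains people whose lifespan is already known, i.e., the deceased, everyone who outlives the observation window is silently dropped and the ‘predictor’ is biased low.

```python
# Toy illustration (numbers invented) of survivorship bias in a
# life-expectancy training set.

def naive_expectancy(lifespans, observation_cutoff):
    """Train only on completed lives: anyone who died before the cutoff.
    This is exactly the biased training set the paragraph warns about."""
    observed = [x for x in lifespans if x <= observation_cutoff]
    return sum(observed) / len(observed)

cohort = [62, 70, 75, 81, 88, 94, 101]   # true lifespans of a full cohort
biased = naive_expectancy(cohort, observation_cutoff=85)  # sees only 4 lives
true_mean = sum(cohort) / len(cohort)
# biased (72.0) < true_mean (~81.6): the model never sees the long-lived,
# so it systematically under-predicts, and no disclaimer fixes that.
```

Correcting for this (censoring-aware methods exist) is exactly the ‘how to undo this in the training data’ question; the sketch only shows the direction and size of the hole.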

Edited to add, already pre-pub …: This, on how atrociously-bad snake oil most AI is …

Now then:

[Maybe AI can predict what sort of monument you will have preferred; London]

Security-in-Scrum and Audit-in-Agile

Recently, there was a brief flurry around the ability to ‘audit’ in agile environments or not.
Some notes:

  • Agile development methods (which would include organisational changes …!) can be audited of course. Either from the Soll or As Is, probably with a wide margin between the two, vis-à-vis the just-delivered To Be which in itself is rather some vague proxy for the actually originally wanted To Be. From this, flow your findings. Gaps abound.
  • Side stepping the issue of DevOps/DevSecOps or is that in:
  • Security in a narrow sense, by the way, can be done; easily implemented and maintained (if properly tested against possibly lost controls). But one fails when security controls — mind you, a vast majority of controls are about security-in-the-widest-sense of safeguarding (hah! as if your posture is effective beyond mere luck and happenstance!) against loss of value — just don’t fit a sprint or any dev cycle you’d use. Still, when considering the Wagile/Agifall bandwidth, security controls can be of the higher level and still get implemented in parts (sprints ..? or ‘supersprints’ or what?). If, big if, only your sprint deliverables aren’t pushed into production as soon as available. They are not MVPs ..!! First, even for the most minimalistic of minimal deliverables, some security controls larger than a sprint may (will, for sure) have to be in place; how ..? Security in epics doesn’t necessarily help here. Building the sec infra first, maybe — but that goes against about-all of the agile idea, right? (Wrong, by the way: the agile manifesto wasn’t cast in stone the way late, late laggards want to run their sprint’lets.) There’s more to life than picking up only the easiest, nicest post-its on Monday morning.
  • Also, it remains superbly important to remember this, on re-use of code…
  • Supposing you’ve ‘fixed’ security in your sprint teams e.g., by including a sec officer where appropriate+ (details to be had here), one (given the above: ‘hence’) can indeed audit e.g., at epic level, if the focus retreats to control objectives not controls — a dire need for sanity already in the audit world — and flexibility in the detailed implementation is maintained. Given such continuous development, there would be need for more Prove Me than before even if this runs against my personal grain. By way of showing the completeness and consistency of the controls you had put / maintained in place vis-à-vis [that one again] control objectives. But that’s just the Law of Conservation of Trouble; you want to be flexible and do the Agily-scrummy-kindergartenfreedom’y thing, then you also do the chores, only the sun rises for free (i.e., you don’t get owt for nowt).
  • If you don’t want to see that, you have no place in the organisational world. Controls are about non-flexibilising business, in their essence; their very (need for!) existence is based on not allowing ad hoc deviations. So, if you want no controls, you want no work.
  • Now, as a homework assignment: outline how this translates back up when the control objectives are just the How of a higher-level control objective’s What.
  • Oh and there’s another strand which is about the transformation of the IT function in (larger) organisations. No longer the controller over all things IT but having to turn into a flex intermediary between business demand (in broad terms! and including security requirements) and delivery-from-anywhere. I see trouble when this has to be squared with agile team work; who does what when parts are elsewhere?
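The ‘audit at epic level against control objectives, not controls’ idea from the bullets above can be sketched as a coverage check. A hedged sketch only: the objective names and the acceptable-controls mapping are invented for the example.

```python
# Instead of testing fixed controls, verify that every control objective
# remains covered by *some* implemented control, whatever shape this
# sprint's implementation took. All names below are invented.

CONTROL_OBJECTIVES = {
    "segregation_of_duties": {"four_eyes_review", "role_split"},
    "change_traceability": {"signed_commits", "ticket_link"},
}

def coverage_gaps(implemented_controls, objectives=CONTROL_OBJECTIVES):
    """Return objectives with no surviving control: the audit findings."""
    return sorted(
        obj for obj, acceptable in objectives.items()
        if not (acceptable & implemented_controls)
    )

# After a few sprints the team swapped role_split for four_eyes_review
# (fine: the objective is still met) but dropped both traceability
# controls (a finding):
gaps = coverage_gaps({"four_eyes_review", "unit_tests"})
```

The flexibility the bullet asks for lives in the sets: the detailed control may change every sprint, as long as the objective’s set intersection never goes empty.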

Never mind. ..?
For the trouble, I’ll give you:

[Not all plain sailing; St. Cat’s Docks London]

How to drown

You will be drowned in a river 2in deep on average. 2in being societally acceptable as average depth, whatever the consequences of you being put in the 50y deep spot. Or you were in the 1in spot but all were moved.

That’s how unbiasing goes. It’s still aggregate discrimination. And also this one: No data point can tell you which distribution it belongs to. Which implies, flipped, that no statistic can tell you whether one outcome is part of it. An individual case can never be assessed to be part of some statistical population (sub)class and then be treated accordingly.
Thus obliterating any validity of statistical justice. Only (root) cause analysis may help, and then you will have to decide on a whim whether the cause is culpable, and even then have to decide whether you’d want some [with 100% certainty later backfiring] win-lose correction or win-win, which you’ll have to seek out and strife [no, not strive] for.
Implying that human analysis and thought need to be applied; good! But also with the flip side that said human is personally liable for any mistakes, for the full damage amount. No hiding behind organisation-has-personhood lies. Full liability, for the full amount of damage, direct or indirect, however unquantifiable [to be overcompensated just to be on the safe side].
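The drowning metaphor in toy numbers (depths invented for the example): two ‘rivers’ with the same average depth, of which only one has the spot that kills you. The aggregate statistic is identical; the individual case it gets applied to is not.

```python
import statistics

# Toy numbers: same mean depth, very different fates for the individual.
uniform_river = [2, 2, 2, 2, 2]   # inches, everywhere
treacherous   = [1, 1, 1, 1, 6]   # same mean, one deep spot

same_mean = statistics.mean(uniform_river) == statistics.mean(treacherous)
deadly = max(treacherous) > max(uniform_river)
# Both True: the aggregate cannot tell you which river (distribution)
# your single data point belongs to, which is exactly the argument
# against 'statistical justice' above.
```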

Hence, also this: The right to be forgotten; what if some accidental individual-only specific characteristic tilted your training and you’ll have to redo it all again…

To conclude about all of the above: Why do ML-statistics if you have no logical hence legal grounds to use the results ..?

Oh well;

[Your unbiasedness for lunch; Paleis het Loo]

No Control but another Cycle

Tinkering with the learned stuff from recent discussions among peers, and including e.g., this and this past post among many related ones, I realised:

  • The good ol’ BCG matrix is still very, very valid today, with most large orgs stiffly in the bottom right corner (Dogs) or less right, but all desperately trying to move their business to the left. Some realising that they should, in fact, be on the upper deck but just can’t move there – are they, especially qua mind, incapable ..? Just hinting, not saying (outright).
  • Startups-to-scale-ups and what have we, the loose gun set, are quite in the upper left corner.
  • All the biggies want to be more like Elvis <previous bullet> and don’t realise that, of what they have been told by others to want to be, all agile’y flex’y start-up Millennial-culture loosely-connected companies, 95%++ fails. Skipping for a moment that the M generation has already turned into a gen that strives for (income and other) stability and protection more than we’ve seen for a long while, and one already has to look much further. Writing off the middle section; too bad qua societal costs but they’ve never been educated, trained or experienced to amount to much, overprotected as they were. Also not writing off the 45+ bunch, that do have tons of inspiration, stamina, education (sic), training and experience, plus energy to innovate.
  • But foremost, I learned from analysis and introspection [processing mindfully in a calm eye of the stormy world], that the main cause of current Schumpeterian failure stems from the fear-of-death cling to the ‘Management Control Cycle’ which in its current-day almost-exclusive form isn’t about management or control, nor is it a proper cycle. Since it’s confused too much [Dutch government / governance babblers: near-completely everywhere] with the ‘PDCA’ cycle.
  • What needs to change is all that <previous bullet> BS. Into supportive, facilitating management. Remember, one should hire people to do what they’re best at, and then let them ..? Well, there’s a little (sic) thing left out there, which is: what is left for you yourself, then? The answer to that is: what you would want to do: Take risks! Like a. extremely well-considered risk-based decisions, also about what not to do [McK: if it isn’t a choice to not do something else, your idea isn’t a strategy proper, you’re just an indecisive coward]; b. yes, you should from there instigate strategy execution, without which your ‘strategy’ is an abuse of that word or anything; c. yes, you may be wrong, but only if you didn’t risk-base your decisions properly i.e., didn’t use all available info to the max [and still at some point decided enough is enough and didn’t choke for fear of choice] and if you didn’t take Von Moltke to heart.

Which’all leads to the following:

Do we need to add …?
Do you dare this …!?

Yeah, I know, I know, there’s so much that can be added to this… E.g., this. Go ahead, add. As if that either diminishes the purpose of the above or there’s no extension possible to it, for varying assumptions about what it stands for.
For the view:

[‘Strategy’ may also lock you up in an outdated paradigm…! Ávila]

Mining to robo

Anyone out there that can share a clear case of an actual application of process mining … in order to move onto RPA ..?
Was thinking. That happens. A lot. All the time. With me, that is. Process mining currently is used in e.g., accountancy, to ‘get a grip’ on actual transaction flows. Those will nigh-always have some portion of deviation from the ‘intended’, designed one-size-fits-all transaction straitjacket. Yes, always.

Now I don’t care about that, as reality just doesn’t let itself be fitted into some pipe-dream one flow, and also don’t care that process mining is used to get that grip, mostly through visualisation and random qualitative professional judgment of admissibility of deviations. In the interim of the annual accounts verification. Yes, dear reader, it’s verification time ..! Falsification is just too easy; child’s play hence not even auditors’ play.
But … can’t the stratification of ‘actual’ process flows help enormously with RPA ..? Since with that, too, one strives for as little deviation as possible from the once dreamt-up ideal process flows, but will fail when deviations are not allowed. Can one turn the visual thing, which is driven by ‘automated’ analysis and captured in some form of electronic representation already, applying some form of point-and-click pruning, into a sufficiently stratified and already-in-code capture so that the transform into RPA can be done automatically ..? Using Low-to-No Code solutions, this should be easy.
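The stratification step asked about above can at least be sketched: count mined path variants by frequency and keep only the dominant ones as RPA candidates. Purely speculative; the event log and the 25% cut-off are invented for the example.

```python
from collections import Counter

# Stratify mined process variants by frequency; the frequent ones are
# candidates for robotic automation, the long tail of deviations stays
# with humans. Log and threshold are invented examples.

event_log = [
    ("receive", "check", "approve", "pay"),
    ("receive", "check", "approve", "pay"),
    ("receive", "check", "approve", "pay"),
    ("receive", "check", "reject"),
    ("receive", "approve", "pay"),            # a rare deviation
]

def automatable_variants(log, min_share=0.25):
    """Return the path variants frequent enough to script as robots."""
    counts = Counter(log)
    total = len(log)
    return [path for path, n in counts.items() if n / total >= min_share]

robots = automatable_variants(event_log)
# Only ('receive', 'check', 'approve', 'pay') clears the 25% bar (3/5).
```

Whether the step from such a stratified, already-in-code capture to generated RPA scripts exists in the wild is exactly the open question of this post.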

Or… hence the call for response … is this done already prolifically ..? Please reply, then.
Otherwise, isn’t it worth pursuing?

[Edited to add: No it’s not about screen scraping. Maybe not far off ..?]

But …:

[Do pick your battlefield(s) carefully; e.g. at Battle near Hastings]