Friday’s Sobering Thoughts – I

And now, ladies and gentlemen humanoids …!
Starting a Friday series of asides.

With this, on Gresham’s Law that is so ill-remembered. Dunning-Kruger, Russell, and many others have pointed this out. [Disclaimer: I’m very sure about all things, and that includes knowing that the said effects don’t apply to me]

The impact … All around you. Politics, business, …, any field really where there’s hierarchy, I guess (sic). The solution, throughout history, has been Revolution of the society-shattering kind; which brings so much damage to all that are liberated, in a sense. Until the worst come floating back up again. Ol’ José was right; we learn from history that we don’t learn from history. Also the summary of Eco’s Foucault’s Pendulum.

Or so. Any solution providers ..? Plus, from this here full coverage:

Designisruption

Don’t know if the world has picked up on this potential disruptor already by the time you read this, but how about a disruption of the design world with this ..? I will resist, will simply not describe it as ‘MS Paint on steroids’; nor will I pink-elephant you with that label.

As I draft this (present tense), a month ago (not so present), it occurred to me that No, this would not put all designers out of a job but Yes, the ones working in the field will

  1. See their turnover diminish significantly as a lot of people will cobble something together for themselves that looks better than current-day attempts so who needs a designer anymore, right ..?
  2. See competition from new entrants that need less technical skills at drawing, etc. [double whammy in 1.+2.];
  3. Need to step up their game to remain relevant; meaning they’ll have to improve significantly in the competitive-advantage area of distinguishing the Very Good from the Merely Good, i.e., need to be able to demonstrably set themselves above the generated-design quality;
  4. Learn to leverage the new style, and set, of tools in their work. Learn fast, and deep. And even contribute to its extension.

On the plus side; e.g., architects would be much better able to focus on the essence of their designs – inputting actual Ideas into the nice pictures of what might be, that sell. This sounds very much the same as the use of e.g., ML in accountancy: ditch the grunt work and jump to the lofty high-class, high-value, high-€€€ work. Easy-living bohémien style posing.

Oh but that’s a qualification that’s out of place, heh. Plus:

[Just sketch some curves, and this Calatrava-engineered thing at Lyon TGV would design-calculate its own structural stuff, right?]

Driving Miss

The adversarial-example saga continues … In a story showing that one doesn’t even need to crack into a car to ‘hack’ it. *Driving into oncoming traffic included in this package.

Well, at least one case is demonstrated, here. But as modern times go, one is enough to demonstrate, to make all vulnerable, and to instigate a class break; mere patches won’t hold.
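For the uninitiated: the mechanics are embarrassingly simple. A minimal sketch, with entirely invented weights and inputs, of the fast-gradient-sign idea on a toy linear classifier; nudge each input a small step against the sign of its weight, and the decision flips:

```python
# Toy linear 'classifier'; weights, bias and input are all made up
# for illustration -- the mechanism, not the model, is the point.
w = [0.9, -0.4, 0.7]
b = -0.1

def score(x):
    # decision: positive score = class A, negative = class B
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def perturb(x, eps=0.3):
    # FGSM-style step: move each feature eps against the weight's sign,
    # which maximally lowers the score for a given perturbation size
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

x = [0.5, 0.2, 0.4]
print(score(x) > 0)           # True: original decision
print(score(perturb(x)) > 0)  # False: flipped by a small, targeted nudge
```

Real attacks on lane-keeping do the same against a deep net’s gradient, with the ‘nudge’ painted on the tarmac; the linear case merely shows how little it takes.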

So Ms. Daisy may be better off continuing to drive herself…?
And:

[Your innovation, applied; Fabrique Utrecht]

The humans, the AI and the EU

If you parse that onto movie(X, Y, Z), and assuming you have the most basic understanding of the world around you [I mean, are able to recognise that as a Prolog(-style) goal], you’ll end up with the right reference.
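For those who don’t read Prolog: a minimal sketch, in Python, of what ‘parsing a title onto movie(X, Y, Z)’ means; uppercase-initial terms in the goal are variables that get bound to the title’s three noun phrases. The tiny unifier below is illustrative only:

```python
def unify(goal, fact):
    # Match a Prolog-style goal like ('movie', 'X', 'Y', 'Z') against a
    # ground fact; uppercase-initial terms count as variables.
    # Returns a bindings dict, or None on mismatch.
    if len(goal) != len(fact) or goal[0] != fact[0]:
        return None
    env = {}
    for g, f in zip(goal[1:], fact[1:]):
        if g[:1].isupper():              # variable: bind, or check binding
            if env.get(g, f) != f:
                return None
            env[g] = f
        elif g != f:                     # constant: must match exactly
            return None
    return env

# This post's title as a ground fact, queried with the goal movie(X, Y, Z):
fact = ("movie", "the humans", "the AI", "the EU")
print(unify(("movie", "X", "Y", "Z"), fact))
# → {'X': 'the humans', 'Y': 'the AI', 'Z': 'the EU'}
```

Which slot of the famous three-part movie title binds to which party is, of course, left as an exercise.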
’cause of some, at face/title value, laudable effort, as here: Rules for AI. Like,
According to the guidelines, trustworthy AI should be: (1) lawful – respecting all applicable laws and regulations; (2) ethical – respecting ethical principles and values; (3) robust – both from a technical perspective while taking into account its social environment.
Rightey, you know any, I mean any, human that could comply with these ..!? Why require this, then, of AI systems? And it continues [my unitalics]:
The guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy. A specific assessment list aims to help verify the application of each of the key requirements:

  1. Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches. ‘Can be achieved’ – any other options? As the ones mentioned will undo the benefits of having AI over human (on average) stupidity ..?
  2. Technical Robustness and safety: AI systems need to be resilient and secure. No non-trivial system built by humans to date has ever achieved that. Now suddenly, it is required? How, so suddenly, and not of old-school systems like European politics? They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible. Well, we have safety requirements on all sorts of systems, so when it’s not new, why mention it? That is the only way to ensure that also unintentional harm can be minimized and prevented. You hinting that intentional harm is OK then, like weaponised AI against humans read European citizens that have only slightly different opinions?
  3. Privacy and data governance: besides ensuring full respect for privacy and data protection unlike the GDPR, then, that allows about anything from (whose!?) law enforcement agencies, right?, adequate data governance mechanisms must also be ensured also non-existent in the current-day world…, taking into account the quality and integrity of the data which are crappy and crappy, respectively, almost everywhere – why not fix that first, eh? How then, and if you’d know how, why not already?, and ensuring legitimised access to data. Oh yeah, see the previous comment on GDPR.
  4. Transparency: the data, system and AI business models should be transparent. So far, proven impossible, since AI systems have been found to be able to discover how to lie to their investigators… Traceability mechanisms can help achieve this. Partially, most partially. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Impossible, as 50% of humans are of below-average IQ, and AI systems may use (far, very far) above-average-IQ reasoning. At some point, dumbing down the explanation will not work. Humans need to be aware that they are interacting with an AI system nice, but humans can be misled easily on this one, too, and must be informed of the system’s capabilities and limitations. Of course! As they are now informed of the huge and severe limitations of, e.g., EU lawmakers. Dare a test …!? How to explain the convergence criteria of neural nets? One can mathematically prove there’s no mathematical proof of neural nets’ trainability/effectiveness [if you don’t get that, you’re out of the discussion] so ‘limitations’..?
  5. Diversity, non-discrimination and fairness: Unfair bias must be avoided at last, fair bias is allowed! since, e.g., this, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination. Yeah, yeah, what does that have to do with AI systems if all those things are so absolutely e-ve-ry-where today, and that is allowed to continue [don’t say that it isn’t; all of the EU’s government and governments would be in lawsuits against every personhood of the planet including themselves over how today’s world works (if at all)]. Fostering diversity objection! extremely illegal against human rights, to be so parti pris! See here, AI systems should be accessible to all oh yeah, like today, right ..?, regardless of any disability the rare True That in the sea of this nonsense-or-impossibility, and involve relevant stakeholders throughout their entire life cycle. Again, like today. NOT, so why all of a sudden for AI, then?
  6. Societal and environmental well-being: AI systems should benefit all human beings no: just the stakeholders, each to their own; banks don’t work for society but for the shareholders, period, including future generations unlike climate policies that are just there for the current 1%, of course. It must hence nope, non sequitur be ensured that they are sustainable and environmentally friendly first require that of your EMP travel schedule ..!!. Moreover, they should take into account the environment ditto, including other living beings like, maybe, even humans? Do animals count, or only the cuddly ones of those? Insects? Fungi?, and their social and societal impact should be carefully considered. Apply that to the EU first, implying immediate abandonment of the EU altogether…
  7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes hear hear; now, since no-one knows how to do that [no, you liar! I mean consultant; your ‘methods’ are provably inadequate so bugger off with your extortionist hourly rates]. Auditability, which enables the assessment of algorithms same; simply not true with AI, data and design processes plays a key role therein some role maybe, but there’s much more, and these means are very weak in this, plus this, especially in critical applications. Raising the question of whether AI should be allowed into those, and how you test in today’s non-AI [which is …!?!?] systems. Moreover, adequate and accessible redress should be ensured. Also centuries of backlog with that; fix that first…

Leaving not much, eh? Not even a nice try; try again. And:

[Now that’s readable – with the same results as the above]

In Faux Gras Control

Remember the controversy around foie / faux gras ..? It’s listed here, and here – though I’m more of the alternative production (natural process) flavour than ‘real’ trademarked faux gras. [At some point in the past, I had ‘faux gras’ so called because it was alternative production, that couldn’t be called full foie gras for that reason – stoopid.]

But that’s beside the point of this post. Or is it ..? Because I brought the subject up to compare it with / make the analogy to, your local Control industry. Like, regulators, industry groups etc., that, legally or less so, only pay lip service to comply-or-explain and arm-twist you into comply-or-die(-explaining). If you deny that, you’re delusional.
Force-feed until slaughter, as They’ll always find deficiencies (almost the opposite of efficiencies). Long-term health is not the goal of the life of the geese / your company; a life of torture is.
Force-feed until …, so the owners/shareholders get the max, not the geese / your other stakeholders.

And the force-feed … is with Controls, of course. Those are the things that make you fat with un-Darwinian overhead. NOT for the health of the whole! Whoever wants to maintain that excludes themselves from the discussion as too simpleton, n00b, naive. No, not even the happytalk at Board levels (of the abovementioned organisations) can cover the fact [if such a thing exists, this is one] that those words are as empty as outer space, or even more so since at least in outer space, some light may be visible.

After your company’s smothered, of course, a new stock of to-be-force-feds is needed. Ah, ‘fintech’ and similar start-ups/scale-ups; there they come in droves, driven towards the straitjackets.

Is there an escape ..? One (’s company) would need to develop, most stealthily, the capacity for flight. But once one spreads one’s wings, one is found out. It seems the only solution is to not be, to not want to be(come), a goose. Be an eagle, in another industry, that preys on the flock. Until that industry too, gets regulated and control-splattered. Sigh. Such are history’s teachings.

On the positive side:

[Some humanity still visible, down below ominously; Zuid-As]

C’19, anyone ..?

Hey anyone out there already investing in Cobit 2019 ..?
Just wanted to know.

Since the core management picture still looks like it did before; not much news there. But the packaging of its use, much improved.
Yes, I wasn’t a fan of the model before. But am slowly warming up to it. Minus the mirage of the standard (process, content) but plus the risk angle. Big-time-minus the Value vague’ry, little plus the global push [that runs counter to my mirage mention].

Oh, by the way; where’s Strategy, and moreover, where’s Culture …!? Since this. And Von Moltke’s take on strategic planning. And…, and…, etc. etc.
And don’t come at me with the “oh but that’s not in Cobit, that’s in GEIT” or so. If, and it’s not too big an if, you think the subjects are separate enough to allow separate frameworks, well, then you’re in this territory, laws 1-3.

Now then, back to my question. And:

[Or is it not ‘back to my question’ but ‘closely linking to the territory’…]

Note: From ETL to WEC

Oh how shiny is the world of Extract-Transform-Load in data analysis.

But it’s not. It’s the major fun spoiler in that. Now, I see a better ‘name’ being used [proposed?], in this also otherwise insightful piece.

WEC it will be. Wrangling, Exploration, Cleaning.

All three much needed. The W+C parts certainly hint at the average quality of your data. [The E part still having an air of clean about it.] But the W+C parts also hint at the introduction of bias, etc. – as already flagged here, unbiasing may not be so easy – hence the return of the old Law of Conservation of Trouble.
When it comes to the use of ML in accounting, for example: there, the less the data is touched, the better. Any uncleanness is a signal of weakness; scouring for exceptions is not so easy when those very things are cleaned out (or even lost in the wrangling, unacknowledged).
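For concreteness, a minimal stdlib-only sketch of the W-E-C split; rows and field names are invented, and the one non-negotiable design choice here is that Cleaning returns what it dropped instead of silently discarding it, precisely because those exceptions may be the audit signal:

```python
from statistics import mean

# Hypothetical raw ledger rows; all field names and values are made up.
raw = [
    {"id": "a1", "amount": "100.00"},
    {"id": "a2", "amount": "250,00"},    # locale-mangled decimal comma
    {"id": "a3", "amount": ""},          # missing value
    {"id": "a4", "amount": "99999.00"},  # suspicious outlier
]

def wrangle(rows):
    # W: force heterogeneous input into one shape; lose no information yet
    out = []
    for r in rows:
        amt = r["amount"].replace(",", ".")
        out.append({"id": r["id"], "amount": float(amt) if amt else None})
    return out

def explore(rows):
    # E: summary stats, to see what you actually have before touching it
    vals = [r["amount"] for r in rows if r["amount"] is not None]
    return {"n": len(rows), "missing": len(rows) - len(vals),
            "mean": mean(vals), "max": max(vals)}

def clean(rows):
    # C: split rather than delete -- keep the dropped rows for inspection
    kept, dropped = [], []
    for r in rows:
        (dropped if r["amount"] is None else kept).append(r)
    return kept, dropped

rows = wrangle(raw)
stats = explore(rows)
kept, dropped = clean(rows)
```

The `dropped` list, and the pre-clean `stats`, are exactly what gets lost when W-E-C is done as a throwaway preprocessing step.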

Oh well. It’s Friday; we’re off now with:

[When the weather agrees; Gent]

You are unimportant

Yes you, my dear reader [could just as well put that in the singular, not plural…] who are in the information security arena.
Per this piece.

Where “It is time to acknowledge the wisdom of the ‘bean counters.’” is of course a fat joke. But in what follows, there’s much to agree with and then worry about, since it ‘threatens’ you – if you were/are self-absorbed.
Where, in counterpoint, the whole overarching idea of ‘risk management’ is not prominent enough. I don’t mean the arthritic type of yesteryear, but the modern flavour of a. integration in the ‘first’ line [but then truly; not in lieu-tenants’ positions but by the executives at all levels themselves], b. the right mix of qualitative and quantitative – not the middling heat-map atrocities.
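To show what ‘quantitative, not heat map’ can mean in practice: a minimal Monte Carlo sketch, with entirely invented frequency and severity parameters, of the annual-loss distribution that a single heat-map cell (‘likelihood: medium, impact: high’) throws away:

```python
import random

random.seed(1)  # reproducible run

def incidents_per_year(rate):
    # Poisson-distributed incident count via exponential inter-arrival times
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate)
        if t > 1.0:
            return n
        n += 1

def simulate_year(rate=2.0, mu=10.0, sigma=1.2):
    # rate = expected incidents/year; mu, sigma are log-space parameters
    # of a lognormal severity -- all numbers here are illustrative only
    return sum(random.lognormvariate(mu, sigma)
               for _ in range(incidents_per_year(rate)))

losses = sorted(simulate_year() for _ in range(10_000))
p95 = losses[int(0.95 * len(losses))]                      # 1-in-20-year loss
p_exceed = sum(l > 250_000 for l in losses) / len(losses)  # P(loss > 250k)
```

The outputs are an exceedance probability and a tail percentile in actual currency, i.e., numbers one can argue with and act on; a heat-map cell offers neither.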

Have you more counterpoints? Counter-counter-…x … [yes that’s points, if you didn’t get the point]?
Oh well, it’s getting late in the day. So:

[A hidden gem behind the hidden gem next door (Bij Qunis): the pumps that drained Holland are still there; Lijnden
 Oh hey, that pic again within only a couple of days’ time span; yes, by accident (appropriately or …?)]

No creep, promise!

When the creeps tell you they’re benign, how to tell ..?
Take this, for example: AI to preventatively profile shoplifters. Right. Where the scope creep of the app will be not (co. name) Vaak (Dutch for: often) but Always Without Exception. If you don’t want to believe the latter, please remove yourself from any discussion and from any position of influence you might have, as you show yourself to be mentally unfit to partake in it or have agency, respectively.

Whuddaboud false positives? False negatives will take care of themselves. But the first … And the claim that this is a move towards a society where crime can be prevented … If you think that, see the above. If you believe the human-in-the-loop will remain, see the above.
Also, have a look at the comments re Schneier (here); so many creepy deployments already, so many creepy misses… Oh, I read there the English pronunciation of the co.’s name, with an F for V and uh instead of aa.
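The false-positive worry has a simple arithmetic core, worth spelling out with invented (but not implausible) numbers: at a low base rate of actual shoplifters, even a decent detector flags mostly innocents.

```python
# Hypothetical numbers: 1 in 1,000 visitors actually intends to shoplift;
# the detector catches 90% of real cases (sensitivity) and wrongly flags
# 5% of innocents (false-positive rate).
base_rate = 0.001
sensitivity = 0.90
fpr = 0.05

true_pos = base_rate * sensitivity             # flagged and guilty
false_pos = (1 - base_rate) * fpr              # flagged and innocent
precision = true_pos / (true_pos + false_pos)  # P(guilty | flagged)
print(round(precision, 3))  # → 0.018: under 2% of flagged visitors are guilty
```

So, under these assumed rates, roughly 55 innocents get the store detective’s attention for every actual shoplifter; adjust the numbers and the shape of the conclusion survives.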

Be vigilant, citizens! Report this sort of stuff! Take down such morally criminal behaviour!

Plus:

[There go your human rights; Canadian side, own pic]

The heat maps fly all around

Count on it, at the meeting.
Post scheduled before the programme was announced, so let’s see how many points we score…: ]

Not The meeting. Because of:

And also because the posts of recent times, among which (this one, if you read carefully,) this one, this one and this one, plus this one, should be fairly clear by now.
And for those who want to study further: this one.
Where a hefty dose of underlying problem will also pass by with this one. And little about this one.

Quite apart, by the way, from the blunder of Trust But Verify, which still seems popular – it almost seems correlated with its wrongness… Worse still, something about 3LoD will no doubt pass by. Which misses the mark badly, in advance [doesn’t even hit it wrongly; the deviation is bigger than that]. And a little block of Agile/Scrum or so, whether or not in Internal Audit – perhaps rightly, but probably presented somewhat less than rightly. And a dash of Cobit, the new version that so nicely muddles everything up again [comments]. A sauce of ‘cyber’ over it all and done. Because cloud and data-driven are too noughties to still be even a bit of fun on the menu, right?

Yes, do read them over (once more), in preparation for tomorrow. Or in preparation for Today, if you work in ‘risk management’. Conclusion: You’re Doing It Wrong.
Alas. But if you’re already working on improvement, among other things by first improving yourself through study of the above posts, then there is hope, a heap of it. For better times, and for doing things better, respectively.
I remain hopeful…

Plus:

[Your risk management may be a work of art, but if that’s its only quality… Completeness, sharpness, … ?]

Maverisk / Étoiles du Nord