Blog

Flooding (news on global warming) versus stochastics

Where the headline reads like something to do with e.g., vintners having to look forward to more acidic grapes, or moving North to be able to make light-enough styles.
Which is a correct reading.
But this post is more on a current-day issue.

Vines needing a really, really long time to mature into anything useful, you know. Then again: Think of the vast expanses of Canada that will open up to winemaking… Oh Canada, your promise …
And the issue at hand, being of Today. Where already we had ‘Stochastic Terrorism‘ – and now it’s to be linked to ‘Flooding‘.

Well, at least, that’s my idea. The latter, dumbing down the masses so Dunning-Kruger may take effect, and overly affect. In between, sowing the wind and reaping the storm with the former, ST.

See? It’s a double whammy. [I mean that in a culturally-neutral sense, excluding suggestive senses oh here I might have gone again]

Now what?
And:

[the bite always looms under the surface, either from a local or from an ages-long outside view; Baltimore]

Your AI Project

As a brief intermission: Your AI project is truly similar to the highest-tech car around.

In the glossy brochure and in the specs. Das AIuto it is (Truth In AI Engineering), The Ultimate AI-Driven Machine. The Best AI or Nothing. There Is No non-AI Substitute.

Then, grand delivery:

There you go! You have your shiny toy!
Unsure when the AI world will mature…

Yes, personally I’d prefer this brand (CD or SC), or this.

Good AI

…implementations. Now the hubbub [in lieu of a. using in lieu of improperly as so often is the case, b. ‘hype’] is settling down a bit, though some laggards are still at it instead of keeping mum and just doing it in a proper way.

The proper way is that the real talk is of now.

  1. On the one hand, there’s an enormous lack of use of the tools that are available now – not widely available, but they’re there, if one would take a look, and see their building ‘tool’ status. No, I don’t mean the glitzy stuff with fancy screens. I mean the ‘advanced’ stuff, that captures the forefront of what was available in theory, functional descriptions and ready-made demonstrators of … yes, twenty-five years ago.
    Those have been transformed, by scraping off the (then) practical drawbacks of lack of training data, of compute power, and comparative economic advantages vis-à-vis human labour. The latter, having come down not in cost [your mileage may vary when comparing purchasing power ..!!] but in intelligence. Yes. Read my many posts on ‘management’, cooperative efforts, et al. The fact (sic) is that now, the marginal out-performance of ‘systems’ gains interest.
  2. On the other, there’s the realisation that as with any hype, the proof is in the long, arduous, classic systems-implementation drudgery of building and rolling out actual ‘systems’ – not only the software/hardware, as that is to be somewhere in a cloud [even if private, localised, i.e., a data centre of yours] or you’re doing IT wrong, but also including the assembly-line operators duly called ‘managers’ (them again!) that think they still control stuff but are allowed to tighten a screw here and there and certainly not do more, as they shouldn’t interfere with the system since they only can/will do wrong then – the system being the assembly line of information flows, of bits and bytes that should be untouched as that’s where the money is, not in the mere auxiliaries called humans. For lack of a better pejorative word.
    This then includes Change Management of the organisation kind, sometimes jokingly called ‘process‘ change. Which is harder to ‘get’ than understanding (true; sic) AI technology; the latter being much beyond mere theoretical physics Sheldon, let alone rocket science. But still quite easy when one puts one’s mind to it. Unless you’re not even believing in yourself for probably a very good reason.
  3. On the third hand [don’t even bother to smallmindcorrect me], there’s more serious stuff to do around the system engineering. Take it as a normal business investment project, like here. Ah, now there‘s the clue to this post. But without the above intro, it wouldn’t have been as much fun, wouldit’vebeen..?

So, finally again:

[Now go contemplate, e.g., where this is. Any half-decent AI implementation will render ‘Convento Corpus Christi, V.N. de Gaia’ in seconds; what about you ..?]

No, you’re wrong. Just pay appropriately.

Again, the issue of the ‘cyber’security skill gap (#ditchcyber, and skill ..?) came up, in this thing. Which seems like a logical piece. But it isn’t. It falls into the same trap as a great many other articles, et al.: When discussing rising pay, do they apply economics 101 ..? No they don’t. They see rising pay, but not the enormous chasm between the risen pay and the market-clearing pay. The former, paltry. The latter …:

Why on earth should people that, apparently, have so much more impact on the results of any company than the CEO or Board has, not be paid commensurate incomes? If, if, if, the company were rationally managed, the ‘cyber’punks that protect the company would earn five to fifty times more than the CEO does. As that would reflect the value-to-impact ratio. No, let the CEOs, Boards, do their babble double full time anywhere; it’s still babble without much, if any, positive contribution to the bottom line. [The very rare exceptions noted. Fun fact: You’re not one of them.]
So, where CEOs claim [corollary fun fact: falsely!] to have such an impact that they deserve the 500-5000+ times the income of workers that actually do have an impact, the ‘cyber’punks to be hired should have such a multiple above that, or are you illogical, irrational on the verge of insanity ..? [not saying on which side of the verge.]
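To make the multiples explicit – purely illustrative, with hypothetical figures that are mine, not the post’s – the logic compounds the ratios:

```python
# Illustrative arithmetic only (assumed figures): if a CEO claims 500x a
# worker's income, and the 'cyber'punk's impact merits the same multiple
# *above the CEO*, the compounding shows where the argument lands.
worker = 40_000                 # assumed annual worker income
ceo_multiple = 500              # the claimed CEO-over-worker ratio
ceo = worker * ceo_multiple     # 20_000_000
cyberpunk = ceo * ceo_multiple  # 10_000_000_000 -- the reductio
print(ceo, cyberpunk)
```

Which is, of course, exactly the point: apply the claimed logic consistently and it collapses under its own weight.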

Another option is to not pay the in-demands (?) so ridiculously … and also not the Boards. Oh how huge the pools of money would be that then would flow to the employees that actually matter …?

Also, the ‘skills’ gap is mainly a supply gap, made so most certainly by the demand side (as in this; in Dutch but G Translate will do some, possibly mediocre, trick).
So, for now:

[Yet another Board member (uh-huh, you-know-what-I-mean) compensating design (though Good); London]

Friday’s Sobering Thoughts – I

And now, ladies and gentlemen humanoids …!
Starting a Friday series of asides.

With this, on Gresham’s Law that is so ill-remembered. Dunning-Kruger, Russell, and many others have pointed this out. [Disclaimer: I’m very sure about all things, and that includes knowing that the said effects don’t regard me]

The impact … All around you. Politics, business, …, any field really where there’s hierarchy I guess (sic). The solution … throughout history, has been Revolution of the society-shattering kind; that brings so much damage to all that are liberated, in a sense. Until the worst come float back up. Ol’ José was right; we learn from history that we don’t learn from history. The summary of Eco’s Foucault’s Pendulum ‘s’s.

Or so. Any solution providers ..? Plus, from this here full coverage:

Designisruption

Don’t know if the world has picked up on this potential-disruptor already by the time you read this, but how about a disruption of the design world in this ..? I will resist, and simply not describe it as ‘MS Paint on steroids’; nor will I pink-elephant you with that label.

As I draft this (present tense), a month ago (not so present), it occurred to me that No, this would not put all designers out of a job but Yes, the ones working in the field will

  1. See their turnover diminish significantly, as a lot of people will cobble something together for themselves that looks better than current-day attempts – so who needs a designer anymore, right ..?
  2. See competition from new entrants that need less technical skills at drawing, etc. [double whammy in 1.+2.];
  3. Need to step up their game to remain relevant; meaning they’ll have to improve significantly in the competitive-advantage area of distinguishing the Very Good from the Merely Good, i.e., need to be able to demonstrably set themselves above the generated-design quality;
  4. Learn to leverage the new style, set, of tools in their work. Learn fast, and deep. And even contribute to its extension.

On the plus side; e.g., architects would be much better able to focus on the essence of their designs – inputting actual Ideas into the nice pictures of what might be, that sell. This sounds very much the same as the use of e.g., ML in accountancy: ditch the grunt work and jump to the lofty high-class, high-value, high-€€€ work. Easy-living bohémien style posing.

Oh but that’s a qualification that’s out of place, heh. Plus:

[Just sketch some curves, and this Calatrava-engineered thing at Lyon TGV would designcalculate its own structural stuff, right?]

Driving Miss

The adversarial example saga continues … In a story where one doesn’t even need to crack into an auto to ‘hack’ it. *driving into oncoming traffic included in this package.

Well, at least, one case is demonstrated, here. But as modern times go, one is enough to demonstrate, and to make all vulnerable, and to instigate a class break; mere patches won’t hold.
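Not the actual car exploit, of course, but for intuition: a toy fast-gradient-sign sketch against a hypothetical linear classifier (all weights and inputs assumed, mine for illustration), showing why a small, well-aimed nudge per input feature is enough to flip a model’s decision:

```python
# Toy sketch (illustrative, not the real lane-keeping hack): for a linear
# classifier sign(w . x), the steepest adversarial direction per feature
# is simply sign(w_i) -- the fast-gradient-sign idea. A small step per
# feature flips the decision, which is the intuition behind a few
# well-placed stickers flipping a vision model's output.
def sign(v):
    return 1.0 if v > 0 else -1.0

def classify(w, x):
    return sign(sum(wi * xi for wi, xi in zip(w, x)))

w = [0.5, -1.0, 2.0, 0.25]   # hypothetical model weights
x = [1.0, 1.0, -0.4, 2.0]    # a 'clean' input: classified as -1.0

eps = 0.6                    # per-feature perturbation budget
x_adv = [xi + eps * sign(wi) for wi, xi in zip(w, x)]

print(classify(w, x))        # -1.0
print(classify(w, x_adv))    # 1.0 -- label flipped
```

The class-break point follows: the attack works against the model, not against one car, so patching individual vehicles won’t close it.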

So Ms. Daisy may be better off continuing to drive herself…?
And:

[Your innovation, applied; Fabrique Utrecht]

The humans, the AI and the EU

If you parse that onto movie(X, Y, Z) and assuming you have the most basic of understanding about the world around you [I mean, be able to recognise that as a Prolog (-style) goal] you’ll end up with the right reference.
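For the non-Prolog readers, a tiny sketch in Python (my illustration, not Prolog proper) of what unifying the title with a movie(X, Y, Z) goal amounts to:

```python
import re

# Sketch: bind a "The X, the Y and the Z" title onto a Prolog-style
# goal movie(X, Y, Z), returning the variable bindings on success.
def parse_movie_goal(title):
    """Unify a 'The X, the Y and the Z' title with movie(X, Y, Z)."""
    m = re.fullmatch(r"The (.+), the (.+) and the (.+)", title,
                     re.IGNORECASE)
    if m is None:
        return None  # goal fails: title doesn't fit the template
    return {"X": m.group(1), "Y": m.group(2), "Z": m.group(3)}

print(parse_movie_goal("The humans, the AI and the EU"))
# {'X': 'humans', 'Y': 'AI', 'Z': 'EU'} -- the same shape as a certain
# movie(Good, Bad, Ugly), hence "the right reference".
```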
’cause of some, at face/title value, laudable effort, as here: Rules for AI. Like,
According to the guidelines, trustworthy AI should be: (1) lawful – respecting all applicable laws and regulations; (2) ethical – respecting ethical principles and values; (3) robust – both from a technical perspective while taking into account its social environment.
Rightey, you know any, I mean any human that could comply with these ..!? Why require this, then, of AI systems? And it continues [my unitalics]:
The guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy. A specific assessment list aims to help verify the application of each of the key requirements:

  1. Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches ‘Can be achieved’ – any other options, as the mentioned, will undo the benefits of having AI over human (on average) stupidity ..?
  2. Technical Robustness and safety: AI systems need to be resilient and secure. No non-trivial system built by humans to date, has ever achieved that. Now suddenly, it is required? How, so suddenly and not on old-school systems like European politics? They need to be safe, ensuring a fall back plan in case something goes wrong, as well as being accurate, reliable and reproducible. Well, we have safety requirements on all sorts of systems so when it’s not new, why mention it? That is the only way to ensure that also unintentional harm can be minimized and prevented. You hinting at intentional harm that is OK then, like weaponised AI against humans read European citizens that have only slightly different opinions?
  3. Privacy and data governance: besides ensuring full respect for privacy and data protection unlike the GDPR then that allows about anything from (whose!?) law enforcement agencies, right?, adequate data governance mechanisms must also be ensured also non-existent in the current-day world…, taking into account the quality and integrity of the data which is crappy and crappy, respectively, almost everywhere – why not fix that first, eh? How then and if you’d know, why not already?, and ensuring legitimised access to data. Oh yeah, see the prev comment on GDPR.
  4. Transparency: the data, system and AI business models should be transparent. So far, proven impossible, since AI systems have been found to be able to discover how to lie to the investigators… Traceability mechanisms can help achieving this. Partially, most partially. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Impossible as 50% of humans are underaverage IQ, and AI systems may use (far, very far) above-average-IQ-reasoning. Somewhere, dumbing down the explanation will not work. Humans need to be aware that they are interacting with an AI system nice, but humans can be misled easily on this one, too, and must be informed of the system’s capabilities and limitations. Of course! As they are now informed of the huge and severe limitations of, e.g., EU lawmakers. Dare a test …!? How to explain the convergence criteria of neural nets? One can mathematically prove there’s no mathematical proof of neural nets’ trainability/effectiveness [if you don’t get that, you’re out of the discussion] so ‘limitations’..?
  5. Diversity, non-discrimination and fairness: Unfair bias must be avoided at last, fair bias is allowed! since e.g., this, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination. Yeah, yeah, what does that have to do with AI systems if all those things are so absolutely e-ve-ry-where today, and that is allowed to continue [don’t say that it isn’t; all of the EU’s government and -s would be in lawsuits against every personhood of the planet including themselves over how today’s world works (if at all)]. Fostering diversity objection! extremely illegal against human rights, to be so parti pris! See here, AI systems should be accessible to all oh yeah, like today, right ..?, regardless of any disability the rare True That in the sea of this nonsense-or-impossibility, and involve relevant stakeholders throughout their entire life cycle. Again, like today. NOT, why all of a sudden for AI, then?
  6. Societal and environmental well-being: AI systems should benefit all human beings no just the stakeholders each to their own; banks don’t work for society but for the shareholders period, including future generations unlike climate policies that are just there for the current 1% of course. It must hence nope, non sequitur be ensured that they are sustainable and environmentally friendly first require that of you EMP travel schedule ..!!. Moreover, they should take into account the environment ditto, including other living beings like, maybe even humans? Do animals count, or only the cuddly ones of those? Insects? Fungi?, and their social and societal impact should be carefully considered. Apply that to the EU first, implying immediate abandonment of the EU altogether…
  7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes hear hear; now, since no-one knows how to do that [no you liar! I mean consultant; your ‘methods’ are provably inadequate so bugger off with your extortionist hourly rates]. Auditability, which enables the assessment of algorithms same. Simply not true with AI, data and design processes plays a key role therein some role maybe, but there’s much more and these means are very weak in this, plus this, especially in critical applications. Raising the question of whether AI should be allowed into those, and how you test in today’s non-AI [which is …!?!?] systems. Moreover, adequate and accessible redress should be ensured. Also centuries of backlog with that, fix that first…

Leaving not much, eh? Not even a nice try; try again. And:

[Now that‘s readable – with the same results as the above]

In Faux Gras Control

Remember the controversy around foie / faux gras ..? It’s listed here, and here – though I’m more of the alternative production (natural process) flavour than ‘real’ trademarked faux gras. [At some point in the past, I had ‘faux gras’ so called because it was alternative production, that couldn’t be called full foie gras for that reason – stoopid.]

But that’s beside the point of this post. Or is it ..? Because I brought the subject up to compare it with / make the analogy to, your local Control industry. Like, regulators, industry groups etc., that legally or less so, only pay lip service to comply-or-explain and arm-twist you into comply-or-die(-explaining). If you deny that, you’re delusional.
Force-feed until slaughter, as They’ll always find deficiencies (almost the opposite of efficiencies). Long-term health is not the goal of the life of the geese / your company, a life of torture is.
Force-feed until …, so the owners/shareholders get the max, not the geese / your other stakeholders.

And the force-feed … is with Controls, of course. That are the things that make you fat with un-Darwinian overhead. NOT for the health of the whole! Whoever wants to maintain that, excludes themselves from the discussion as too simpleton, n00b, naive. No, not even the happytalk at Board levels (of the abovementioned organisations) can cover the fact [if such a thing exists, this is one] that those words are as empty as outer space, or even more so since at least in outer space, some light may be visible.

After your company’s smothered, of course a new stock of to-forcefeeds is needed. Ah, ‘fintech’ and similar startup/scale-ups; there they come in droves, driven towards the straitjackets.

Is there an escape ..? One (‘s company) would need to develop, most stealthily, the capacity for flight. But once one spreads one’s wings, one is found out. It seems the only solution is to not be, to not want to be(come), a goose. Be an eagle, in another industry, that preys on the flock. Until that industry too, gets regulated and control-splattered. Sigh. Such are history’s teachings.

On the positive side:

[Some humanity still visible, down below ominously; Zuid-As]

C’19, anyone ..?

Hey anyone out there already investing in Cobit 2019 ..?
Just wanted to know.

Since the core management picture still looks like it did before; not much news there. But the packaging of its use, much improved.
Yes, I wasn’t a fan of the model before. But am slowly warming up to it. Minus the mirage of the standard (process, content) but plus the risk angle. Big-time-minus the Value vague’ry, little plus the global push [that runs counter to my mirage mention].

Oh, by the way; where’s Strategy, moreover, where’s Culture …!? Since this. And Von Moltke’s on strategic planning. And…, and… and etc.etc.
And don’t come with the “oh but that’s not in Cobit, that’s in GEIT” or so. If, not too big an if, you think the subjects are separate enough to allow separate frameworks, well, then you’re in this territory, laws 1-3.

Now then, back to my question. And:

[Or is it not ‘back to my question’ but ‘closely linking to the territory’…]