
Noli me auto

… Yes, again on auto cars [for over a century known as auto-mobiles or just “auto’s”]. This time, a possibly new subject: Who has the self-driving capabilities ..? In a sense close to ‘ownership’ that is. As training is so hard that “As a result, most companies presently guard their machine learning data as trade secrets. But trade secret protection has its drawbacks, one of which is to society at large. Unlike technology taught in patented disclosures, which could allow a new entrant to catch up (provided it licenses or designs around the patent), an AV data set of an unwilling licensor is not obtainable absent a trade secret violation or duplicating the considerable miles driven.”[1] — from a distance, it reads a bit like this. Not (only) re the Old Guard, but moreover referring to the stand-offish pose of AI gone beyond AGI… Think that one through.

In practice, for now, it leads to “Keeping AV data secret also creates a “black box” where consumers and authorities are unable to fairly and completely evaluate the proficiency/safety of the AI systems guiding the vehicles. At most, consumers will likely have to rely on publicly compiled data regarding car crashes and other reported incidents, which fail to adequately assess the underlying AI or even isolate AI as the cause (as opposed to other factors). As it is, AV developers’ “disengagement reports” — those tallying incidents where the human attendant must take over for the AI — vary widely, depending on how the developer chooses to interpret the reporting requirement. Without comparable data, consumers are often left with nothing more than anecdotal evidence as to which AV system is the safest or most advanced.” — yet again, intransparency.

Intransparency, as one great blocker of agency … allocatability, in normal human life. Here, we may have a problem..? Wrong Quote alert..! as here.
Nevertheless, this brings back the discussion about driver’s licenses. Currently, car-driver by car-driver.
Soon, both!
As the cars will have to be certified categorically, per software release per physical platform! As the interplay (and compatibility) may vary wildly, and e.g., sw may bloat and hence spoil response times even if tech capabilities don’t undo / worsen the sw update(s). Every time, all the time. These failure phenomena creep in, too, not only turn up stepwise. [how’zat for brevity?]
As the drivers will have to handle all sorts of exceptions when in all sorts of platforms. Yes, they’ll need drivers’ licenses still, and be able to demonstrate much more competency, experience and skill in much more of the exceptional circumstances in which they might have to take over from the autos. Driving round the block is useless; shitstorm/clusterfuck traffic is what you get your license for in the first place! So many people will fail, so many will never be able to get the experience (from where, would you suggest ..?).

So, agency … Plus:

[Ah, and there’s NY traffic …]

[1] Quoted from this great introduction.

Friday’s Sobering Thoughts – II

Yup, another one, actually from the same series: A useful reading on the tribulations of fake news, in a way. In another one [way], the scope is much wider. Last Friday’s Friday’s So(m)bering already referred to Ortega y Gasset. This one, too.

Again; how do we fight these things …? One human at a time; if only we knew what ‘time’ is … Even Kant couldn’t make much of it [re: Kritik der reinen Vernunft – no really, all that it has is a derivative notion not an operational definition] – anyways that takes too long.

OK then, leaving you with:

Flooding (news on global warming) versus stochastics

Where the headline reads like something to do with e.g., vintners having to look forward to having more acidic grapes, or moving North to be able to make light-enough styles.
Which is a correct reading.
But this post is more on a current-day issue.

Vines need a really, really long time to mature into anything useful, you know. Then again: Think of the vast expanses of Canada that will open up to winemaking… Oh Canada your promise …
And the issue at hand, being of Today. Where already we had ‘Stochastic Terrorism’ – and now it’s to be linked to ‘Flooding’.

Well, at least, that’s my idea. The latter, dumbing down the masses so Dunning-Kruger may take effect, and overly affect. In between, sowing the wind and reaping the storm with the former, ST.

See? It’s a double whammy. [I mean that in a culturally-neutral sense, excluding suggestive senses oh here I might have gone again]

Now what?
And:

[the bite always looms under the surface, either from a local or from an ages-term outside view; Baltimore]

Your AI Project

As a brief intermission: Your AI project is truly similar to the highest-tech car around.

In the glossy brochure and in the specs. Das AIuto it is (Truth In AI Engineering), The Ultimate AI-Driven Machine. The Best AI or Nothing. There Is No non-AI Substitute.

Then, grand delivery:

There you go! You have your shiny toy!
Unsure when the AI world will mature…

Yes, personally I’d prefer this brand (CD or SC), or this.

Good AI

…implementations. Now the hubbub [in lieu of a. using in lieu of improperly as so often is the case, b. ‘hype’] is settling down a bit, though some laggards are still at it instead of keeping mum and just doing it in a proper way.

The proper way being that the real talk is of now:

  1. On the one hand, there’s an enormous lack of use of the tools that are available now – not widely available, but they’re there, if one would take a look, and see their building-‘tool’ status. No, I don’t mean the glitzy stuff with fancy screens. I mean the ‘advanced’ stuff, that captures the forefront of what was available in theory, functional descriptions and ready-made demonstrators of … yes, twenty-five years ago.
    Those have been transformed, by scraping off the (then) practical drawbacks of lack of training data, of compute power, and comparative economic advantages vis-à-vis human labour. The latter having come down not in cost [your mileage may vary when comparing purchasing power ..!!] but in intelligence. Yes. Read my many posts on ‘management’, cooperative efforts, et al. The fact (sic) is that now, the marginal out-performance of ‘systems’ gains interest.
  2. On the other, there’s the realisation that as with any hype, the proof is in the long, arduous, classic systems-implementation drudgery of building and rolling out actual ‘systems’ – not only the software/hw, as that is to be somewhere in a cloud [even if private, localised, i.e., a data centre of yours] or you’re doing IT wrong, but also including the assembly-line operators duly called ‘managers’ (them again!) that think they still control stuff but are allowed to tighten a screw here and there and certainly not do more, as they shouldn’t interfere with the system since they only can/will do wrong then – the system being the assembly line of information flows, of bits and bytes, that should be untouched as that’s where the money is in, not in the mere auxiliaries called humans. For lack of a better pejorative word.
    This then includes Change Management of the organisation kind, sometimes jokingly called ‘process‘ change. Which is harder to ‘get’ than understanding (true; sic) AI technology; the latter being much beyond mere theoretical physics Sheldon, let alone rocket science. But still quite easy when one puts one’s mind to it. Unless you’re not even believing in yourself for probably a very good reason.
  3. On the third hand [don’t even bother to smallmindcorrect me], there’s more serious stuff to do around the system engineering. Take it as a normal business investment project, like here. Ah, now there’s the clue to this post. But without the above intro, it wouldn’t have been as much fun, wouldit’vebeen..?

So, finally again:

[Now go contemplate, e.g., where this is. Any half-decent AI implementation will render ‘Convento Corpus Christi, V.N. de Gaia’ in seconds; what about you ..?]

No, you’re wrong. Just pay appropriately.

Again, the issue of the ‘cyber’security skill gap (#ditchcyber, and skill ..?) came up, in this thing. Which seems like a logical piece. But it isn’t. It falls into the same trap as a great many other articles, et al.: When discussing rising pay, do they apply economics 101 ..? No they don’t. They see rising pay, but not the enormous chasm between the risen pay and the market-clearing pay. The former, paltry. The latter …:

Why on earth should people that, apparently, have so much more impact on the results of any company than the CEO or Board has, not be paid respective incomes? If, if, if, the company were rationally managed, the ‘cyber’punks that protect the company would earn five to fifty times more than the CEO does. As that would reflect the value-to-impact ratio. No, let the CEOs, Boards, do their babble double full time anywhere; it’s still babble without much, if any, positive contribution to the bottom line. [The very rare exceptions noted. Fun fact: You’re not one of them.]
So, where CEOs claim [corollary fun fact: falsely!] to have such an impact that they deserve the 500-5000+ times the income of workers that actually do have an impact, the ‘cyber’punks to be hired should have such a multiple above that, or are you illogical, irrational on the verge of insanity ..? [not saying on which side of the verge.]
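To make the (il)logic concrete, a back-of-the-envelope sketch in Python — every figure in it (worker pay, the CEO multiple, the impact ratio) is assumed purely for illustration, not sourced from anywhere:

```python
# Back-of-the-envelope only; all figures below are made-up illustrations.
def implied_pay(worker_pay: float, ceo_multiple: float, impact_ratio: float) -> float:
    """If a CEO 'earns' ceo_multiple x worker pay for claimed impact, then
    someone with impact_ratio x the CEO's actual impact should, by that
    very same (il)logic, earn the CEO's pay times that ratio again."""
    return worker_pay * ceo_multiple * impact_ratio

# Assumed: EUR 40k worker pay, a 500x CEO multiple, and security staff
# with 5x the CEO's actual impact on the bottom line.
print(implied_pay(40_000, 500, 5))  # 100000000.0
```

Which is exactly the point: take the multiples-for-impact argument seriously and the numbers run away immediately — or the premise was babble to begin with.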

Another option is to not pay the in-demands (?) so ridiculously … and also not the Boards. Oh how huge the pools of money would be that then would flow to the employees that actually matter …?

Also, the ‘skills’ gap is mainly a supply gap, made so most certainly by the demand side (as in this; in Dutch but G Translate will do some, possibly mediocre, trick).
So, for now:

[Yet another Board member (uh-huh, you-know-what-I-mean) compensating design (though Good); London]

Friday’s Sobering Thoughts – I

And now, ladies and gentlemen humanoids …!
Starting a Friday series of asides.

With this, on Gresham’s Law that is so ill-remembered. Dunning-Kruger, Russell, and many others have pointed this out. [Disclaimer: I’m very sure about all things, and that includes knowing that the said effects don’t regard me]

The impact … All around you. Politics, business, …, any field really where there’s hierarchy I guess (sic). The solution … throughout history, has been Revolution of the society-shattering kind; that brings so much damage to all that are liberated, in a sense. Until the worst come float back up. Ol’ José was right; we learn from history that we don’t learn from history. The summary of Eco’s Foucault’s Pendulum ‘s’s.

Or so. Any solution providers ..? Plus, from this here full coverage:

Designisruption

Don’t know if the world has picked up on this potential-disruptor already by the time you read this, but how about a disruption of the design world in this ..? I will resist the temptation to, will simply not, describe it as ‘MS Paint on steroids’; nor will I pink-elephant you with that label.

As I draft this (present tense), a month ago (not so present), it occurred to me that No, this would not put all designers out of a job but Yes, the ones working in the field will

  1. See their turnover diminish significantly as a lot of people will cobble something together for themselves that looks better than current-day attempts so who needs a designer anymore, right ..?
  2. See competition from new entrants that need less technical skills at drawing, etc. [double whammy in 1.+2.];
  3. Need to step up their game to remain relevant; meaning they’ll have to improve significantly in the competitive-advantage area of distinguishing the Very Good from the Merely Good, i.e., need to be able to demonstrably set themselves above the generated-design quality;
  4. Learn to leverage the new style, set, of tools in their work. Learn fast, and deep. And even contribute to its extension.

On the plus side; e.g., architects would be much better able to focus on the essence of their designs – inputting actual Ideas into the nice pictures of what might be, that sell. This sounds very much like the use of e.g., ML in accountancy: ditch the grunt work and jump to the lofty high-class, high-value, high-€€€ work. Easy-living bohémien-style posing.

Oh but that’s a qualification that’s out of place, heh. Plus:

[Just sketch some curves, and this Calatrava-engineered thing at Lyon TGV would designcalculate its own structural stuff, right?]

Driving Miss

The adversarial example saga continues … In a story that one doesn’t even need to crack into an auto to ‘hack’ it. *driving into oncoming traffic included in this package.

Well, at least, one case is demonstrated, here. But as modern times go, one is enough to demonstrate, and to make all vulnerable, and to instigate a class break; mere patches won’t hold.
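For the curious, the core mechanic behind many such adversarial examples can be sketched in a few lines. Below, a toy fast-gradient-sign-style perturbation of a made-up linear classifier in plain Python — emphatically not the demonstrated AV attack; the model, weights and epsilon are all assumptions for illustration only:

```python
# Toy sketch of the fast-gradient-sign idea behind many adversarial
# examples. The 'model' is a linear scorer; weights, input and epsilon
# are made-up illustration values, not any real AV system's.

def score(w, x, b):
    # Linear decision function: positive => class 'lane marking'.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm(w, x, b, eps):
    # For a linear score the gradient w.r.t. x is just w; nudge every
    # input feature by eps against the current class to flip the label.
    direction = -sign(score(w, x, b))
    return [xi + eps * direction * sign(wi) for wi, xi in zip(w, x)]

w = [0.6, -0.4, 0.8]
x = [1.0, 0.5, 1.2]   # scored positive: 'lane marking'
b = -0.2
x_adv = fgsm(w, x, b, eps=0.9)
print(score(w, x, b) > 0, score(w, x_adv, b) > 0)  # True False
```

The perturbation is per-feature bounded yet flips the decision — and since it exploits the model, not one car, every instance running that model is exposed at once: the class break.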

So Ms. Daisy may be better off continuing to drive herself…?
And:

[Your innovation, applied; Fabrique Utrecht]

The humans, the AI and the EU

If you parse that onto movie(X, Y, Z) and assuming you have the most basic understanding of the world around you [I mean, be able to recognise that as a Prolog(-style) goal] you’ll end up with the right reference.
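[Aside, for those who’d want the parse made concrete: a minimal Python sketch of matching such a Prolog-style goal against a fact — the term representation and the fact below are my own illustration, not any real Prolog engine.]

```python
# Minimal sketch of unifying a Prolog-style goal movie(X, Y, Z) with a
# fact. Terms are (functor, args) tuples; capitalised args are variables.

def unify(goal, fact):
    gf, ga = goal
    ff, fa = fact
    if gf != ff or len(ga) != len(fa):
        return None                  # functor or arity mismatch
    bindings = {}
    for g, f in zip(ga, fa):
        if g[:1].isupper():          # Prolog variable
            if bindings.get(g, f) != f:
                return None          # one variable, two different values
            bindings[g] = f
        elif g != f:
            return None              # constant mismatch
    return bindings

goal = ("movie", ["X", "Y", "Z"])
fact = ("movie", ["the_humans", "the_ai", "the_eu"])
print(unify(goal, fact))
# {'X': 'the_humans', 'Y': 'the_ai', 'Z': 'the_eu'}
```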
’cause of some, at face/title value, laudable effort, as here: Rules for AI. Like,
According to the guidelines, trustworthy AI should be: (1) lawful – respecting all applicable laws and regulations; (2) ethical – respecting ethical principles and values; (3) robust – both from a technical perspective while taking into account its social environment.
Rightey, you know any, I mean any human that could comply with these ..!? Why require this, then, of AI systems? And it continues [my unitalics]:
The guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy. A specific assessment list aims to help verify the application of each of the key requirements:

  1. Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches. ‘Can be achieved’ – any other options? As the ones mentioned will undo the benefits of having AI over human (on average) stupidity ..?
  2. Technical Robustness and safety: AI systems need to be resilient and secure. No non-trivial system built by humans to date has ever achieved that. Now suddenly, it is required? How, so suddenly, and not on old-school systems like European politics? They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible. Well, we have safety requirements on all sorts of systems, so when it’s not new, why mention it? That is the only way to ensure that also unintentional harm can be minimized and prevented. You hinting at intentional harm that is OK then, like weaponised AI against humans read European citizens that have only slightly different opinions?
  3. Privacy and data governance: besides ensuring full respect for privacy and data protection unlike the GDPR, then, that allows about anything from (whose!?) law enforcement agencies, right?, adequate data governance mechanisms must also be ensured also non-existent in the current-day world…, taking into account the quality and integrity of the data which is crappy and crappy, respectively, almost everywhere – why not fix that first, eh? How then? and if you’d know, why not already?, and ensuring legitimised access to data. Oh yeah, see the prev comment on GDPR.
  4. Transparency: the data, system and AI business models should be transparent. So far, proven impossible, since AI systems have been found to be able to discover how to lie to the investigators… Traceability mechanisms can help achieving this. Partially, most partially. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Impossible as 50% of humans are underaverage IQ, and AI systems may use (far, very far) above-average-IQ-reasoning. Somewhere, dumbing down the explanation will not work. Humans need to be aware that they are interacting with an AI system nice, but humans can be misled easily on this one, too, and must be informed of the system’s capabilities and limitations. Of course! As they are now informed of the huge and severe limitations of, e.g., EU lawmakers. Dare a test …!? How to explain the convergence criteria of neural nets? One can mathematically prove there’s no mathematical proof of neural nets’ trainability/effectiveness [if you don’t get that, you’re out of the discussion] so ‘limitations’..?
  5. Diversity, non-discrimination and fairness: Unfair bias must be avoided at last, fair bias is allowed! since e.g., this, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination. Yeah, yeah, what does that have to do with AI systems if all those things are so absolutely e-ve-ry-where today, and that is allowed to continue [don’t say that it isn’t; all of the EU’s government and -s would be in lawsuits against every personhood of the planet including themselves over how today’s world works (if at all)]. Fostering diversity objection! extremely illegal against human rights, to be so parti pris! See here, AI systems should be accessible to all oh yeah, like today, right ..?, regardless of any disability the rare True That in the sea of this nonsense-or-impossibility, and involve relevant stakeholders throughout their entire life cycle. Again, like today. NOT, why all of a sudden for AI, then?
  6. Societal and environmental well-being: AI systems should benefit all human beings no, just the stakeholders each to their own; banks don’t work for society but for the shareholders, period, including future generations unlike climate policies that are just there for the current 1%, of course. It must hence nope, non sequitur be ensured that they are sustainable and environmentally friendly first require that of your MEP travel schedule ..!!. Moreover, they should take into account the environment ditto, including other living beings like, maybe even humans? Do animals count, or only the cuddly ones of those? Insects? Fungi?, and their social and societal impact should be carefully considered. Apply that to the EU first, implying immediate abandonment of the EU altogether…
  7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes hear hear; now, since no-one knows how to do that [no you liar! I mean consultant; your ‘methods’ are provably inadequate so bugger off with your extortionist hourly rates]. Auditability, which enables the assessment of algorithms same. Simply not true with AI, data and design processes plays a key role therein some role maybe, but there’s much more and these means are very weak in this, plus this, especially in critical applications. Raising the question of whether AI should be allowed into those, and how you test in today’s non-AI [which is …!?!?] systems. Moreover, adequate and accessible redress should be ensured. Also centuries of backlog with that, fix that first…

Leaving not much, eh? Not even a nice try; try again. And:

[Now that‘s readable – with the same results as the above]

Maverisk / Étoiles du Nord