Colluding AI

As more and more grunt work (like, so much of what’s done in the intensive people farms called ‘offices’) is replaced with AI, how soon will we find that some decision by a human, hardly in control anymore but totally reliant on the precooked algorithmic outcome provided by AI, will be contested in court – presided over by a judge, hardly in control anymore but totally reliant on the precooked algorithmic outcome provided by AI – the two AIs colluding against humans’ interests…

Note that “of course” there will be humans nominally handing out the final verdict(s), because so many (not enough yet) fought so hard (not hard enough yet) to keep a ‘human in the loop’. But they will have achieved not much more than the nominal thing, and there will quickly be far too few humans with enough experience (how would they gain that, when they haven’t gone through the grunt work themselves, including being allowed to err sometimes – how else would they have learned ..?) to be able to usefully overrule the AI. Usefully, in the sense that the AI will have all the better, rational even if outlandish, arguments… No more gut feelings… which may be part of what makes us human; whaddabout Kahneman’s 90% System 1 ..?

And then, still, what when AI finds it rational to re-introduce the death penalty ..? Swiftly executed, to preempt appeals?

Oh how bright is our future! Also:
[There was supposed to be a shut-down button somewhere in one AI/pillar at least… Now they switch each other On again …; Córdoba]

AI learning to explain itself

Recently, there was news again that indeed, ABC (or The ‘Alphabet’ Company Formerly Known As Google) was developing AI that could improve itself.
O-kay… No sweat, or sweat, qua bleak future for humankind, but …
Can the ‘AI’ thus developing itself maybe be turned first to learn how to explain itself ..? Then, this [incl. link therein!] will revert to the auditors’ original of second opinions… Since the self-explanatory part may very well be the most difficult part of ‘intelligence’, benefitting the most from the (AI improving itself)² part, or what?

And:

[Improving yourself as the imperative; Frank Lloyd Wright’s Beth Sholom at Elkins Park, PA]

No Dutch AI

How far behind is the Dutch (startup) scene with AI ..?
That may seem curt, but …
Really, there is no sign of a Dutch AI industry, or even industriousness.

Unbefitting the Dutch, is it not? ‘We’ should have all the brains needed, the industriousness, the venturing spirit, the openness to things-new.
But apparently, ‘we’ are still stuck in collectivist ideals, where rocking the boat is only allowed when it’s for some naïve notion of progress [this is no slight against Boyan Slat; on the contrary, I and everyone like his ideas and the heart and soul he puts into them]. When searching for ‘Dutch AI scene’, hardly anything turns up. This was among the few search results; ominously so.

So, it’s a Shame. And Why ..!?
Yes, I did list some why’s, but they don’t cut it against the Aye’s. We need a new élan! How to get such a thing going!?
And:
[If that is the neo-modernistest that you build / apparently want to spend your money on, then well you may be doomed indeed; Zuid-As Ams]

Visual on socmed shallowness

When considering the senses, is it not that Visual (having come to the party rather late in evolution – or has it? but that’s beside the point, is it not?) has been tuned to ultrahigh-speed 2D/3D input processing ..? Like, light waves (particles? who are you fooling?) happen to be the fastest thing around, qua practical human-scale environmental signals – so far, yeah, yeah… – and have been specialised for detection of danger all around, even qua motion at really high pace (despite the 24-fps frame blinking).
Thus the question arises: What sense would you select, when focusing on shallow processing of the high-speed response type? Visual, indeed. Biologically making it less useful for deep thought and connection, etc.

Now that the world has turned so Visual (socmed with its intelligence-squashing filters, etc.; AR/VR going in the same direction, of course), how could we expect anything other than the Shallows ..? Will we not destroy, by negative non-reinforcement, human intelligence, and have only consumers left at the will of ANI/ASI ..?

Not that I have the antidote… Or it would be to Read, and Study (with sparse use of the visual – not sound-bite sentences but somewhat more structured texts), and to do deep, very deep thinking without external interference.
But still… Plus:

[An ecosystem that lives off nanosecond trading – no need for human involvement so they’re cut out brutally; NY]

When ABC– will use AI, success

So it turns out that the company formerly known as Google, may very well enter the job market. Qua brokerage.
In which it may succeed (it already caused to-be competitors’ stocks to nosedive, a little at least), when deploying smart AI solutions.

Let’s hope, then, that Alphabet Jobs [as it might be called, in a stab’let at $AAPL ..?] will use the AI to bypass the most ridiculous aspects, which are many, of the current process. E.g., obliterating the tick-box atrocity – as certainly, its own search capability will burn its fuses when trying to find anyone on this planet who fits the requirements of just any job description as billed by ‘recruiters’. Also dropping similar requirements like ‘ten years of experience with this totally unknown system that only three current sysadmins can handle and that was implemented only two years ago’, or the infamous but near-certain-to-surface ‘millennial with thirty years of industry experience, will work for barely entry-level / intern salary’. Don’t say these requirements aren’t realistic; they show up for real, over and over again. They do! They’re there, everywhere!

And what does it tell you that ‘we’ may need AI to overcome this stupidity ..?
That Disruption with a capital is desperately called for.

We can hope, can’t we?

And:

[Tomb of the Unknown Candidate. No don’t wry-smile, pay some respect…!; Paris]

Music to AI’s ears

Will AI eventually appreciate music ..?

Not just appreciate in the sense of experiencing the quality of it – the latter having ‘technical’ perfection as its kindergarten-basement starting level only, where the imperfections are cherishable as huge improvements, yes indeed [Perfection Is Boring!] … but moreover, music appreciation having a major element of recognition, mostly subconscious, of memories of times almost immemorial.

Of course, the kindergarten perfection gauging, AI will be able to do easily. Will, or does; simple near-algorithmic A”I” can do that today.

Appreciating imperfections: the same, with a slight randomiser (-recogniser) thrown into the algo mix.
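As a purely hypothetical sketch (no actual system implied; every name and threshold here is made up for illustration), the ‘kindergarten perfection gauging’ plus the imperfection-cherishing randomiser-recogniser could be as crude as this: score a performance by how far note onsets stray from a metronomic grid, then treat small-but-nonzero drift as the cherishable zone.

```python
# Hypothetical sketch: a naive "technical perfection" gauge for a performance,
# scored by how far note onsets (in seconds) deviate from a strict metronomic grid.
def perfection_score(onsets, beat=0.5):
    """Return 1.0 for metronomic playing, approaching 0.0 as timing drifts."""
    deviations = [abs(t - round(t / beat) * beat) for t in onsets]
    mean_dev = sum(deviations) / len(deviations)
    return max(0.0, 1.0 - mean_dev / (beat / 2))

# The 'randomiser-recogniser': cherish small human imperfections, flag
# sterile drum-machine perfection and sloppy playing alike.
def cherishable(onsets, beat=0.5, lo=0.90, hi=0.995):
    return lo <= perfection_score(onsets, beat) <= hi

metronomic = [0.0, 0.5, 1.0, 1.5]    # drum-machine exact
human = [0.0, 0.52, 0.99, 1.51]      # slight, musical drift
print(perfection_score(metronomic))  # → 1.0
print(cherishable(human))            # → True
```

The drum machine scores a sterile 1.0 and falls outside the cherishable band, while the slightly drifting human take lands inside it – which is exactly the near-algorithmic, kindergarten-level gauging meant above, and exactly not the memory-laden appreciation discussed next.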

But the recollection part, even at a conscious level, requires memories to be there; and as far as AI goes (today), even ASI will have different memory structures, since the whole fact-learning processes are different. And don’t mention the subconscious side.

Yes, ASI can have a subconscious, of which we aren’t aware or even able to be aware [Note to self: to cover in audit philosophy development]. But when we don’t hear of it, was there a tree that fell in the forest?

I’m off on some tangent.

What I started out to discuss is: At what point does music appreciation through the old(est) memories recall, become an element of ‘intelligence’ ..?

With the accompanying question, on my priority list when discussing AxI: Is it, for humans ..?

And a bonus question: Do you really think that AI would prefer, or learn earlier about the excellence of, Kraftwerk or Springsteen? Alas, your first response was wrong; Kraftwerk’s is the kind of subtle, intelligent, hint-laden, apparently-simple stuff that is very complex and also deeply human – which you perceive only when listening carefully, over and over again, till you get the richness and all the emotions (there they are!) and the yearning for days gone by when the world was a better place. Springsteen: raw and Original-Forceful on the surface – but quickly showing a (rational-level) algorithmics play without as much depth; even the variations and off-perfection bits are well thought-out, leaving you with much less relatable memories, if any.

Your thoughts are appreciated. And:
[Appropriately seemingly transparent but completely opaque; some EU parliament (?), Strasbourg]

Imminent enrichment through AI — of jobs ..?

Anyone else feel like the breakthrough of AI into all sorts of jobs (yes, most certainly not only the bohrrring repetitive-manual-labour kind – that may be one of the kinds that comes much later in the sequence, since it requires far more sophisticated physical/intellectual (yes) interactions than previously thought (by humans)) is imminent?

And does anyone see that the horror of replacement of humans XOR your co-workers will come only (a bit) later, when AI-driven systems have become good enough to replace you completely – leaving the spoils of labour to the (intensive people-farming) factory owners ..?
With, in the shortish meantime, your job being ‘enhanced’ through AI, by the enrichment of having to deal less with the simple stuff and having more time available for the more Intelligent (parts of) your job. Possible, on condition of:

  • Such more intelligent parts of your job existing; many a manager may find there is no such thing, or that the room for manoeuvre isn’t there;
  • You being able, capable, of performing such more intelligent job parts; with the focus on reporting (send/receive; hardly ever anything more than extremely-simpleton processing in between), your capabilities have probably shrivelled into unusability;
  • Time availability being what holds you back so far; extending the previous condition, you may find yourself – be honest now! – to have actually had that time available already, but to have used it for busywork, like being a Manager or so. And/or for loafing, or do I repeat myself. Now that you may get time available for Intelligent stuff, you may not even notice;
  • You getting paid more, or at least the same; as it turns out that the enrichment-by-cutting-out-the-bottom-part, leads to a serious pay cut as your Overlords now see your function as much less time-consuming or bottom-line-feeding. Especially the latter may turn out to be an eye-opener…
  • You getting sufficient time to build a new job; the creeping replacement of You by AI-based systems might speed up significantly as the first rewards transpire — to the Owners again — and hence the cry [not tag; ed.] for More may intensify the efforts to replace you ever more, funded by … your increased utility if at all, or the increasing utility of the you-replacing AI at least.

Suffice to notice that a priori it will be very, very difficult to meet all these conditions, if even anyone would try (apart from you, but you’re too singleton in this to pull that off). So…

Oh well, there’s always:
[A different look at Casa da Música; Porto]

One extra for Two AI tipping point(er)s

To add, to the post below of a month ago.
This here piece, on how AI software is now writing (better) AI software. Still in its infancy, but if you recall the Singularity praise (terroristic future), you see how fast this can get out of hand. Do you?

The old bits:

You may have misread that title.

It’s about tips, being pointers, two, to papers that give such a nice overview of the year ahead in AI-and-ethics (mostly) research. Like, this and this. With, of course, subsequent linkage to much other useful stuff that you’d almost miss even if you paid attention.

Beware: quite a number of follow-up posts will delve into all sorts of issues listed in the papers, and will quiz or puzzle you depending on whether you did pay attention or not. OK, you’ll be puzzled, right?

And:
[Self-learned AI question could be: “Why?” – but to be honest, and demonstrating some issues, that’s completely beside the point; Toronto]

You Don’t Call The Shots

I.e., You Are Not In Control!

This, as a consequence of the ‘In Control’ definition, where controlling and ‘steering’ (what Steering Committees are about, if properly functioning…) are the same.
But as explained previously, such steering doesn’t happen (is impossible) already in the complexity of a Mediocristan world, let alone in the mix (to say the least) with the Extremistan that you’ll find everywhere, and certainly in your business.

No, you can risk-manage your business to the hilt, or even make it extremely brittle, anti-resilient, through a totalitarian bureaucracy that leaves no human breathing space but switches to a full, 100% bot-run enterprise with DAO-style ops (hence one that will fail with complete certainty when interacting with humans, e.g., your clients),
because even completely risk-managed operations still weigh costs, and so remain imperfect, or aren’t risk-managed at all…
And for the imperfection of fully-reactive quod non ‘security’, see the above and many of my previous posts…

So either way, things will happen that you didn’t order. Estimates run from 50-50 (where you have zero clue about which 50 you do control) to 90%, 95%, 99% not-your-call shots. The latter category since your brain is not wired to deal with more than 10% ‘free will’; the rest is, as scientifically determined, reactive to the environment, however clever and deep-minded you think yourself to be (the more the latter, the less you are… if you have to say you are wise, you aren’t). Which makes the majority of what happens to you and your organisation accidental and from the outside. Which is, by the very definition, not you being ‘in control’.

Despite all the ‘GRC’ liars that should be called out for that quality.

[Edited after scheduling, to add: In this here piece, there are very, very useful pointers to break away from the dismal Type I and II In Control (quod non) Statements of all shades. Should be studied, and seen to refer back to the foundations of auditing ..!]

Oh, and:
[Designed to belittle humans — failing, since they’re still there…; DC]

Two AI tipping point(er)s

You may have misread that title.

It’s about tips, being pointers, two, to papers that give such a nice overview of the year ahead in AI-and-ethics (mostly) research. Like, this and this. With, of course, subsequent linkage to much other useful stuff that you’d almost miss even if you paid attention.

Beware: quite a number of follow-up posts will delve into all sorts of issues listed in the papers, and will quiz or puzzle you depending on whether you did pay attention or not. OK, you’ll be puzzled, right?

And:
[Self-learned AI question could be: “Why?” – but to be honest, and demonstrating some issues, that’s completely beside the point; Toronto]