One extra for Two AI tipping point(er)s

To add to the post below, of a month ago:
This here piece, on how AI software is now writing (better) AI software. Still in its infancy, but if you recall the Singularity praise (and the terroristic-future warnings), you see how fast this can get out of hand. Do you?

The old bits:

You may have misread that title.

It’s about tips, being pointers, two of them, to papers that give such a nice overview of the year ahead in AI-and-ethics (mostly) research. Like, this and this. With, of course, subsequent linkage to much other useful stuff that you’d almost miss even if you did pay attention.

Beware of quite a number of follow-up posts that will delve into all sorts of issues listed in the papers, and will quiz or puzzle you depending on whether you did pay attention or not. OK, you’ll be puzzled, right?

And:
[Self-learned AI question could be: “Why?” but, to be honest and demonstrating some issues, that’s completely beside the point; Toronto]

You Don’t Call The Shots

I.e., You Are Not In Control!

This, as a consequence of the ‘In Control’ definition, where the controlling and ‘steering’ (what Steering Committees are about, if properly functioning … ) are the same.
But as explained previously, such steering doesn’t happen (is impossible) already at the complexity of a Mediocristan world, let alone with the mix-in (to say the least) of Extremistan that you’ll find everywhere, and certainly in your business.

NO, you can risk-manage your business to the hilt, or even make it extremely brittle, anti-resilient, through totalitarian bureaucracy that leaves no human breathing space but switches to a full, 100% bot-run enterprise with DAO-style ops (which will hence fail with complete certainty when interacting with humans, e.g., your clients),
because completely risk-managed stuff still weighs costs, so is imperfect, or isn’t…
And on the imperfection of fully-reactive quod non ‘security’, see the above and many of my previous posts…

So either way, things will happen that you didn’t order. Estimates run from 50-50 (where you have zero clue about which 50 you do control) to 90%, 95%, 99% not-your-call shots. The latter category, since your brain is not wired [link: huh] to deal with more than 10% ‘free will’ and the rest is, as scientifically determined, reactive to the environment, however clever and deep-minded you think yourself to be (the more the latter, the less you are … if you have to say you are wise, you aren’t). Which makes the majority of what happens to you and your organisation accidental and from the outside. Which is, by the very definition, not you being ‘in control’.

Despite all the ‘GRC’ liars that should be called out for that quality.

[Edited after scheduling, to add: In this here piece, there are very, very useful pointers to break away from the dismal Type I and II In Control (quod non) Statements of all shades. Should be studied, and seen to refer back to the foundations of auditing ..!]

Oh, and:
[Designed to belittle humans — failing since they’re still there…; DC]

On your own, or forever be weak

Just a note that ‘cyber’security vendors (that hate #ditchcyber) will not save you, whatever their claims. Because they live off the perpetuation of the problem, and will make you weaker through lack of upkeep of your own strengths, at whatever levels they were.
Just a note that this applies to ‘intelligent’ devices of whatever sort, too. Like, The Shallows squared; home voice-recognising butlering devices (is there a category name for those already? The Echos, Alexas, Homes I mean) or the bots out there on the ‘net, self-driving cars, etc. etc.

So, ed-ju-cay-shun is still to be pursued, in all directions! And:
[Yes, art education as well, to not skew your perspective…; DC sculpture garden]

Two AI tipping point(er)s

You may have misread that title.

It’s about tips, being pointers, two of them, to papers that give such a nice overview of the year ahead in AI-and-ethics (mostly) research. Like, this and this. With, of course, subsequent linkage to much other useful stuff that you’d almost miss even if you did pay attention.

Beware of quite a number of follow-up posts that will delve into all sorts of issues listed in the papers, and will quiz or puzzle you depending on whether you did pay attention or not. OK, you’ll be puzzled, right?

And:
[Self-learned AI question could be: “Why?” but, to be honest and demonstrating some issues, that’s completely beside the point; Toronto]

17 views

Just a tip: 2017 will be all about Augmented Reality. Maybe not VR (or is that a mere intermediate-phase subset?), but AR.

When M$ adds some capabilities to W10 (the platform, or binary), the groundswell (or is it an undercurrent, undertow?) is sure to grow.

So, Be Prepared to see innovation in business software after the years of stale(mate) in ‘ERP’; no more database-system-user-system stacking, but a mountain of data with lean and mean engines on top, now extended with AR to doodle around with — and maybe … do something useful. All of business IT will dwindle in significance; far fewer users as simple one-screwturn assembly-line workers (to be (sic) replaced by robots anyway), but something-large-and-vague with well-paid, agile top users in the cloud atop. Top users only, the best and brightest (self-declared), the shiny überemployees, the stars. Huh. How’d they get there, be experienced enough, not be hindered by any wisdom …? How’d they get so blinkered, conceited, presumptuous, etc.? How’d they not get caught in the emperor’s new clothes?

I’ve been instructed (not) to be more positive, to not apply debit where debit is absolutely due… So, how can we turn today’s silo’d work into creative, innovative, flexible functions of tomorrow [and the day after; best wishes ;-] ..?

Plus:

[Your yacht may be a piece of art, but still…; Zuid-As Amsterdam (it was)]

Oh… Yes but still, now it’s true:

[Edited to add: …]

Rejoice, you the Puzzled

Unless you were doing stuff on Nocial Media (see tomorrow — never mind, I’m not one for linear Time), you may have missed or noticed (same) the release of a facsimile of one of the most veritable puzzles of the ages (Western world), in this here thing, which is posted here. Yeah, that’s how hyperlinks work, I kno, I kno.

So, now you the puzzlers, AKA cryptographers, around the globe may swoop down in even larger numbers than before, to crack the thing. Or, probably, not. And you knew about this whole thing.

Question: (Why | _ ) hasn’t already some Watson-class frigate AI tool (literally, certainly not figuratively) been set loose on this ..? Lack of purpose? Or would it be a good ‘Turing test’ of sorts, if we test the capabilities (learning/analysis time ..!?) of any AI tool by the time it would take to make mincemeat out of the Manuscript — Duh-tch for tearing apart. Any attempt after the first successful one would need to be instructed not to find the solution out there on the ‘net, obviously (?) …
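
Not that this would be anything Watson-class, but to show what the very first, purely statistical pass of such a tool might look like, here is a minimal sketch (the EVA transliteration file name is a made-up placeholder): character-bigram frequencies and entropy, the kind of measure long used to argue whether the text behaves like a language or like gibberish.

```python
# Minimal sketch: character-bigram statistics of a (hypothetical) Voynich
# transliteration file, e.g. an EVA-encoded plain-text dump.
# A toy first pass only, nowhere near a decipherment attempt.
from collections import Counter
from math import log2

def bigram_entropy(text: str) -> float:
    """Shannon entropy (in bits) of the character-bigram distribution."""
    bigrams = Counter(text[i:i + 2] for i in range(len(text) - 1))
    total = sum(bigrams.values())
    return -sum((n / total) * log2(n / total) for n in bigrams.values())

if __name__ == "__main__":
    # 'voynich_eva.txt' is a placeholder path; any transliteration file will do.
    with open("voynich_eva.txt", encoding="utf-8") as f:
        text = "".join(f.read().split())  # drop whitespace, keep the glyph sequence
    print(f"distinct glyphs: {len(set(text))}")
    print(f"bigram entropy : {bigram_entropy(text):.2f} bits")
```

A Watson-class tool would, presumably, start several layers above this; the point is only that the statistical groundwork is trivial to automate.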

One may hope, may one not? And:
[Found it! This is what the Manu is about!!]

The legacy of TDoS

So, we have the first little probes of TDoS attacks (DoS-by-IoT). ‘Refrigereddon’.
As if that wasn’t predictable, very much predictable, and predicted.
[Edited to add: And analysed correctly, as here.]

Predicted it was. What now? Because if we don’t change course, we’ll end up with ever worse infra. Yes, security can be baked into new products — which will be somewhat more expensive, so will not swarm the market — but it cannot be relied upon for backward compatibility in all the chains already out there, plus there are tons of legacy equipment out there already (see: Healthcare, and: Utilities). Even when introducing new, fully securable stuff, we’re heading into a future where the Legacy issue will grow for a long time, and get much worse than it already is, before (necessarily) huge pressure brings the problem down.

So… What to do ..? Well, at least get the fundamentals right, which so far we haven’t. Like this, and this and this and here plus here (after the intermission) and there
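
To make ‘fundamentals’ a little less abstract (this is not from the linked pieces, and the address range below is purely an assumed example), a minimal sketch of auditing one’s own network segment for devices that still expose telnet, the kind of open door Mirai-style refrigerator armies walked through:

```python
# Minimal sketch: flag devices on your own network segment that still expose
# telnet (TCP 23), the classic way in for Mirai-style IoT botnets.
# The 192.168.1.0/24 range is an assumption; adjust it to your own segment,
# and only scan networks you are authorised to audit.
import socket
from ipaddress import ip_network

def telnet_open(host: str, timeout: float = 0.5) -> bool:
    """Return True if TCP port 23 accepts a connection on the given host."""
    try:
        with socket.create_connection((host, 23), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for addr in ip_network("192.168.1.0/24").hosts():
        if telnet_open(str(addr)):
            print(f"telnet open on {addr} -- legacy risk, close it or isolate it")
```

Only on networks you’re authorised to audit, obviously; and it only catches one class of fundamental sin (default-open telnet), not the default credentials usually sitting behind it.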

Would anyone have an idea how to get this right, starting today, and all-in all-out..?

Plus:
[IRL art will Always trump online stuff… (?); at home]

Bring on the Future; it belongs to ME

Some say self-driving cars / autonomous vehicles will take over driving, as if they will take over all driving. But the intermediate phase [where autonomous persons mix with autonomous masters with slave (!) persons] will see ‘driving’ turn into a pastime, a hobby, of thrill-seeker persons. Yes, even with Insurance rates getting somewhat higher. Not much higher, let alone skyrocketing, because the lone drivers that hold out will find more and more very defensively behaving autonomous tin cans opposite them, scaring the latter (to steer) off the road…! Hence, aggressive drivers will not (provably) cause many accidents; autonomous vehicles will, in all their panic. Hence, autonomous humans will not have staggering Insurance rates

and will keep on driving because of the fun of it, the feeling of independence and self-control, the thrill sought and found…

After human chess players could no longer win against ‘computers’, humans still play chess. After humans were outpaced by cars of all kinds, humans still try to win gold medals at the very event. After humans lost Jeopardy against Watson, humans still compete on ‘intelligence’ everywhere; opportunistically retreating ever further on the definition of ‘intelligence’.
Hence, humans will not drive ‘cars’ en masse, in the near+ future. But they will upgrade the purpose of driving, and will drive.

I sure will, for fun. With so many ‘lectric autonomous thingies on the road moving the sheeple, I’ll have or get myself all the road space I need… The road will belong to ME ME ME ..!

And
[One perfect artist does NOT outdo others; Gemeentemuseum Den Haag]

Spinning Wheel — wait for it: Clock or Counter-Clock ..?

Anyone noticed that UIs seem to make a thing of having replaced the clearly-archaic hourglass wait icon with a spinning wheel — that was the Obvious part — but that the circle sometimes runs clockwise, sometimes counter-clockwise ..?
Part of the why is resolved, e.g., here, but the issue is that it seems to go in all sorts of directions in/at all sorts of apps, sites, et al., as far as I can tell not seriously related to the linked explanation.

Yes, I’ve studied this here foundational theory, but also there; not much on directions. Didn’t even know Throbber was a thing.
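
For what it’s worth, the direction is nothing deeper than the order in which the frames get played; a minimal terminal-throbber sketch (purely illustrative, claiming no UX authority whatsoever):

```python
# Minimal sketch: a terminal 'throbber' whose spin direction is nothing more
# than the order in which the frames are cycled; reverse the frame sequence
# and the same wheel appears to run the other way. Illustration only.
import itertools
import sys
import time

FRAMES = "|/-\\"  # cycled left-to-right, this reads as a clockwise spin

def throb(seconds: float = 3.0, clockwise: bool = True, delay: float = 0.1) -> None:
    frames = FRAMES if clockwise else FRAMES[::-1]
    end = time.monotonic() + seconds
    for frame in itertools.cycle(frames):
        if time.monotonic() >= end:
            break
        sys.stdout.write("\r" + frame)
        sys.stdout.flush()
        time.sleep(delay)
    sys.stdout.write("\r \n")

if __name__ == "__main__":
    throb(clockwise=True)   # 'clock'
    throb(clockwise=False)  # 'counter-clock'
```

Which way round any given app plays its frames is, of course, exactly the convention question the linked pieces don’t settle.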

Then, surely there’s an authoritative UX/GUI protocol (huh?) that has the definitive answers ..? Anyone ..? Oh well:

[Keeps on [ slipping, slipping, slipping | turning ], [ back to | into ] the future circles; Stedelijk Amsterdam]

Needing trolley answers — NOW!

Needing your help on this. In two ways…:

  • How come all the ethicists keep dreaming up ever more complex versions of the Trolley Problem, but are only too gleefully snickering at n00bs-to-the-field who figure out the peculiarities as they are led through the many pitfalls in thinking — yet never arrive at definitive answers themselves! and are just happy with ever further complicating the issues.
    Question is: How to bang their heads long and hard enough or, to give them a last chance, lock them up without food and drink until they deliver definitive answers? Left or Right, Yes or No, with ‘or’ being absolute XOR not ‘and a bit “and”, too’.
  • How does System 1 thinking, or System 2, tie into these sorts of discussions ..? As said problems call for immediate decision (no time to wait for decades of completely useless non-answers from ethicists…), System 1 would probably have it, System 2 being too slow. How does System 1 respond in this arena, then ..? Should be tractable; see the toy sketch after this list.
  • [Of course you didn’t expect me to stick to even my own ‘two’ of the intro, did you?] Is System 1 inherently more tied to the hunter-gatherer life in which humanity evolved for so much longer than to the agri-society of late (10k years) ..? If so, in what way could we use such connection(s) and ramifications (…) to improve our responses to the above, and to society’s ails in general..?
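
Not that code settles any ethics, but as a toy caricature of that System 1 versus System 2 timing point (entirely my own assumption-laden stand-ins, not anything from the literature): a reflex rule answers within the trolley’s deadline, while the deliberating version simply misses it.

```python
# Toy caricature, not cognitive science: 'System 1' as an instant reflex rule,
# 'System 2' as a slower deliberation that can miss the decision deadline.
# Every rule and number here is an illustrative assumption.
import time

def system_1(on_main_track: int, on_side_track: int) -> str:
    """Reflex heuristic: only act when the difference is glaring."""
    return "pull lever" if on_main_track >= 5 * max(on_side_track, 1) else "do nothing"

def system_2(on_main_track: int, on_side_track: int, deadline_s: float = 0.05) -> str:
    """Deliberation: weigh outcomes, but only count it if it beats the deadline."""
    start = time.monotonic()
    utility_pull, utility_wait = -on_side_track, -on_main_track
    time.sleep(0.1)  # stand-in for slow moral reasoning
    if time.monotonic() - start > deadline_s:
        return "too late, trolley has passed"
    return "pull lever" if utility_pull > utility_wait else "do nothing"

if __name__ == "__main__":
    print("System 1:", system_1(5, 1))  # answers instantly
    print("System 2:", system_2(5, 1))  # deliberates past the deadline
```

A caricature, to be clear; it only restates the timing point and answers none of the questions above.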

OK, enough questions, possibly though certainly not answerable in a simple Comment … Hence:
[Ah, the Classics, they would probably provide better, actual, solutions, wouldn’t they ..? Ancy-le-Franc encore]

Maverisk / Étoiles du Nord