Move; to Canadaya ..?

While discussing the options for those in developed countries who would not necessarily agree with the outcomes of recent or pending elections, of course Canada was on the table. Not quite in the Tim Hortons / Hudson’s Bay / Blue Jays style, but rather as an evac site. Not the Thinking Class leaving, but the retreat of the Others [needless to say, the 1%-and-up aren’t anywhere anymore already; they escape no matter which way the wind blows] is what we have seen with/before/at the Elections in this case; back into the countryside, as if the cities aren’t the major country elements these days (‘states’ and electoral colleges as artifacts, makeshift solutions to early-days haphazard nationwide (then, more height than width) comms).

Or still, nevertheless, this here old (Spring) post may provide an option.
Which is perfectly possible; aren’t they where they’d retreat in the first place? But that would bring the ‘risk’ [ P(X)=1 ] that it turns out that the ones not retreating into the billyhills, can perfectly do without the retreaters [many letters in common with traitors], or even fare better.
Calling into question whether the pres that will ‘represent’ all, does, for all, or doesn’t, for a majority (!), thus undermining the very idea of the validity of the representer in that position and the systems/schemata of elections that brought him there despite the majority not wanting him.

Interesting.

May still bring the near-(sic) Yucatan arrangement closer.

Oh well, plus:
[Defensible against those so utterly bluntly lied to, but also my next / client offices; Breda]

Lament / Where have ‘Expert Systems’ gone ..?

Those were the days, when knowledge elicitation specialists had a hard time extracting the rules needed as feed for systems programming (sic; where the rules were turned into data, onto which data was let loose or the other way around — quite the Turing tape…), based on known and half-known, half-understood use cases avant la lettre.
Now are the days of Watson-class [aren’t Navy ship classes named after the first of the class ..?] total(itarian) big data processing and slurping up the rules into neural net abstract systems somewhere out there in clouds of sorts. Yes, these won out in the end; maybe not in the neuron simulation way but more like the expert system production rules and especially axioms of old. And they take account of everything, from the mundane all the way to the deeply-buried and extremely-outlying exceptions. Everything.
Which wasn’t what experts were able to produce.

But, let’s check the wiki and reassure ourselves we have all that (functionality) covered in “the ‘new’ type of systems”, then mourn over the depth of research that was done in the Golden Years gone by. How much was achieved! How far back do we have to look to see the origins, in the earliest post-WWII developments of ‘computers’, and how much was already achieved with so unimaginably little! (esp. so little computing power and science-so-far)

Yes we do need to ensure many more science museums tell the story of early Lisp and page swapping. Explain the hardships endured by the pioneers, explorers of the unknown, of the Here Be Dragons of science (hard-core), of Mind. Maybe similar to the Dormouse. But certainly, we must lament the glory of past (human) performance.

Also,
[Is it old, or (still) new ..? Whatever, it’s prime quality. Spui, Amsterdam]

My daughter-in-law can't even develop a coherent model that unifies all fundamental theories

Trudy Helps

November 2, 2016 by Trudy

Dear Trudy,
My daughter-in-law is a lovely girl, but she can’t manage to develop a coherent, exact model that unifies the theory of general relativity with quantum mechanics. This bothers me enormously, e.g. at birthday parties with the family. How can I make clear that I would love to see her own string theory without hurting her feelings ..?

You b…, what are you doing? It will definitely destroy your relationship with your daughter-in-law if you try to meddle with the way she likes to integrate scientific phenomena. If you want to have a shot at ever seeing that unification theory during childhood, you better distance yourself a little from her efforts.

[Original, in Dutch, on the Speld; translated with permission]

Fuzzy Vocabulary (Cross-)Boundaries

When discussing Risk …
There will always at some stage turn up a discussion (or multiple, if you’re Lucky; not) about the meaning of certain key words. Which is a pity, because … no, not because it distracts. Though it does, the main issue is that the secondary, meta discussion about vocabularies is never / rarely resolved.
At strategic levels, talk is about risk appetite and risk tolerance, and foremost about business opportunities (the excitement of which is) spoiled by “risk managers” that point out the world might not be perfect and hence one is all but certain not to achieve the objectives. Smart business leaders push forward anyway, at best keeping the risks in the back of their heads while sanding off the rough edges of progress as it goes along, all quite well. When strategies turn out to fail: Well, such is life as it has been since the dawn of humanity.
At tactical levels, talk is about risk portfolios and … not much, really; mostly project and program risks. Of the Boy Cried Wolf kind.
At operational levels, quasi-(sic!) quants do their stuff and come up with all sorts of fabulous fables of formulas that wouldn’t stand scrutiny at the most basic of math levels. What idi.t would translate ‘High’ to ‘5’ and then multiply it with some other ‘4.5’ to arrive at a ‘22.5’ “risk” ..!? Heat maps are the reflection of one’s own moronic brain functioning onto what are supposed to be Managers’ levels of understanding. Though the outcome is correct, the origin of the reflection should be kept in mind instead of forgotten.
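
To make that concrete — a sketch only, with made-up label-to-number mappings rather than anyone’s actual methodology — the very same two ‘risks’ swap rank the moment the arbitrary numbers behind the labels change:

```python
# Sketch of the criticised arithmetic: ordinal labels mapped to arbitrary
# numbers and multiplied as if they were measurements on a ratio scale.
def pseudo_risk(likelihood: str, impact: str, l_map: dict, i_map: dict) -> float:
    """The 'High' times 4.5 equals 22.5 move."""
    return l_map[likelihood] * i_map[impact]

# Two equally arbitrary (and equally 'defensible') encodings of the same labels.
L_A, I_A = {"Low": 1, "Medium": 3, "High": 5}, {"Low": 1, "Medium": 2, "High": 3}
L_B, I_B = {"Low": 1, "Medium": 2, "High": 3}, {"Low": 1, "Medium": 4, "High": 9}

for l_map, i_map in ((L_A, I_A), (L_B, I_B)):
    a = pseudo_risk("Medium", "High", l_map, i_map)  # medium likelihood, high impact
    b = pseudo_risk("High", "Medium", l_map, i_map)  # high likelihood, medium impact
    print(a, b, "a ranks higher" if a > b else "b ranks higher")
# Encoding A:  9 vs 10 -> b ranks higher
# Encoding B: 18 vs 12 -> a ranks higher
# Same words, opposite ranking: the 'risk' number depends entirely on the
# arbitrary encoding, not on anything measured.
```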

And all talk about ‘risk’ (‘operational risk’, even worse), ‘impact’, ‘High’, as though these were somewhat the same thing for all involved, disregarding most of the time- and situation-variance or rather complete -determination. Right. Wrong. Just regurgitating definitions from ISO standards demonstrates that one does not understand the nature of the problem…

Any theoretical science logical-AND linguistics specialists that can help? And:
[Tinguely in a picture is quite different from the message of it …; Stedelijk Amsterdam]

Bring on the Future; it belongs to ME

Some say self-driving cars / autonomous vehicles will take over driving, as if they will take over all driving. But the intermediate phase [where autonomous persons and autonomous masters with slave (!) persons share the roads] will see ‘driving’ turn into a pastime, a hobby, of thrill-seeker persons. Yes, even with Insurance rates getting somewhat higher. Not much higher, let alone skyrocketing, because the lone drivers that hold out will find more and more very defensively behaving autonomous tin cans opposite them, scaring the latter (to steer) off the road…! Hence, aggressive drivers will not (provably) cause many accidents; autonomous vehicles will, in all their panic. Hence, autonomous humans will not have staggering Insurance rates

and will keep on driving because of the fun of it, the feeling of independence and self-control, the thrill sought and found…

After human chess players could no longer win against ‘computers’, humans still play chess. After humans were outpaced by cars of all kinds, humans still try to win gold medals at the very event. After humans lost Jeopardy against Watson, humans still compete on ‘intelligence’ everywhere; opportunistically retreating ever further on the definition of ‘intelligence’.
Hence, humans will not drive ‘cars’ en masse, in the near+ future. But they will upgrade the purpose of driving, and will drive.

I sure will, for fun. With so many ‘lectric autonomous thingies on the road moving the sheeple, I’ll have or get myself all the road space I need… The road will belong to ME ME ME ..!

And
[One perfect artist does NOT outdo others; Gemeentemuseum Den Haag]

Spinning Wheel — wait, for it: Clock or Counter-Clock ..?

Anyone noticed that UIs seem to make a thing of having replaced the clearly-archaic hourglass wait icon with a spinning wheel — that was the Obvious part — but that the circle sometimes runs clockwise, sometimes counter-clockwise ..?
Part of the why is resolved, e.g., here, but the issue is that it seems to go all sorts of directions in/at all sorts of apps, sites, et al., as far as I can tell not seriously related to the linked explanation.

Yes, I’ve studied this here foundational theory, but also there, not much on directions. Didn’t even know Throbber was a thing.
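
For what it’s worth, ‘direction’ is nothing more than frame order — a minimal terminal-throbber sketch (plain Python for illustration; no GUI toolkit or actual spinner implementation implied):

```python
import itertools
import sys
import time

# A minimal terminal throbber, only to show that 'direction' is just frame order.
CLOCKWISE = "|/-\\"                   # frames that read as a clockwise sweep
COUNTER_CLOCKWISE = CLOCKWISE[::-1]   # same frames, reversed order

def spin(frames: str, seconds: float = 2.0, interval: float = 0.1) -> None:
    """Cycle through the frames in place on a single terminal line."""
    deadline = time.monotonic() + seconds
    for frame in itertools.cycle(frames):
        if time.monotonic() >= deadline:
            break
        sys.stdout.write("\r" + frame)
        sys.stdout.flush()
        time.sleep(interval)
    sys.stdout.write("\r \r")          # clear the spinner when done

if __name__ == "__main__":
    spin(CLOCKWISE)
    spin(COUNTER_CLOCKWISE)
```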

Then, surely there’s an authoritative UX/GUI protocol (huh?) that has the definitive answers ..? Anyone ..? Oh well:

[Keeps on [ slipping, slipping, slipping | turning ], [ back to | into ] the future circles; Stedelijk Amsterdam]

Needing trolley answers — NOW!

Needing your help on this. In two ways…:

  • How come all the ethicists keep dreaming up ever more complex versions of the Trolley Problem, but are only too gleefully snickering at n00bs-to-the-field that figure out the peculiarities as they are led through the many pitfalls in thinking — yet never arrive at definitive answers themselves! and are just happy with ever further complicating the issues.
    Question is: How to bang their heads long and hard enough or, to give them a last chance, lock them up without food and drink until they deliver definitive answers? Left or Right, Yes or No, with ‘or’ being absolute XOR not ‘and a bit “and”, too’.
  • How does System 1 thinking, or System 2, tie into these sorts of discussions ..? As said problems call for immediate decision (no time to wait for decades of completely useless non-answers from ethicists…), System 1 would probably have it, System 2 being too slow. How does System 1 respond in this arena, then ..? Should be tractable.
  • [Of course you didn’t expect me to stick to even my own ‘two’ of the intro, did you?] Is System 1 inherently more tied to the hunter-gatherer life that humanity has evolved in for so much longer, than the agri-society of late (10k years) ..? If so, in what way could we use such connection(s) and ramifications (…) to improve our responses to the above, and to society’s ails in general..?

OK, enough questions, possibly though certainly not answerable in a simple Comment … Hence:
[Ah, the Classics, they would probably provide better, actual, solutions, wouldn’t they ..? Ancy-le-Franc encore]

Help determine this rock

I have an inkling of what this piece of art means, but would there be anyone out there that, under strict confidentiality of course, could provide a full explanation ..?
In particular — but including full context — what the link is between a book, possibly obscure but a tip as a pleasurable read, and this in Joinville:

It’s just all too odd to not have a connection … Including this, perhaps ..?

You're So Smart

In a reference to a song about me:
Most ab-original humans wouldn’t pass a serious Turing test.
Most serious AI trying to pass, would.

You, the select elite of my blog readers … Well, elite by numbers, mostly, or ..? And select, as in ‘clicked by error’..?

Just kidding, of course, off course.
What I meant was: as originally intended, Turing tests have become a hypothetical mind game ‘only’. Now that we’re approaching ‘intelligence’ of machines, like, graduating from ANI to AGI and on to ASI without a blink — not all of society will change in one instant to the next level, and then after some prep all of society will move to the next! Much more creepy stuff is out there without the general public’s knowledge than you can imagine (if anything) — suddenly we return to the thought experiment.
Acknowledging that we have never been able to give a sort-of extensional definition of ‘intelligence’, only an intensional one. Which may indeed suffice. Now that we’re accustomed, and into ethics discussions rather than did/did-not type of things (ex the laggards who still can’t stand being surpassed by ‘dumb’ machines — calling them that, calls yourself ‘below’ (quod non) that…), we seem to have made the question irrelevant. When a few still say that this sort of thing is impossible, others are already doing it and hardly anyone seems to care.

The latter part being the scary bit. Wait and see just will not be enough here, in particular RE settling the Ethics elements. It’s not only self-driving cars where momentum is out of human hands and into Technology’s… It’s everywhere.

To not be afraid — or to be but be brave and conquer your fears and Act, this:
[Still recognisable as VR trompe l’oeil; Rijks Amsterdam]

Maverisk / Étoiles du Nord