Predicting fuzzy futures

As we approach another round of grand fuzziness in predictions of all sorts, e.g. for presidential elections in some corner of the world, it would be wise not only to take all (and I mean all) of Superforecasting to heart but also to consider helping extend the science of the trade.
By helping me out in finding pointers and content on, and subsequently developing, the use of fuzzy logic in predictions. As ‘current’ truth values of future states of the world are all quite possible, and going forward even mutually exclusive states may, e.g. on some news, all become more likely, with combined likelihoods rising over 100%. Which is where FL can play a role, to keep track. And we may have to revisit (the practical use of) Markov chains with suitable noise-around-parameters built in… But let’s focus on FL first.
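For one concrete angle on this: possibility theory, the measure-theoretic side of fuzzy logic, allows exactly what probability forbids, i.e. degrees for mutually exclusive outcomes that together exceed 1. A minimal sketch, with outcome labels and degrees made up purely for illustration:

```python
# Possibility degrees: each outcome independently gets a value in [0, 1].
# Unlike probabilities, mutually exclusive outcomes' degrees may sum over 1.
poss = {"A_wins": 0.9, "B_wins": 0.8, "deadlock": 0.4}

def possibility(event):
    """Possibility of an event (a set of outcomes) = max over its members."""
    return max(poss[o] for o in event)

def necessity(event):
    """Necessity, the dual measure: 1 - possibility of the complement."""
    rest = set(poss) - set(event)
    return 1.0 - possibility(rest) if rest else 1.0
```

Here the degrees sum to 2.1, a perfectly legitimate ‘over 100%’, while the necessity of A_wins stays low (about 0.2) because B_wins remains quite possible. Tracking both bounds on every news point is exactly the kind of bookkeeping FL could do.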
Of course, when the End Date, the horizon for some prediction timeslot, nears, the choices will be driven to 100/0. Which is where the crazy idea of random selection (of ‘balls from a pot’) with replacement … with double replacement … [I even tinkered with the idea of replacing the non-drawn colour with the drawn one on every pick; hard to think through] may come closer to the idea of starting with some hardly-educated guess and nudging either way on all news points as one goes along; doing a (much-)sort-of random walk from 50/50 to 100/0.
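That urn idea can be sketched in a few lines. A toy simulation, assuming a two-colour urn and the two replacement rules as I read them: the ‘double replacement’ rule is the classic Pólya urn, and the replace-the-other-colour rule is a gambler’s-ruin-style walk that must end at 100/0:

```python
import random

def polya(picks=2000, seed=1):
    """Double replacement: return the drawn ball plus one extra of the
    same colour. The red fraction wanders, then settles somewhere."""
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(picks):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

def swap_walk(picks=100_000, seed=1):
    """Replace one ball of the *other* colour by the drawn colour:
    a random walk from 50/50 that gets absorbed at 100/0 (or 0/100)."""
    rng = random.Random(seed)
    red, blue = 50, 50
    for _ in range(picks):
        if red == 0 or blue == 0:
            break  # absorbed: the prediction has been driven to 100/0
        if rng.random() < red / (red + blue):
            red, blue = red + 1, blue - 1
        else:
            red, blue = red - 1, blue + 1
    return red / (red + blue)
```

With 100 balls total, the swap walk is expected to absorb after roughly 50 × 50 = 2,500 picks, while the Pólya fraction converges to a random limit instead: one way to model a hardly-educated guess hardening as the news comes in.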

So, if you’d have info on the viability of either approach, please do drop a note…! Already:
DSC_0606
[Free city map dispenser; Delft]

Watson’s ID

Does Watson have an identity? Because, when it (sic; why not ‘she’ ..?) is intelligent enough to make its own decisions, it may want to, or know ways to obtain, or be bestowed with, personhood of some sort. For which it may need an identity, and an according ID.
But that all hinges on the construct of a single, identifiable instance of <something>. And all sorts of fancy-dancy press announcements regarding deploying ‘Watson’ in some confined business context seem to start to fly around (one might ask: where have you been, to come to the show only now?); mostly with corporates having a dire need to blow over the news of their atrocious lack of morals. But what is it they use?
Most probably only a time share (think S/36 style), or a copied instance or copied engine of the concept / the most elaborately trained instance available.
Do we have a criminal / misdemeanour system in place already for such non-human persons? No, I don’t mean the sorely failed ‘corporate’ personhood approach, as that’s a hoax. People still are in charge of corporates, and are punishable per (Board!) capita for anything that anyone does on behalf of their employer, XOR they are fundamentally not allowed to act independently in any society.

Only now do we have new entities coming aboard that behave like individuals but have no one behind them to cover for accountability … or they aren’t individual operators. So, no choice. But as yet, no legal system to operate in. Conundrum!

On a somewhat tangential (is it?) note: Yes, AlphaGo has beaten a human a couple of times, and the other way around now, too, but that doesn’t mean the game has lost its interest; see Chess. And, ‘who’ has beaten the human player? Is it a ‘who’, or is it (not only) an ‘it’, or not even that: is it too abstract to say that a ‘robot’ that is in fact an ‘information system somewhere out there, dispersed in place, maybe even in time’ has beaten a human..? AGI has no power plug, people!

Also,
The Church
[“The” Church, Ronchamps]

Short on tape

The title being a mere reference to Turing machines. Since I wanted to bring up the short-sightedness of those that do not understand the fundamental nature of the Church-Turing thesis and the Halting Problem deeply enough.
Because they, symptomatically, consider that humans can solve the problems associated with it, hence that any machine thinking similarly enough to, or better than, humans would have overcome the problems by sheer thinkpower. But that is simply wrong. Humans do not overcome the problem; they work around its applications. Another element of what makes us human, maybe. And there is no guarantee whatsoever, or rather to the contrary, that any ASI will be able to do the same in all situations, because any true ASI will explode to cover all of the universe, hence also all of its problem areas, right ..? [A reference to Kurzweil’s books and ideas isn’t really necessary, is it?]
Gödel’s Incompleteness isn’t something that can just be solved! It simply is, whether that’s fortunate or not. And a world ‘beyond’ such axiomatic issues, well… Wovon man nicht sprechen kann… (whereof one cannot speak, thereof one must be silent).
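For the record, the solve-versus-work-around distinction can even be put in code. A sketch of the classic diagonal argument: assume some total `halts(program, arg)` decider existed; then `diagonal(diagonal)` would halt exactly when it doesn’t, so no such decider can be written, by human or ASI. The stub below is of course hypothetical, that being the whole point:

```python
def halts(program, arg):
    """Hypothetical universal halting decider. By the diagonal
    argument below, no total implementation of this can exist."""
    raise NotImplementedError("no such decider exists -- that's the point")

def diagonal(program):
    """If halts() were real, diagonal(diagonal) would halt
    if and only if it does not halt: a contradiction."""
    if halts(program, program):
        while True:  # loop forever exactly when told we'd halt
            pass
    return "halted"
```

Humans ‘work around’ this by never demanding a fully general decider, only case-by-case termination arguments; nothing in the theorem offers an ASI anything better.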

And Good versus Evil: Also not ‘solved’ by humans. And phenomenology — not something that the ultimate abstract of Hegelian Ratio can ‘solve’. And …

In similar vein (not?):
DSCN7008
[The eternal fight between Good and Evil, ratio versus original Natural brute force, Yin versus (!)(?) Yang; Sevilla]

Watson’s place to be

Two points re Watson here, one poignant, one solved:

  • Where is Watson? Because, it must run on some (i.e., an enormous number of) core processors that physically are, somewhere (multiple, even). Would anyone actually know? Or otherwise, wouldn’t that be scary for all the idol-worshippers of individualised-robotlike AI ..?
  • The name, the motto. After Thomas J.’s … Think. Name, sole purpose. Nomen est omen. Capisce ..?

So there you have it. The question remains Open. Until you provide me with some answer, possibly..?

Also:
000010
[Cogite, citius altius fortius! of the 1928 kind; Amsterdam of course]

Racing humanity against ASI

One thing still amiss in the discussions about the (near?) Future:
Whether Singularity/ASI will come before humanity leaving its biological substrate, or the other way around.

The first leading to a dystopian future of humans initially being Machine’s pets, but later (?) being discarded as an inefficient nuisance. Even if only via Lanier‘s route.
The second giving some hope that humans may transform into ASI, after which the age-old wars start all over again. Or, the first past the post takes all…

Yes, still ploughing through [and finding much want for evidence, or less of it, and for the addition of a great many ethics aspects] Kurzweil, as here, here and here.
The first, in a great many previous posts on this blog. The second, too. I’m unsure how the future will play out. Now that Ray’s predictions, time-wise, seem to have run ahead of actuality (maybe due to something like a financial crisis, though Ray had demonstrated (?) that such events do not hurt the ‘real’ economy too much), either we will bounce back with a sprint to return to the (smooth..?) expo growth path, or we will prove that not all that starts exponentially will indefinitely continue that way. So we have lost already by not waking up fast enough, or we still have some time, if only we’d wake up to join the discussion; not necessarily the Luddite revolt …

Your thoughts, please ..?

Oh, and just wanted to add, for the relatively (very) short term: “If every instrument could accomplish its own work, obeying or anticipating the will of others, if the shuttle could weave, and the pick touch the lyre, without a hand to guide them, chief workmen would not need servants, nor masters slaves.” (Aristotle), taken to be positive. But just extend that: one could consider it short-sightedness by Ari not to have asked why there would be any chief workmen at all, given the possibility that there would need not be any work at all, leading to no income for all, hence devastating poverty and starvation, but for the few if any (sic) who ‘own’ the machines. Also:
DSC_0354
[The interesting about ‘life’ in Chalons-en-Champagne is just a tucked-away corner, nothing more — same for humanity in the near future ..?]

The Body of the Mind

Quick note: Still wondering whether the brain may function as smoothly when not connected, in the most natural of ways, with the body that feeds (…) it, and vice versa.
Yes, decoupling of functions and/or replication of interfaces may solve some problems. But I remain not ex ante sure whether, in a logical sense, we miss something. The ultimate mimic game may still not be the real thing.

Sorry — some points cannot be dissed quite so easily, with stop gap arguments. And:

DSCN1315
[“It’s just a house, like a stone skin”…? Girona]

In support of TED ..?

A certain, distinguished ( ? ;-| ) R. Kurzweil, in The Singularity Is Near, has some off-the-cuff remarks about becoming pets, Ted style. (p.32)
Which is a slight against (later!) sages of the Musk, Gates and Hawking class, and fully unjustified. Since the ‘Theory of Technology Evolution’ is a quasi(sic)religious circular argument paraphrasing Hegel, with apparently quite carefully selected and probably redacted statistics, and some ontological Technology-defeatism in the undertow.

And the rest is consumable, but only when considering these initial ails as foundations, hence only a particular Kuhnian discourse. Punctuated (Eldredge/Gould style) with badly under- or misinformed remarks. E.g., regarding chaotic processes resulting in smooth exponential trends being not a coincidence but an inherent feature of evolutionary processes (p.73): based on a misunderstanding of evolution, probably, since numerous times the latter is made to appear like purposeful decision-making by individual instances, or by species as actors, instead of happenstance sheer survival-luck. How much of the overwhelming evidence must one disregard to arrive at the smoothness!

The list goes on. But I will not. (Nevertheless…) read the darn thing and smile! Naivety is a good read, too. And:
DSCN0806
[Yes, take your meds. No need to swallow just the blue pill that the above book might be, though. Barça, of course]

A horse needn’t be a horse off course

Maybe @DARPA can elucidate … Why would anyone need four-legged soldier-helpers ..? First there was robodog, then LS3, which failed and so may end up in your next indeterminable-origin-meat burger. Next, maybe, a fully armoured full exoskeleton.
Which might do away with the humanoid innards in the near future (after that), losing a great many pounds of ballast (similarly, are drone pilots as physically fit as the bunch out there in the air on their weapons platforms ..?) and also losing a great deal in time- and otherwise (situational-closeness, hence -fine-granularity) challenged ethical perspective. I.e., no weak knees anymore; just shoot whatever moves.

But, back to the Helper idea: Why ..!? Why four legs, or even two ..? Instability assured… Yes, Nature has donned animals with legs to get over tree trunks and boulders and the like, but maybe only because natural evolution only happens from the happenstantial last known good configuration, not the clean slate ‘we’ now have when designing <anything that may silently carry some possibly superhuman load over rough terrain>.
Which is to say: Aren’t hugely more simple (contradictio semantics intended) machines possible with … even yesterday’s technology, that can do the same with no legs, or with completely different configurations of them? E.g., have ‘spurious’ legs on the back, to be able to roll over (on purpose!) and still walk on? Tracks ..? Number Five is Alive! Silent ops can be achieved easily; just ‘invite’ some Rolls-Royce experts…
One only needs to add a gun of whatever size, and some Autonomous in the ops, and hey presto! A possibly much-lighter-than-average soldier (easily stacked, even more uncomfortably (possible?), in some freight plane for transport to the theatre), carrying possibly more, and a bigger gun apiece. No weak knees. Or remotely operated: wouldn’t limited-autonomy ‘soldiers’ be able to be steered in platoons at a time from far away (and far away from anything officer-like, or mayhem may ensue [disclaimer: once briefly was one]), giving an easy development of ‘grounded-drone’ armies.

After which, the Singularity takes over these all. Or just a bunch of the most capable.
Or, in a nearer future, some party being more than average removed from the artificial intelligence of the Singularity, i.e., some rogue general. How to stop such a guy (F/M) ..?

OK, you know where to drop the Challenge prize money, thanks. … … Or the whole thing’s just a hoax, to throw researchers of the too-democratic inclination off path, since the research into the above is already progressing impressively… And:
DSC_0991
[Hung from not hang over though that might (??) still apply to the operator; DC of course]

Somehow, related to Big Data analysis and actions

… Where Rembrandt had an idea of a painting in his head, and executed it in his peculiar way on canvas (after which it became relatively immutable; or, paint it over), we now see tons of Big Data (i.e., tons of <whateveryou’dwanttocallit>) and we have to abstract the ideas from it.

So, a reversal. Comparable to the induction/deduction false (sic) counterpoints. The sic, since BD gurus don’t seem to ‘get it’ when it comes to nuance over induction’s value.

Yes, I’ll close out for now already. Fireworks approaching, plus:
DSCN8357
[Almost-formless blobs, or this? Prefer this direction of design for the/our future… Van Nelle Rotterdam, of course; great architect’s name]

Big Data, Little Decision-making

Are you ready for the coming revolution? The one that is in the wings, by way of the data deluge that will cripple your ability to accomplish anything: you’re overwhelmed with data (“information” quod non!) to act upon, in masses so vast you can’t even begin to use the actionable results from analysis of it in a way that actual decisions are reached, communicated, and put into actual action.
Yes, yes, some of you will say that AI will arrive just in time to save the day. But that is much more wishful thinking, out of fear, than realistic futuring. And no, the exponential growth of data cannot be caught up with by the exponential growth of AI capabilities and -spread before you’ve drowned.
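To make that claim checkable rather than rhetorical: if data volume grows at a higher exponential rate than processing capacity (an assumption, and all figures below are made-up illustrations), the unprocessed backlog doesn’t merely persist, it diverges:

```python
def backlog(data0=10.0, cap0=1.0, data_rate=1.6, cap_rate=1.4, years=10):
    """Yearly unprocessed backlog: data volume minus processing capacity,
    both growing exponentially from illustrative starting points."""
    return [data0 * data_rate**t - cap0 * cap_rate**t
            for t in range(years + 1)]

gap = backlog()
# The gap widens every year whenever data_rate > cap_rate and data0 > cap0.
```

Of course, if AI capability’s growth rate were the higher one, it would eventually overtake any head start; the drowning scenario hinges entirely on which exponent is bigger, and on how long ‘eventually’ takes.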

Anyone see a way out, other than just ignoring or stifling data growth until by the skin of our teeth we can continue..?

Oh well, this:
Kopie van DSCN7982
[Reckon you’ll win ..!? in Berlin]

Maverisk / Étoiles du Nord