Some notes on notes on Chollet

After you read this, you’ll get the following:

  • ‘seed AI’ may not be necessary. Think of how the classical builders made their arches: once the keystone is in, the supporting scaffold may be removed. Same here; some ‘upbringing’ by humans, even opening the possibility of ethics education / steering;
  • Proponents of this theory also regard intelligence as a kind of superpower, conferring its holders with almost supernatural capabilities to shape their environment / A good description of a human from the perspective of a chimpanzee. – correct. As such, slightly ad hominem and we know what that is about (here);
  • If the gears of your brain were the defining factor of your problem-solving ability, then those rare humans with IQs far outside the normal range of human intelligence would live lives far outside the scope of normal lives, would solve problems previously thought unsolvable, and would take over the world — just as some people fear smarter-than-human AI will do. – an interesting argument, as I had the idea of drafting a post about a new kind of ‘intelligence’, apart from the human/animal one.
  • Etc.

An interesting and profound read… Plus of course:
[“Intelligence”… Winter Wonderland London]

Eternal Life

Remember Castronova’s Synthetic Worlds? You should. If only because this is, still, in The Atlantic [edited to add: with an intelligent comment/reply here]. Because that book is still around, it allows for comparing its origins, and the utopian futures it described, with the current state of the world. Where we have both the ‘hardly any change noticeable in current-state affairs when compared with the moonshot promises of yesterday’ and the ‘look what already has changed; the whole Mobile world change wasn’t even in those rosy pictures’ and the ‘hey, don’t critique yet; we may only be halfway through any major tectonic humanity shifts’.
Where the latter of course ties in with the revitalised [well, there’s a question mark attached, like: what do you mean by that; is it ‘life’, and would we even want that?] version in ‘singularity’, the extreme nirvana of uploaded, eternal minds. As if the concept of ‘mind’ or ‘intelligence’ would make any sense in that scenario. And also, since this (pronounced ‘Glick’, one guru told me lately; the importance of continued education), where the distinction between ‘eternal’ and ‘forever in time’ clearly plays up, as in this (same), against …

In circles, or Minkowski’s / Penrose’s doodles [wormholing their value], the issue comes back to flatlander (i.e., screen) reality, if there is such a thing…
Oh well; leaving you with:
[Warping perspectives, in ‘meaning’, too; Salzburg]

Nation(state)s No More / Not Yet

Recently, Jamie Bartlett posted an excellent analysis of the probability of the return of the nation state of the future of the planet. If only to have so many ‘of the’s in a row.
Yes, another one on the future of nation-states, now not from a bottom-up perspective but from an overall view.

The case is strong in that piece. But then, I had been having recurring … thoughts about the evaporation of the legitimacy of the nation-state as well. Where my subconscious hinted, no clearer than that, that there was, and certainly is, a place in the discussions for, on the one hand, Bruce Schneier’s ideas about sizes of societies and the rules one would need to organise them (which may read like a circular argument, I know), and on the other, various well-received (e.g., this) and hardly rejectable works on how we still roam the savannahs of today – at least in mind, when operating in myriads of Sloterdijk’ian spheres (op.cit., in particular pp. 408–). And how, e.g., cosmo- and anthropogenesis in religious books can be interpreted both as a coming of age of the well-developed human and ditto mind(s), indeed including the formation of societies and their rationale(s).

By which I mean that somehow we indeed still have many traces of hunter-gatherer ethics deep down in our systems, now with a varnish of ‘development’ (quod non) into farmer/city-centred civilians, currently being thrust, in (evolutionarily) asymptotically zero time, past neoliberal capitalist/consumerist ego-only’ism into the frenzy of ‘tomorrow’, i.e., the post-singularity ASI age.
Shouldn’t we try to figure out some model of societal organisation that takes our heritage into account? Now that “we” have become sophisticated enough thinkers to finally see (macro-mass introspection-like) how we muddled along in the past from attempt to attempt, are we not also sophisticated enough to design our own macro-history future ..?

OK, that’s deep. In a way. In another:
[Whatever. This is what society wants … bread and circuses (squares?); NY]

Nationalistic AI fuzzing

No, this is not about fuzzing data. It is about accidentally giving away that you don’t understand a subject, let alone the stats involved.
It is about this report. It was reacted on in various press – though not nearly enough, and I don’t even have shares – with some boiler-room country-by-country comparison, even without much in the way of conclusive calls to action, for all…

Also, hardly anyone notices the gross error in it all. Which is the lack of a proper definition of ‘AI’, or more likely, the widespread panicking brain-freezes of the interviewed.
Which, summa summarum for brevity, created such massive distortions that the figures are grey noise at best.

“Isn’t that harsh ..?” Nope. I did some asking around, for a different purpose, and if even 1% of organisations were doing anything with AI yet, that 1% would be a rounded-up figure. People were just too afraid to tell the interviewers / pollsters that they had (have, probably) no. single. clue., and babbled their way out of it.

So, what was this title again of the infamous Public Enemy hit ..?
And:

[A prettified prison is your ASI future; Zuid-As Ams]

AI learning to explain itself

Recently, there was news again that indeed, ABC (or The ‘Alphabet’ Company Formerly Known As Google) was developing AI that could improve itself.
O-kay… No sweat, or sweat, qua bleak future for humankind, but …
Can the ‘AI’ thus developing itself maybe be turned first to learning how to explain itself ..? Then, this [incl. link therein!] will revert to the auditors’ original of second opinions … Since the self-explanatory part may very well be the most difficult part of ‘intelligence’, benefitting the most from the (AI improving itself)² part, or what?

And:

[Improving yourself as the imperative; Frank Lloyd Wright’s Beth Sholom at Elkins Park, PA]

Visual on socmed shallowness

When considering the senses, is it not that Visual (having come to the play rather late in evolution – or has it, but that’s beside the point, is it?) has been tuned to ultrahigh-speed 2D/3D input processing ..? Like, light waves – particles, who are you fooling? – happen to be the fastest thing around, qua practical human-scale environmental signals – so far, yeah, yeah… – and have been specialised to be used for detection of danger all around, even qua motion at really high pace (despite the 24-fps frame blinking).
Thus the question arises: What sense would you select, when focusing on shallow processing of the high-speed response type? Visual, indeed. Biologically making it less useful for deep thought and connection, etc.

Now that the world has turned so Visual (socmed with its intelligence-squashing filters, etc.; AR/VR going in the same direction of course), how could we expect anything else than the Shallows ..? Will we not destroy, by negative non-reinforcement, human intelligence, and have only consumers left at the will of ANI/ASI ..?

Not that I have the antidote… Or it would be to Read, and Study (with sparse use of the visual, like not needing sound-bite sentences but some more structured texts), and do deep, very deep thinking without external interference.
But still… Plus:

[An ecosystem that lives off nanosecond trading – no need for human involvement so they’re cut out brutally; NY]

One 000

Yes, celebrations … The one-thousandest post on this blog… [Excluding the two cross-posts by others…]

Do I regret any of them ..? Nope. [Rounded down]
Do I regret having been early with signalling many developments ..? Nope. At worst, sometimes I may have been too early, with the post(s) having slid from memory (ah, shallows you are) when finally the world came ’round to see the point as pointed out by some random stranger top-notch journalist or guru.
Non, je ne regrette rien.

OK, yes, I’ll keep on truckin’ for a while.
On everything from metaphilosophical discussions down to bitwise details on phenomena of Information, Society, IoT, Privacy, Information Security (#ditchcyber) Oxford, and gadgetry. Plus:
[The allusion to ‘reflection’ (of the old in the new etc.) is purely accidental, of course; London a decade ago]

Music to AI’s ears

Will AI eventually appreciate music ..?

Not just appreciate in the sense of experiencing the quality of it — the latter having ‘technical’ perfection as its kindergarten-basement starting level only, where the imperfections are cherishable as huge improvements, yes indeed [Perfection Is Boring!] … but moreover, music appreciation having a major element of recognition, subconsciously mostly, of memories of times almost immemorial.

Of course, the kindergarten perfection gauging, AI will be able to do easily. Will, or does; simple near-algorithmic A”I” can do that today.

Appreciating imperfections, the same, with a slight randomiser (-recogniser) thrown in the algo mix.
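That ‘algo mix’ can be made concrete. A toy sketch, purely illustrative (every function name, number and threshold here is made up, not taken from any real system): a near-algorithmic perfection gauge, with a randomised imperfection tolerance thrown in:

```python
import random

def technical_perfection(timings, grid=0.25):
    """Toy 'kindergarten' gauge: mean absolute deviation of note
    onsets from a rigid rhythmic grid (0.0 = machine-perfect)."""
    return sum(abs(t - round(t / grid) * grid) for t in timings) / len(timings)

def appreciation(timings, rng=random.Random(42), grid=0.25):
    """Toy appreciation score: perfection alone is boring, so small
    'human' deviations are rewarded, up to a randomised tolerance."""
    dev = technical_perfection(timings, grid)
    tolerance = rng.uniform(0.02, 0.05)  # the randomiser-(recogniser) in the mix
    if dev == 0:
        return 0.5                 # flawless, but sterile
    if dev <= tolerance:
        return 1.0                 # cherishable imperfection
    return max(0.0, 1.0 - dev)     # plain sloppiness just loses points

robotic = [0.0, 0.25, 0.5, 0.75]   # dead on the grid
human   = [0.0, 0.27, 0.49, 0.76]  # slightly loose
print(appreciation(robotic), appreciation(human))  # → 0.5 1.0
```

The point being: the gauging part is trivial; only the reward for small deviations needs the randomiser thrown in – and even that says nothing about the memory-recall part of appreciation.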

But the recollection part, even at a conscious level, requires memories to be there, and as far as AI goes (today), even ASI will have different memory structures, since the whole fact-learning processes are different. And don’t mention the subconscious side.

Yes, ASI can have a subconscious, of which we aren’t aware or even able to be aware [Note to self: to cover in audit philosophy development]. But when we don’t hear of it, was there a tree that fell in the forest?

I’m off on some tangent here.

What I started out to discuss is: At what point does music appreciation through the old(est) memories recall, become an element of ‘intelligence’ ..?

With the accompanying question, on my priority list when discussing AxI: Is it, for humans ..?

And a bonus question: Do you really think that AI would prefer, or learn earlier about the excellence of, Kraftwerk or Springsteen? Alas, your first response was wrong; Kraftwerk’s the kind of subtle, intelligent, hint-laden, apparently-simple stuff that is very complex and also deeply human — which you perceive only when listening carefully, over and over again, till you get the richness and all the emotions (there they are!) and the yearning for the days gone by when the world was a better place. Springsteen: raw and Original-Forceful on the surface — but quickly showing a (rational-level) algorithmics play with not as much depth; even the variations and off-perfection bits are well thought-out, leaving you with much less relatable memories, if at all.

Your thoughts are appreciated. And:
[Appropriately seemingly transparent but completely opaque; some EU parliament (?), Strasbourg]

Your unbody double

So, there now is a thing being Artificially Intelligent 3-D Avatars. As per here. How nice.
And then you realise time travel may be possible once you don’t have the physical duplication problem anymore. Though we still would have the other problems; bummer.

But still, one of the problems has been solved. The others, actually … may need re-study. Because there may now be differences in travelling forward (possibility approaching, when ‘time’ in your physical life needs to stay synchronised in some form or another with others’, and your AI3DAvatar can speed up ..?), but then returning to Now might (creation of possibility here) be equivalent or the same [which aren’t the same] as travelling back in time. Duh. Too bad it’s still so hard to reason (positive-)logically and consistently about this.

And, it will make the ‘need’ to have dirty, planet-soiling flesh-and-blood humans around, much less. There’s no such thing required anymore as people being trapped in The Matrix and then wanting blue or red pills, but rather it’s the attachment of AI3DAvatars to the Singularity Machine; their subsumption into it (removing duplicate or false/inconsistent memories – that will be there IF the AI3DAvatar’s anything like you) leading to their disappearance — all they ever (in the future) were, had already been included (thought out on its own) by the SingMach.

For now, we’re still here; individually. And:
[“Tape” copies of the views from up there will be loaded to your AI3DAvatar in a millisec; no need for that either; CN Tower, Toronto]

Imminent enrichment through AI — of jobs ..?

Anyone else feel like the breakthrough of AI in all sorts of jobs (yes, most certainly not only the bohrrring repetitive-manual-labour kind — that may be one of the kinds that comes much later in the sequence, since it requires far more sophisticated physical/intellectual (yes) interactions than previously thought (by humans)) is imminent?

And does anyone see that the horror of replacement of humans XOR your co-workers is to come only (a bit) later, when AI-driven systems have become good enough to replace you, completely — leaving the spoils of labour to the (intensive people-farming) factory owners ..?
With, in the shortish meantime, your job being ‘enhanced’ through AI, by the enrichment of having to deal less with the simple stuff and you having more time available to do the more Intelligent (parts of) your job. Possible, on conditions of:

  • Such more intelligent parts of your job existing; a great many a manager may find there is no such thing, or the room for manoeuvre isn’t there;
  • You being able, capable, of performing such more intelligent job parts; with the focus on reporting (send/receive; hardly ever anything more than the extremely-simpleton processing in between) probably your capabilities have shrivelled into unusability;
  • Time availability being what holds you back so far; extending the previous condition, you may find that you actually – be honest now! – already had that time available, but used it for busywork, like being a Manager or so. And/or for loafing, or do I repeat myself. Now that you may get time available for Intelligent stuff, you may not even notice;
  • You getting paid more, or at least the same; as it turns out that the enrichment-by-cutting-out-the-bottom-part, leads to a serious pay cut as your Overlords now see your function as much less time-consuming or bottom-line-feeding. Especially the latter may turn out to be an eye-opener…
  • You getting sufficient time to build a new job; the creeping replacement of You by AI-based systems might speed up significantly as the first rewards transpire — to the Owners again — and hence the cry [not tag; ed.] for More may intensify the efforts to replace you ever more, funded by … your increased utility if at all, or the increasing utility of the you-replacing AI at least.

Suffice it to notice that, a priori, it will be very, very difficult to meet all these conditions, if anyone would even try (apart from you, but you’re too singleton in this to pull that off). So…

Oh well, there’s always:
[A different look at Casa da Música; Porto]