Boring Under 30s …

Just when you thought about getting into it, maybe, from somewhere near the bottom… One should be careful to know what the bottom looks like.
Qua diving into ‘Data Science’ (quod non), in which so many have put their personal hopes – but tempted how, and why ..?

Earlier, I posted this, on how the Fourth Estate – as far as still independent, and also focused on others that might still be independent; now apparently unwanted and to be turned into 4thE sheeple – wrote about how one would have to slave oneself to death for the most minute chance of Making It.

Then, news came around that actually, the Model doesn’t seem to work anymore… In this more recent piece, and the various posts (and external articles) linked therein.

Today, even more support for the above warning. Maybe not for some (e.g., him), who had the appropriate insights long ago already and surfaced to surf, if we may express it that way, and only still need to find a suitable spot – or this, if you know the place or (have) be(en) there.

Also:

Now then, I’ll leave you with Today’s Link to study and weep, and:

[Not quite the above, but close …?? Just North of Siena]

BrAin Training

When humans are much, much closer to [other; ed.] animals than we commonly think – and a look at ‘presidents’ around the world may push that into a negative margin – there was this piece about how “AI” might better have a look at animals’ trained brains for a path forward from ANI to AGI.

Well, wasn’t it that the Jeopardy-winning “AI” system actually was 42 subsystems, each working in their own direction ..?
How would ‘rebuilding’ a human brain also not be the same, at a somewhat larger scale ..?

Like, this post about ‘getting’ physics being done by some quite neatly identified parts of the brain — what about building massively (complex but) connected massive numbers of subsystems, all focused on particular areas of human thought (with each possibly developing their own specialty – not much to ask, when that already happens in larger neural nets by itself, eh?). Building from the ground up, as humans do when they develop their brains (recognising mama first, before papa, then saying ‘papa’ first, before ‘mama’). The map of brain regions is developing at light speed anyway; as here and here.
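[For the codey-minded, a toy sketch – nothing more – of that ‘massive numbers of connected subsystems’ idea, loosely what the ML crowd calls a mixture of experts; every name, size and weight below is invented for illustration:]

```python
# Toy sketch: many narrow subsystems, each with its own specialty, blended
# by a gate into one 'connected' whole. All sizes/weights are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def make_expert(n_in, n_out):
    """One narrow subsystem: a tiny random map standing in for an ANI module."""
    W = rng.normal(size=(n_out, n_in))
    return lambda x: np.tanh(W @ x)

experts = [make_expert(8, 4) for _ in range(5)]  # e.g., 'physics', 'faces', ...
W_gate = rng.normal(size=(len(experts), 8))

def gate(x):
    """Decide how much each specialty gets to say: softmax over expert scores."""
    s = W_gate @ x
    e = np.exp(s - s.max())
    return e / e.sum()

def combined(x):
    """The whole: a weighted blend of all subsystems' outputs."""
    g = gate(x)
    return sum(w * f(x) for w, f in zip(g, experts))

print(combined(rng.normal(size=8)))  # one joint response from many narrow parts
```

Scale the five up to brain-region numbers, and let each expert – and the gating itself – develop from the ground up; that’s the size of the ask.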

This may take decades of all-out development, like with humans. [Noting that with the explosion of complexity in society, the age of full development has jumped from adolescence to the twenties, even when the latter also includes full development of a conscience and sense of responsibility/ies. Once, centuries ago, there was so little to learn about the world that many were done by adolescence-to-18th. Now, there’s so much basic stuff already to ‘get’ about society and one’s role in it, that the age should be much above 20; also: life-long learning.
All moves to lower the age of … whatever, qua maturity e.g., in driving, drinking (hopefully separating the ages between the two so responsible behaviour in both, and in combi, is (not) developed properly), and criminal culpability versus youth crime, all backfire grossly already for this reason: Brains haven’t developed earlier, only opportunity and incompetence have. By, among others but prominently, parental protective hugging-into-debilitation of generations of youths that haven’t learned to fend for themselves in hostile environments that still require cooperation to survive. Never having learned give-and-take (Give first !!! Duties before rights), means never having learned to be a responsible human. Which shows, in many societies.]
Edited to add: this, about how a hand’s ‘nerves’ may learn about objects; anyone that has dealt with babies, knows this drill….

Or, one stops the development after a handful of years, and ends up with ‘presidents’.
Or, one goes on a bit, beyond the proven ‘95% of human behaviour is reflexes to outside impulses; the neocortex just puts a semi-civilised sauce on it’ onto e.g., Kritik der reinen Vernunft, Die Welt als Wille und Vorstellung, and Tractatus Logico-Philosophicus (plus the Theologico-Politicus, I hope). To have a system with Explainability towards these masterpieces, among others, would be a great benefit to society. But I’m digressing; the Turing Test was about average humans, not us.

The bottleneck being the hardware, obviously. Plugging in USB/UTP cables between two systems isn’t as much fun as incepting/building human massively-complex systems-to-be-raised.
Also, there may be a build-by-biology versus build-by-design difference; have a look at the numbers here and you get what it would take. On the hardware side, things/boundaries are moving as well.
Edited to add: The flip side is that any such above-trained system is most quickly and infinitely copiable. Hence, should one go for ‘average human’ intelligence, or vary on purpose [who calls the shots on this, qua human-like-systems eugenics ..??], or aim for the highest intelligence [of possibly non-human form] achievable? And what if the latter, and it turns out that it decides to do away with stoopid humans quickly, to protect the earth, its power supplies and itself ..? What if too many of such extreme intelligences prove to be too many / all, as whacko as many human ‘super’intelligents are ..?

Oh and I am aware that one doesn’t ‘need’ to rebuild a human brain, to get to something similar to human intelligence [Big-IF there’s such a thing; have a look around at your colleagues]; my point here is that we may want to strive for something similar and let it veer off as close to the Edge [not the browser, that’s a demo of non-intel] as possible to prevent it from developing ‘intelligence’ of a kind that we have no clue how to deal with — which would potentially make it much, much more dangerous.
What if the System were beyond, on a different path from, the Sensation/Impression–Grasping(? Verstand)–Understanding(? Vernunft) line of Kant (relevantly explained in the Kritik der reinen Vernunft, I. Transzendentale Elementarlehre II. Teil Der Transzendentalen Logik II. Abteilung Die transzendentale Dialektik, Einleitung II Von der reinen Vernunft als dem Sitze des transzendentalen Scheins, C Von dem reinen Gebrauche der Vernunft B363/10-30, obviously) ..? Our brain does work that way, but other substrates need not.

But there is no systemic logical block to this all, is there ..? Your thoughts, please.

[Edited to add: this and this.]
[Edited to add: this, on “AI” passing school tests but how (mostly!) irrelevant that is.]
Plus, about the word before last:

Plus:

[Ah, music appreciation … there‘s one …; Aan het IJ, Amsterdam]

Simple, not simpler

“If you can’t explain it simply, you don’t understand it well enough” – Albert Einstein. Or was it Feynman? Or what was it, by Einstein, or ..?

“An alleged scientific discovery has no merit unless it can be explained to a barmaid” – popularly attributed to Lord Rutherford of Nelson, as stated in Einstein, the Man and His Achievement by G. J. Whitrow, Dover Press 1973. Einstein is unlikely to have said it since his theory of relativity was very abstract and based on sophisticated mathematics.
to which I found
“Unrelated, but reminds me of the joke about the mathematicians who were trying to play a joke on their colleague in a bar and coached the “barmaid” to reply “one third x cubed” when they offhand asked her what the integral of x^2 was. When the colleague comes back and they try to play the prank she responds as they prompted her, and then nonchalantly adds, “plus a constant””.
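For whoever lost track of the constant: an antiderivative is only fixed up to an additive term, since any constant vanishes on differentiation. The barmaid’s answer, in full:

\[ \int x^2 \, dx = \frac{x^3}{3} + C \]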
Did you get the constant, or were you merely reminded, or don’t you know what it is about, or don’t you care ..? And:

“If I could explain it to the average person, I wouldn’t have been worth the Nobel Prize.” [Einstein, in various variants]
Hence:
“You should make things as simple as possible” – Albert Einstein — really ..?? From the man who gave this quote:
“It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.” From “On the Method of Theoretical Physics,” the Herbert Spencer Lecture, Oxford, June 10, 1933.
Hence: “You should make things as simple as possible, BUT NOT SIMPLER”

[My bold caps]; the purpose of this post. The ones to forget that, will be the ones laughed at for their sheer dunceness.

Also:

[Describe the full detail in ten words or less. At the Fabrique, Utrecht/Maarssen]

You wanted a model to play with ..?

You may have been attracted by the title – I won’t give away your name, address, phone and social security number, and credit card and bank accounts to the first bidder since I may not be in the business you assumed from the title.

Rather, I’m referring to the age-old stupidity of trying to capture complex systems in merely complicated models – not even to study and understand them, but to toy around with them and, at your pleasure, do prognostic and prescriptive things with them. Which was found to be Wrong already long ago, as per e.g., this and this and this and …

Now, there’s the addition of this, listing among others: “I think it was the physicist Murray Gell-Mann that said: ‘The only valid model of a complex system is the system itself.’” – my bold face [to insert an association with the difference between a font and a typeface …].

When Gell-Mann states something, you’d better study it. This one, too.

I’ll leave you with:

[Look at those babies climbing up ..! Prague. No tricks, they’re there!]

Meta over pseudo – fail on fail

Hmmm…, there’s a lot of talk again about pseudonymity, lately. As if that would ever work. Same with homomorphic encryption as a means to that end.
Both miss the mark, since both miss the point. The point being: Deanonymisation isn’t about backtracking the encrypted info, decrypting it or so; it’s about the metadata. It’s not about the ‘content’ of the data point; it’s about the class identifier (which you’ll need for any useful use of the data point – otherwise random data would do) that, linked with other-class identifiers, leads to just one (i.e., < 3, remember?) human having all those classifiers as attributes. Not the data (content) counts, but the use of the classifiers over them, i.e., the metadata. Now, that only regards data points ‘at rest’, or rather: their static properties. Pseudo it may be, by the dictionary-literal meaning of that adjective: Seemingly, but not in fact. Seemingly, to the gullible that want to believe in fairy tales. Yes, Euro-politicians indeed, in the GDPR.
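[A minimal sketch of that linkage – the classic voter-roll-style attack; all records, names and field names below are made up:]

```python
# Pseudonymised 'content' stays encrypted; the class identifiers (zip, birth
# date, sex) do all the deanonymising work. Everything here is invented.

pseudonymised_health = [
    {"pid": "a9f3", "zip": "1017", "birth": "1980-05-12", "sex": "F", "diagnosis": "<encrypted>"},
    {"pid": "77c0", "zip": "3512", "birth": "1975-11-02", "sex": "M", "diagnosis": "<encrypted>"},
]

public_register = [
    {"name": "J. Jansen",   "zip": "1017", "birth": "1980-05-12", "sex": "F"},
    {"name": "P. de Vries", "zip": "3512", "birth": "1975-11-02", "sex": "M"},
]

QUASI = ("zip", "birth", "sex")

def reidentify(pseudo_rows, public_rows):
    """Join on the class identifiers; where exactly one human matches
    (i.e., fewer than the k of k-anonymity), the pseudonym falls."""
    for p in pseudo_rows:
        matches = [r for r in public_rows if all(r[q] == p[q] for q in QUASI)]
        if len(matches) == 1:
            yield matches[0]["name"], p["pid"]

for name, pid in reidentify(pseudonymised_health, public_register):
    print(f"{name} is pseudonym {pid}")  # content never decrypted; metadata sufficed
```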

[Edited to add: this proof.]

Add in ‘traditional’ metadata: info (like, classifiers) about the effects of the content on processing. When your doctor calls, and within ten minutes you call the STD clinic, the content of your calls doesn’t even need to be decrypted — the conclusion’s already there. That sort of thing. Metadata isn’t protected as if, in and of itself, it would not reveal anything of your persona. It is ‘protected’ (quod non) by being PII, but the identifying data (caller ID &c.&c.) is readily available in systems that have every right and purpose to collect it (how else will your phone co. be able to bill you …?). Again, the content may be pseudo’d but – see above. And the sources of auxiliary data will be spread far wider, and will be much less protected or less-quality-encrypted/pseudo’d than in the straightforward case, if at all.
Since metadata is partly-but-pre-processed conclusions, i.e., information, it’s much more valuable to extortionist conmen, gov’t agencies and other parties of similar moral value. Note that these parties may not care if they inferred the wrong thing, as in the above example; they don’t care about false positives but will make you pay anyway, through the difficulty of repudiation and through silently disqualifying you for all sorts of societal benefits like work, respectively. [Yes, that was too long a sentence.] But hence: while you might be busy pseudo’ing as per the above, if at all, the problem may not be in that, even if you would have made any progress there.
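[The doctor-then-clinic inference, again as a hedged sketch – timestamps, numbers and the ten-minute window below are invented:]

```python
# Encrypted content, naked metadata: the mere sequence-plus-timing of calls
# leaks the conclusion. All records here are invented for illustration.
from datetime import datetime, timedelta

calls = [  # (caller, callee, when) -- billing records; content fully encrypted
    ("you", "family_doctor", datetime(2019, 9, 2, 10, 0)),
    ("you", "std_clinic",    datetime(2019, 9, 2, 10, 8)),
]

SENSITIVE_FOLLOWUP = {("family_doctor", "std_clinic")}

def infer(call_log, window=timedelta(minutes=10)):
    """Flag call pairs whose timing alone already gives the conclusion away."""
    for (c1, a, t1), (c2, b, t2) in zip(call_log, call_log[1:]):
        if c1 == c2 and (a, b) in SENSITIVE_FOLLOWUP and t2 - t1 <= window:
            yield c1, a, b, t2 - t1

for who, first, then, gap in infer(calls):
    print(f"{who}: {first} -> {then} within {gap}; no decryption needed")
```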

Also: Privacy’s an emergent property. Take the easiest roads: Data minimisation; by design; by default; by rock-solid information security. The latter, e.g., by the letter and spirit of the new one on the block (cipher), ISO 27701.

That’s it. No question marks, no call for response as you’ll not give any, anyway. Leaving you with:

[Ávila it is, protecting you through the ages. Hehhehheh… gotcha, it’s Monteriggioni in Italy, not quite the size eh?]

Meritocracy not working (anymore)

Had this post on destroyed rebound options disabling lottery-shot society from succeeding. And also, more recently, this.
Now [as of writing…], this: Not as much a closure of the outside, or re-entry, but a spongy moulding from the inside.

Recalling what one would have learned from Plato [despite this possibly sounding elitist, which may/may not bother you; either way bringing a troubling assessment of your intellect (not being beyond IYI), and of mine ..?], about meritocracies versus equality: That there is no one solution that is best, anytime, anywhere.

With, if analysed deeply enough, both approaches circling around a central issue, being the one of hereditaricity – a word, now that I coined it – the point already made by some bloke under the name of Adam Smith: All’s well, markets can be perfect [which means REGULATED, otherwise they’ll NOT be anywhere near perfect!] etc.etc., but the very fact that economic power, once amassed, can be handed over to next generations in a family, and often will be, undoes the tabula rasa assumed by meritocracy et al.
For many, this is the purpose of Life. E.g., see the standard immigrants’ response upon entering … the US [or anywhere else, if of clear mind]: “I may have to work hard all my life, but at least here this will enable my children to have a better life”. So, whether outright (and easily squandered, as it often will be) through Money, or more subtly, through ‘investment’ in education etc., life’s earnings will be driven forward — this may be seen, in the second-mentioned/linked article above, as the core problem.
Maybe ridiculously-tax the rich / their inheritances so pre-death they’ll be more altruistically bee-hiving ..? To go through the needle’s eye but then, hardly anyone is of suitable moral virtue (of any of the world’s wisdom traditions) anymore to be lured by that parable.

So, to solve meritocracy, one would have to live in a totalitarian global commy state that would allow individual striving but not pass-ons to next generations; all children to be raised by the state. Not a privacy-sensitive or Free utopia it would be.

Darn. What’s next ..?

[Edited to add: Well, there’s this monumental piece, too easily ignored: Not only meritocracy, but also democracy going crazy … Truth be told: Counterarguments don’t hold, are not productive, certainly not helping towards a synthesis…]
[Edited to add to the add: this, on similar lines.]
[Edited to add to the add, 2: this, on how capitalism’s decline looks eerily like communism’s…]

This:

[Moneygrabbers feeling empty, copying the double edge (copying erstwhile simplemindtons-also-labor-projects), still feeling empty. And failing. Toronto]

The masses – suitably replaceable ..?

To start, “When experienced accountants were asked in a study to use a new tax law for deductions that replaced a previous one, they did worse than novices. Erik Dane … calls this phenomenon ‘cognitive entrenchment’.”
and
“They ‘travelled on an eight-lane highway’,” he wrote, “rather than down a single-lane one-way street. They had Range. The successful adapters were excellent at taking knowledge from one pursuit and applying it creatively to another, and at avoiding cognitive entrenchment. … They drew on outside experiences and analogies to interrupt their inclination towards a previous solution that may no longer work. Their skill was in avoiding the same old patterns. In the wicked world, with ill-defined challenges and few rigid rules, range can be a life hack.”
[David Epstein; Range – he continues with a psych description of overtraining (humans), by the way, illuminating qua ML-not-moving-to-AGI]

Anyone recognise a. the accountants’ inability to change their so blatantly flawed trade (association, business model, rules, etc.etc.), b. the limits of the still, everywhere, too-narrowly-trained (big fat fact) AI systems ..?

Whereas the a., mostly led by dinos, will go the way of the dinos. Though huge beyond big, Schumpeter’s creative destruction is the latter-day equivalent.
The b., obviously, is also a chance to retain the head start that humans have over AI/ML — if and only if [here, d.e.s.d.a. for Dutchmen] humans move at blinding speed to change themselves, collectively and individually, overnight, into something range-holding. Which, for the average human, may be much too difficult.

But you must! Or be replaced indeed by AI. Remember, you don’t need to outrun AI, just outrun your co-worker on this one. Open your mind ..!

On deaf ears. Too bad.
Well then:

[Art. May remind you of life; London]

Slotty McSlotfillface

“This is your dream job when your profile fits the role perfectly”… This, in a job posting. Unsure from what millennium it was – not; of course it was one from a couple of weeks ago, but then, also from the time that Time forgot, when job seekers had to compete to fill drudge jobs. Had to be happy to deal with these people, even.

Hah. Hah. The organisations that aren’t aware of today’s seller’s market will hire the dunces that are in the same boat. Which is sinking without any sliver of doubt, but they themselves may be the precious few unawares. As in the Dutch saying: A ship aground serves as a beacon at sea.
Big mistake. Huge. [Source: this on IMDb]

Also, what do you want; someone who considers the job a dream one, where all one would have to do is fit one’s profile in the role ..? How wrong is that on numerous aspects, in today’s world of Range ..?
Just don’t waste your and all our time as everyone’s joke of the day, ‘mmmkay ..?

And:

[If you know this is Nieuwegein, you know it’s at the Blockhead conference center, or so]

Nepino

Hey, how is it that anyone would be able to act surprised when a project goes wrong, or even off at a tangent, when some previous projects did the same, similarly ..?

As if there were a project methodology that requires [hardcore concrete-block gateway: IF not(done) THEN None shall pass] one to investigate all previous projects’ evaluations, to establish which risks re-occur with the current one, and to mitigate those risks by implementing the lessons learned ..?
Since, at least, all projects close off with such an Evaluation, right? Or [i.e., XOR] no discharge of the project manager; no pay. An Evaluation, months after go-live, to establish the full experience built with using/maintaining the new system, and to be able to fully gauge the benefits realised [mostly: not].
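[What such a concrete-block gateway might look like, as a sketch only – the structures, project names and sample lessons below are invented:]

```python
# Sketch of the 'None shall pass' gateway: no project start without every
# previous project's evaluation processed. All names/structures are invented.
from dataclasses import dataclass, field

@dataclass
class Evaluation:
    project: str
    lessons: list  # risks that re-occurred, as recorded at post-go-live evaluation

@dataclass
class NewProject:
    name: str
    mitigations: set = field(default_factory=set)

def gateway(new: NewProject, archive: list) -> bool:
    """IF not(done) THEN None shall pass: every recorded lesson must be mitigated."""
    passed = True
    for ev in archive:
        for lesson in ev.lessons:
            if lesson not in new.mitigations:
                print(f"BLOCKED: lesson '{lesson}' from {ev.project} not addressed")
                passed = False
    return passed

archive = [Evaluation("ERP-2017", ["scope creep", "no user acceptance test"])]
print(gateway(NewProject("CRM-2019", mitigations={"scope creep"})))  # False: blocked
```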

Prince II: …
Pino: Prince II In Name Only;
Nepino: Not Even Pino.

Dunces, all those project managers that don’t do the above correctly, and dwell in the above nepino.
Yes, there’s a few one can call ‘experienced’ – experience is learning from mistakes, one’s own or preferably one’s next-of-organisational-kin’s – but still they’ll be the few among many in any project team, and are so expensive that the less experienced are hired first. Because you don’t want a successful project, you want to have the PM slot filled. By a blockhead or whatever; why would you mind ..?

Yes indeed, Lessons will be repeated until they’re learned.
But now I’m repeating this. For you. Guess why.

Also in repeat:

[Yes, Porto again. Better learn from your mistakes or I will keep on banging this drum.]

AInterpretability

For all those that promote ‘transparent’ AI: You want induction to work in a way that it never has in history. Logical fallacies prevent that – if you would want to state that ‘logic has progressed’, congratulations; you have now declared your own incompetence in these matters.

Yes, there’s thoughtful content around re interpretability, like here [hey, I noticed that Medium is becoming the prime platform for reading up on the latest AI developments in all sorts of areas and issues; maybe I should join the fray].
It even delves into ‘interpretability’. Rightfully; progress can be made in little steps there. Transparency, not so much. And it’s not ‘semantics’ because it’s semantics ..!

But the transparency wanters don’t see the difference. Being that what they want is the black box explaining itself in terms humans can understand.
Which humans; those that are subject to its outcomes? That’ll be all in the lower “IQ” categories [yes I know that is a ridiculous metric mostly used by IYIs]; the upper regions [qua wealth, then, mostly inherited or swindled (qua long-term societal ethics) off the others] will always pay for ‘personal’ services as they’re too decadent to act themselves. But those are the ones already in trouble today.
Explaining: In what way? Revealing the [most often] ever-changing model used for one particular decision ..? The above article, among many, explains [duh] that this is a. difficult, b. an Art even to trained humans. No way that this will be done for all cases, always, everywhere. Because that is the very opposite of what ML deployments are for.

And again, the explainability is about understanding the model, which in turn is stacked upon the idea of modelling itself. Which was devised to understand some complex stuff in the first place, not to predict or so, as it was clear from the outset that the model would always be too simple to be able to do that – or the modelled would already be simple enough (in a cyber(sic)netic sense) to ‘use’ outright.

And so on, and so forth.

Induction simply doesn’t work for explanations. Summaries of stats are the most one could get out of it.

If 50% of your AI project is in making anything work, the other 95% is in humans steering, making decisions, managing and controlling, and interpreting the project and its results.
It’s as if each and every medical study would have to give a complete explanation, at grammar-school level, of all the statistics and (often linear) regression on the research data, over and over again. The difference being that with ML, parts of the maths involved can be proven to be nondeterministic, and/or the inverse function (that would form part of the explanation of its functioning) simply doesn’t exist or has an infinite number of solutions.
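[The no-inverse point in miniature, as a sketch – the humble ReLU standing in for whole layers:]

```python
# A ReLU layer is many-to-one, so the 'explanatory' inverse either doesn't
# exist or has infinitely many solutions -- the point made above, in miniature.
import numpy as np

relu = lambda x: np.maximum(0.0, x)

for x in (-3.0, -0.5, 0.0, 2.0):
    print(x, "->", relu(x))  # -3.0, -0.5 and 0.0 all map to 0.0; 2.0 maps to 2.0

# Given the output 0.0, which input was it? Any x <= 0: an infinite preimage.
# Stack millions of such units and 'just invert the model to explain it'
# stops being a plan.
```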

I’ll stop now. Whatever turn you’d like to take, you’ll run into fundamental issues that will slow down and stop progress. Just swallow the idea that you may be building something that is as intransparent as human thought. Even when the latter would be the ‘rational’ part of thought, not counting the interaction with the seemingly irrational parts, interaction with the body, and with the infinite outside world, etc.

In the end [which is near, qua this post], the question keeps rearing its head: What do you want? – Which over and over again isn’t answered properly so no solution will fit.

Dammit. By trying to outline the scale of the problem, I already keep turning in all directions. I’ll stop now, with:

[Human Life is difficult to explain; Porto]

Maverisk / Étoiles du Nord