Since humans are much, much less far removed from [other; ed.] animals than we commonly think (and a look at ‘presidents’ around the world may push that margin into the negative), there was this piece about how “AI” might better take a look at animals’ trained brains for a path forward from ANI to AGI.
Well, wasn’t it the case that the Jeopardy-winning “AI” system was actually 42 subsystems, each working in its own direction ..?
How would ‘rebuilding’ a human brain not be the same thing, at a somewhat larger scale ..?
Like, this post about ‘getting’ physics being handled by some quite neatly identified parts of the brain: what about building massive numbers of massively (complex but) connected subsystems, each focused on a particular area of human thought (with each possibly developing its own specialty; not much to ask, when that already happens by itself in larger neural nets, eh?)? Building from the ground up, as humans do when they develop their brains (recognising mama first, before papa, then saying ‘papa’ first, before ‘mama’). The map of brain regions is developing at light speed anyway; as here and here.
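To make the idea concrete: the subsystems-with-specialties architecture above can be sketched as a toy gated ensemble. This is a minimal illustrative sketch only; the names (`Subsystem`, `Gate`, the keyword-overlap “competence” score) are hypothetical stand-ins for what would really be learned specialisation, not anything from the post or any actual library.

```python
# Toy 'brain of subsystems': each subsystem covers one area of thought,
# and a simple gate routes each input to whichever subsystem claims
# the highest competence. All names here are illustrative assumptions.

class Subsystem:
    def __init__(self, specialty, keywords):
        self.specialty = specialty
        self.keywords = set(keywords)

    def competence(self, query):
        # Crude proxy for learned specialisation: keyword overlap.
        words = set(query.lower().split())
        return len(words & self.keywords)

    def answer(self, query):
        return f"[{self.specialty}] handling: {query}"


class Gate:
    def __init__(self, subsystems):
        self.subsystems = subsystems

    def route(self, query):
        # Hand the query to the subsystem claiming the most competence.
        best = max(self.subsystems, key=lambda s: s.competence(query))
        return best.answer(query)


brain = Gate([
    Subsystem("physics", ["force", "mass", "falls", "gravity"]),
    Subsystem("language", ["word", "say", "mama", "papa"]),
    Subsystem("social", ["friend", "share", "give", "take"]),
])

print(brain.route("why the apple falls under gravity"))  # routed to physics
print(brain.route("saying papa before mama"))            # routed to language
```

In a real system the gate and the specialties would be trained jointly (as in mixture-of-experts networks, where specialisation indeed emerges by itself); here the split is hard-coded purely to show the shape of the architecture.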
This may take decades of all-out development, as with humans. [Noting that with the explosion of complexity in society, the age of full development has jumped from adolescence to the twenties, even when the latter also includes full development of a conscience and a sense of responsibility/-ies. Once, centuries ago, there was so little to learn about the world that many were done by adolescence to age 18. Now, there’s so much basic stuff to ‘get’ about society and one’s role in it that the age should be well above 20; also: life-long learning.
All moves to lower the age of … whatever, qua maturity, e.g., for driving, drinking (hopefully keeping those two ages separate, so that responsible behaviour in each, and in combination, is (not) developed properly), and criminal culpability versus youth crime, already backfire grossly for this reason: brains haven’t developed any earlier; only opportunity and incompetence have. Caused, among other things but prominently, by parental protective hugging-into-debilitation of generations of youths who haven’t learned to fend for themselves in hostile environments that still require cooperation to survive. Never having learned give-and-take (give first !!! duties before rights) means never having learned to be a responsible human. Which shows, in many societies.]
Edited to add: this, about how a hand’s ‘nerves’ may learn about objects; anyone who has dealt with babies knows this drill….
Or, one stops the development after a handful of years, and ends up with ‘presidents’.
Or, one goes on a bit, beyond the proven ‘95% of human behaviour is reflexes to outside impulses that the neocortex just puts a semi-civilised sauce on’, onto e.g., Kritik der reinen Vernunft, Die Welt als Wille und Vorstellung, and Tractatus Logico-Philosophicus (plus the Tractatus Theologico-Politicus, I hope). To have a system with Explainability towards these masterpieces, among others, would be a great benefit to society. But I’m digressing; the Turing Test was about average humans, not us.
The bottleneck being the hardware, obviously. Plugging in USB/UTP cables between two systems isn’t as much fun as incepting/building human massively-complex systems-to-be-raised.
Also, there may be a build-by-biology versus build-by-design difference; have a look at the numbers here and you get an idea of what it would take. On the hardware side, things/boundaries are moving as well.
Edited to add: The flip side is that any such above-trained system is most quickly and infinitely copiable. Hence, should one go for ‘average human’ intelligence, or vary on purpose [who calls the shots on this, qua human-like-systems eugenics ..??], or aim for the highest intelligence [of possibly non-human form] achievable? And what if the latter, and it turns out that it decides to do away with stoopid humans quickly, to protect the earth, its power supplies and itself ..? What if too many of such extreme intelligences, or all of them, prove to be as whacko as many human ‘super’intelligents are ..?
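The copiability point deserves emphasis: where raising a human takes decades, duplicating a trained system is a byte-for-byte operation. A minimal sketch, assuming a toy dict of “weights” stands in for whatever the trained system actually is (the names `train_toy_model`, `original`, `clone` are all hypothetical):

```python
# Illustrative sketch only: once a system is trained (here a toy
# lookup 'model' standing in for learned weights), copying it is
# cheap and exact, unlike the decades-long 'raising' of a human.

import copy

def train_toy_model():
    # Stand-in for an expensive, years-long training/raising process.
    return {"weights": [i * 0.1 for i in range(1000)]}

original = train_toy_model()
clone = copy.deepcopy(original)

# The clone is a distinct object with identical 'knowledge';
# making a thousand more costs memory, not another upbringing.
assert clone is not original
assert clone["weights"] == original["weights"]
```

That asymmetry is exactly what makes the “which intelligence level do we aim for?” question so loaded: whatever is chosen gets replicated wholesale.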
Oh, and I am aware that one doesn’t ‘need’ to rebuild a human brain to get to something similar to human intelligence [Big IF there’s such a thing; have a look around at your colleagues]; my point here is that we may want to strive for something similar and let it veer off as close to the Edge [not the browser; that’s a demo of non-intel] as possible, to prevent it from developing ‘intelligence’ of a kind that we have no clue how to deal with, which would potentially make it much, much more dangerous.
What if the System were beyond, or on a different path from, Kant’s line of Sensation/Impression–Grasping (? Verstand)–Understanding (? Vernunft) (relevantly explained in the Kritik der reinen Vernunft, I. Transzendentale Elementarlehre, II. Teil: Der Transzendentalen Logik, II. Abteilung: Die transzendentale Dialektik, Einleitung II: Von der reinen Vernunft als dem Sitze des transzendentalen Scheins, C: Von dem reinen Gebrauche der Vernunft, B363/10-30, obviously) ..? Our brain does work that way, but other substrates needn’t necessarily do so, too.
But there is no systemic logical block to this all, is there ..? Your thoughts, please.
[Ah, music appreciation … there‘s one …; Aan het IJ, Amsterdam]