Intermission: Fuzzy deterministic neural expert GMDH networks

For an intermission post:

Again, my hobby horse(s): The Mix. Squared.

  • Neural networks, a.k.a. Machine Learning, a.k.a. “AI”, on their own will never cut it. Any NN is too simple, with its single firing function for all (nodes), even when slightly varied – even when varied per (modifiable/learnable?) node-specific parameter; see the first sketch after this list. And ML needs tons of data (acknowledging the trend towards better data over more) that simply isn’t available for a great many applications (as in: applying the ‘solution’ to some problem, ill-defined more often than well-), since reality is so many factors more complex than all the data generated/processed with IT since the Seventies, even if all of that were still available. – Intermission within the intermission: my guess is that what happens on the Internet doesn’t stay on the Internet; your data will be forgotten much earlier than before. Yes, paper file cabinets held many more erroneous data points, but how many ‘WORM’ discs have already become unreadable or barely readable? Historic data dwindles; end of inter-intermission. – And “AI” isn’t; even ChatGPT turns out to be driven, as ever again, by humans who, ironically, had outsourced part of their processing to ‘AI’ systems. We’ll need at least ANI before it’s deployable; symbolic reasoning isn’t there yet, if it is possible at all (though my hunch is that…). And convergence is still too shaky to be able to use anything, really.
  • Deterministic programming (If-Then style) will not cut it, we know.
  • Fuzzy logic systems alone will not cut it, we know. They may serve as interpretations of noisy firing functions within ML, or stand-alone, but they take too much human engineering and … ‘statistics’; yes, I’ll go wash my mouth now.
  • GMDH (Group Method of Data Handling) systems will not cut it, we know – e.g., from my graduation work some 30 years back. Only slivers of results and many spurious correlations came out, and fundamentally, that cannot be solved by today’s faster/more number crunching; the problem is in the inductive approach. Though it might help tons with ML training … either seeding bare ML networks or training them to final state. See the second sketch after this list.
  • Symbolic reasoning (think (waaayyyy back to) LISP and Prolog – do you remember © Earth, Wind & Fire) is too cumbersome, and needs, now ever more inherently incomplete, modelling of the subsystem / subject matter at hand. It also brings to the fore another hobby horse of mine: how abstraction, with its loss of specificity (the greatest common divisor quickly drops to zero while the least common multiple mushrooms – the ‘character’ or feature/phenomenological aspect disappears into triviality, yet the potential / riches and variability away from the model explode, unused ..! and the only true model of reality is reality itself (Gell-Mann)), but also the mix of the former with the emergence of properties, seems to make a logical jump that can’t be brute-forced. As e.g., ‘privacy’ is an emergent property of data protection, on top of confidentiality and availability (!).
    And with much human knowledge having to be input into the system. Slllloooowwwww! And a quite good definition of Incompleteness – of the trivial kind, not the real issue. [Inter-intermezzo again: Is imperfection in logical reasoning required, e.g., for ‘human intelligence’? À la requisite variety, in here (in Dutch, but G Translate should work); a suitable subject for a uni full of PhDs, I guess.] Nevertheless, this doesn’t cut it on its own.
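To make the first bullet’s complaint concrete, a minimal sketch (numpy only; all names, weights and numbers are mine, purely hypothetical): a layer where every node fires through the very same function family – here a PReLU-style leaky ReLU – with a single per-node slope parameter as the only ‘variation’ on offer:

```python
import numpy as np

rng = np.random.default_rng(0)

def firing(x, a):
    """One firing-function family for every node: a PReLU-style leaky ReLU.
    Each node may only vary the single slope parameter `a`; the *shape*
    of the function stays fixed across the whole network."""
    return np.where(x >= 0, x, a * x)

# A toy layer: 4 nodes, each with its own (learnable) slope parameter,
# yet all four remain instances of the very same function class.
W = rng.normal(size=(4, 3))          # weights: 3 inputs -> 4 nodes
a = rng.uniform(0.01, 0.3, size=4)   # per-node parameters (PReLU slopes)

x = rng.normal(size=3)               # one input sample
print(firing(W @ x, a))              # per-node parameter, same function family
```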
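And for those who never met GMDH, a second sketch, under my own simplifying assumptions (numpy only; quadratic ‘partial descriptions’ per input pair, least-squares fitted on a training split, ranked by the external criterion on a validation split – a single layer of what real GMDH stacks until the criterion stops improving). The toy data shows the inductive approach at work, and how easily spurious pairs compete with the genuine ones:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

def quad_features(xi, xj):
    """GMDH 'partial description': a quadratic polynomial in two inputs."""
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=3):
    """Fit a quadratic neuron for every input pair by least squares on the
    training split; rank by error on the validation split (the 'external
    criterion') and keep the best few as inputs to the next layer."""
    candidates = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        A = quad_features(X_tr[:, i], X_tr[:, j])
        coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
        pred = quad_features(X_va[:, i], X_va[:, j]) @ coef
        candidates.append((np.mean((pred - y_va) ** 2), i, j, coef))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]

# Toy data: only inputs 0 and 1 matter; watch which pairs survive anyway.
X = rng.normal(size=(200, 4))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=200)
for err, i, j, _ in gmdh_layer(X[:100], y[:100], X[100:], y[100:]):
    print(f"pair ({i},{j}): validation MSE {err:.3f}")
```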

Even treating half- or ‘fully-baked’ neural nets, of whatever proportion or module stacking, as glorified If-Then systems – hey, indeed, the firing function is a superclass, a generalisation of the 0%/100% condition in deterministic If-Then; see the sketch below – will miss the abovementioned Emergent Property requirement that ‘intelligence’ needs. Unsure whether any philosopher has figured that out (like: proving it, if a proof exists, or whether it is a (derivative) Halting Problem class, or runs into Gödelian issues) or what his (sic) name would be, but for me, it’s a fact.
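That superclass claim in a few lines – again a hypothetical numpy sketch, nobody’s official formulation: lower the ‘temperature’ of a sigmoid firing function and the deterministic If-Then drops out as the limiting case.

```python
import numpy as np

def firing(x, temperature=1.0):
    """Sigmoid firing function; as temperature -> 0 it collapses to the
    hard 0%/100% condition of a deterministic If-Then."""
    return 1.0 / (1.0 + np.exp(-x / temperature))

x = np.array([-2.0, -0.1, 0.1, 2.0])
print(firing(x, 1.0))    # graded answers: approx. [0.12 0.48 0.52 0.88]
print(firing(x, 0.01))   # effectively: If x > 0 Then 1 Else 0
```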

Which brings me to:
We’ll need to mix systems. Not (only) by stacking a variety of modules, but by mixing within the modules as well. And learning – life-long, or at least for some 25 years. Hoping for any success; watching current under-forty humans isn’t encouraging.
But after that, we could clone indefinitely, quite quickly..? Maybe not, if the mind/body problem needs to be included – then cloning will not work, because every new copy is a different instance, if only because the copies occupy different physical space and can’t come out of nothing in the exact same instant / quantum time leap. [Yes, you read that correctly; time seems to come in quantum leaps as well.]
For this intermission post, I digress. I’ll return for the home straight:

But mixing we must. Let’s start by bringing back the abovementioned, all too readily abandoned programming / development foundations and languages; a toy sketch of one such mix follows. Thoughts ..!?
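What ‘mixing within the modules’ might look like at toy scale – every function, weight and threshold below is a hypothetical stand-in, a sketch of the idea rather than an architecture proposal: a neural score, read through fuzzy memberships, decided by an explicit symbolic rule.

```python
import numpy as np

def neural_score(x, w):
    """Stand-in for a trained NN module: returns a score in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def fuzzify(score):
    """Fuzzy-logic module: triangular memberships interpret the raw score."""
    low = max(0.0, 1.0 - 2.0 * score)
    high = max(0.0, 2.0 * score - 1.0)
    return {"low": low, "mid": 1.0 - low - high, "high": high}

def decide(m):
    """Symbolic module: an explicit, inspectable If-Then over fuzzy degrees."""
    if m["high"] > 0.5:
        return "act"
    if m["mid"] >= m["low"]:
        return "ask a human"
    return "reject"

w = np.array([0.8, -0.3, 1.1])   # pretend these were learned
x = np.array([0.9, 0.2, 0.4])    # one input case
m = fuzzify(neural_score(x, w))
print(m, "->", decide(m))
```

The point of the mix being: each layer stays inspectable – the If-Then at the end can be argued with, the memberships can be re-engineered, and only the score-producing module needs the data-hungry training.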

Oh, and of course:

[Even this has so many sides, all essential to present the whole; for no apparent reason, in the Wiesbaden museum]

Maverisk / Étoiles du Nord