The A bandwidth of I

Ah, not so long ago, I posted Lament:

Those were the days when knowledge elicitation specialists had a hard time extracting the rules needed as feed for systems programming (sic; where the rules were turned into data onto which data was let loose, or the other way around; quite the Turing tape…), based on known and half-known, half-understood use cases avant la lettre.
Now are the days of Watson-class [aren’t Navy ship classes named after the first ship of the class ..?] total(itarian) big data processing, slurping up the rules into neural-net abstract systems somewhere out there in clouds of sorts. Yes, these won out in the end; maybe not in the neuron-simulation way but more like the expert-system production rules and especially the axioms of old. And they take account of everything, from the mundane all the way to the deeply-buried and extremely-outlying exceptions. Everything.
Which wasn’t what experts were able to produce.

But, let’s check the wiki and reassure ourselves we have all that (functionality) covered in “the ‘new’ type of systems”, then mourn over the depth of research that was done in the Golden Years gone by. How much was achieved! How far back do we have to look to see the origins, in the earliest post-WWII developments of ‘computers’, and to see how much was already achieved with so unimaginably little! (esp. so little computing power and science-so-far)

Yes we do need to ensure many more science museums tell the story of early Lisp and page swapping. Explain the hardships endured by the pioneers, explorers of the unknown, of the Here Be Dragons of science (hard-core), of Mind. Maybe similar to the Dormouse. But certainly, we must lament the glory of past (human) performance.

And also AI into Evolve:

After the generic-AI hype has slowed and actual generic AI of the Normal kind gets integrated into society big time / you-ain’t-seen-nothin’-yet time, what ..?

Apart from a huge spread of more ML algos than the mere Bayesian and non-linear regression ones (e.g., this one that I tested in a thesis already back in 1994 – it worked even with the feeble CPU power of the day),
And apart from the return of Expert Systems, since once the above starts to be analysed, everyone realises that this is what ML does, on a big scale but still,
let me propose:

Evolutionary (genetic) algorithms.

Which is mentioned in this overview, I seem to recall – I’m human, and perfection is boring.
But not enough. Strange, when one considers how effective these are, and how, e.g., ‘quantum computing’ actually is only a massively-parallel implementation of this.
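To make concrete what an evolutionary (genetic) algorithm boils down to, here is a minimal sketch in Python; the toy fitness function, population size and mutation rate are illustrative assumptions of mine, not anything from the 1994 thesis or the overview referred to above:

```python
import random

# Toy fitness function (illustrative only): maximise f(x) = -(x - 3)^2 + 9
# over real-valued 'genomes' x. A real problem would plug in its own fitness.
def fitness(x: float) -> float:
    return -(x - 3.0) ** 2 + 9.0

def evolve(pop_size: int = 50, generations: int = 100, mutation_rate: float = 0.1) -> float:
    # Start from a random population of candidate solutions.
    population = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover: here, a child is simply the average of two parents.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0
            # Mutation: occasionally nudge a child at random.
            if random.random() < mutation_rate:
                child += random.gauss(0.0, 1.0)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(f"best genome ~ {best:.3f}, fitness ~ {fitness(best):.3f}")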

Which made me consider even more that we haven’t tackled the core ‘problem’ [That’s engineering-talk. Engineers find something, and solve it, period. Not like those ‘alphas’ that keep on talking forever and then are very satisfied they did – notwithstanding the persistence of the problem, which isn’t an issue for them; problems are to talk about, not solve, even when the hurt continues indefinitely; hey, we babbled away, so who cares all are still in pain ..? Etc.].
The core problem being: How to get the I into machines ..!?
Which of course we can only do after we have answered some more fundamental questions:

  • What is this ‘intelligence’ thing you keep talking about? I don’t think it means what you think it means … [classic] By which I mean: where is it in the knowledge stack ..? And why do you refer to that, as it is so incredibly incomplete; e.g., it misses the idea of context completely. Whereas in this case, context is king, or emperor. A little knowledge is a dangerous thing [proof: the ‘executives’ that ‘lead’ your organisation; or, if you are one, look around you], but a lot of ‘knowledge’ without context and appropriateness of application of that knowledge is far, far worse [proof: the ‘board room advisors’ that bypass you, the real experts, or, if it’s the other way around, look around you. Or maybe you’re the Head Honcho for a reason (not)].
    Only if this ‘intelligence’ is properly defined can we chase it. And find that the average human doesn’t have it. As we consider ourselves to be of above-average intelligence [Dunning-Kruger-like!], we also consider more than half of humanity to be of less wit than we ourselves – that’s how averages work [no, I know it’s not about averages but about medians, but normalisation-wise they’re close enough together …];
  • Context-awareness it is … How is that built into your ‘intelligent’ system? Or is the machine merely raw processing, with polishing required afterwards? Etc.; a lot of variables here. Sensors? Of what kind? Training data – with or without context data?
  • Depth of intelligence; depth in field / width of field – how ‘much’ knowledge/intelligence is in your system? Also referring, in an intertwined way, back to the context: What is in, what is out?
  • Where on the bandwidth of structured-to-chaos? There are classical algorithms on the one end, and full ANI/AGI/ASI even on the other. In between: Expert systems, Big Data correlations, classifier ML, more-complex ML, basic neural nets, complex neural nets, evolutionary algorithms [or are they a separate, close-but-parallel track?] … We need to establish a proper scale for this, even when allowing for a 2nd dimension of the above factors to go along with it (a rough sketch follows after this list).
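Purely as a strawman for such a scale (the labels, ordering and second-dimension fields below are my own illustrative assumptions, not an established taxonomy), the structured-to-chaos axis plus an extra dimension could look like:

```python
from dataclasses import dataclass
from enum import IntEnum

class StructureToChaos(IntEnum):
    # Hypothetical ordinal scale; the ordering mirrors the list above.
    CLASSICAL_ALGORITHM = 0
    EXPERT_SYSTEM = 1
    BIG_DATA_CORRELATION = 2
    CLASSIFIER_ML = 3
    COMPLEX_ML = 4
    BASIC_NEURAL_NET = 5
    COMPLEX_NEURAL_NET = 6
    EVOLUTIONARY_ALGORITHM = 7  # or a separate, close-but-parallel track?
    ANI = 8
    AGI = 9
    ASI = 10

@dataclass
class SystemProfile:
    # A second dimension built from the factors in the bullets above.
    position: StructureToChaos
    context_awareness: float  # 0.0 (none) .. 1.0 (fully context-aware)
    depth_of_field: float     # how 'much' knowledge/intelligence is in scope

profile = SystemProfile(StructureToChaos.CLASSIFIER_ML,
                        context_awareness=0.2, depth_of_field=0.4)
print(profile)
```

Whether evolutionary algorithms belong on this axis at all, or on a parallel track of their own, is exactly the kind of question such a scale would have to settle.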

Which is sort-of the purpose of this post. Surely, I’m not the first one to have considered this; would any of you have pointers ..? I’d be delighted to hear / cross-post / Like (well…)

TIA, and:

[In for Skinny Tipping – “what’s that?”: keep an eye open for next Friday’s post (the 22nd). This: Resson, France; no, you won’t be going there even if you knew it]
