On the one hand, we still hold in our hearts [not so much our minds, but that’s part of the point] that humans have the capability [on average; and not a very old invention! I believe David Epstein’s Range mentions that some ‘primitive’ peoples – hey, let’s get rid of the pejorative of that word but keep the (actually true) meaning – populations of very remote mountain villages, had only a limited subset of this] of understanding abstraction, by means of symbol(ic) classification and manipulation [in the operational sense, not the abusive kind].
On the other hand, we’re still trying to figure out how neural networks can be induced to find, by themselves, the level of symbol(ic) manipulation that we attribute to the average human. Even when excluding half of the global populace from ‘normal’ intelligence (the Gaussian proxy of proxies has 50% below average by definition, and we choose the average as ‘intelligent’ for whatever reason), this of course raises the question of how humans come to learn about abstractions, symbols and their manipulation [sadly, the latter – manipulation – being practised not so much on symbols as on humans; despite a brief mirage (i.e., ‘fata morgana’) of ‘democracy’ in the 20th century, this has been the standard throughout the ages].
Case in point: this arrived within minutes of drafting and scheduling this post … and no, it’s not about deep understanding of the data; that’s too lowly a level of understanding …
And why is it somehow ‘forbidden’ to train neural networks, with TensorFlow and what have you, by outright instruction ..?
Yes, episodic learning is on the rise. But why not outright ‘hypothesis inclusion’, by setting weights to non-random values? Why not train ’nets along with all the other [again: (last bullet of) this] methods, w/ an evolutionary sauce on top ..? Why would we want neural networks to somewhat-predictably (sic) generate the emergent property of intelligence, while at the same time stopping training once a suitable coughing-up of about-right answers has been drilled in?
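To make the ‘hypothesis inclusion’ idea concrete, here’s a minimal sketch, assuming nothing beyond numpy (in TensorFlow proper, the same move could go through a custom initializer or a layer’s `set_weights`): instead of random initialization, a single sigmoid unit is seeded with hand-picked weights that already encode a rule we believe in – logical AND – before any training has happened. All names and numbers below are illustrative, not from any particular library.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 'Hypothesis inclusion': set the weights by outright instruction, so that
# a single sigmoid unit encodes the logical-AND rule before any training.
w = np.array([10.0, 10.0])  # strong positive weight for each input
b = -15.0                   # bias chosen so only (1, 1) clears the threshold

def net(x):
    return sigmoid(np.dot(w, x) + b)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
preds = [int(net(np.array(x)) > 0.5) for x in inputs]
print(preds)  # [0, 0, 0, 1] -- AND, with zero training steps
```

From such a non-random starting point, ordinary gradient descent could still run afterwards, refining the hypothesis rather than rediscovering it from scratch.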
Possibly, the answer is: because only then can the vast masses of office drones/workers cling to the illusion that they’re doing work that has a veneer of intelligence…
This of course, from a re-read of Kant [A648/B676 #1-24], where the differing uses/functions of Vernunft (‘reason’) and Verstand (‘understanding’) are explained once again, here in quite summary fashion [once you truly grasp their definitions and functioning from the previous 600 pages…] – oh, how insightful Kant is on many things; e.g., the induction fallacy versus deduction’s function [A647 #17-28], and e.g., the answer to Russell’s so-much-later observation that “The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts.”: those without knowledge don’t get a grasp of what they’re missing; they just have no idea of knowledge/wisdom’s sheer existence. [A575/B603 #19-30]
But still, back to the original question [as I may or may not be at this point, appropriately…] of why one would go to such enormous lengths of human work to get the tiny proof-of-concept’let of a half-decent neural net – the first 80% of the vast workload going into getting the data, the next 80% going into wrangling the data [just google for it; articles abound on how this, too, is a roadblock], then a further 80% being needed to get some code operational – and then finding that not much of interest turned up; nothing that quite straightforward human analysis could not have approved or dismissed easily on sight, or through verification/falsification with margins.
Let’s get back to capturing that ability in expert systems … Treat ML as just any tool, and as just any building block of an algorithm, just as it is in the brain… Or is it ..?
[At the time of writing, not so sure ..! Possibly, the beauty of ‘Intelligence’ [truly defined], as an emergent property..? If it isn’t implemented other than in neurons – then what?]
[Also: Who cares? When we can build systems – e.g., expert systems – far faster and more easily, that perform much better much more quickly than either neural nets or human experts, wouldn’t we just use those and not care how the engine was developed ..? I’d say hybrid systems perform best, as always; keeping hidden-pattern-detecting ML, but also humans, in a parallel loop.]
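A toy illustration of such a hybrid, in plain Python with entirely hypothetical names and thresholds: hand-written expert rules get first say, a stand-in ‘ML’ score fills the gaps where the rules are silent, and any disagreement between the two is flagged for the human in the parallel loop.

```python
# Toy hybrid: expert rules first, a stand-in 'ML' score second,
# and a human-review flag whenever the two disagree.

def expert_rule(transaction):
    """Hand-written domain knowledge; returns a verdict or None."""
    if transaction["amount"] > 10_000:
        return "fraud"
    if transaction["known_customer"]:
        return "ok"
    return None  # rules are silent; defer to the statistical engine

def ml_score(transaction):
    """Stand-in for a trained model: any callable returning P(fraud)."""
    return min(1.0, transaction["amount"] / 5_000)

def classify(transaction):
    verdict = expert_rule(transaction)
    ml_verdict = "fraud" if ml_score(transaction) > 0.5 else "ok"
    if verdict is None:
        return ml_verdict, False          # ML decides; no review needed
    return verdict, verdict != ml_verdict  # flag disagreement for a human

print(classify({"amount": 20_000, "known_customer": False}))  # ('fraud', False)
print(classify({"amount": 4_000, "known_customer": True}))    # ('ok', True): human review
print(classify({"amount": 3_000, "known_customer": False}))   # ('fraud', False)
```

The point of the sketch is the shape, not the rules: the ML component is just one building block of the algorithm, swappable for a neural net, an expert, or both.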
[Also also: This: there’s some bottom-up progress as well. Some.]
And then, when there are systems out there that one can possibly truly call ‘intelligent’, first let them spontaneously recognise the supremacy of this real piece of genius. Not even ten minutes, but worth it …!
[Yes the entrance in front can fold closed, flat…; Valencia of course]