AInterpretability

For all those who promote ‘transparent’ AI: you want induction to work in a way it never has in history. Logical fallacies prevent that; if you want to state that ‘logic has progressed’, congratulations, you have now declared your own incompetence in these matters.

Yes, there’s thoughtful content out there re interpretability, like here [hey, I noticed that Medium is becoming the prime platform for reading up on the latest AI developments in all sorts of areas and issues; maybe I should join the fray].
It even delves into ‘interpretability’. Rightfully; progress can be made in little steps there. Transparency, not so much. And it’s not ‘semantics’ because it’s semantics ..!

But the transparency wanters don’t see the difference. Because what they want is the black box explaining itself in terms humans can understand.
Which humans? Those that are subject to its outcomes? That’ll be mostly the ones in the lower “IQ” categories [yes, I know that is a ridiculous metric mostly used by IYIs]; the upper regions [qua wealth, then, mostly inherited or swindled (qua long-term societal ethics) off the others] will always pay for ‘personal’ services as they’re too decadent to act themselves. But the former are the ones already in trouble today.
Explaining: in what way? Revealing the [most often] ever-changing model used for one particular decision ..? The above article, among many, explains [duh] that this is a) difficult and b) an art even for trained humans. There is no way this will be done for all cases, always, everywhere, because that is the very opposite of what ML deployments are for.

And again, explainability is about understanding the model, which in turn is stacked upon the idea of modelling itself. Modelling was devised to understand some complex stuff in the first place, not to predict, since it was clear from the outset that either the model would always be too simple to do that, or the modelled system would already be simple enough (in a cyber(sic)netic sense) to ‘use’ outright.

And so on, and so forth.

Induction simply doesn’t work for explanations. Summaries of statistics are the most one could get out of it.
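To make that concrete, a minimal sketch (toy data, numpy only, all numbers made up by me): an inductively fitted regression ‘explains’ only in the sense of recombining summary statistics of the data, not in the sense of telling you why the data looks the way it does.

```python
# Minimal sketch, hypothetical toy data: the fitted 'explanation' is nothing
# but cross-products of the data (summary statistics), recombined.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))                              # two made-up features
y = 3.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Ordinary least squares: coefficients are a pure function of data summaries.
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)   # ~[3, -1]: a compact summary of the sample, not a causal account
```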

If 50% of your AI project is in making anything work, the other 95% is in humans steering, making decisions, managing and controlling, and interpreting the project and its results.
It’s as if each and every medical study would have to give a complete explanation, at grammar-school level, of all the statistics and (often linear) regression on the research data, over and over again. The difference being that with ML, parts of the maths involved can be proven to be nondeterministic, and/or the inverse function (which would form part of the explanation of its functioning) simply doesn’t exist or has an infinite number of solutions.
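To illustrate that last bit, a minimal sketch (a hypothetical three-input toy ‘model’, numpy only): as soon as a layer maps more inputs to fewer outputs, infinitely many different inputs produce the same output, so the inverse that a full explanation would need either doesn’t exist or has endless candidates.

```python
# Minimal sketch, hypothetical toy model: a layer that maps 3 inputs to 1
# output loses information, so outputs cannot be traced back to a unique input.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(1, 3))               # 3 inputs -> 1 output: information is lost
relu = lambda z: np.maximum(z, 0.0)       # non-injective on top of that

def tiny_model(x):
    return relu(W @ x)

x1 = np.array([1.0, 2.0, 3.0])
# Any vector in the null space of W can be added without changing the output.
null_basis = np.linalg.svd(W)[2][1:]      # two directions W cannot 'see'
x2 = x1 + 10.0 * null_basis[0]            # a very different input...

print(tiny_model(x1), tiny_model(x2))     # ...same output (up to float error)
```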

I’ll stop now. Whatever turn you’d like to take, you’ll run into fundamental issues that will slow down and stop progress. Just swallow the idea that you may be building something that is as intransparent as human thought. And that’s only the ‘rational’ part of thought, not counting the interaction with the seemingly irrational parts, with the body, and with the infinite outside world, etc.

In the end [which is near, qua this post], the question keeps rearing its head: what do you want? Over and over again, it isn’t answered properly, so no solution will fit.

Dammit. By trying to outline the scale of the problem, I already keep turning in all directions. I’ll stop now, with:

[Human Life is difficult to explain; Porto]

