Biasin’-out.

Yes, yes, there’s still discussion about biases and how ‘Ethical AI’ should be pleonastic.

1. Though it isn’t, and inherently can’t be. For one, because ‘ethics’ is (factually) about discussing and debating dilemmas ad nauseam without ever delivering a clear-cut decision, whereas human behaviour of the non-pathological kind is about something else entirely… For another, any ‘ethical’ decision is either completely and utterly individual, or (as in ⊻, exclusive-or!) some moral judgment by some other(s; usually a group with identification < 1) imposed upon the subject-not-free-agent. Certainly since, as should be the case each and every time, the context (i.e., the complete and utter physiological and mental history plus present circumstances) is individual beyond knowing, being beyond symbolic description.

2. The bum-rush to some form of ‘ethical neutrality’ is impossible for the same reason, and may also be not enough, or too much, given what is described in the piece: all training is from mere pre-AI live situations, taken one by one, but the new tools bring amplification. So the Longread isn’t just correct; its title understates the problem. Yes, the amplification mentioned may seem minute, but the subliminal programming [that takes place, as a matter of fact] described in the Longread will be persistent enough to loop back into the data and into the neural nets/algorithm(s): self-sustaining, self-increasing separation of classes, creating barriers (mostly non-explicit, but why would that matter?) of the very kind governments were created to outlaw. A toy sketch of that loop is appended at the end of this post.

3. Not that I’m happy about the above. But now what? This:

[Lift the bottom, not suppress the top, in living quality; Rotterdam]
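As promised under point 2, a minimal, purely illustrative sketch of the feedback loop, in Python. Everything in it is hypothetical: the two group rates, the 2% amplification factor, and the toy retrain() function are mine, not taken from the Longread. It only shows the dynamic: a minute per-cycle tilt, fed back into the training data, compounds geometrically.

```python
# Hypothetical sketch of the feedback loop from point 2: a model trained
# on near-neutral historical data is slightly more generous to group A
# than to group B; its decisions re-enter the training data; each
# retraining cycle inherits, and compounds, the tilt.

def retrain(rate_a: float, rate_b: float, amplification: float = 0.02):
    """One generation: the model's outputs, tilted by the current gap,
    become the next training set, nudging the learned rates apart."""
    gap = rate_a - rate_b
    return rate_a + amplification * gap, rate_b - amplification * gap

# Start almost neutral: a one-percentage-point difference between groups.
rate_a, rate_b = 0.51, 0.50

for generation in range(1, 101):
    rate_a, rate_b = retrain(rate_a, rate_b)
    if generation % 25 == 0:
        print(f"generation {generation:3d}: gap = {rate_a - rate_b:.3f}")

# The gap grows geometrically (x1.04 per cycle here), from 0.01 to
# roughly 0.5 after 100 retraining rounds: self-sustaining,
# self-increasing separation of classes.
```

Note that the sketch adds no new prejudice anywhere; the separation grows purely from outputs re-entering the data, which is exactly the understatement-of-the-problem point above.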
