Training your way out of bias

No, this is not about bias in the data that you train your “AI”, i.e. ML, on. I’ve posted (not nearly) enough about that.

It’s more about pointing to an HBR article that should be(come) very influential…
As it was already known that any ‘training’ is about as ineffective as one can get, qua e.g., security awareness transfer [posted about that already, too], this piece elucidates why: among other things, because it treats those sent to ‘training’ as a priori suspect i.e. guilty, and punishes them, both of which create counter-emotions. But read the whole thing; worth it!

Also, in regard to this earlier post: Thinking in win-lose terms … one is all that you can score [ref. Frankie gtH, Two Tribes; yes you got that or you’re n00bish like here, 10th paragraph].
This may be corrected by the above pointer. But be careful; be very careful. People resent nudging; when they find out about it, they’ll balk against the brainwashing. ‘tSeems one has to change little habits and big habits, unconscious ones and blatant ones, of every individual individually and at societal levels; all in all quite a complex/wicked problem (with this), even leaving aside the thingy about manipulating society towards some better ‘good’, which of course leads to the Utopia dystopia, if it were achievable at all. Like, any striving for an ideal society optimises its positive effects at about the inflection point, i.e., halfway through its implementation; after that, the negatives begin to outweigh the positives…

Enough for now; leaving you with:

[Don’t get defensive …! Nancy]
