Adversarial, edgy bias continuation

What a title… It’s about how weeding out edge cases and ‘adversarial’ examples [note the quotes] will make your system perform better … in test environments. But also about how this will enforce and reinforce biases.
Like:
Edge cases give results with too much noise – throw them out, please, as they distort [??] the quality of training and hence of the system-to-be. And we need to get into production.
Adversarial examples aren’t edgy; they sit in the middle [by whatever dimension(s)] of the training set. Hence they look OK, but keeping them also results in too much noise; you don’t want your confusion matrix to look awkward, right? (A toy sketch of this ‘noise removal’ follows after this list.)
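To make the first point concrete, a minimal, hypothetical sketch – the toy data, thresholds and clean_training_set() helper are assumptions for illustration, not anyone’s actual pipeline. A plain z-score ‘noise’ filter quietly deletes the entire edge-case cluster while keeping the mainstream intact:

```python
# Hypothetical illustration: a large 'mainstream' cluster plus a small edge-case
# cluster. The z-score 'noise removal' described above wipes out the latter.
import numpy as np

rng = np.random.default_rng(0)

# 950 mainstream samples around (0, 0), 50 edge-case samples around (4, 4)
mainstream = rng.normal(loc=0.0, scale=1.0, size=(950, 2))
edge_cases = rng.normal(loc=4.0, scale=0.5, size=(50, 2))
X = np.vstack([mainstream, edge_cases])

def clean_training_set(X, z_max=2.0):
    """Drop rows whose z-score exceeds z_max in any feature -- the 'noise' filter."""
    z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
    return X[(z < z_max).all(axis=1)]

X_clean = clean_training_set(X)

# How many of the 50 edge cases survived the cleaning?
edge_survivors = int(((X_clean[:, 0] > 2.5) & (X_clean[:, 1] > 2.5)).sum())
print(f"kept {len(X_clean)} of {len(X)} samples; edge-case survivors: {edge_survivors}")
# Typically prints roughly 930 kept of 1000, with 0 edge-case survivors: the whole
# minority cluster is gone, and whatever is trained on X_clean is only ever valid
# around the mainstream cluster.
```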

This leads to two turbid troubles:
Stripping out too much ‘noise’ will render your model valid in ever smaller contexts – do you tell your client? Bias may ensue, as you dumb the system down to dumb stereotypes only.
Societal effects ensue. If one is discriminated against for living in a certain neighbourhood, e.g., gentrification efforts (sic) will fail. ‘Adversarial’ newcomers will be kicked out; not by incumbents but by systems, on credit scoring and the like. On the flip side, the incumbents will not be able to ‘escape’ either.
This perpetuates the learned discrimination, or even strengthens it; somewhere here, the term ‘survivorship bias’ creeps in (a toy sketch of that feedback loop follows after this list).
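And a minimal sketch of that second trouble, again on assumed toy numbers and names (the groups, priors and score() rule are hypothetical, not from the post): repayment ability is identical in both groups, but outcomes are only ever observed for approved applicants – the ‘survivors’ – so a scorer seeded with a biased prior never collects the evidence that would correct it:

```python
# Hypothetical illustration of survivorship bias as a feedback loop in credit scoring.
import numpy as np

rng = np.random.default_rng(1)

# Both neighbourhoods repay at the same 80% rate; the groups differ only in the prior.
N = 5000
group = rng.choice(["incumbent", "newcomer"], size=N)   # postcode as proxy
repays = rng.random(N) < 0.8

# Seed bias, e.g. inherited from old redlined data: (repaid, defaulted) pseudo-counts.
counts = {"incumbent": [8, 2], "newcomer": [2, 8]}

def score(g):
    """Estimated repayment probability, learned only from observed (approved) outcomes."""
    repaid, defaulted = counts[g]
    return repaid / (repaid + defaulted)

approvals = {"incumbent": 0, "newcomer": 0}
for g, r in zip(group, repays):
    if score(g) >= 0.5:                  # approve only if the learned score clears the bar
        approvals[g] += 1
        counts[g][0 if r else 1] += 1    # outcome observed *only* for the approved

for g in ("incumbent", "newcomer"):
    print(f"{g}: score {score(g):.2f}, approved {approvals[g]} of {int(np.sum(group == g))}")
# Newcomers are never approved, so their score never moves off the biased prior:
# the learned discrimination is confirmed forever, by construction.
```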

Feel free to elucidate, or counterpoint, in comments …

And:

[These Tom, Dick and Harry wouldn’t let you in, for sure: no class. Uhm, money would fix this…; Rijks Amsterdam]
