How to drown

You will be drowned in a river that is two inches deep on average. Two inches being the societally acceptable average depth, whatever the consequences of you being put in the 50-yard-deep spot. Or you were in the one-inch spot but everyone was moved.
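
A minimal sketch of that point in Python, with invented depth numbers: the mean can sit at a societally acceptable couple of inches while the one spot you happen to be put in is lethal.

```python
import statistics

# Hypothetical depths (in inches) sampled along the river:
# almost everywhere ankle-deep, one spot 50 yards (1800 inches) deep.
depths = [1] * 999 + [1800]

print(statistics.mean(depths))  # ~2.8 inches: perfectly fine "on average"
print(max(depths))              # 1800 inches: the spot where you drown
```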

That’s how unbiasing goes. It’s still aggregate discrimination. And also this: no single data point can tell you which distribution it belongs to. Which implies, flipped around, that no statistic can tell you whether one outcome is part of it (see the sketch after this paragraph). An individual case can never be assessed to be part of some statistical population (sub)class and then be treated accordingly.
Thus obliterating any validity of statistical justice. Only (root) cause analysis may help, and then you must decide on a whim whether the cause is culpable, and even then you have to decide whether you’d want some win-lose correction [with 100% certainty backfiring later] or a win-win, which you’ll have to seek out and strife [no, not strive] for.
Implying that human analysis and thought need to be applied; good! But also with the flip side that said human is personally liable for any mistakes, for the full damage amount. No hiding behind organisation-has-personhood lies. Full liability, for the full amount of damage, direct or indirect, however unquantifiable [to be overcompensated, just to be on the safe side].
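
A sketch of that no-attribution point, assuming two made-up, overlapping normal populations: a single observed outcome can be exactly as likely under either, so no per-case assignment follows from the statistics.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution N(mu, sigma^2) at x."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

x = 1.0  # one individual outcome, halfway between two population means

# Two hypothetical (sub)populations the case might belong to.
p_a = normal_pdf(x, mu=0.0, sigma=1.0)
p_b = normal_pdf(x, mu=2.0, sigma=1.0)

print(p_a, p_b)  # both ~0.242: the data point itself decides nothing
```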

Hence, also this: the right to be forgotten; what if some accidental, individual-only characteristic tilted your training and you’ll have to redo it all over again…
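
A toy illustration of that retraining burden (all numbers invented): one atypical individual record tilts a least-squares fit, and the naive way to honour a forget-me request is to redo the whole fit without it.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
            sum((a - mx) ** 2 for a in xs)
    return slope, my - slope * mx

xs = [float(i) for i in range(20)]
ys = [2.0 * x for x in xs]          # clean trend: slope exactly 2

xs.append(19.0); ys.append(400.0)   # one accidental, atypical record

print(fit_line(xs, ys))             # slope pulled well away from 2
print(fit_line(xs[:-1], ys[:-1]))   # retrained without that record: slope 2 again
```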

To conclude about all of the above: why do ML statistics at all if you have no logical, hence no legal, grounds to use the results…?

Oh well;

[Your unbiasedness for lunch; Paleis het Loo]
