Recently, there appears to be a bit of a hubbub about the fairness that would be lacking in AI systems’ outcomes. I know, I know, that’s a vast understatement, and a major branch of professional (~ ethics) development in the AI arena in general.
To which the venerable (heh) Jad Nohra added this: A simple … but correct and large part of the puzzle. Not all of the puzzle, as e.g. this, too, is part of it [unfairness in life gets reflected, and any bias correction will be parti pris (a preconceived position) in itself, and as such very time-bound and (extremely) discriminatory when seen over more than the shortest run].
But indeed, companies are too cheap to deal with all the variety that life brings. Or is about, even. Diversity as societal riches, of (course! of) the non-monetary kind, but that’s obvious.
Now, the point of this post: Is it only cheapness of companies (and of public organisations that, through their very purpose, should’ve known better), or … doesn’t it help that we (the world) have that GDPR thing ..? Because the latter has, in Chapter II, Article 5(1)(c), that blessed remark about data minimisation (with an s, not a z, of course; a good exposé is here).
Which means that the above ‘companies’/organisations are actually doing it right … to a certain extent.
Or .. to what extent? If one were to expect that, because of some parameters, one would chance to be binned in some ‘circumspect’ category and hence denied equal treatment, one would want to fill out additional exculpatory fields or plaintext in an application. Assuming one would be allowed to do so through having the field(s) available, and possibly (sic) even processed, in the first place.
This would in itself be a case of qui s’excuse, s’accuse (he who excuses himself, accuses himself), or: where does the cycle end?
So, asking nothing superfluous – from a design perspective with efficiency in mind, with possible unfairness towards disenfranchised applicants as an externality – fits nicely with the GDPR, and does efficiently what is expected of, e.g., commercial systems that need to minimise risks overall, not just qua unfairness.
Asking about all that could potentially be relevant, either for the core purpose XOR (??) for fairness, may be a GDPR non-compliance burden on the great many that had no need to fear the biased-system monster. And it still might not achieve the unbias one seeks [also, the above/here link].
Result: Be somewhere in the middle. Aristotle’s Middle, not just halfway, you m.r.n …
Which in itself is a Question.
Transparent explainability may be a step in the right direction.
Who will testify to the correctness of the transparency provided [in times of current US ‘leadership’ qua perjury, anything might go ..?], and how will a sound and firm basis for such opinionation ;-/ be obtained? Accountants have: a. experience with comparable ‘opinionation’; b. a strictly blinkered financial focus (and experience), even with environmental reporting etc.; c. no (sic) education on, tools for, or experience with anything approaching ethical reasoning. The latter a fact, despite their possible sputtering otherwise. Ethicists won’t do either: a. they have a sense of what they’re dealing with; b. but zero (sic) practical life experience to go with it; hence c. they keep on debating grave issues without ever reaching a definitive conclusion. At least accountants can be given credit for delivering a final ‘one’-liner statement that is such a conclusion. Even if they so often (yes) miss the mark, at least they still dare to state their opinion. But hey, watch the caveats attached; in the end, they got burned so very often that they daren’t say anything anymore.
I’m digressing.
What more would we need? Transparency, either system-ically or on each individual case, only goes so far; it is an after-the-fact verdict. Balance, built in … but how? Balance, bolted on (input, system, and output sides as candidates) … but how? A sketch of those candidates follows below.
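For what it’s worth, those three ‘bolt-on’ sides map onto what the fairness-in-ML literature calls pre-processing, in-processing, and post-processing. A minimal sketch of the three hooks, where every name and number is a hypothetical illustration, not any particular library’s API:

```python
# The three 'bolt-on' candidates above, as pipeline hooks.
# Every name here is a hypothetical illustration, not a real library's API.

def preprocess(record: dict) -> dict:
    """Input side: drop proxy attributes before the model ever sees them."""
    return {k: v for k, v in record.items() if k not in {"postcode", "gender"}}

def in_process_loss(task_loss: float, fairness_gap: float, lam: float = 0.5) -> float:
    """System side: train against the task loss plus a fairness penalty term."""
    return task_loss + lam * fairness_gap

def postprocess(score: float, group: str, offsets: dict) -> bool:
    """Output side: per-group thresholds tuned to equalise acceptance rates."""
    return score >= 0.5 + offsets.get(group, 0.0)
```

Each hook answers a different “but how?”, and none of them settles which balance is the right one; that remains the Question of the previous line.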
Frameworks, rules (of thumb / yardsticks) of fairness to all stakeholders, including businesses and their managers who don’t get paid to be fair, just profitable [however unfortunate that hard fact] – multivariate game play / ‘Pareto’-like optimisation seems to be too complex to get into the heads of even the most advanced minds [with only a slight overlap with the managerial class (of which ‘executives’ are a full subset)], so we’d need a System to do Monte Carlo or other simulation … *sigh*; even more complex-to-understand systemmery.
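To make that slightly less hand-wavy: a minimal Monte Carlo sketch of what such a System might chew on, sampling candidate decision policies and keeping the ‘Pareto’-like frontier of (profit, fairness) outcomes. Everything here (thresholds, margins, the acceptance-rate gap as fairness proxy) is a toy assumption for illustration, not a recommended method:

```python
import random

random.seed(42)  # reproducible toy run

def simulate(threshold_a: float, threshold_b: float, n: int = 2_000):
    """Return (profit, fairness_gap) for one candidate policy: a per-group
    acceptance threshold. Scores, margins, groups: all toy stand-ins."""
    profit = 0.0
    accepted = {"A": 0, "B": 0}
    seen = {"A": 0, "B": 0}
    for i in range(n):
        group = "A" if i % 2 == 0 else "B"   # even split of applicants
        score = random.gauss(0.5, 0.15)      # stand-in for a model score
        seen[group] += 1
        if score >= (threshold_a if group == "A" else threshold_b):
            accepted[group] += 1
            profit += score - 0.4            # toy margin per acceptance
    gap = abs(accepted["A"] / seen["A"] - accepted["B"] / seen["B"])
    return profit, gap                       # gap as a crude (un)fairness proxy

# Sample random policies; keep those not dominated on both axes ('Pareto').
candidates = [(random.uniform(0.3, 0.7), random.uniform(0.3, 0.7))
              for _ in range(200)]
results = [(c, simulate(*c)) for c in candidates]
frontier = [r for r in results
            if not any(o[1][0] > r[1][0] and o[1][1] < r[1][1] for o in results)]

for (ta, tb), (profit, gap) in sorted(frontier, key=lambda r: r[1][1])[:5]:
    print(f"thresholds=({ta:.2f}, {tb:.2f})  profit={profit:7.1f}  gap={gap:.3f}")
```

The punchline being: even this toy version returns a frontier, not an answer; someone still has to pick a point on it, which is exactly the systemmery problem above.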
Just a suggestion: Would it help to require a two-step process for any ‘AI’- or algorithm-driven decision ..? Whereby the algo categorises first, and then the ‘victim’ gets a chance, nay, is always obliged, to elucidate pro … or contra, non-trivially but measured to the outcome. [A snag in the line of reasoning there, but not unassailable, I think.] Exculpatory data is only entered when needed; otherwise, minimised to GDPR’s desires. This, beyond the EU idea of requiring reasonably serious manual intervention in algo-driven decisions.
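Sketched in code, with hypothetical names throughout (Application, categorise, ask_applicant are illustrations of the two-step idea, not any existing API): step 1 bins on minimal data only; step 2 solicits exculpatory input exclusively from those binned adversely.

```python
from dataclasses import dataclass, field

@dataclass
class Application:
    minimal_fields: dict                             # core purpose only (GDPR)
    exculpatory: dict = field(default_factory=dict)  # stays empty unless asked

def categorise(app: Application) -> str:
    """Step 1: the algo bins on minimal data (a toy rule, for illustration)."""
    return "adverse" if app.minimal_fields.get("score", 0.0) < 0.5 else "favourable"

def two_step_decide(app: Application, ask_applicant) -> str:
    verdict = categorise(app)
    if verdict == "favourable":
        return verdict                   # no superfluous data ever collected
    # Step 2: only now solicit exculpatory context, measured to the outcome,
    # and re-evaluate (a trivial override rule stands in for the review here).
    app.exculpatory = ask_applicant(
        "Please explain circumstances relevant to this outcome.")
    if app.exculpatory.get("override_justified"):
        return "favourable (after elucidation)"
    return "adverse (after elucidation)"

# Usage: the ask_applicant callback stands in for the elucidation step.
app = Application(minimal_fields={"score": 0.4})
print(two_step_decide(app, lambda prompt: {"override_justified": True}))
```

Note the design choice the sketch makes explicit: the favourable path never touches extra data at all, which is where the GDPR-minimisation fit comes from.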
Whatever. The two-step idea, at least.
I’ll go abend handler now, with:
[The Whatsitcalled building; München]