From the scares of ‘AI’ HR algorithms that result in biased outcomes … some musings; your (constructive ..!) comments appreciated:
Biases (i.e., errors!) in the input will of course result in biases in the outcomes. Overstretched classification statistics (which is what the ‘algorithms’ mainly do, all too often) will of course lead to improper (biased) outcomes; a point sketched in the toy example below.
Those can be solved. With some difficulty, as here.
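To make that first point a bit more tangible, a minimal sketch in Python — with entirely made-up data and a hypothetical `historical_decision` rule, so purely illustrative and not anyone’s actual system: a classifier trained on biased historical hiring decisions will faithfully reproduce that bias, however ‘accurate’ it is on its own terms.

```python
# Toy illustration: the bias sits in the historical labels, and a model
# fitted to them simply learns it back. (All data here is fabricated.)
import random
from sklearn.tree import DecisionTreeClassifier

random.seed(42)

def historical_decision(skill, group):
    # Hypothetical biased past practice: group "B" needed a higher
    # skill score to get hired.
    threshold = 0.5 if group == "A" else 0.7
    return int(skill > threshold)

rows, labels = [], []
for _ in range(5000):
    skill = random.random()
    group = random.choice(["A", "B"])
    rows.append([skill, 1 if group == "A" else 0])
    labels.append(historical_decision(skill, group))

model = DecisionTreeClassifier(max_depth=3).fit(rows, labels)

# Identical skill, different group: the model has dutifully learned the
# biased rule from its inputs, so the predictions differ.
print(model.predict([[0.6, 1], [0.6, 0]]))  # typically [1 0]
```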
But, apart from the obvious misses (link, et al.; more have been known, in particular in the field of credit scoring), what if the algorithm is 100.0% unbiased and it finds socially-unwanted outcomes ..?
To use the above miss as an example (not because I believe this might be the case, but only as an example that misogynists might use …): What if it turns out there are hidden traits in some gender’s workers that result in their eventual performance lagging behind other candidates’ ..? E.g., possibly becoming pregnant may be picked up as an indicator of future maternity leave, leading to productivity losses, possible costs of hiring temp replacements, flatter experience-growth curves, etc. No, this shouldn’t be allowed to be factored in – although the company that prunes its system to not let this count will get no return for its consideration, which makes it less financially competitive than its cheating competitors, of which there are still quite a lot out there; and in the end, only the financials count. Don’t cheat yourself into believing otherwise.
But when one starts tweaking away the unwanted outcomes, where does one end? And who checks that the tweaking is correct and unbiased, and, with continued-learning systems, that it does not creep back in (and at what levels of deviation will one re-balance)? Who checks that the inputs are correct anyway, both for training and in future use? Because on hard requirements, the biases can be implicit but hard, like age biases. Any system will pick up that ‘3 to 5 years experience’ means that all seriously experienced, extremely fast and efficient, and probably highly motivated, loyal and non-career-chasing workers over 40 will not be considered; see the sketch below. As human selectors already do now (which is outright illegal, but what do you do), often working in a much more inflexibly algorithmic fashion than the better AI systems.
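A minimal sketch of that last point, with entirely made-up candidates: the seemingly neutral experience bracket never mentions age, yet works as an age filter.

```python
# Purely illustrative: a hard "3 to 5 years experience" requirement
# acting as an implicit age filter. Candidates are fabricated.
candidates = [
    {"name": "junior",     "age": 27, "years_experience": 4},
    {"name": "mid-career", "age": 38, "years_experience": 12},
    {"name": "senior",     "age": 52, "years_experience": 25},
]

def meets_requirement(candidate):
    # The stated, seemingly neutral rule...
    return 3 <= candidate["years_experience"] <= 5

shortlist = [c["name"] for c in candidates if meets_requirement(c)]
print(shortlist)  # -> ['junior']
# ...in effect removes everyone whose experience (and hence, usually,
# age) exceeds the bracket, without 'age' ever appearing in the rule.
```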
Tweaking the inputs will result in ineffective systems, and the whole exercise was to find ‘rules’ that were not easily determinable, right? When you start down this road, your system will deliver whatever you want it to, not necessarily unbiased results. And probably very intransparent ones.
Tweaking the system or the outputs means introducing biases of your own, which are hard to control. Hence, probably, very intransparent as well (see the sketch below).
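For illustration only, again with fabricated data: even the seemingly safest input tweak, dropping the protected attribute altogether, tends not to remove the unwanted outcome, because some correlated proxy column (here, a hypothetical career-gap flag) lets the system rediscover it, and less visibly at that.

```python
# Sketch of the proxy problem: the protected attribute is dropped from
# the inputs, but a correlated proxy remains. (All data fabricated.)
import random
from sklearn.tree import DecisionTreeClassifier

random.seed(0)

rows, labels = [], []
for _ in range(5000):
    group = random.choice([0, 1])
    # Hypothetical proxy, e.g. a career-gap flag, correlating ~90%
    # with the protected group but left in the "cleaned" inputs.
    proxy = group if random.random() < 0.9 else 1 - group
    skill = random.random()
    # The biased historical label depends on the protected group...
    label = int(skill > (0.5 if group == 0 else 0.7))
    # ...while the training data only contains skill and the proxy.
    rows.append([skill, proxy])
    labels.append(label)

model = DecisionTreeClassifier(max_depth=3).fit(rows, labels)

# Same skill, different proxy value: the bias crept back in through
# the proxy, just less visibly.
print(model.predict([[0.6, 0], [0.6, 1]]))  # typically [1 0]
```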
Tweaking doesn’t seem like a path to follow. Strange. (..?)
And how does one get societally-wanted biases in, like quotas? They are unfair to others – how can I help that, by accident of birth, I am a white male!? Should I be punished for that by being considered less than others? Because that is what happens, as a matter of fact, and if you deny that, you’re simply not competent to discuss the whole subject. That I am over 50 and a white male should not be allowed to be Bad Luck, or you’re much, very much worse than your average misogynist, you ageist, sexist racist ..!
Now, can we first change cultures, one person at a time, and then train systems to mimic/outdo humans ..? That’d be great.
Edited to add: Marc Teerlink’s view on things (in Dutch no less – how much Intelligence is needed to understand that? Give Google Translate a spin and … not much, I fear).
And so is:
[For 10 points, comment on the femininity of the curves but also the masculinity of the intended imposing posture; Amsterdam – yes, I mean the building, or are you actively searching for the wrong details?]