Accountants’ morale: Don’t let them near a daycare facility

OK, that title may have been a rather cheap bait, but you took it.
As is now being discussed, the morale of accountants (here in NL – one can guess that elsewhere the same goes, to varying degrees) is under scrutiny, after time and time again the non-performance of proper, straight-backed rather than spineless financial audit work leads to upheaval over, among other things, partner fees – with underling fees in tow, too – versus morality.

The comparison with daycare timeliness morality, so elegantly explained here, is obvious – if you want to see it.
Like, when the decision was: ‘I work for the general public, so I make sure to serve their best interest. I will be impervious to “client’s” (quod non) pressures – do I sign or not’, the outcome was as intended by societal structures. Just have a look back at agency theory.
When now the decision is: ‘The “client” is the one who pays me. I decide in their favour, of course. The golden rule: whoever has the gold, makes the rules’, the outcome is as intended by the principals. Only, the role of principal has factually passed to the very agents that were to be checked upon, to be kept on a leash.

And when the money is that good – very much literally, like the title of this one; yes, feel the empowerment of the opening part – there indeed is no return to anything like moral value and being a firm pillar of the establishment through one’s qualities, not one’s wallet. Worth oppressing the underlings for, right?

Should be locked up, that lot. And:

[If only they’d go to the opera more, they’d learn about morality. Valencia]

Overeffectively fair and transparent

From the scares of ‘AI’ HR algorithms that result in biased outcomes … some musings; your (constructive ..!) comments appreciated:

Biases (i.e., errors!) in the input will of course result in biases in the outcomes. Overstretched classification statistics (which is what the ‘algorithms’ mainly do, all too often) will of course lead to improper (biased) outcomes.
Those can be solved. With some difficulty, as here.
But, apart from the obvious misses (et al.; more have been known, in particular in the field of credit scoring), what if the algorithm is 100.0% unbiased and it finds socially unwanted outcomes ..?

To use the above miss as an example (not because I believe this might be the case, but only as an example that misogynists might use …): what if it turns out there are hidden traits in some gender’s workers that result in their eventual performance lagging compared to other candidates’ ..? E.g., possibly becoming pregnant may be picked up as an indicator of future maternity leave, leading to productivity losses, possible costs of hiring temp replacements, flatter experience growth curves, etc. No, this shouldn’t be allowed to be factored in – although the company that prunes its system to not let this count will get no return for its consideration, which makes it less financially competitive than its cheating competitors, of which quite a lot are still out there; and in the end, only the financials count. Don’t cheat yourself into believing otherwise.
But when one starts tweaking away the unwanted outcomes, where does one end? And who checks that the tweaking is correct and unbiased, and that, with continued-learning systems, the bias does not creep back in (and at what levels of deviation will one re-balance)? Who checks that the inputs are correct anyway, both for training and in future use? Because, in hard requirements the biases can be implicit but hard, like age biases. Any system will pick up that ‘3 to 5 years experience’ means that all seriously experienced, extremely fast and efficient, and probably highly motivated, loyal, non-career-chasing workers over 40 will not be considered. As is done by human selectors now (which is outright illegal, but what do you do), often working in a much more inflexibly algorithmic fashion than the better AI systems.
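The ‘3 to 5 years experience’ point can be shown in a few lines. A minimal sketch, with entirely made-up candidate data and hypothetical field names: the rule never looks at age, yet because experience tracks age, the selection rate of the over-40s comes out at zero.

```python
# Toy illustration (synthetic data, hypothetical field names): an
# ostensibly age-neutral hard requirement that in practice filters
# out every candidate over 40, because experience correlates with age.

candidates = [
    {"age": 28, "years_exp": 4},
    {"age": 30, "years_exp": 3},
    {"age": 33, "years_exp": 5},
    {"age": 41, "years_exp": 15},
    {"age": 47, "years_exp": 20},
    {"age": 52, "years_exp": 25},
]

def passes_requirement(c):
    # The 'neutral' rule: 3 to 5 years of experience, exactly.
    return 3 <= c["years_exp"] <= 5

def selection_rate(group):
    return sum(passes_requirement(c) for c in group) / len(group)

under_40 = [c for c in candidates if c["age"] < 40]
over_40 = [c for c in candidates if c["age"] >= 40]

print("selection rate under 40:", selection_rate(under_40))  # 1.0
print("selection rate 40+:    ", selection_rate(over_40))    # 0.0
```

No ‘age’ column needs to be fed to any system for this to happen; the bias rides in on the requirement itself.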

Tweaking the inputs will result in ineffective systems – and the whole exercise was to find ‘rules’ that were not easily determinable, right? Once you start down this road, your system will deliver whatever you want it to, not necessarily anything unbiased. And probably very intransparent.
Tweaking the system or the outputs is introducing biases of your own, badly controllable. Hence, probably, very intransparent.

Tweaking doesn’t seem like a path to follow. Strange. (..?)

And how does one get societally wanted biases in, like quota? They are unfair to others – how can I help that, by accident, I was born a white male!? Should I be punished for that by being considered less than others? Because that is what happens, for a fact, and if you deny that, you’re just incompetent to discuss the whole subject. That I am over 50 and a white male should not be allowed to be Bad Luck, or you’re much, very very much worse than your average misogynist, you ageist sexist racist ..!

Now, can we first change cultures, one person at a time, and then train systems to mimic/outdo humans ..? That’d be great.

Edited to add: Marc Teerlink’s view on things (in Dutch no less – how much Intelligence is needed to understand that? Give Google Translate a spin and … not much I fear).

And so is:

[For 10 points, comment on the femininity of the curves but also the masculinity of the intended imposing posture; Amsterdam – yes, I mean the building, or are you actively searching for the wrong details?]

The faster meme (fake news)

So… What if fake news is a new genus / family / order / class / phylum of hyperfast-mutating and -spreading virus-like sound (?) bites/bytes – or, more accurately, the same of memes ..?

Like in: Globally hyperconnected socmed environments, intermeshed. If that is a word.
This would open up a better understanding of how they operate, and of how they can be countered. E.g., quarantine doesn’t work, as they are out there spreading before being detected (anywhere), and remedies will not work since any cure may come too late – victims have become immune to the antidote, and mutations will have overcome the cure before it’s spread widely enough. Immunisation, e.g. by inoculation, may also not work, as one is unsure what would actually confer immunity, given the high speed of mutations. And one cannot block (all) the benign that could become virulent and malign, as that would shut down the whole ecosystem.
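The detection-lag argument can be made quantitative with a toy model. All parameters below are assumptions for illustration, not measurements: each carrier reaches r others per step, detection happens once a threshold number of carriers exists, and counter-measures take a few more steps to mount – by which time a large fraction of the population has already been reached.

```python
# Toy model of meme spread vs. detection lag (assumed parameters
# throughout): exponential spread, with detection triggered at a
# carrier threshold and a reaction arriving `reaction_lag` steps later.

def meme_spread(pop=1_000_000, r=5, detect_at=1_000, reaction_lag=3):
    history = [1]                       # carriers at each step
    while history[-1] < pop:
        history.append(min(pop, history[-1] * r))
    detected = next(i for i, n in enumerate(history) if n >= detect_at)
    acted = min(detected + reaction_lag, len(history) - 1)
    # Fractions of the population reached at detection and at reaction
    return history[detected] / pop, history[acted] / pop

at_detection, at_reaction = meme_spread()
print(f"reached at detection: {at_detection:.1%}")  # 0.3%
print(f"reached at reaction:  {at_reaction:.1%}")   # 39.1%
```

Quarantine the 0.3% at detection and the cordon is already hundreds of times too small by the time it is in place.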

Now what ..?

And, for the time being, as long as you’re not zombified yet:

[Yeah, man! Unedited pic from Toronto, where apparently the good stuff is to be found at a random street corner …?]

Accountancy in error

Since all that the big data tools have delivered is, mostly, visualisation tool’lets, and superfluous commas – though, truth be told, I am an Oxfordian+ comma fan – …
Wouldn’t it be nice if accountants (of the ‘external’, certifying type and stripe) were able and allowed to give not just one vague, all too vague, ill-understood indicator of reliability ..? Like, not just ‘reasonable’ [for those needing clarification, I suggest this] or not. With all the caveats swept under the rug.
Wouldn’t it be possible to deliver a graph, with x being the error margin still ‘possible’ (either absolute, or qua significance/materiality threshold) and y being the chance of such an error still being in the accounts, somewhere? Delivering, hopefully, some form of sigmoid picture. Wouldn’t that be much more informative than just the near-always Pass verdict?
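For what such a graph could look like: a minimal sketch under one assumed model – the undetected error taken as normally distributed around zero with a sampling-based sigma, which is my modelling choice for illustration, not audit doctrine – where y is the chance that an error larger than x is still in the accounts somewhere.

```python
import math

# Sketch of the proposed graph, under an assumed model: undetected
# error E ~ Normal(0, sigma), sigma estimated from audit sampling.
# y(x) = P(|E| > x): the chance an error of at least size x is
# still in the accounts somewhere.

def chance_of_remaining_error(x, sigma):
    return math.erfc(x / (sigma * math.sqrt(2)))

sigma = 100_000  # assumed sampling-based estimate, in EUR
for x in (0, 50_000, 100_000, 200_000, 400_000):
    print(f"error > {x:>7,}: chance {chance_of_remaining_error(x, sigma):.4f}")
```

Feed the (x, y) pairs to any plotting tool and the hoped-for decaying S-curve appears: certainty 1.0 at x = 0, sliding to near-zero well past materiality – rather more informative than a single Pass.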

Just musing. Plus:

[Downtown Waiblingen – varied, fun. The Optik being Binder, not blind(er); no accountancy there…]