The Dutch Central Bank (DNB) has released a discussion paper on general principles for AI in finance.
Oh my. What a great attempt it seemed. But even on a first read, before any serious study, one finds … the usual suspect #fails.
Like the decades-old big error of non-orthogonality (remember the Basel-II big-time #fail in Ops Risk?). The principles have been grouped to backronym into SAFEST: Soundness, Accountability, Fairness, Ethics, Skills, Transparency. See? Already there one can debate ad infinitum – and yes, I suggest that those who want to take the DNB paper seriously do so, and leave the rest of the world to get on with it without being bothered by the ad … nauseam.
1) Ensure general compliance with regulatory obligations regarding AI applications.
2) Mitigate financial (and other relevant prudential) risks in the development and use of AI applications.
3) Pay special attention to the mitigation of model risk for material AI applications.
4) Safeguard and improve the quality of data used by AI applications.
5) Be in control of (the correct functioning of) procured and/or outsourced AI applications.
Nothing here needs discussion; it has all been generic practice for decades. So, if you feel the need to point it out, the suggestion is that this hadn't been arranged for the trillions of LoC that now define the finance industry [humans are just cogs in the machine who may turn a little screw on the production lines here and there, but preferably not too much, since they have long been known to be #fail factor number one by a huge margin]. Oh dear.
And if there were a reason to reiterate this now rather than always: what, exactly, is different about AI systems?
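To underline how generic principle 4 is: the data-quality gate it asks for has been standard engineering practice for decades, with or without "AI". A minimal sketch of such a gate – all field names and thresholds are hypothetical illustrations, not taken from the DNB paper:

```python
# Minimal data-quality gate for inputs to a scoring model.
# All field names and thresholds are illustrative, not from the DNB paper.

EXPECTED_FIELDS = {"customer_id": str, "income": float, "loan_amount": float}

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality issues found in one input record."""
    issues = []
    for field, ftype in EXPECTED_FIELDS.items():
        if field not in record:
            issues.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            issues.append(f"wrong type for {field}: {type(record[field]).__name__}")
    # Simple range/sanity checks -- the kind of control that predates "AI".
    if isinstance(record.get("income"), float) and record["income"] < 0:
        issues.append("negative income")
    if isinstance(record.get("loan_amount"), float) and record["loan_amount"] <= 0:
        issues.append("non-positive loan amount")
    return issues

def validate_batch(records: list[dict]) -> dict:
    """Summarise issues over a batch; a run would be blocked above some threshold."""
    flagged = {i: validate_record(r) for i, r in enumerate(records)}
    flagged = {i: v for i, v in flagged.items() if v}
    return {"total": len(records), "flagged": len(flagged), "issues": flagged}
```

Nothing AI-specific in there – which is precisely the point.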
6) Assign final accountability for AI applications and the management of associated risks clearly at the board of directors level.
7) Integrate accountability in the organisation’s risk management framework.
8) Operationalise accountability with regard to external stakeholders.
The same again. But presuming the old three-lines-of-defence (3LoD) thinking still holds sway at the paper's issuer, one had better first improve that into something of above-zero relevance or effectiveness.
9) Define and operationalise the concept of fairness in relation to your AI applications.
10) Review (the outcomes of) AI applications for unintentional bias.
Notice how thin the ice suddenly becomes when AI specifics come into play… The "concept of fairness", you say? You mean this pointer to inherent (sic) inconclusiveness? What happens when fairness to the shareholders prevails? Not much, eh? So this one lets finance corporations off the hook for whatever society thinks. A superfluous principle, then, given today's finance corporations' practices.
Unintentional bias? Good. Intentional bias is in, then. Same conclusion. The idea that a human in the loop would do any good, or have the slightest modicum of effectiveness (towards what??), has been debunked at such length that it seems the authors missed the past thirty years of discussion on this.
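To show how little the principle actually pins down: even the most basic operationalisation of "fairness", such as a disparate-impact check under the four-fifths rule, requires choices – which groups, which outcome, which threshold – that the paper leaves entirely to the institution. A minimal sketch, with hypothetical group labels and the conventional 0.8 threshold:

```python
# Disparate-impact check (the "four-fifths rule") on model outcomes.
# Group labels, outcomes, and the 0.8 threshold are illustrative choices --
# exactly the choices the DNB principle leaves open.

def disparate_impact(outcomes: list[tuple[str, int]], group_a: str, group_b: str) -> float:
    """Ratio of positive-outcome rates between two groups (1.0 = parity)."""
    def rate(group: str) -> float:
        decisions = [o for g, o in outcomes if g == group]
        return sum(decisions) / len(decisions)
    return rate(group_a) / rate(group_b)

def passes_four_fifths(outcomes, group_a, group_b, threshold: float = 0.8) -> bool:
    """True if the disadvantaged group's positive rate is at least 80% of the
    advantaged group's -- one (contestable) notion of fairness among many."""
    ratio = disparate_impact(outcomes, group_a, group_b)
    return min(ratio, 1 / ratio) >= threshold
```

Which metric to use, and whether passing it makes anything "fair", is precisely the unresolved debate the paper waves away.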
11) Specify objectives, standards, and requirements in an ethical code, to guide the adoption and application of AI.
12) Align the (outcome of) AI applications with your organisation’s legal obligations, values and principles.
Hahahahaha!!! Ethics in finance …!!! Given the complete insult that is the "bankers' oath", which would earn an F- if proposed by any undergrad freshman, how can one truly believe an ethical code might be worth the paper/digits it's written on/in? Wouldn't anyone proposing such a thing (in effect, the board of the regulator, which is personally accountable) be forced to step down for demonstrated lack of competence?
And the alignment is easy, once one sees that "anything that is not explicitly illegal" will be pursued according to the values and principles of bonus maximisation through short-term profit maximisation. No, that is still a fact. If you don't see that, refer to the previous point.
13) Ensure that senior management has a suitable understanding of AI (in relation to their roles and responsibilities).
14) Train risk management and compliance personnel in AI.
15) Develop awareness and understanding of AI within your organisation.
Again, nothing new. And again, this has never happened anywhere in the finance industry. In particular 13) … see 11)–12).
16) Be transparent about your policy and decisions regarding the adoption and use of AI internally.
17) Advance traceability and explainability of AI driven decisions and model outcomes.
Ah, finally, things get interesting. Note the aspirational explanation in the discussion paper. But it leaves everything one would actually want to discuss unaddressed, empty of proposals or lines of thought.
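For instance, "advance traceability" could concretely mean: log every model decision with enough context to reconstruct and audit it later, tamper-evidently. A minimal sketch of such a decision log – the record layout is a hypothetical example of mine, since the paper offers none:

```python
# Minimal traceability sketch: log each model decision with enough context
# to reconstruct it later. The record layout is a hypothetical illustration
# of what "advance traceability" could mean; the DNB paper proposes none.

import hashlib
import json
import time

def log_decision(model_version: str, features: dict, score: float, decision: str) -> dict:
    """Build an audit-log record for one model decision."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "features": features,
        "score": score,
        "decision": decision,
    }
    # Content hash over the decision-relevant fields (timestamp excluded) makes
    # silent after-the-fact edits to the log detectable.
    payload = json.dumps(
        {k: record[k] for k in ("model_version", "features", "score", "decision")},
        sort_keys=True,
    )
    record["hash"] = hashlib.sha256(payload.encode()).hexdigest()
    return record
```

Explainability of the score itself (why 0.42 and not 0.12?) is the genuinely hard part, and exactly where the paper stays silent.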
Which is why the discussion paper is a version 0.3 at best. Almost all of it is a summary of how things should have been for decades [remember, banks were early adopters of 'computers'] but apparently (and known from first-hand practice) weren't and aren't, with only a minimal tail of actual discussion items. If this were meant merely to launch the backronym, a one-pager would have sufficed.
Oh well. Plus this:
[Now that’s principle(d) design; Utrecht]
As if anyone would need it – society has run on the implicit contrat social for centuries and longer. If one needs a specific oath, something terribly specific must be at stake, with specific implications as well, e.g., the military oath. I have pledged the officers' oath myself, and would be very severely insulted by the suggestion that I had laid it aside like a coat upon leaving active service to the constitution.