How much an architect designs foundations

… Like, not very much.
An architect does not give a hoot about how the foundation of a new office block looks — have a look at any underground office parking garage and see how much effort the starchitect put into that [exceptions noted]. Yes, functionality-wise it’s all good, mostly, but an architect who wants to design a Statement building will hardly use all his (?) pastel sketch skills on the form of the piles driven into the ground, eh? As long as there’s functional bedrock to build Beauty on, in the eyes of the beholder [i.e., infatuated critic(s), not so much the general public sometimes] and of the architect’s bill, not much thought is given to foundations.

The same [we’re coming to the pointe of this post yes finally] goes for information security.
From a strategic perspective, operational risks are mundane things to be managed i.e., controlled, subdued, not sexy or attention-worthy (!); don’t bother me with the details. Get it fixed period
From an ORM perspective, the same goes down (sic) towards IRM/infosec: Don’t bother me with the details: Get it fixed semicolon cheaply. I don’t want to hear complaints how difficult it all is and how much budget you are short. You don’t need to do anything beautiful or clever, just pour concrete. I want to do whatever business frivolity I get into my head [rather: some ‘boardroom consultants’ hah the oxymoron even without the oxy, often, mess with / push into my head]. Why should I care for the foundation?

You get it now, I guess.
So:

[One lives on the inside but still, wants to have a nice look onto it from the outside, too, sometimes, no? A what about water-proofing your … floating in sometimes troubled waters? Amsterdam Omval area]

Dreaming of oversight, AI version: the wrong dream.

The Dutch Central Bank released a discussion paper on general principles. On AI in Finance. Here.
Oh my. What a great attempt it seemed. But by simply reading it, even before studying it, one finds … the usual suspect #fails.

Like, the decades-old big Error of non-orthogonality (Basel-II big-time #fail in Ops Risk, remember?). The principles have been grouped to backronym into ‘SAFEST’: Soundness, Accountability, Fairness, Ethics, Skills, Transparency. See? Already there, one can debate ad infinitum – and yes, I suggest to those who want to take the DNB paper seriously to do so, and leave the rest of the world to get on with it w/o being bothered by the ad … nauseam.

Soundness:
1) Ensure general compliance with regulatory obligations regarding AI applications.
2) Mitigate financial (and other relevant prudential) risks in the development and use of AI applications.
3) Pay special attention to the mitigation of model risk for material AI applications.
4) Safeguard and improve the quality of data used by AI applications.
5) Be in control of (the correct functioning of) procured and/or outsourced AI applications.
Nothing here that needs discussion, as it’s all been generic for decades. So, if you’d need to point this out, the suggestion is that this hadn’t been arranged for all the trillions of LoC now defining the finance industry [humans are just cogs in the machine that may turn a little screw on the production lines here and there, but preferably not too much, since they have been known to be #fail factor #1 by a huge margin]. Oh dear.
And, if there were a reason to reiterate now instead of always: what is different with AI systems ..?

Accountability:
6) Assign final accountability for AI applications and the management of associated risks clearly at the board of directors level.
7) Integrate accountability in the organisation’s risk management framework.
8) Operationalise accountability with regard to external stakeholders.
The same. But presuming the old 3LoD thinking still holds sway at the paper’s issuer, one had better first improve that into something of above-zero relevance or effectiveness.

Fairness:
9) Define and operationalise the concept of fairness in relation to your AI applications.
10) Review (the outcomes of) AI applications for unintentional bias.
Notice how thin the ice suddenly becomes when AI-specifics come into play… The “concept of fairness” you say ..? You mean, this pointer to inherent (sic) inconclusiveness ..? What, when the fairness to the shareholders prevails? Not much, eh? So this one leaves the finance corp.s off the hook for whatever society thinks. A superfluous principle, then, given today’s finance corp.s’ practices.
Unintentional bias? Good. Intentional bias is in, then. Same conclusion. The idea that a human in the loop would be any good, or have the slightest modicum of effectiveness (towards what??), has been debunked at such length already that it seems the authors missed the past thirty years of discussions on this.

Ethics:
11) Specify objectives, standards, and requirements in an ethical code, to guide the adoption and application of AI.
12) Align the (outcome of) AI applications with your organisation’s legal obligations, values and principles.
Hahahahaha !!! Ethics in finance …!!! Given the complete insult[1] of the “bankers’ oath”, which would get an F- grade if proposed by any undergrad freshman, how can one truly believe an ethical code might be worth the paper / digits it’s written on/in ..!? Wouldn’t anyone proposing such a thing (in effect, the Board of the regulator, which is personally accountable) be forced to step down for demonstrated lack of competence?
And the alignment is easy, once one sees that “anything that’s not explicitly illegal” will be pursued, per the values and principles of bonus maximisation through short-term profit maximisation. No, that is a fact, still. If you don’t see that, refer to the previous.

Skills:
13) Ensure that senior management has a suitable understanding of AI (in relation to their roles and responsibilities).
14) Train risk management and compliance personnel in AI.
15) Develop awareness and understanding of AI within your organisation.
Again, nothing new. Again, this hasn’t happened anywhere in the finance industry ever. In particular 13) … See 11)-12).

Transparency:
16) Be transparent about your policy and decisions regarding the adoption and use of AI internally.
17) Advance traceability and explainability of AI driven decisions and model outcomes.
Ah, finally, things get interesting. Note the aspirational explanation in the discussion paper. But it leaves everything one would actually want to discuss out in the open, devoid of proposals or lines of thought.

Which is why the discussion paper is a 0.3 version at best. Almost all of it is a summary of how things should have been for decades [remember, banks were early adopters of ‘computers’] but apparently (and known from first-hand practice) weren’t and aren’t, with a minimal tail of actual discussion items. If this were meant to just launch the backronym, one should’ve used a one-pager.

Oh well. Plus this:
Your neighbour's design
[Now that’s principle(d) design; Utrecht]

[1] As if anyone would need it – society has run on the implicit contrat social for centuries and longer. If one needs a specific oath, something terribly specific would have to be the case, with specific implications as well. E.g., the military oath. I have pledged the officers’ oath already, and would be very severely insulted at the suggestion that I had laid it aside like a coat upon leaving active service to the constitution.

As-a-CPA-Service

When the economy is moving to platforms as the core structure, can we do e.g., AI / data analysis as a service, white labelling some stuff that all accountants will have to go through, to get State-of-the-Art to them ..?

This could go in all sorts of directions, but preferably the one with loosely (sic) coupled systems [in the small-scope sense, not the cybernetics sense], one or a couple for every stage of the audit process. E.g., for the Know-Your-Customer Deep Dive stage (after the KYC before acquiring a client, and the acquiring) have some process analysis tools; for the risk analysis / audit planning have … [I don’t see too much in this plane yet qua tooling!]; for the data-oriented (all-data checking) have the MindBridges et al. for the crunching. The latter pointing to a. there being nothing yet qua overarching workflow management tools [No, not the standard workflow things; I mean auto/AI-gated stuff far beyond the triviality of ‘RPA‘ (UiPath and others), which is just handy but trivial automation; unsure whether this is a step ahead already], b. the intricacies of learning / feedback loops year over year.
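To make the ‘one loosely coupled system per audit stage’ idea a bit more tangible: a minimal sketch in Python, where every stage name, dossier field, and the gate logic are purely illustrative assumptions of mine — none of this reflects any existing audit tool’s API:

```python
from typing import Callable

# Hypothetical audit pipeline: each stage is an independently swappable
# service, coupled only through the shared 'dossier' dict it reads and extends.
Stage = Callable[[dict], dict]

def kyc_deep_dive(dossier: dict) -> dict:
    # stand-in for the KYC Deep Dive stage's process analysis tooling
    dossier["kyc_ok"] = dossier.get("client_known", False)
    return dossier

def risk_analysis(dossier: dict) -> dict:
    # stand-in for the risk analysis / audit planning stage
    dossier["risk"] = "high" if not dossier["kyc_ok"] else "normal"
    return dossier

def data_crunch(dossier: dict) -> dict:
    # stand-in for the MindBridge-style all-data checking stage
    dossier["anomalies"] = 0
    return dossier

def run_pipeline(stages: list[Stage], dossier: dict) -> dict:
    for stage in stages:
        dossier = stage(dossier)
        # the 'auto/AI-gated' part: a gate may halt between any two stages
        if dossier.get("risk") == "high":
            dossier["halted"] = True
            break
    return dossier

result = run_pipeline([kyc_deep_dive, risk_analysis, data_crunch],
                      {"client_known": True})
```

The point of the sketch is only the shape: stages share nothing but the dossier, so any one of them can be swapped for a white-labelled cloud service without touching the others.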

And then, all this should, could be white-labelled up in the Cloud somewhere (in the EU) of course; ease of maintenance and interfacing.

But heck, this is just dropping the idea, right?

[Drafted 22 May, so in the mean time there may have been some developments. Hope so.]

And:

[But for this, no cloud-based thing suffices, you must do a site visit to check on quality; Valle dell’Acate]

Naming conventions

Have naming conventions already sprung up around home eavesdropping listening devices ..?
Since ‘Alexa’ and ‘Siri’ may be too boring.

Then I would want to call it Computer, since then I’d be able to say ‘OK Computer’ all the time. Not even for this. But certainly for this, which one would get a lot of when one used its (‘her’? oh, this may be taken to expect ‘his’) ‘services’ too often / too broadly.

I’ll stop now. For the vrijmibo – Friday’s Afternoon Drinks.

What ..? Now already? Yes; one’s taste buds are best in the second half of the morning. Later, your taste will develop in correlation with the number of drinks you can remember having had. Also, I checked where it is 17:00h now on this site and got confirmation.

And:

[Off-‘Broadway’ Rafael Masó i Valentí, Farinera Texidor, Carrer Santa Eugènia 15, Girona though officially at Passatge de la Farinera Teixidor, 4]

Ethics for lunch

When strategy is culture’s breakfast, your ethics are its lunch – you’re so much more biased by very accidental culture than you think.

Those involved in anti-biasing AI, tend to belong to a very small group of ‘first world’ affluent (sic) secondary-productive counterweights.
Very small group, since a minute fraction of the Western world pop, let alone the latter already being vastly outnumbered by the rest;
First-world, since very bound up in ‘Western’ countries;
Affluent, since apparently with (mental) time on their hands to be vocal about it, and to travel the world (by plane, almost always) to be/speak at seminars and conferences…;
Secondary-productive, since a. with time on one’s hands, b. living off others’ productive lives;
Counterweights, since set contra societal developments. [Also: re secondary]

As it was with the ‘Universal’ human rights [revelation: they weren’t, and aren’t; they’re still cultural imperialism!], so it is now with the downsides of e.g., AI deployment.

No, I’m not advocating to not listen.
No, I’m not advocating to ignore or silence / fight back against them.
No, I am for balanced views, which hinge not on win/lose but on win/win. If you are still on the win/lose path(s), you represent, are, Old thinking and need to change yourself first before communicating with others; otherwise it’s only veneer that you want to shove onto everyone, not real content.


And, at an angle but far from a perpendicular one, ethics are swamped by more immediate needs. Noticed that in many AI discussions it turns out that humans have, all of their existence and the aeons before, survived only by ‘good enough’ pattern inference, the striving for perfection having been eradicated biologically (Evolution) as too expensive to survive ..?
Real people, like your parents, live in a world where culture, education (narrow and wide sense!), religion (ritualised wisdom), environment (‘local’ climate-wise and societally) and the course of life have made them what they are. All individuals, with a staggering complexity of reflexes and instincts plus a minute layer of conscious behaviour.

When you badger on about ‘ethics’, you’re only us-them’ing them, near-certainly in a very bad, caricatural way; also perpetrating the very crimes you accuse them of (almost exclusively: falsely). If you’re in your bubble’let, you think yourself the majority. Remember Twain’s quote: Whenever you find yourself on the side of the majority, it is time to pause and reflect.


Also, how do you un-bias when today’s biases are tomorrow’s mainstream ethics? Don’t think that will not happen. It already does. How are you sure your current-day ethics aren’t tomorrow’s absolutely abhorrent crimes against humanity? You can’t be, and should realise it can get that bad in a couple of decades or less.


And do you really believe society can be engineered ..? Have you studied history, both near and far, and what have you found other than dismal totalitarian dictatorships ..?


Now, rethink. Again, when you do, you may find yourself in a new position, or not; who would I be to claim which will happen …
But think you must. The one that doesn’t, is Wrong.

On a cheerful note:

[Him again, with a message. DC]

Off goes your home security camera; in a blink

Turns out we don’t even need smart kids to pull this one off – just smart activists. As exepelainifyed here (also in the comments), there are numerous ways to throw (e.g., home security) surveillance cameras off-guard or rather, off-line qua function. No need to get close with a spray can anymore; just as firearms replaced hand-to-hand battle (not mano a mano), so your plastic doorbell gets wrecked remotely.

Surprising no-one, I guess, among those who pay attention enough to know that technology will fail. Surprising all, that it’s this easy. Somehow it’s also comparable with throwing off your servant-to-the-lazy-as-dead, like here. [If you have use for these, you might as well take yourself out of the gene pool before polluting the planet to pieces.]

Both issues point at the human weakness of mind: so often setting too many limitations and requirements, so often setting too few of both. Let’s hope Technology (this kind) doesn’t find out (it will), or we’re all doomed before we, humanity, have destroyed life on earth anyway. Again raising the question: since we will with certainty do the latter, shouldn’t we ensure that the Big S metasystem can take over, not needing traditional generational ‘life’ anymore ..?

Oh well. This:

[Zero reason for using this pic. But it’s nice qua graphics. Just like your dysfunctional home security system. Baltimore of course.]

Fork-tongue Loyalty

How is it possible, other than by monopolistic oligopoly, that ‘loyalty’ programs by e.g., airlines and supermarkets, aren’t; in fact, seem to be the opposite ..?

Take airlines: Sure, some travel a lot, and the same for private purposes on their reward points.

But… What are the airlines rewarding? Not loyalty. The very same customer will fly whatever takes them to their business (..!) destinations cheapest, excepting a few kickback discounts and overrulings by potentates who want to keep collecting private miles, thus robbing their employers of savings. And points, sometimes. As if it weren’t punishment enough already for those left behind that others are away on fancy trips at company expense, the same travellers also get the points? The ‘draconian’ measure, here and there, where points are retrieved by the employer and allotted to all employees (or returned to the airlines for money, one guesses), makes sense from a fairness perspective.
Don’t fool yourself into believing there’s loyalty from customer companies towards any airline. It’s about money and nothing else.

And, what is loyalty? Is it measured by the number of miles one flies? As per the above, business travellers dominate the landscape (those who really fly a lot at private expense switch to jet-sharing as fast as they can anyway – about as disloyal as one can define), and as said, they don’t care the least. And they take travel in short bursts, like, for only five to ten years; before and after that, they very very very often travel far, far less, for less, or immediately jump ship to a low-cost cattle-class start-up.

Yes, the key point is there: five to ten years only – what an absurd definition of ‘loyalty’. E.g. (!) personally, I’ve been on preference with one airline (only) since my first flight back in ’69. How is that loyalty rewarded? You guessed it: by being scoffed at ever more. I flew the airline before the vast majority of its current employees were even born, and yet ‘nobody knows you’, let alone rewards anything for such long loyalty. Instead of giving one more and more points when one returns again and again over the years, regardless of a temporary bump-up due to business travel, it is joiners that get the free points bonus and the rest may as well not exist.
Customers are hardly-graciously allowed to be loyal, but the peasants mustn’t annoy.

When did Marketing stop teaching that the 80% existing customer base is much more valuable than the 20% new/other customers you’re chasing, with retention costing some 20% of what acquiring those prospects does?
If you tell me that airlines don’t have the records from 20, 30, … 50 years back and/or can’t match them, then that’s their problem.
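That rule of thumb, in back-of-the-envelope form (the 1-to-5 cost ratio is the textbook figure, not sourced from any airline’s books; the cost unit is arbitrary):

```python
# Textbook marketing rule of thumb: acquiring a new customer costs
# roughly five times as much as retaining an existing one.
base = 1000                 # total customers in the illustration
existing = int(base * 0.8)  # the 80% you already have
prospects = int(base * 0.2) # the 20% you're chasing

cost_retain = 1.0   # cost per retained customer, arbitrary unit
cost_acquire = 5.0  # ~5x that, per acquired prospect

print(existing * cost_retain)    # spent on keeping the many
print(prospects * cost_acquire)  # spent on chasing the few
```

So even at a fifth of the per-head cost, the big existing base is the cheaper and larger value pool; chasing the small prospect slice costs more in total.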

Same for supermarkets. Some discounts here and there are nice, but they’re definitely the same for everyone, and the lift/shift part is obvious (sometimes ridiculously so), with the retention part being nowhere.
Complain that it is the customers that walk away for the slightest discount at your competitor? Consider the flight of fantasy [I know…] that you made them disloyal with your complete disinterest and inattention…

Oh well:

[Your ‘loyalty’ will leave you in the middle of nowhere; somewhere in Iowa or Ohio, or has anyone the actual coordinates? Oh wait, is it here: https://www.donqinn.net/ ..?]

Data, so perishable an asset

As we stated earlier, data is not the new oil. Or gold.

In fact, data is not only perishable as in losing its value over time.
It is also not subject to the definitive, runaway network effect where first-is-winner-takes-all. As displayed in this great piece of analysis.

Now, this would be manageable, workable into a ‘new’ ‘model’, were it not that in the link I also read, quite in the distance, a (very early) warning sign that the current Theory of the Firm, which we critiqued before, may need to take into account that internally, too, ‘organisations’ need to pay attention; at some point [oddly named such; it’s more of an n-dimensional vague area] size doesn’t matter. Edited to add: also remember the tidbits that the Great casually sprinkled around on the subject of ‘information’ as the key concept for future, fluid-all-around, intra- and exo-business structures in this, plus in this and this. Apologies that these three go so much beyond this mere detail of ‘data’ or ‘information’. ’tSeems that just as everyone is starting to grasp that data is nothing, information is everything [even when this, severely limited as it is], the new <see above failphrase> gets attention. Some. Too much already. Let’s quit and move on to the Great’s ideas of reorganising organisational life.
Just can’t really pinpoint how this would work out, but Menno Lanting‘s Oil tankers and Speedboats is involved here, too.

In the hope that you help with the pinpointing, awaiting your illuminating comments …

Probably in vain [ heck that song is about me…! ].

But oh wait; there’s a moderation of the above. Pre-publishing, the idea of Seed Data came to me.
By which I mean that one can start any unicorn if one gets one’s hands on sufficiently large data sets to get off the ground. No bootstrapping without boots — only if one has at least sufficient data to train some neural net on the basics can one proceed with early adoptors (not adopters or adaptors ..!) to develop the system further, and by doing so gain both additional well-curated data and a close following that profits most – and hopefully will be vocal about your service(s). But you’ll need the Seed Data to do anything at all at the starting gate; you just can’t invent a system and train it on air.

But now I’m done, hence:

[Freudian, the Belfort at Lille … No I don’t have a horizon issue when photographing …!]

“Ik stuur wel een tikkie”

Well now, it was in the game of tag (‘tikkertje’) that you really said ‘tikkie’ when you took someone out.
Which makes “Ik stuur wel een tikkie” [“I’ll send a tikkie”, via the payment-request app of that name] mean that you, as a right-minded mobster, send a hitman after whoever makes the report to you.

Not sure whether that was the intention.

Oh well, whatever.

So, apart from all of us having gone completely bonkers by now from all those (silly) appies [apps], we have also run out of letters to tell them apart, and we make up (?) the matching nonsense, or rather just let that happen as if we’re deep into the DSM-IV.

Oh well, whatever.
And:

[And then sneakily loosen one; Berlin]

What did you just #include ..??

Back to the post from a week ago …

Where the issue was to know your data, and how that still is a problemo big time, if you can get any sufficient data of seemingly appropriate quality at all.
There’s a flip side, on the ‘engine’ side [ref of course Turing machines, but why do I write this, you knew that, duh], where you #include just about anything of value [won’t link back to that post of mine again…!]; being standard libraries and sometimes pretty core stats stuff of which you also don’t know the quality.
The standard things like printf() will by now be hashed out, I guess. The stats stuff, however … I’d say we’ll hear a lot again about bystanders using the stats but not seeing that there’s possibly some obscure shortcut in them that plays out in destructive ways many <time>s from now. It starts with no Bessel correction on RootMeanSquareError / Variance / StdDev calculations. As if degrees of freedom have no meaning — where, on the contrary, they ‘came from’ practice into stats mathematics…
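To make the Bessel point concrete, a minimal Python illustration of what ‘no correction’ does (divide by n versus n − 1), using only the standard library:

```python
import statistics

# A small sample: its mean is 5, its sum of squared deviations is 32.
sample = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

naive_var = statistics.pvariance(sample)  # divides by n:     32/8 = 4.0
bessel_var = statistics.variance(sample)  # divides by n - 1: 32/7 ≈ 4.571

# The naive (population) formula systematically underestimates the
# variance of the population the sample was drawn from; the n - 1
# divisor is exactly the degrees-of-freedom correction.
print(naive_var, bessel_var)
```

A library that quietly uses the n divisor for sample data will bias every downstream StdDev and RMSE, worst on the small samples where the two differ most.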

Now, having taken you from universalia to particulars, let’s return. Have you checked the proper programming of what is asserted to be the functionality of the very stats / data processing / ML / Deep Learning / AI that you’re using for those edge-of-the-art, übercompetitive hence most-profitable, lab-environment pilot plaything systems of yours ..?

I’d think not.
At your peril.

‘Open Source’ doesn’t relieve you of your duty to see, to ensure, to prove to yourself, that at least someone else has tried to falsify the code.
Now what ..?

Whereas, flip side, there may be many more pre-cooked functions out there that you just haven’t looked for hard enough but that are of top-notch quality. Like, in a similar field, I hear too little ‘use’ of this [ don’t just jump to 1:44+ ] and this [ 2:48+ ].

Now, on a positive note:

[Easily recognisable and known to all, right? Bouvigne, Brabant of the North flavour]

Maverisk / Étoiles du Nord