Down with/on Agile; Waterfall’s still around

On how Edgyle Skrrum Deaf Obs squares off with ‘traditional’ methods.

How not everything is well- or better-suited to ‘agile’ change methods. Since ‘agile’ is a solution to a particular problem, not to just anything it finds in its way.
Like, this here treatise: silo-busting as the goal/means; correct.

When suddenly, I realised that there’s another way of looking at the divide. Remember the traditional waterfall model that turned into a V when one would add appropriate levels of testing, as in:

[Plucked off the ‘net I know not-nice ‘reuse’ right?]

There you have it: in the Old model, at the bottom there was (is!) Coding as one block. This used to be a very, very large block. Filled with hackers [original meaning ..!] that coded away like madmen (madwomen, not often) and kept the doors closed to outside scrutiny. Who knew their way around version control, for their own benefit. But were, in moderntalk, rather nontransparent. Products, ditto. Yes, they did ‘unit’ tests on whatever chunks of code they themselves considered ‘units’. Proud if not too many Warnings were left [if you have no clue what a clean compile is, go to programming 101 summer school…], not bothering too much otherwise.
After that, not much room (time) left to test, eh? Just see that something reasonably recognizable to the end users was demonstrable; hey, we got this deadline on our hands.
Resulting in lots of ill-documented (but content-wise sometimes brilliant) code and systems; does the term ‘legacy’ ring a bell ..? [Oh, you mean ‘technology debt’ – you must be from the ’00s .. qua birth or qua evolution into something proxying sentience.]

Now, with scrummy methods, that is the part most improved. I.e., I don’t want a customer journey, I want a pound of potatoes. So, only the sprints should differ from the old design methods which, when somewhat efficiencised and effectivised, won’t have to change that much at the ‘higher’ levels. Breaking up the coding part (only; well, almost) into manageable chunks. To be able to manage the programmers, maybe even more than the programming.

But also, managing away the hacker excellence of yesterday. Turning all into a mush of mediocrity. Being ‘led’ by Bill Lumbergh.
Second to which: has audit access improved? Like, the big stick to keep all in check, and to check that all do actually implement all those controls that are best implemented deep down in the technology, or they’re so weak. Yes, on some exceptional occasions security officers are allowed into the èdzjile offices but, as above (and below (the pic)), infra stuff that fits no sprint is still problematic. Security things often are in that category. Now what?

Whatever? Just that rien n’a changé – nothing has changed.

I also wanted to include a link to yet another masterpiece. Like this. There you go, you’re welcome.
Topping this, even, didn’t realise that was possible. And this, magnificent but already paling in comparison.

[See? There is ways in between as well. Ottawa]

Oh and an end kicker, the summary:

The case of the pyramids

Well, no. Not of the pyramids. We did that one yesterday / below.
Today’s a new day, hence we move to the case part, though in a similar fashion:
But the authors of a recent paper argue that Wallace Donham, the man credited with establishing the case method as a force at HBS in the 1920s, had evolving views of business education that have never been surfaced, and that contradict the sense that management lessons should be viewed through the narrow lens of the case study. … In the upheaval, he says, Donham saw the limits of the approach he had championed. Strangely, Donham’s apparent change of heart is not recognized in conventional histories of HBS and its iconic case method, … … The modern world had developed “a creed of competitive business morality,” he wrote. Values, he observed, were being “politely bowed to, and then handed over to the clergy to be kept for Sundays.’”

Oh well, yet again, read the full piece, and understand that we are on the brink of letting the suchly trained rewrite the world in AI systems. Including the morality bits [minute crumbs] – spelling the end of the above-mentioned Values. Alisdair, John and many, many others would turn in their graves [as far as they’re there already; TL;DR I didn’t check – sorry!]; if there is any hope for humanity [both dimensions], it’s in enslavement. Of the AI system(s), of course.

Quite a thought swing, right? Happy with it. And:

[Be free to enjoy the view, if you can; DC]

How much an architect designs foundations

… Like, not very much.
An architect does not give a hoot about how the foundation of a new office block looks — have a look at any underground office parking garage and see how much effort the starchitect put into that [exceptions noted]. Yes, functionality-wise it’s all good, mostly, but the architect that wants to design a Statement building will hardly use all his (?) pastel sketch skills on the form of the piles driven into the ground, eh? As long as it’s functional bedrock to build Beauty on, in the eyes of the beholder [i.e., infatuated critic(s), not so much the general public sometimes] and of the architect’s bill, not much thought is given to foundations.

The same [we’re coming to the point of this post, yes, finally] goes for information security.
From a strategic perspective, operational risks are mundane things to be managed, i.e., controlled, subdued; not sexy or attention-worthy (!); don’t bother me with the details. Get it fixed period
From an ORM perspective, the same goes down (sic) towards IRM/infosec: Don’t bother me with the details: Get it fixed semicolon cheaply. I don’t want to hear complaints about how difficult it all is and how much budget you are short. You don’t need to do anything beautiful or clever, just pour concrete. I want to do whatever business frivolity gets into my head [rather: what some ‘boardroom consultants’ hah, the oxymoron even without the oxy, often, mess with / push into my head]. Why should I care for the foundation?

You get it now, I guess.

[One lives on the inside but still wants to have a nice look onto it from the outside, too, sometimes, no? And what about water-proofing your … floating in sometimes troubled waters? Amsterdam Omval area]

Dreaming of oversight, AI version: the wrong dream.

The Dutch Central Bank released a discussion paper on general principles. On AI in Finance. Here.
Oh my. What a great attempt it seemed. But by simply reading it, before studying it, one finds … the usual suspect #fails.

Like, the decades-old big Error of non-orthogonality (Basel-II big-time #fail in Ops Risk, remember?). The principles have been grouped to backronym into Safest: Soundness, Accountability, Fairness, Ethics, Skills, Transparency. See? Already there, one can debate ad infinitum – and yes, I suggest to those that want to take the DNB paper seriously to do so, and leave the rest of the world to get on with it w/o being bothered by the ad … nauseam.
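To make the non-orthogonality complaint concrete, a toy sketch: a categorisation is only useful as a partition if each item lands in exactly one bucket. The bucket names below come from the paper; which principle belongs in which bucket is purely my own illustration.

```python
# Toy illustration of non-orthogonality: when a principle plausibly
# belongs to several buckets at once, the buckets don't partition anything.
SAFEST = {"Soundness", "Accountability", "Fairness", "Ethics", "Skills", "Transparency"}

# Hypothetical assignments, for illustration only.
principle_buckets = {
    "review outcomes for unintentional bias": {"Fairness", "Ethics", "Soundness"},
    "train risk/compliance personnel in AI": {"Skills", "Accountability"},
    "advance explainability of AI decisions": {"Transparency", "Soundness"},
}

# Sanity check: every assigned bucket is an actual Safest category.
assert all(buckets <= SAFEST for buckets in principle_buckets.values())

# Every example straddles more than one bucket -- the debate "ad infinitum"
# starts exactly here.
overlapping = {p for p, buckets in principle_buckets.items() if len(buckets) > 1}
print(f"{len(overlapping)} of {len(principle_buckets)} principles straddle buckets")
```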

1) Ensure general compliance with regulatory obligations regarding AI applications.
2) Mitigate financial (and other relevant prudential) risks in the development and use of AI applications.
3) Pay special attention to the mitigation of model risk for material AI applications.
4) Safeguard and improve the quality of data used by AI applications.
5) Be in control of (the correct functioning of) procured and/or outsourced AI applications.
Nothing here that needs discussion, as it has all been generic for decades. So, if you’d need to point this out, the suggestion is that this hadn’t been arranged for all the trillions of LoC now defining the finance industry [humans are just cogs in the machine that may turn a little screw on the production lines here and there, but preferably not too much since they have been known to be major #fail factor #1 by a huge margin]. Oh dear.
And, if there were a reason to reiterate now instead of always: what is different with AI systems ..?

6) Assign final accountability for AI applications and the management of associated risks clearly at the board of directors level.
7) Integrate accountability in the organisation’s risk management framework.
8) Operationalise accountability with regard to external stakeholders.
The same. But presuming the old 3LoD thinking still holds sway at the paper issuer’s, one might better first improve that into something above-zero relevant or effective.

9) Define and operationalise the concept of fairness in relation to your AI applications.
10) Review (the outcomes of) AI applications for unintentional bias.
Notice how thin the ice suddenly becomes when AI specifics come into play… The “concept of fairness”, you say ..? You mean, this pointer to inherent (sic) inconclusiveness ..? What, when fairness to the shareholders prevails? Not much, eh? So this one leaves the finance corp.s off the hook for whatever society thinks. A superfluous principle, then, given today’s finance corp.s’ practices.
Unintentional bias? Good. Intentional bias is in, then. Same conclusion. The idea that a human in the loop would be any good, or have the slightest modicum of effectiveness (towards what??), has been debunked already for such lengths that it seems the authors have missed the past thirty years of discussions re this.
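For what a bias review could minimally look like in practice, here is a sketch of the classic four-fifths (80%) disparate-impact check from US employment-selection guidelines; the loan-approval numbers and group names are entirely made up for illustration.

```python
def disparate_impact(outcomes_by_group):
    """Four-fifths rule check: flag any group whose positive-outcome rate
    falls below 80% of the best-served group's rate."""
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}
    best = max(rates.values())
    ratios = {g: r / best for g, r in rates.items()}
    flagged = {g for g, ratio in ratios.items() if ratio < 0.8}
    return ratios, flagged

# Hypothetical loan-approval outcomes (1 = approved), purely illustrative.
ratios, flagged = disparate_impact({
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1],   # 7/8 approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],   # 3/8 approved
})
# group_b's ratio is 0.375 / 0.875 ~ 0.43, well under 0.8, so it gets flagged.
```

Note this only detects a statistical disparity; whether that disparity is ‘unfair’, and what to do about intentional bias, is exactly the debate the principle leaves open.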

11) Specify objectives, standards, and requirements in an ethical code, to guide the adoption and application of AI.
12) Align the (outcome of) AI applications with your organisation’s legal obligations, values and principles.
Hahahahaha !!! Ethics in finance …!!! Given the complete insult[1] of the “bankers’ oath” which would get an F- grade when proposed by any undergrad freshman, how can one truly believe an ethical code might be worth the paper / digits it’s written on/in ..!? Wouldn’t anyone proposing such a thing (in effect, the Board of the regulator that is personally accountable) be forced to step down by shown lack of competence?
And the alignment is easy, once one sees that “anything that’s not explicitly illegal” will be pursued to the values and principles of bonus maximisation through short-term profit maximisation. No, that is a fact, still. If you don’t see that, refer to the previous.

13) Ensure that senior management has a suitable understanding of AI (in relation to their roles and responsibilities).
14) Train risk management and compliance personnel in AI.
15) Develop awareness and understanding of AI within your organisation.
Again, nothing new. Again, this hasn’t happened anywhere in the finance industry ever. In particular 13) … See 11)-12).

16) Be transparent about your policy and decisions regarding the adoption and use of AI internally.
17) Advance traceability and explainability of AI-driven decisions and model outcomes.
Ah, finally, things get interesting. Note the aspirational explanation in the discussion paper. But it leaves everything one would actually want to discuss unaddressed, with no proposals or lines of thought offered.
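As one possible line of thought the paper could have offered: for a linear scoring model, traceability can be as simple as decomposing each decision into per-feature contributions. The feature names, weights and threshold below are invented for illustration, not from the paper.

```python
# Minimal traceability/explainability sketch: every number in the outcome
# traces back to a named input and a named weight.
weights = {"income": 0.5, "debt": -0.8, "years_as_customer": 0.2}

def explain(applicant, threshold=0.0):
    """Score an applicant and return the per-feature contribution breakdown."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())
    return {"score": score, "approved": score >= threshold, "why": contributions}

decision = explain({"income": 3.0, "debt": 2.0, "years_as_customer": 5.0})
# score = 1.5 - 1.6 + 1.0 = 0.9 -> approved, and 'why' says which
# features pushed the decision which way.
```

Real AI systems are rarely this linear, which is precisely why the discussion should have started here instead of stopping at the aspiration.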

Which is why the discussion paper is a 0.3 version at best. Almost all of it is a summary of how things should have been for decades [remember, banks were early adopters of ‘computers’] but apparently (and known from first-hand practice) weren’t and aren’t, with a minimal tail of actual discussion items. If this were meant to just launch the backronym, one should’ve used a one-pager.

Oh well. Plus this:
[Now that’s principle(d) design; Utrecht]

[1] As if anyone would need it – society has run on the implicit contrat social for centuries and longer already. If one needs a specific oath, something terribly specific would have to be the case, with specific implications as well. E.g., the military oath. I have pledged the officers’ oath already, and would be very severely insulted by the suggestion that I had laid it aside like a coat upon leaving active service to the constitution.


When the economy is moving to platforms as the core structure, can we do e.g., AI / data analysis as a service, white labelling some stuff that all accountants will have to go through, to get State-of-the-Art to them ..?

This could go in all sorts of directions, but preferably in the one that has loosely (sic) coupled systems [in the small-scope sense, not the cybernetics sense], one or a couple for every stage of the audit process. E.g., for the Know-Your-Customer Deep Dive stage (after the KYC before acquiring a client, and the acquiring), have some process analysis tools; for the risk analysis / audit planning, have … [I don’t see too much in this plane yet qua tooling!]; for the data-oriented (all-data checking) part, have the Mindbridges et al. for the crunching. The latter pointing to a. having nothing qua overarching workflow management tools yet [no, not the standard workflow things; I mean auto/AI-gated stuff far beyond the triviality of ‘RPA‘ (UiPath and others), which is just handy but trivial automation; unsure whether this is a step ahead already], and b. the intricacies of learning / feedback loops year over year.
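The loosely coupled idea above can be sketched as one small, replaceable component per audit stage, chained by a thin workflow layer. Stage names follow the post; everything else (function names, the context dict) is hypothetical.

```python
from typing import Callable

# Each stage is just a function taking and returning a context dict, so any
# stage can be swapped for a white-labelled cloud service with the same shape.
Stage = Callable[[dict], dict]

def kyc_deep_dive(ctx: dict) -> dict:
    ctx["kyc"] = "process analysis done"
    return ctx

def risk_analysis(ctx: dict) -> dict:
    ctx["plan"] = "audit plan drafted"
    return ctx

def data_crunching(ctx: dict) -> dict:
    ctx["findings"] = "all-data checks run"
    return ctx

def run_pipeline(stages: list[Stage], ctx: dict) -> dict:
    # The workflow layer only passes context along; the AI-gating and
    # year-over-year feedback loops would live here, eventually.
    for stage in stages:
        ctx = stage(ctx)
    return ctx

result = run_pipeline([kyc_deep_dive, risk_analysis, data_crunching], {"client": "X"})
```

The design point is the signature, not the bodies: as long as every tool honours it, the overarching workflow manager the post says is missing stays thin.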

And then, all this should, could be white-labelled up in the Cloud somewhere (in the EU, of course); ease of maintenance and interfacing.

But heck, this is just dropping the idea, right?

[Drafted 22 May, so in the mean time there may have been some developments. Hope so.]


[But for this, no cloud-based thing suffices, you must do a site visit to check on quality; Valle dell’Acate]

Naming conventions

Have naming conventions already sprung up around home eavesdropping devices ..?
Since ‘Alexa’ and ‘Siri’ may be too boring.

Then I would want to call it Computer, since then I’d be able to say ‘OK Computer’ all the time. Not even for this. But certainly for this: that one would get a lot when one used its (‘her’? oh, that may be taken to expect ‘his’) ‘services’ too often / too broadly.

I’ll stop now. For the vrijmibo – Friday’s Afternoon Drinks.

What ..? Now already? Yes; one’s taste buds are best in the second half of the morning. Later, your taste will develop in correlation with the number of drinks you can remember to have had. Also, I checked where it is 17:00h now on this site and got confirmation.


[Off-‘Broadway’ Rafael Masó i Valentí, Farinera Texidor, Carrer Santa Eugènia 15, Girona though officially at Passatge de la Farinera Teixidor, 4]

Ethics for lunch

When strategy is culture’s breakfast, your ethics are culture’s lunch – you’re so much more biased by very accidental culture than you think.

Those involved in anti-biasing AI, tend to belong to a very small group of ‘first world’ affluent (sic) secondary-productive counterweights.
Very small group, since a minute fraction of the Western world pop, let alone the latter already being vastly outnumbered by the rest;
First-world, since very bound up in ‘Western’ countries;
Affluent, since apparently with (mental) time on hand to be vocal about it, and travel the world (by plane, almost always) to be/speak at seminars and conferences…;
Secondary-productive, since a. with time on one’s hand, b. living off others’ production life;
Counterweights, since in contra against societal developments. [Also: re secondary]

As was with the ‘Universal’ human rights [revelation: they weren’t, and aren’t; they’re still cultural imperialism!], so now with the downsides of e.g., AI deployment.

No, I’m not advocating to not listen.
No, I’m not advocating to ignore or silence / fight back against them.
No, I am for balanced views, ones that don’t hinge on win/lose but on win/win. If you are still on the win/lose path(s), you represent, you are, Old thinking and need to change yourself first before communicating with others; otherwise it’s only the veneer that you want to shove onto everyone, not real content.

And, at an angle but far from a perpendicular one, ethics are swamped by more immediate needs. Noticed, in many AI discussions, how it turns out that humans have, all of their existence and the aeons before, always survived only by ‘good enough’ pattern inference, the striving for perfection having been eradicated biologically (Evolution) as too expensive to survive ..?
Real people, like your parents, live in a world where culture, education (narrow and wide sense!), religion (ritualised wisdom), environment (‘local’ climate, and societally) and the course of life have made them into what they are. All individuals, with a staggering complexity of reflexes and instincts plus a minute layer of conscious behaviour.

When you badger on about ‘ethics’, you’re only us-them’ing on them, near-certainly in a very bad, caricatural way; also perpetrating the very crimes you accuse them of (almost exclusively: falsely). If in your bubble’let, you think yourself the majority. Remember Twain’s quote: Whenever you find yourself on the side of the majority, it is time to pause and reflect.

Also, how do you un-bias when today’s biases are tomorrow’s mainstream ethics? Don’t think that will not happen. It already does. How are you sure your current-day ethics aren’t tomorrow’s absolute abhorrent crimes against humanity? You can’t be, and should realise it can get that bad in a couple of decades or less.

And do you really believe society can be engineered ..? Have you studied history, both near and far, and what have you found other than dismal totalitarian dictatorships ..?

Now, rethink. Again, when you do, you may find yourself in a new position, or not; who would I be to claim which will happen …
But think you must. The one that doesn’t, is Wrong.

On a cheerful note:

[Him again, with a message. DC]

Off goes your home security camera; in a blink

Turns out we don’t even need smart kids to pull this one off – just smart activists. As exepelainifyed here (also in the comments), there are numerous ways to throw (e.g., home security) surveillance cameras off-guard or rather, off-line qua function. No need to get close with a spray can anymore; like firearms replaced hand-to-hand battle (not mano a mano), so your plastic doorbell gets wrecked remotely.

Surprising no-one, I guess, of those who pay attention in the way of knowing that technology will fail. Surprising all in that it’s this easy. Somehow, it’s also comparable with throwing off your servant-to-the-lazy-as-dead, like here. [If you have use for these, you might as well take yourself out of the gene pool before polluting the planet to pieces.]
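If jamming can knock a camera off-line remotely, about the only defence left is noticing quickly that it went quiet. A minimal watchdog sketch, assuming cameras emit periodic heartbeats; the device names and the 30-second timeout are my own assumptions, not any vendor’s API.

```python
import time

def offline_cameras(last_heartbeat: dict, now: float, timeout: float = 30.0) -> set:
    """Return the cameras whose last heartbeat is older than `timeout` seconds."""
    return {cam for cam, ts in last_heartbeat.items() if now - ts > timeout}

# Simulated heartbeat log: the doorbell went silent two minutes ago,
# the garden cam checked in five seconds ago.
now = time.time()
down = offline_cameras({"doorbell": now - 120, "garden": now - 5}, now)
# 'doorbell' is flagged; an alert here beats discovering the gap on tape later.
```

This detects the outage, not the attacker; the point of the post stands that the camera itself is still wrecked.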

Both issues point at the human weakness, of mind. So often setting too many limitations and requirements, so often setting too few of both. Let’s hope Technology (this kind) doesn’t find out (it will) or we’re all doomed before we, humanity, have destroyed life on earth anyway. Again raising the question: when we will with certainty do the latter, shouldn’t we ensure that the Big S metasystem can take over, no longer needing traditional generational ‘life’ ..?

Oh well. This:

[Zero reason for using this pic. But it’s nice qua graphics. Just like your dysfunctional home security system. Baltimore of course.]

Fork-tongue Loyalty

How is it possible, other than by monopolistic oligopoly, that ‘loyalty’ programs by e.g., airlines and supermarkets, aren’t; in fact, seem to be the opposite ..?

Take airlines: sure, some travel a lot, and then travel the same again for private purposes on their reward points.

But… what are the airlines rewarding? Not loyalty. The very same customer will fly whatever takes them to their business (..!) destinations cheapest, excepting a few kickback discounts and overrulings by potentates that want to keep collecting private miles, thus robbing their employers of savings. And points, sometimes. If it isn’t punishment enough already for those left behind that others are away on fancy trips at company expense, must the same travellers also get the points? The ‘draconian’ measure, here and there, where points are retrieved by the employer and allotted to all employees (or returned to the airlines for money, one guesses), makes sense from a fairness perspective.
Don’t fool yourself into believing there’s loyalty from customer companies towards any airline. It’s about money and nothing else.

And, what is loyalty? Is it measured by the number of miles one flies? As per the above: business travellers dominate the landscape (those who really fly a lot at private expense switch to jet-sharing as fast as they can anyway – about as disloyal as one can define), and as said they don’t care the least. And they take travel in short bursts, like, for only five to ten years; before and after that, they very, very often travel far, far less, for less, or immediately jump ship to a low-cost cattle-class start-up.

Yes, the key point is there: five to ten years only – what an absurd definition of ‘loyalty’. E.g. (!) personally, I’ve been on preference with one airline (only) since my first flight back in ’69. How is that loyalty rewarded? You guessed it: with ever more scoff. I flew on the airline before the vast majority of its current employees were even born, and yet ‘nobody knows you’, let alone rewards anything for such long loyalty. Instead of giving one more and more points when one returns again and again over the years, regardless of a temporary bump-up due to business travel, it is joiners that get the free points bonus and the rest may as well not exist.
Customers are hardly-graciously allowed to be loyal, but the peasants mustn’t annoy.

When did Marketing stop telling us that the 80% existing customer base is much more valuable than the 20% new/other customers you’re chasing, with retention costing some 20% of what acquisition costs you on the prospects?
If you tell me that airlines don’t have the records from 20, 30, … 50 years back and/or can’t match them, then that’s their problem.
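The old marketing rule referenced above survives a back-of-envelope check. All numbers below are illustrative, not from any airline’s books: same annual revenue per customer, cheap retention over a long relationship versus expensive acquisition over a short burst.

```python
# Back-of-envelope customer economics: margin per year times years retained.
def customer_value(annual_revenue: float, annual_cost_ratio: float, years: int) -> float:
    """Net value of a customer over the relationship, given the share of
    revenue spent keeping (or having acquired) them each year."""
    return annual_revenue * (1 - annual_cost_ratio) * years

# Illustrative: a decades-long loyal customer, kept cheaply...
loyal = customer_value(annual_revenue=1000, annual_cost_ratio=0.05, years=30)
# ...versus a chased newcomer, acquired expensively, gone after the burst.
chased = customer_value(annual_revenue=1000, annual_cost_ratio=0.25, years=7)
# loyal = 28500.0, chased = 5250.0: the loyal base is worth several times
# the newcomer -- yet it is the newcomer who gets the sign-up bonus.
```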

Same for supermarkets. Some discounts here and there are nice, but they’re definitely the same for everyone, and the lift/shift part is obvious (sometimes ridiculously so), with the /retention part being nowhere.
Complain that it is the customers that walk away for the slightest discount at your competitor? Consider the flight of fancy [I know…] that you made them disloyal with your complete disinterest and inattention…

Oh well:

[Your ‘loyalty’ will leave you in the middle of nowhere; somewhere in Iowa or Ohio, or has anyone the actual coordinates? Oh wait, is it here: ..?]