Ethics for lunch

If culture eats strategy for breakfast, it eats your ethics for lunch – you’re so much more biased by your very accidental culture than you think.

Those involved in anti-biasing AI tend to belong to a very small group of ‘first world’, affluent (sic), secondary-productive counterweights.
Very small group, since they’re a minute fraction of the Western world’s population, which itself is already vastly outnumbered by the rest;
First-world, since very much bound up in ‘Western’ countries;
Affluent, since apparently with the (mental) time on hand to be vocal about it, and to travel the world (by plane, almost always) to be/speak at seminars and conferences…;
Secondary-productive, since a. with time on one’s hands, b. living off others’ productive life;
Counterweights, since running counter to societal developments. [Also: re secondary]

As it was with the ‘Universal’ human rights [revelation: they weren’t, and aren’t; they’re still cultural imperialism!], so it is now with the downsides of e.g., AI deployment.

No, I’m not advocating to not listen.
No, I’m not advocating to ignore or silence / fight back against them.
No, I am for balanced views, ones that hinge not on win/lose but on win/win. If you are still on the win/lose path(s), you represent, indeed are, Old thinking and need to change yourself first before communicating with others; otherwise it’s only a veneer that you want to shove onto everyone, not real content.


And, at an angle but far from a perpendicular one, ethics get swamped by more immediate needs. Have you noticed that in many AI discussions, it turns out that humans, throughout all of their existence and the aeons before, have always survived on merely ‘good enough’ pattern inference, the striving for perfection having been eradicated biologically (Evolution) as too expensive for survival ..?
Real people, like your parents, live in a world where culture, education (in the narrow and wide sense!), religion (ritualised wisdom), environment (‘local’ climate, and societal) and the course of life have made them what they are. All individuals, with a staggering complexity of reflexes and instincts plus a minute layer of conscious behaviour.

When you badger on about ‘ethics’, you’re only us-them’ing at them, near-certainly in a very bad, caricatural way; also perpetrating the very crimes you accuse them of (almost exclusively: falsely). Inside your bubble’let, you may think yourself the majority. Remember Twain’s quote: Whenever you find yourself on the side of the majority, it is time to pause and reflect.


Also, how do you un-bias when today’s biases are tomorrow’s mainstream ethics? Don’t think that won’t happen. It already does. How can you be sure your current-day ethics aren’t tomorrow’s absolutely abhorrent crimes against humanity? You can’t be, and should realise it can get that bad in a couple of decades or less.


And do you really believe society can be engineered ..? Have you studied history, both near and far, and what have you found other than dismal totalitarian dictatorships ..?


Now, rethink. When you do, you may find yourself in a new position, or not – who would I be to claim which will happen …
But think you must. The one that doesn’t, is Wrong.

On a cheerful note:

[Him again, with a message. DC]

Off goes your home security camera; in a blink

Turns out we don’t even need smart kids to pull this one off – just smart activists. As exepelainifyed here (also in the comments), there are numerous ways to throw (e.g., home security) surveillance cameras off-guard, or rather off-line qua function. No need to get close with a spray can anymore; just as firearms replaced hand-to-hand battle (not mano a mano), your plastic doorbell now gets wrecked remotely.

No surprise, I guess, to those who pay attention enough to know that technology will fail. A surprise to all that it’s this easy. Somehow, it’s also comparable to throwing off your servant-to-the-lazy-as-dead, like here. [If you have use for these, you might as well take yourself out of the gene pool before polluting the planet to pieces.]

Both issues point at the human weakness, of mind: so often setting too many limitations and requirements, so often setting too few. Let’s hope Technology (this kind) doesn’t find out (it will) or we’re all doomed before we, humanity, have destroyed life on earth anyway. Which again raises the question: since we will, with certainty, do the latter, shouldn’t we ensure that the Big S metasystem can take over, not needing traditional generational ‘life’ anymore ..?

Oh well. This:

[Zero reason for using this pic. But it’s nice qua graphics. Just like your dysfunctional home security system. Baltimore of course.]

Fork-tongue Loyalty

How is it possible, other than by monopolistic oligopoly, that ‘loyalty’ programs by e.g., airlines and supermarkets, aren’t; in fact, seem to be the opposite ..?

Take airlines: sure, some travel a lot, and then do the same for private purposes on their reward points.

But… What are the airlines rewarding? Not loyalty. The very same customer will fly whatever takes them to their business (..!) destinations cheapest, excepting a few kickback discounts and overrulings by potentates who want to keep collecting private miles, thus robbing their employers of savings. And of points, sometimes. If it isn’t punishment enough already for those left behind that others are away on fancy trips at the company’s expense, must the same travellers also get the points? The ‘draconian’ measure, here and there, where points are retrieved by the employer and allotted to all employees (or returned to the airlines for money, one guesses), makes sense from a fairness perspective.
Don’t fool yourself into believing there’s loyalty from customer companies towards any airline. It’s about money and nothing else.

And, what is loyalty? Is it measured by the number of miles one flies? As per the above, business travellers dominate the landscape (those who really fly a lot at private expense switch to jetsharing as fast as they can anyway – about as disloyal as one can define), and as said, they don’t care in the least. And they take their travel in short bursts, like, for only five to ten years; before and after that, they very, very often travel far, far less, for less, or immediately jump ship to a low-cost cattle-class start-up.

Yes, the key point is there: five to ten years only – what an absurd definition of ‘loyalty’. E.g. (!) personally, I’ve flown preferentially with one airline (only) since my first flight back in ’69. How is that loyalty rewarded? You guessed it: by ever more scoffing. I flew on the airline before the vast majority of its current employees were even born, and yet ‘nobody knows you’, let alone rewards anything for such long loyalty. Instead of giving one more and more points when one returns again and again over the years, regardless of a temporary bump-up due to business travel, it is joiners that get the free points bonus and the rest may as well not exist.
Customers are hardly-graciously allowed to be loyal, but the peasants mustn’t annoy.

When did Marketing stop teaching that the 80% existing customer base is much more valuable than the 20% new/other customers you’re chasing, with retention costs at some 20% compared to the 80% of cost sunk into prospects?
If you tell me that airlines don’t have the records from 20, 30, … 50 years back and/or can’t match them, then that’s their problem.
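To put rough numbers on that 80/20 claim – a back-of-the-envelope sketch, in which every figure (customer counts, per-customer value, cost percentages) is an illustrative assumption of mine, not anyone’s actual books:

```python
# Back-of-the-envelope: value of the retained base vs freshly chased prospects.
# Every figure below is an illustrative assumption.

existing_customers = 800        # the ~80% base
prospects = 200                 # the ~20% being chased
annual_value = 1000.0           # yearly revenue per customer, assumed equal

retention_cost = 0.20 * annual_value    # ~20% of value to keep a customer
acquisition_cost = 0.80 * annual_value  # ~80% of value to win a new one

net_from_retained = existing_customers * (annual_value - retention_cost)
net_from_acquired = prospects * (annual_value - acquisition_cost)

print(f"net from retained base: {net_from_retained:,.0f}")   # 640,000
print(f"net from new wins:      {net_from_acquired:,.0f}")   # 40,000
```

Sixteen times the net value sits in the base being ignored – under these assumed numbers, of course; tweak them and the ratio moves, but not the direction.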

Same for supermarkets. Some discounts here and there are nice, but definitely the same for everyone, and the lift/shift part is obvious (sometimes ridiculously so) with the /retention part being nowhere.
Complain that it is the customers that walk away for the slightest discount at your competitor? Consider the flight of fantasy [I know…] that you made them disloyal with your complete disinterest and inattention…

Oh well:

[Your ‘loyalty’ will leave you in the middle of nowhere; somewhere in Iowa or Ohio – or does anyone have the actual coordinates? Oh wait, is it here: https://www.donqinn.net/ ..?]

Data, so perishable an asset

As we stated earlier, data is not the new oil. Or gold.

In fact, data is not only perishable in the sense of losing its value over time.
It is also not the definitive, runaway, network-effect-driven, first-is-winner-takes-all asset. As displayed in this great piece of analysis.

Now, this would be manageable, workable into a ‘new’ ‘model’, were it not that from the link I also read, quite in the distance, a (very early) warning sign that the current Theory of the Firm, which we critiqued before, may need to take into account that organisations need to pay attention internally, too; at some point [oddly named such; it’s more of an n-dimensional vague area] size doesn’t matter. Edited to add: also remember the tidbits that the Great casually sprinkled around on the subject of ‘information’ as the key concept for future, fluid-all-around, intra- and exo-business structures in this, plus in this and this. Apologies that these three go so much beyond this mere detail of ‘data’ or ‘information’. ‘tSeems that just as everyone is starting to grasp that data is nothing and information is everything [even when this, severely limited as it is], the new <see above failphrase> gets attention. Some. Too much already. Let’s quit and move on to the Great’s ideas of reorganising organisational life.
Just can’t really pinpoint how this would work out, but Menno Lanting‘s Oil tankers and Speedboats is involved here, too.

In the hope that you help with the pinpointing, awaiting your illuminating comments …

Probably in vain [ heck that song is about me…! ].

But oh wait; there’s a moderation of the above. Pre-publishing, the idea of Seed Data came to me.
By which I mean that one can start any unicorn, if one can get one’s hands on sufficiently large data sets to get off the ground. No bootstrapping without boots — only if one has at least sufficient data to train some neural net on the basics can one proceed with early adoptors (not adopters or adaptors ..!) to develop the system further, and by doing so gain both additional well-curated data and a close following that profits most – and hopefully will be vocal about your service(s). But you’ll need the Seed Data to do anything at all at the starting gate; you just can’t invent a system and train it on air.
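A toy sketch of the Seed Data point – everything here (the synthetic blobs, the trivial nearest-centroid stand-in for ‘some neural net’, the numbers) is an illustrative assumption, not any real system. The point it shows: with next to no seed data, even a trivially simple model has little to stand on, and quality climbs as the seed set grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n_per_class):
    """Two overlapping Gaussian blobs: class 0 near (0,0), class 1 near (1.5,1.5)."""
    x0 = rng.normal(0.0, 1.0, size=(n_per_class, 2))
    x1 = rng.normal(1.5, 1.0, size=(n_per_class, 2))
    return np.vstack([x0, x1]), np.array([0] * n_per_class + [1] * n_per_class)

def fit_centroids(X, y):
    # the 'model trained on basics': just a per-class mean
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

X_test, y_test = make_data(500)          # a held-out 'reality' to score against

accs = []
for seed_size in (1, 3, 10, 100):        # size of the Seed Data set, per class
    X_seed, y_seed = make_data(seed_size)
    accs.append((predict(fit_centroids(X_seed, y_seed), X_test) == y_test).mean())
    print(f"seed size {seed_size:4d} per class -> test accuracy {accs[-1]:.2f}")
```

No boots, no bootstrapping: the tiny-seed runs wobble with whatever few points they happened to get, while the larger seed settles near the best this toy model can do.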

But now I’m done, hence:

[Freudian, the Belfort at Lille … No I don’t have a horizon issue when photographing …!]

“Ik stuur wel een tikkie” [“I’ll send a Tikkie”]

Well now, in the game of tag (‘tikkertje’) it really was ‘tikkie’ that you called when you took someone out.
By which “Ik stuur wel een tikkie” means that you, like any proper mobster, are dispatching a hitman after whoever you say it to.

Not sure that was the intention.

Oh well, whatever.

So, apart from all of us having gone completely bonkers by now from all those (silly) appies, we have also run out of letters to tell them apart, and are making up (?) the matching nonsense, or rather just letting that happen as if we’re deep into the DSM-IV.

Oh well, whatever.
And:

[And then sneakily unscrew one; Berlin]

What did you just #include ..??

Back to the post from a week ago …

Where the issue was knowing your data, and how that still is a problemo big time – if you can get any sufficient data of seemingly appropriate quality at all.
There’s a flip side, on the ‘engine’ side [ref of course Turing machines, but why do I write this, you knew that, duh], where you #include just about anything of value [won’t link back to that post of mine again…!]; being standard libraries and, sometimes, pretty core stats stuff of which you also don’t know the quality.
The standard things like printf() will by now be hashed out, I guess. The stats stuff, however … I’d say we’ll hear a lot again about bystanders using the stats but not seeing that there’s possibly some obscure shortcut in them that plays out in destructive ways many <time>s from now. It starts with no Bessel correction on RootMeanSquareError / Variance / StdDev calculations. As if degrees of freedom have no meaning — where, on the contrary, they ‘came from’ practice into stats mathematics…
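To make the Bessel point concrete, a minimal numpy sketch (numpy being merely my example library here; its `var`/`std` default to `ddof=0`, i.e. no Bessel correction – dividing by n instead of n−1 – which is exactly the obscure shortcut a bystander won’t see):

```python
import numpy as np

rng = np.random.default_rng(42)
true_var = 4.0                       # draws from N(0, sd=2)

# average the variance estimate over many small samples of size n = 5
n, trials = 5, 100_000
samples = rng.normal(0.0, true_var ** 0.5, size=(trials, n))

biased = samples.var(axis=1, ddof=0).mean()    # divide by n   (numpy's default)
unbiased = samples.var(axis=1, ddof=1).mean()  # divide by n-1 (Bessel correction)

print(f"true variance      : {true_var}")
print(f"ddof=0 (no Bessel) : {biased:.3f}")    # biased low by a factor (n-1)/n
print(f"ddof=1 (Bessel)    : {unbiased:.3f}")
```

At n = 5 the uncorrected estimate sits around 20% low, every time; on small samples – exactly where quants lean hardest on the stats – that’s not noise, it’s the shortcut playing out.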

Now, having taken you from universalia to particulars, let’s return. Have you checked the proper programming of what is asserted to be the functionality of the very stats / data processing / ML / Deep Learning / AI that you’re using for the edge-of-the-art übercompetitive hence most-profitable lab environment pilot plaything systems of yours..?

I’d think not.
At your peril.

‘Open Source’ doesn’t relieve you of your duty to see, to ensure, to prove to yourself, that at least someone else has seriously tried to falsify the code.
Now what ..?

Whereas, flip side, there may be many more pre-cooked functions out there that you just haven’t looked hard enough for, but that are of top-notch quality. Like, in a similar field, I hear too little ‘use’ of this [ don’t just jump to 1:44+ ] and this [ 2:48+ ].

Now, on a positive note:

[Easily recognisable and known to all, right? Bouvigne, Brabant of the North flavour]

Biasin’ -out.

Yes, yes, there’s still discussion about biases and how ‘Ethical AI’ should be pleonastic.

1. Though it isn’t, and inherently can’t be. For one, since ‘ethics’ is (factually) about discussing and debating dilemmas ad nauseam without providing a clear-cut decision – whereas human behaviour of the non-pathological kind is about something else… For another, any ‘ethical’ decision is either completely and utterly individual or (as in ⊻ !) some moral judgment by some other(s – usually a group with identification < 1) imposed upon the subject-not-free-agent; certainly when, as should be the case each and every time, the context (i.e., the complete and utter physiological and mental history plus present circumstances) is individual beyond knowing (being beyond symbolic description).

2. The bum rush towards some form of ‘ethical neutrality’ is impossible for the same reason, and may also not be enough, or be too much, given what is described in this piece: all training is from mere pre-AI live situations, one by one, but the new tools bring amplification. So the Longread isn’t just correct; its title understates the problem. Yes, the amplification as mentioned may seem minute, but the subliminal programming [that takes place, for a fact] of the Longread will be persistent enough to loop back into the data and into the nnets/algorithm(s). A self-sustaining, self-increasing separation of classes; creating barriers (mostly non-explicit, but why care?) of the very kind that governments were created to outlaw.
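That loop-back can be caricatured in a handful of lines. A toy sketch, in which the update rule and all numbers are my illustrative assumptions, not anything from the Longread: a model allocates some resource in proportion to (the square of) each group’s learned score – a mild rich-get-richer effect – and its own output feeds straight back into the next round’s training data:

```python
# Toy feedback loop, all numbers illustrative: a near-invisible initial bias,
# amplified round after round because the model's output becomes its own input.

scores = {"group_a": 0.51, "group_b": 0.49}   # a 2%-point starting gap

for _ in range(20):
    squared = {g: s * s for g, s in scores.items()}       # mild rich-get-richer
    total = sum(squared.values())
    allocation = {g: sq / total for g, sq in squared.items()}
    # next round's 'data' is half history, half the model's own output
    scores = {g: 0.5 * scores[g] + 0.5 * allocation[g] for g in scores}

print(scores)   # the tiny starting gap has blown wide open, not evened out
```

The separation is self-sustaining exactly as argued: nothing in the loop ever pushes back toward 50/50, so the minute initial bias is all it takes.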

3. Not that I’m happy about the above. But now what? This:

[Lift the bottom, not suppress the top, in living quality; Rotterdam]

Friction kept Tech in control, and humans able to cope. No more of that.

It crossed my mind [remember, this post was drafted/scheduled over two months ago; why am I adding this, I don’t know] that all those new ‘innovations’, like switching to non-cash money, will not only have to be ‘repaired’ with all sorts of weird gizmos/apps and what have we. They will also, by reducing ‘friction’ indeed, make the economy [! As if there is such a thing ! Yet another construct of apparently emergent existence which, when studied, is not] more efficient.

But that’s not the purpose of Humanity. It’s only the mere purpose of Technology. Friction is naughty, we learned from ‘science’ when that got a foothold in people’s minds – during the Enlightenment, which may seem like a wry joke of a designator now. Hence, we should do away with friction.
But friction is also what kept humans in the loop, and all of them; making it possible to slow the creep of Technology through all that human life is, down to a speed of societal(+) change that could be understood by the vast majority. So all could evolve (i.e., have time enough to adapt, or (humanity-wise) gracefully disappear, to be replaced by the born- / grown-up more modernly [should be a word]) into knowing the world as it was (‘is’) found, and controlling the world as such. These were close enough to linear times to keep track. [I know: at closer inspection, it’s just that time still stretched the exponential function wide enough to be proxied by linears – but then, that’s how it was, so why bother claiming that “it actually was exponential already then but nobody noticed”? Yeah, so? Does that change history ..?] Friction thus slowed individualism and detachment, which seems to be the phase of development that the Enlightenment / Industrial R(??)evolution brought, though punctuated by setbacks. Now we’re onto the ‘Second’ industrial revolution – or did I miss some subspecies? – where things are speeding up so fast that one cannot cope. Only native- or born-digitals will, or ..?


A side note: the whole idea that only digital natives can fully grasp the new world is of course nonsense, without any evidence. Having a solid foundation in the pre-[any]development world gives a sound basis for grasping what’s new and what’s not. See social media: only those that know a world without them have a clue about their pros, and their vastly outnumbering, swamping-away cons. More youth than ever seem unable to deal ‘correctly’ with new stuff like social media. ‘tIs more that the generations after the ones that meet new developments first aren’t all hyped up about them – they’re just there, duh, and what’s all the fuss about – hence need not dwell in them but can lay them away as mere tools in the pool of many, without too much FOMO or need for self-actualisation through me-too’ing of the desperate-for-the-tiniest-sign-of-affection / tiny-group-attachment kind [edited to add this; a quite different angle, into the same]. Yes, I already noted that e.g., those that understand and have actual experience with coding and cabling understand the intricacies of new tech that seems so far abstracted from those. Yes, the fraction of the pop that is that group (every generation has them) dwindles over time. But the reliance on the ‘lower’ layers of the stack grows ever further; squared with the understanding-group dwindle, this bodes not well:


Now that friction is declared the enemy — by whom, may I ask, and with what right do you claim to represent humanity? What track record of representation and wise stewardship would you be able to show ..?? — will Technology be controllable in the near future, or have we (gradually already) lost control, and is Tech (in its widest sense, naturally – huh, accidental contradictio) beyond control in the philosophy-of-science sense? And this getting worse, as Tech developments are beyond human adaptability capabilities? Maybe not for a few, not necessarily outside tech (undercast) – Law #2 of this –, but for the masses. When they can’t follow, adapt, remain in control; who will …?

All this, to be multiplied (outer-product sense) by this (Seth‘s) post on scarcity (if it’s mine, it’s not yours; the physical natural-world reality) versus (?) abundance-through-sharing. The latter hinges on infinite zero-loss copyability. I guess, until copying is found out to lead not to physical quality loss but to ‘value’ loss – like, a secret is worth something to someone, but not when everyone already knows [some FOMO value may remain, but only temporarily]; what if someone made an investment and cannot recoup the least of it? Only attention remains as ‘currency’ then; not something to look forward to. A mix it will be.

Ah. No dystopia at:

[Barcelona harbourfront; if this disturbs you (which can be in a positive, negative, or confusing way), that’s what art is about so it succeeds]

Don’t try this at home, kids

Despite their popularity back in the day, and despite that clearly not being sustainable, I still wonder what happened to this show that portrayed some innovators at today’s common youth++ level of mental maturity.
That gave us hosts of these and those. That was unintentional. And this was recent (??), and a good explanasummary.

The point being: they were ahead of their time. Now everyone has followed suit. In politics, qua composed mindsets. Can we not have The Movie of that, please?

Cheers, mate:

[Interesting. Maybe not (long-term) clever. But interesting. Madrid]

You included what ??

Now that Summertime is upon us, the time is here to consider what you have been up to. And what your New Business Year’s Resolutions are, for when people … sorry, I mean co-workers, return from their well-deserved holidays away from you. So that next round of business activity, you’ll do better.

We need to have a talk.
I mean, qua data processing the new way, as in ML. In the global data swamp, how come good data is hard to come by? Why do we still struggle to get the right data, in sufficient quantities?
[Let alone the biggo problem of integrating the developed system’let into the landscape of old monolithic operational systems, that actually do something.]
We blogged earlier about WEC. Which assumed one has the right data, in the right quantities; only the quality per data point needed tweaking.

But then, what happens when one doesn’t have quite the right data, but a couple of proxies here and there… What will you be doing …?? And what will you do with the outcomes …?
Like, this: Your time series will (sic) have massive distortions unless you watch your step very, very carefully. And this, even worse.

But worst of all … being in a tunnel qua vision, and not seeing the wider context. Then, biases and other total-system distortions like too-weak-proxy errors will occur. These will not only render your efforts much less useful, if useful at all, but will also discredit your work/methodology, and that of all others in the field as well.
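The too-weak-proxy error is easy to demonstrate. A minimal numpy sketch, in which the series, the proxy strength and the noise level are illustrative assumptions of mine: fit a trend to the thing you care about, then fit the same trend to a proxy that only partly tracks it, and watch the estimate come out systematically wrong, not just noisy:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(120)                   # say, 120 monthly observations
true_signal = 0.05 * t               # the trend you actually care about

# a 'too-weak proxy': it tracks only part of the signal, plus noise of its own
proxy = 0.6 * true_signal + rng.normal(0.0, 1.0, size=t.size)

true_slope = np.polyfit(t, true_signal, 1)[0]
proxy_slope = np.polyfit(t, proxy, 1)[0]

print(f"true slope : {true_slope:.3f}")
print(f"proxy slope: {proxy_slope:.3f}")   # attenuated toward 0.6 * 0.05 = 0.03
```

No amount of extra data fixes this one; the bias is baked into the proxy itself, which is exactly why it discredits the whole exercise once someone checks against the real thing.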

So … Are you feeding your quants the right info? Qua context and (relevance of) data? Qua relevance of the system altogether [beyond mere fitness in the IT landscape]?
Reconsider … In the end, They‘ll find the one common factor among all fail factors re your AI project(s), being You.

On the cheerful side:

[With a vast collection of things that also didn’t work (anymore); Edinburgh]

Maverisk / Étoiles du Nord