… Yes ..? The laudable efforts

Somehow, I’m unsure about this: what is presented as a [laudable and] serious effort now gets a humorous twist where it also lists this, which wasn’t funny.
Let’s not forget this, even.

And does Norway have this ..?

Hey people …! When one wants to make fun of international affairs, let’s stick to the lesser issues like global warming or the plastic soup.

And:
[Bet you don’t have clubs like these in nor-way, eh? Proud wearer.]

Magical cars

Yes, I’m back on the magic of ‘self-driving’ cars again. Like here, latest, and many times before. Where the focus has been on the Supply side a lot (as here, 1:04), there’s of course also a Demand side. Where things (also) aren’t in blissful, ex-ante-blameless expectation for the new Auto’s.

As is clear from the following, a blatant plagiarism from Seth Godin’s unsurpassable blog:

Magical technologies

Cars are many times more dangerous than airplanes. More dangerous per mile, more dangerous to bystanders, more dangerous in every way.

And yet there are very few people who say that they are afraid of being in a car. And yet we spend a fortune on the FAA and more than $8 billion a year in the US on the security theater we do before every flight. We carry life jackets on planes even though they’re needed about once every 33,000,000 flights.

That’s because flying is magic and driving is just riding a bike or a horse, but with a motor.

The challenge of the self-driving car isn’t that it’s a car with no driver. Actually, the self-driving car is an airplane with wheels.

Magical technology.

When a magic technology (one that we don’t believe we can understand) arrives and it feels like life or death, our instinct is to freak out, to make up stories, and to seek reassurance. Vaccines have had this challenge for generations. Because they’re long-lasting, involve a shot and feel like magic, we treat them totally differently than the unregulated market for placebos and patent medicines, regardless of their efficacy.

If you’re lucky enough to invent a magical technology, be prepared for a long journey. Decades ago, I worked with Isaac Asimov and Arthur C. Clarke on two different projects. Asimov was truly embarrassed that he was afraid to fly. And Clarke was famous for saying, “Any sufficiently advanced technology is indistinguishable from magic.”

What he left out was, “Magical technologies that involve media-friendly disasters are the hardest ones to sell.”

To which one only needs to add: Who has the need to sell ..? Or make it so that you don’t need to sell (much)?
And one needs to add:

[If only these were available at 1000 km ranges, electric]

The battle: Fairness contra ‘GDPR’

Recently, there appears to be a bit of a hubbub about the fairness that would not be in AI systems’ outcomes. I know, I know, that’s a vast understatement, and a major branch of professional (~ ethics) development in the AI arena in general.

To which the venerable (heh) Jad Nohra added this: a simple … but correct and large part of the puzzle. Not all of the puzzle, as e.g. this also is part of it [unfairness in life gets reflected, and any bias correction will be parti pris in itself, and as such very time-bound and (extremely) discriminatory when seen over more than the shortest run].
But indeed, companies are too cheap to deal with all the variety that life brings. Or is about, even. Diversity as societal riches, of (course! of) the non-monetary kind, but that’s obvious.

Now, the point of this post: Is it only cheapness of companies (and public organisations that through their very purpose should’ve known better), or … doesn’t it help that we (the world) have that GDPR thing ..? Because the latter has in Chapter 2, Article 5 sub 1(c), that blessed remark about data minimisation (with an s not z of course; a good exposé is here).

Which means that the above ‘companies’/organisations are actually doing it right … to a certain extent.
Or … to what extent? If one were to expect that, because of some parameters, one would chance to be binned in some ‘circumspect’ category and hence denied equal treatment, one would want to fill out additional exculpatory fields or plaintext in an application. Assuming one would be allowed to do so through having the field(s) available, and possibly (sic) even processed, in the first place.
This would in itself be a case of qui s’excuse, s’accuse – or: where does the cycle end?

So, asking nothing superfluous – from a design perspective with efficiency in mind, over possible unfairness towards disenfranchised applicants as externalities – fits with GDPR nicely, and does efficiently what is expected of e.g. commercial systems that need to minimise risks overall, not just qua unfairness.
Asking about all that could potentially be relevant, either for the core purpose XOR (??) for fairness, may be a GDPR-noncompliance burden on the great many that had no need to fear the biased-system monster. And it still might not achieve the unbias one seeks [also, the above/here link].
Result: be somewhere in the middle. Aristotle’s Middle, not just halfway, you m.r.n …
Which in itself is a Question.

Transparent explainability may be a step in the right direction.
Who will attest to the correctness of the transparency provided [in times of current US ‘leadership’ qua perjury, anything might go ..?], and how will a sound and firm basis for such opinionation ;-/ be obtained?
Accountants have: a. experience with comparable ‘opinionation’; b. a strictly blinkered financial focus (and experience), even with environmental reporting etc.; c. no (sic) education on, tools for, or experience with anything approaching ethical reasoning – the latter a fact despite some possible sputtering otherwise.
Ethicists, also not: a. they have a sense of what they’re dealing with; b. but zero (sic) practical life experience included, hence; c. they keep on debating grave issues without ever reaching a definitive conclusion.
At least accountants can be given credit for delivering a final ‘one’liner statement that is such a conclusion. Even if they so often (yes) miss the mark, at least they still dare to state their opinion. But hey, watch the caveats attached; in the end they got burned so very often that they daren’t say anything anymore.
I’m digressing.

What more would we need? Transparency, either system-ically or on each individual case, only goes so far; it is an after-the-fact verdict. Balance built in … but how? Balance bolted on (input, system, output sides as candidates) … but how?
Frameworks, rules (of thumb / yardsticks) of fairness to all stakeholders, including businesses and their managers that don’t get paid to be fair, just profitable [however unfortunate that hard fact] – multivariate game play / ‘Pareto’-like optimisation seems to be too complex to get into the heads of even the most advanced minds [with only a slight overlap with the managerial class (of which ‘executives’ are a full subset)], so we’d need a System to do Monte Carlo or other simulation (a toy sketch below) … *sigh*; even more complex-to-understand systemmery.
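To make that trade-off tangible, a minimal Monte Carlo sketch in Python – the population, the scoring model and all numbers are my hypothetical inventions, not anyone’s actual system: sweep a decision threshold and watch profit and a group-parity gap peak at different points, so someone has to pick a spot on the frontier.

```python
import random

random.seed(42)

# Hypothetical toy population: (group, observed_score, truly_repays).
# Group B's observed scores are shifted down to mimic bias already baked
# into the data, while actual repayment depends only on underlying merit.
def make_applicant(group: str, shift: float):
    merit = random.gauss(0.6, 0.2)
    return (group, merit - shift, merit > 0.5)

applicants = ([make_applicant("A", 0.0) for _ in range(5000)]
              + [make_applicant("B", 0.1) for _ in range(5000)])

def simulate(threshold: float):
    """One point in the trade-off: approve above the threshold, then
    measure profit (business axis) and the parity gap (fairness axis)."""
    approved = {"A": 0, "B": 0}
    profit = 0.0
    for group, score, repays in applicants:
        if score > threshold:
            approved[group] += 1
            profit += 1.0 if repays else -2.0  # a default costs more than a sale earns
    gap = abs(approved["A"] - approved["B"]) / 5000
    return profit, gap

# The 'Pareto'-like sweep: profit peaks around the middle thresholds,
# the parity gap shrinks at the low end - no single point wins both.
for t in (0.2, 0.3, 0.4, 0.5, 0.6):
    profit, gap = simulate(t)
    print(f"threshold {t:.1f}: profit {profit:8.0f}, parity gap {gap:.2f}")
```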

Just a suggestion: would it help to require a two-step process for any ‘AI’- or algorithm-driven decision ..? Whereby the algo categorises first, and then the ‘victim’ gets a chance, nay is obliged, always to elucidate pro … or contra, non-trivially but measured to the outcome. [A snag in the line of reasoning there, but not unassailable, I think.] Exculpatory data only entered when needed; otherwise, minimised to GDPR’s desires. This, beyond the EU idea of requiring reasonably serious manual intervention in algo-driven decisions.
Whatever. The two-step idea, at least; a sketch below.
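What the two-step idea might look like, as a minimal sketch in Python – the field names, the toy binning rule, the NeedsExculpatoryInput signal and the review callback are all my inventions for illustration, not anyone’s actual system:

```python
from dataclasses import dataclass
from typing import Callable, Optional

class NeedsExculpatoryInput(Exception):
    """Signal that step two requires the applicant's own elucidation."""

@dataclass
class Application:
    # Step one sees only minimal, purpose-bound fields (GDPR Art. 5(1)(c)).
    income: float
    debts: float
    exculpatory_note: Optional[str] = None  # requested only when needed

def categorise(app: Application) -> str:
    """Step one: the algo bins on minimal data; a made-up toy rule."""
    ratio = app.debts / max(app.income, 1.0)
    return "circumspect" if ratio > 0.5 else "clear"

def decide(app: Application, review: Callable[[Application], bool]) -> bool:
    """Step two: exculpatory data is collected only for adverse bins."""
    if categorise(app) == "clear":
        return True  # approved on minimal data; nothing superfluous asked
    if app.exculpatory_note is None:
        # Only now is the applicant allowed, nay obliged, to elucidate.
        raise NeedsExculpatoryInput("please elucidate, measured to the outcome")
    return review(app)  # non-trivial (e.g., human) intervention on the note
```

The point being that the great many in the ‘clear’ bin never hand over a byte more than the core purpose needs, while the ‘circumspect’ few get their chance – and obligation – to elucidate before any final verdict.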

I’ll go abend handler now, with:

[The Whatsitcalled building; München]

An ocean of clean, cooling-wise

Just wondered why this wasn’t much, much bigger in the news than that other environment thing with the plastic collection.
This SoundEnergy thing being about cooling with heat. Where we seem to have quite a supply of heat around the world, especially in the longer term. And yes, it seems to work; units have been deployed, and even Forbes magazine had a favourable review.

If this works at larger scale too … then we have a Solution of massive proportions. And think of the event of having an actually useful implementation of something of the Stirling engine principle(s) ..! A big performance, and a huge one to add.
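For intuition only – my illustrative numbers, not SoundEnergy’s specs: an ideal heat-driven cooler is bounded by the product of a Carnot engine’s efficiency and a Carnot refrigerator’s COP,

```latex
\[
\mathrm{COP}_{\max} \;=\; \left(1 - \frac{T_a}{T_h}\right) \cdot \frac{T_c}{T_a - T_c}
\]
```

So with, say, T_h = 450 K of waste heat, T_a = 308 K ambient and T_c = 288 K of desired cold, the bound comes to about 0.32 × 14.4 ≈ 4.5. Real thermoacoustic machines land well below that, but the principle stands: heat in, cold out, no compressor.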

Yes this needs more publicity, funding, client orders, etc. …

Oh, and:

[Before the Netherlands’ North Sea coast looks like this, the Netherlands’ Curaçao Westpuntbaai…]
[Sorry for the analog pic; my own from when pixels weren’t a thing but analog was]

Orgs don’t give a hoot about security [or privacy, or (ops) risk management]

Case in point: have you ever seen a job (opening) profile for a ‘first line’ position where awareness and active promotion towards staff of information security and privacy was listed at all ..?

Now there are two Buts [with not two but (sic) one T] that you will inevitably come back to me with, and I’m not even going to use <ol>:
1. “… But those are not even tasks of the first-line manager” – you see how wrong that is when I write it out for you. If your org doesn’t have infosec and privacy at the core of each and every first-line manager’s tasks, your org will not only be completely non-‘compliant’ with any standard or requirements [since these all have awareness and control in the first few chapters that define what needs to be done; the simpleton rules in the back are always just examples to pick from], but will also fail big time, as the vastly major part of your infosec / privacy will not be controlled – and the outside attackers (acts of man, acts of nature) will be in control.
2. “… But other ‘staff’ departments also don’t have their requirements in the job profile” – Yes they do. Nothing on ‘cooperative attitude’, ‘works well with people’ …!? Those are to-control requirements from the HR department, or what ..? Managing budgets not a thing from Finance? Working within industry compliance requirements not being from Compliance ..? And the list goes on and on. But Infosec, or Privacy – not so much.
Now, if your organisation would care for infosec+ in a proper way and would actually want to do what needs to be done, the first 80% would be with … first-line ‘management’, to take execution into their realm. Which only happens if one hires the right people, who understand and execute along their job requirements. When there’s no infosec in those, nothing will be measured, and only what gets measured gets done [yeah, yeah, excepting the precious few shiny exceptions; those are Leaders, not managers – of the former you have only a handful, and of the latter a vast mass drowning out the former qua effectiveness].
QED

Now what ..? Well, obviously: put infosec/privacy awareness and control into each and every job profile / requirements. It’ll take some time [in which you can start to train current ‘managers’ in the frivolous art of it], but then, in some future, your organisation will be the better for it. Now Go.


Oh wait! There’s ops risk, too! For that, watch/listen to this short clip. The venerable Mr Sidorenko gives most helpful pointers, not only about the above (two minds thought in parallel ways) but also on the actual, effective implementation of both.

Better to use an overall, integrated re-organisation of job descriptions. Aligning the above with all that has been collected [collectible, big-if one paid attention] over the past 5 or 6 decades of management studies and necessary improvements, which as yet haven’t made a dent in the wholesale-outdated military-command-and-control style of management prevalent since WWII. But the latter has gotten outdated – the military (grand) strategies it was found to be effective for changed (again) decades ago already, and some militaries have changed, a bit, but not much. The new style of management should in a way revert to normal, to pre-Taylorian / Scientific-Management days – the better half thereof, not the atrocious dictatorial half that was Industrial Revolution type extortion.
Then, from there, build a new management strategy. One that fits with current-day complex society, with its complex interrelations of shifting panels of cooperating nuclei of near-cooperative efforts, à la the original theory of the firm. That has been tried before, and failed due to a premature [for the times] launch, but now stands a better chance than anything.

Where this leaves the incumbents … well, there are places for them. Big-if (again) they’re flexible enough to landslide to looser-coupled, less-controlled (!!, in a COSO sense), higher-risk structures. Think of the unsurpassed Jay Galbraith in Organization Design: An Information Processing View of, yes, 1974! As here. Hey, the Man even has interesting notes on organisational response to the Internet, which not many had heard of when he wrote [the last page of] this. Has anyone ever done a proper analysis of its merits for today’s Questions, and/or gauged where org developments may have gone too far, hence where a backtrack on certain Galbraith’ian developments (after which, re-development in other directions ..!?) may be helpful?

Letting the original idea’lets get out of hand a bit, I’d better rest my case now, with:

[For no specific reason whatsoever, like instruction or so; from Bouchard Aîné & Fils indeed – hope they don’t mind the ©-thing]

Deaf sec oops

Is DevSecOps a thing yet?
Maybe. It is out there. With a laudable purpose (here) that, however, is not very well reflected elsewhere (on that site). When one refers to Krav Maga (as here), is one unaware that that thing is known mostly, if not exclusively, in a few hardcore infosec circles and not many places else ..?

As said: laudable. But let’s stick to the DevOps (+ ‘Agile’ dev ..!) world to embrace sec. May I add a want for the inclusion of audit hooks ..? [At procedural levels, too; a sketch below]
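What such an audit hook could look like at the procedural level – a minimal sketch in Python; audit_hook, the JSON fields and the ‘deploy-to-prod’ step name are hypothetical illustrations of the idea, not any DevSecOps standard:

```python
import functools
import json
import logging
import time

log = logging.getLogger("audit")  # route this to an append-only audit sink

def audit_hook(step_name: str):
    """Wrap any pipeline step so that what/when/outcome gets recorded."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {"step": step_name, "start": time.time()}
            try:
                result = fn(*args, **kwargs)
                record["outcome"] = "ok"
                return result
            except Exception as exc:
                record["outcome"] = f"fail: {exc}"
                raise  # never swallow the failure, just record it
            finally:
                record["end"] = time.time()
                log.info(json.dumps(record))  # the trail auditors can hook into
        return wrapper
    return decorator

@audit_hook("deploy-to-prod")
def deploy(artifact: str) -> None:
    ...  # the actual Dev(Sec)Ops step; the hook doesn't care what it does
```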

Meanwhile, NL ‘suffers’ from a bad winter dip:

How long … driving decision time / from data driven to driven by data

How long will it take, until driving is not (only) of the ‘auto’ kind [if I’m in a self-driving car – the very definition of the term ‘automobile’ – I will be the one driving; I will be the only one with a Self], but some industry can return to its technology testbed front ..?

Seriously; at what point will teams in the sport decide their drivers do more damage than good ..? This is just the beginning of course. From data driven to driven by data.

At what earlier point will ‘self-driving’ cars be allowed to enter the races? At first, they’ll be at the back, driving too carefully. Then, some move up the field, driving a human off course at every stupid decision. #disqualified. Then, they move into the lead. Not fair, with their superhuman responses! Or … at what point will random obstacles appear on track, to throw off the auto’s and profit from human responses (still), this being sold as the way in which the software may learn to deal with real-life driving conditions?

Has anyone ever done a study into this ..? If you’ve seen something, say something: yes I’d like to hear from you.

Bonus:
Scarabées Rhinocéros, 1897-1899
[Not your average trophy; Musée Lalique, Wingen-sur-Moder]

AI into the undermaincurrentstream

To the surprise of no-one – who took care not to listen to mediahumping ‘pundit’fakes – AI seems to be coming into the Bashing phase. That’s not in the Hype Cycle, which despite its name and (‘cycle’-)implied continuity has only two phases but three points. But you get it: the Bashing phase is the downward slope. Now so declared by me.
And that’s where AI now gets into. E.g., Dutch readers may read here. And you can read it anywhere, actually. Bashing, i.e. calling quits on the hype and mirroring it in anti-hype.

But just because attention takes a nose dive doesn’t mean ‘nothing’ happens anymore, or that ‘nothing’ is or will be successful. On the contrary; despite the hype (sic) being unsustainable – not because there’s no value but because that’s how hypes work – we will now get into the undercurrent development phase where, without much notice, AI systems will enter production in a most inconspicuous way, everywhere. “The impact of new technology is overestimated for the short run, but underestimated for the long run” goes here more than ever (well…) before. Point being: all the books you read [I take it that you did; otherwise you might as well not take part in the discussion, b/c we too don’t like you to be found out to be an empty vessel] don’t project the changes they describe (mostly not so benign, as here (of 2016 already) and many other deep dives) to be realised even in a couple of years. Which in these times would already be the ‘long run’, maybe. How ‘many’ years did the iPod/iPhone (..!), or Fubbuck, or Alphabet, etc., take to break through and grab everyone’s attention and enslavement in data/profile shackles?

Until AI is mainstream again, having crowded out all other ‘solutions’ to problems you didn’t even know you have today. Then, it’ll be unstoppable. See the litt.
[Not Louis – also not at DunLewis, I mean Ludwigsburg, Baden-Württemberg:]

Maverisk / Étoiles du Nord