Blog

That’s what you get …

… when you are too scared of anything new to fully embrace it before it overruns you.
As a pointer that the arms race of deploying AI in security (offense/defense) is almost lost. Depending on which side you're on.

That there was such an arms race brewing, I need not repeat for y’all that paid attention to the many news items about it in the past couple of quarters. From fundamental, policy-level discussions and debate all the way down to operational stuff, the subject has been on the radar for many, on many aspects.
Not that much on arms flipping, by the way, which is happening now in a sort of collateral-damage way. Adding to the loss of other methods there.
And now, also here, in a twist one sees the attack side getting smarter than your defense thought possible.

Wasn’t it that AI would possibly be deployed by both the offense and the defense, and that the first serious entrant would win, winner-takes-all style?
Well, the conclusion seems obvious, qua who’s first out of the gate here. It’s over; get over it.

[ Some may suggest that maybe some agencies have defensive AI stuff that they keep secret, so as not to be outdone by AI yet. No need to remind you that security by obscurity has never worked, and may be turned against the defender – unless the methodology was leaked long before any suitable use, hopefully to the public at large, so that secret counter-attacks kept on the shelf may be prevented. ]

For the moment then, as long as we last:

[Back to analog surfing / sailing, then; NYNY]

The blog of blogs

Blanding down a message to be comforting to all potential readers isn’t my thing. Posts here are meant to tease you from slumber, to opine for discussion and to stimulate thought. This means the language has a distinct style: direct, neither apologetic nor understated, also elliptical and/or long-winded – stopping short of using the correct ‘prolix’, since not many readers may understand the QED in this; not for simpleton readers.
Oh and pics are almost all my own – almost all unedited if you didn’t notice… – and selected to be at least slightly relevant to the post concerned. Maybe.
And, let’s not forget my seriousness: “Let silence be your general rule; say only what is necessary and in few words” (one of my heroes, Epictetus). Yes, isn’t it ironic, don’t you think?
Have fun!

Attack Thee

A major, huge, missing thing in ‘attack trees’ [aren’t they related to access path analysis?] is that they only depict the ‘Opportunity’ part of perpetration, and have nothing on the Motivation or Rationalisation parts (as in this easy explanation). And hey, the latter points at insiders, too, who are so often not to be found in attack trees. Why?
That’s two things that broaden the context to anything realistic. So that, e.g., the following can be applied better:

Which goes way back to the physical realm. Allowing controls to be seen not only as lines of defence [indeed, not the outright stupid kind], but also as being of various categories, for differing purposes. To enrich your protection beyond the merely data-oriented classical (info)sec, which is but an operational subset of what one wants, qua information security in its broader scope for the enterprise; figuratively and literally, when combined with this masterpiece of a method, as rightfully and correctly promoted by this peer.
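To make the point concrete, here is a minimal sketch (all names hypothetical, not any standard attack-tree tool) of what annotating attack-tree nodes with Opportunity, Motivation and Rationalisation could look like – a leaf only counts as a realistic threat when all three hold, which is exactly where the insider branches start to show up:

```python
from dataclasses import dataclass, field
from enum import Enum

class Gate(Enum):
    AND = "and"
    OR = "or"

@dataclass
class AttackNode:
    """One step in an attack tree, annotated beyond mere Opportunity."""
    name: str
    opportunity: bool = False      # classic attack-tree content: can it be done?
    motivation: bool = False       # does the actor actually want to?
    rationalisation: bool = False  # can the actor justify it to themselves?
    gate: Gate = Gate.OR
    children: list = field(default_factory=list)

    def feasible(self) -> bool:
        # A leaf is a realistic threat only when all three parts hold;
        # inner nodes combine children per their AND/OR gate.
        if not self.children:
            return self.opportunity and self.motivation and self.rationalisation
        results = [child.feasible() for child in self.children]
        return all(results) if self.gate is Gate.AND else any(results)

# The insider has plenty of Opportunity, but the other two legs decide realism:
insider = AttackNode("insider copies customer DB",
                     opportunity=True, motivation=True, rationalisation=False)
outsider = AttackNode("outsider phishes an admin",
                      opportunity=True, motivation=True, rationalisation=True)
root = AttackNode("exfiltrate customer data", gate=Gate.OR,
                  children=[insider, outsider])
print(root.feasible())  # True: the outsider branch completes all three parts
```

An Opportunity-only tree would have flagged both branches equally; the annotated one shows where controls on Motivation and Rationalisation (screening, culture, tone at the top) actually bite.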

So, attack trees yes, but why only now, and weren’t you using them already for a long time, implicitly? And if not: how can you ever have given any serious opinion about the Design of the control system (being the opinion of its potential Operating Effectiveness!), let alone its Actual Operating Effectiveness, which is a mere afterthought once the Design and Implementation are A-OK? If either of the latter isn’t tip-top, actual operating effectiveness is theoretically impossible.
Also, include the various cost-of-control figures [introducing one reason you can’t achieve perfection: that would take infinite budgets; throwing out the baby with the bandwidth bathwater], and Time, as in trend analysis and second-order errors therein.
The more detailed your model, the more rigid it will be. The more comprehensive, the more … it may be inexact, but that’s the price of ‘de-modelling’, i.e. making something applicable in reality. Either your model is perfect [into analysis paralysis] OR it makes sense [better roughly right than a rabbit in the headlights].

Well, leaving you with:

[Awww, isn’t it beautiful, even from a late-80s analog pic? Pierre Blanche near Courchevel 1850 (most recommended)]

Is progress still Solid ..?

Yet another aside: How are things with Solid, and why aren’t you onto that yet ..?
Since it’s ever more clearly needed, and wanted (?), and a seriously viable product. Though the (’net) powers that be may not necessarily want it.

Oh well, there’s always:

[Currently live, usage unknown. Not sure this is an improvement in dough though; Albert Heijn Amstelveen]

You do NOT want AI

As seen in some recent waste of power/time by apparent marketing dunces: “… like Neural Networks, Polymorhic [sic] Sensors, Machine Awareness and Automated Data Monitoring. These techniques all use AI.”
OK. If you believe that, you’ll believe anything. Like, elected presidents are better than monarchies – have a look around the world and weep.

The point being,
a. ‘AI’ is what is still outside any machine’s abilities however complex, by definition. All that machines do is ML. Yes, even ‘Watson’ (which is …!?) beating humans at Jeopardy is but a flimsy, pathetically failed attempt at a Turing test. [I don’t mean this one.] The missing part is not even that training is on past data (sic) while the future is, by sheer logic, different from that; the also-missing part, huge, is Random Context. ‘AI’ is still trained in closed environments, including supervision over Right and Wrong outcomes, even if through automated learning from feedback loops. Indeed, not even Good and Evil ..! But then, you haven’t understood Nietzsche, have you? And remember these quotes, relevant when you see it. Returning to the subject: context is still King [heh], and differentiates the Artificial, the Machine Learning, from the I of Intelligence. The latter have it. All, and yes I mean ALL, current-day software is insufficiently context-aware, or, if approaching context awareness [much like we on Earth approach Proxima Centauri – yeah, not much effect, eh?] like in ‘autos’, still much too little so (follow the link and weep).
b. … the latter also points to the second part of the point, being: Intelligence seems to need morals and ethics, which we humans (and your political opponents, whom I shall not consider under this header) seem to have naturally. Right? Not right as a system choice?
c. Don’t know / not applicable / no opinion.

Hence, do you really want AI, or will you be satisfied with ML that takes over all the mundane tasks that bore humans to death? Not like this Boring Company, but like accountancy, where Intelligence may be reserved for the human overlords after all. Yes, you may snicker. But the truth is: your job will be disrupted once it’s rid of the mundane stuff, and hopefully you’ll have developed some superiority over what remains. Which is inherently uncertain.
Hence, you may not want this. QED i.e. I rest my case as in:

[Better align with what goes on here; Startup Village Amsterdam]

Done, with the droning thing.

Well, that escalated quickly.
Only recently, I posted this here thingy that had been lingering for a long time in the back of my mind, and in repeated discussions with various peers. About how things, as in state-of-the-art AI things, converge to bring smart systems to the vineyard.

The post had an open end: how all things put together would not yet exist in full.
Negative time. As per this: the solution, deployment-ready. Sans the microlocal antidote delivery, that is. But we can consider the viability a closed issue.

Yay! There goes my idea of being in the vanguard. But happy that a. I wasn’t a fool, if anyone had noticed .. my posts, b. this may finally get off the ground, and be true innovation helping eco-friendly(er) viticulture.

The happy AVG party commissar

Heard a question about the positioning of the Functionaris voor de Gegevensbescherming [Data Protection Officer]. The answer (from an audience) was roughly: in the second line – you know, as in the 3LoD dud (first paragraph of this, and this). Because, in the terms of ‘the’ audience, the DPO is a tool for management to answer customer questions and suchlike, and no more than that.

Rrrright. For those too chicken to just read the AVG [GDPR] themselves for once. Plus any supplementary case law. Any idea what the concept of Independence entails, or Oversight ..? Or what it means that (not: whether) directors can be called to account when, through their direct or indirect (!) duties, they make themselves personally liable for privacy missteps in the organisation. Yes, the indirect decisions too put them rather close to the chopping block. With the rewards come the risks.

Which makes the age-old perspective of a country somewhat to the East interesting, as a mirror. Think back to how it went: the commissar of the (hence: communist) Party, installed as supervisor to keep all employees of traktor factory number 43 true to the – privacy-utopian – doctrine. With sanctions effectuated from the outside.
The only difference with utopian privacy is that the sanctions may now well turn out to be weak-ish; cost of doing business. And the only other difference is that the traktor factories [yes, of course with a k and not a c! we do keep it historically korrekt!] get to choose their own party commissar, and budget and reward him, and may trade him in. But that’s marginal.
Not marginal is how deep a grip the Law has on such a petty little point. Everything and everyone is guilty until the opposite has been demonstrated, no, proven, with metres of privacy dossiers.

Those who handle this in a more balanced way have understood it somewhat better. But those who don’t know the principles and/or brush them aside will face the most hilarious caricature of the Party. How is the board of the AP doing, by the way; have capable successors to the departed capable directors been appointed yet …?

Nou ja. En:

[Proper protection is the point; Segovia]

Droning in Wine

Not drowning – drink less, taste more.
Sometimes, ideas have to ripen to get the full palette of primary to tertiary flavours. Unlike wine, however, the process for ideas is more punctuated, qua quality. And certainly not the smooth ride any G hype cycle curve might suggest. The suggestion that all subjects on any such curve move at the same speed is incorrect anyway: the time to maturity is in fact indicated, though so seriously obscured that intent may be presumed, so as not to be held accountable afterwards for the imminent misses. Nor is the ride along the curve smooth in time for any subject, whatever the uniformity of the curve that all subjects are pressed onto may suggest. As if all reach the same height of hype.

But that’s beside the point now. What I wanted to mention, is when things get unstuck. Like, I had this idea about
[ drone (swarm or flight pattern) + cameras + visual pattern recognition also outside humans’ vision + some quite trivial machine learning on known patterns ]
equals
[ automated disease detection in their earliest developments, hyperlocally ]
equals
[ hyperlocal antidote delivery by … drones again ].
In itself, OK. When it’s all built robustly enough (weather, handling).
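For the curious, the chain above in a few lines of toy Python. Every function, threshold and ‘NIR stress’ score here is a made-up stand-in for the real components (the trained classifier on multispectral imagery in particular):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    lat: float
    lon: float
    disease: str
    confidence: float

def detect(image_tile: dict, lat: float, lon: float):
    """Stand-in for a trained classifier on drone imagery, picking up
    patterns outside human vision (e.g. near-infrared stress signatures)."""
    # Here: a trivial threshold on a fake 'NIR stress' score.
    score = image_tile.get("nir_stress", 0.0)
    if score > 0.7:
        return Finding(lat, lon, "downy_mildew", score)
    return None

def plan_deliveries(findings, min_confidence=0.8):
    """Turn high-confidence detections into hyperlocal antidote drop points
    for the delivery drones."""
    return [(f.lat, f.lon) for f in findings if f.confidence >= min_confidence]

# One pass over a (fake) survey flight of two tiles:
tiles = [({"nir_stress": 0.9}, 41.16, -7.78),
         ({"nir_stress": 0.3}, 41.17, -7.79)]
findings = [f for f in (detect(t, la, lo) for t, la, lo in tiles) if f]
print(plan_deliveries(findings))  # [(41.16, -7.78)]
```

The interesting engineering is of course in `detect` (the ML part is comparatively trivial, as said above); the rest is plumbing between survey flight and delivery flight.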

But I got stuck in not knowing whether or how this could work.
Along came these great ones.
And these.
And now, all sorts of things come together.
Only need to get these people on board, or their constituents. And work out the details of the potential impact – environmental, but also investment/cost/benefit-wise, and qua impact on labour markets. Dedicated (year-round ..?) and experienced labour may be (very) expensive or not, or available in abundance or the contrary. No business without balance, right?

Oh well. Leaving you with:

[Not your average flat terrain, too; Quinta do whatsitsnameagain oh yeah Vallado, Douro – most heartily recommended!]

MIT Moral Machine: Wrong system?

Guess we all noticed the somewhat-groundbreaking results of the MIT Moral Machine survey around the world, on the trolley problem and how you would have reacted.
But, apart from the also noted possibly severe bias (towards male, highly educated, affluent hence probably more cosmopolitan, and responding in the first place), there is another aspect:

The survey allowed, maybe even asked, for the brain’s System II (re: Kahneman) to kick in. You know, the cognitive, thinking part that includes ethical and moral reasoning. This, demonstrated by the linkage to language, both on the input side and on the output side. Especially the latter would invalidate the results vis-à-vis practice.

Because in practice, like, human practice, and auto-cars (auto-mobile, remember? “autos”!?), only System I is / will be fast enough to respond in time. Hence, don’t ask, but test human responses! And even in some fancy-dancy VR environment, one cannot preclude a hint of reality-disbelief in the back of one’s mind (where the appropriate processing takes place!), uncanny-valley skews, etc.
And then, “I panicked” is a perfect excuse. Will it be for autos? Legally, too ..? Who will have panicked? etc.

Yes, people panic and respond intuitively, too. But wasn’t the auto trained to produce System II results for both System II and System I occasions? If the latter is captured under IF SystemIPanic() Then HandBackControlToCluelessHuman(); – emphasis mine, because of this (1st paragraph) – then translate the pilots’ experience to autos, please:
The more you rely on the auto aspect of autos – and everyone is aiming at Level 5 independence – the more clearly only the ever more distinctly panicky System I decision failures will be pushed to the human, who will still need to be fully equipped in the cockpit (and think of this, too), by the way contra each and every designer’s wishes, at the ever more latest of latest moments;
the less the human will be able to generate System I responses, as these come from experience. So, when (not if) human override is required, failure will be ever more certain. Can one then still blame the human for a. not having had the ability to gain experience properly, b. being called in too late (almost for certain), c. the actual actions taken, which transpire, afterwards, to have been an ethical decision? Or will HandBackControlToCluelessHuman(); mean: GiveUpAlreadyAsTheHumanWillAlsoBeClueless()?
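A toy simulation of why that handover is doomed, with entirely made-up numbers: the auto keeps every case it can handle, so the human only ever sees the hardest ones – while the practice hours that System I skill is built from evaporate with rising autonomy:

```python
import random

def auto_can_handle(difficulty: float, auto_skill: float = 0.95) -> bool:
    """The auto resolves everything up to its skill level itself."""
    return difficulty <= auto_skill

def human_skill(hours_of_practice: float) -> float:
    """System-I responses come from experience; made-up curve where skill
    grows with practice hours but saturates just above the auto's level."""
    return min(0.98, 0.3 + 0.1 * hours_of_practice)

def simulate(n_events: int, practice_hours: float, seed: int = 42):
    """Count how many events get handed back, and how many of those fail."""
    rng = random.Random(seed)
    handed_back = failures = 0
    for _ in range(n_events):
        difficulty = rng.random()
        if not auto_can_handle(difficulty):
            handed_back += 1  # human only ever sees the hardest slice
            if difficulty > human_skill(practice_hours):
                failures += 1
    return handed_back, failures

# Same event stream, experienced human vs. de-skilled human:
hb, fail = simulate(10_000, practice_hours=7)
print(f"experienced human: {fail}/{hb} handovers failed")
hb, fail = simulate(10_000, practice_hours=0)
print(f"de-skilled human:  {fail}/{hb} handovers failed")  # fails every single one
```

The de-skilled human fails every handover by construction: whatever the auto couldn’t handle is, for them, even further out of reach. Which is the whole point: handing back only the residue selects for exactly the cases the out-of-practice human is worst at.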

OK. The future is bright…? And:

[Just your everyday rush hour in Baltimore]

Maverisk / Étoiles du Nord