Not sure … about the mix of AI, privacy values, ethics, Big Data

Recently, I was informed about this. With the blueish table sparking a recall (the way the brain does) of this and, in particular, this [downloadable here].
The latter in particular being about how ‘privacy’ as an issue depends on its definitions – both formally and emotionally.

I   s u g g e s t   y o u   s t u d y   i t   i n   f u l l   d e t a i l – yes, that’s a lot of spaces, but definitely worth it. The study, I mean.

Now, with the inroads made by Big Data (i.e., mundane profiling, now with greater tools for [towards] greater fools), and this being turned into ‘AI’ quod non, we need clarity more than ever.
The Internet has just too small a margin to scribble down my proof – I’d say not so much proof as rambling ideas, but I have a paper coming up in Jan about just this subject …
Yes, the promised Quantifying Privacy is just around the corner, of sorts.

Do read on, here, though. And:

[No empty glass, please, but a muid will do; Haut Koenigsbourg of course]

There we have it; botcracy

As we turn the leaf towards a new year, let’s not forget what values – in operation, operationalised – protect our Human Rights, in the form of de-mock-racy, and how they are ever so quickly being repelled by, e.g., AI and fake news but in particular, the deployment of bots as here.
Yes, I know, that’s three layers of tools; still, the focus is on the first two, though the latter plays almost the foulest role.
Yes, I know, the ‘operationalised’ part may need elucidation on the side of ‘transparency’, ‘access and inclusion’ etc., but when you read what’s behind the link, you’ll understand that the issue is society-wide, not just FCC / net-neutrality.

Well, that was a quickie… hence:
[München, for zero (as in: 0.0) reason]

Unsider threats

How is it that we tend to hear over and over again about ‘insider’ threats ..?
Even when it’s not the Board that is meant here – as the pinnacle of … the ability to drive a company into the ground – those pesky ‘insiders’ really are a pain in the place where you like that, or where the sun doesn’t shine.
Better get rid of any and all of those ‘insiders’ then, eh ..? AI, here you come. But if AI systems were to be a replacement for humans, wouldn’t they commit the same temporary, small, innocuous and unconscious lapses of judgement..?
And what about off-boarding the biggest threats first ..? [Where I do mean the above committee]

Maybe better to recall that we’re about to celebrate the fifteenth birthday [was there ..!] of deperimetrisation – with an s once you recognise its country of birth, and disclaim an all-out stupid Jabba the Hutt style claim of origin so no z’s anywhere – who’s an -sider when there’s no way to tell ..?

Also, it vilifies the underlings that make your salaries and bonuses; so, if you punish them (by giving them less reward than yourself), they don’t get mad. They get even. Simple. You gave them the tools – no, you made them build their own tools – ostensibly to keep you afloat, but you wouldn’t recognise a buoy from an anchor, so guess what you get… And when you’ve lost them, they aren’t much of the insider you’d want, right; morally, they’d be on the outside again already.
Case in point: this miss by the venerable @HarvardBiz … Though the solutions offered are valid – as very-first starting points…

So Part I – Ω is to treat your underlings like you care. If – big if – you actually mean it (hence will not in an instant be found out to be a fraud at this), you’re saved for now. Otherwise, any fight against <whatever>sider threats will be futile. Remember this ..? You get treated like you treat; it starts with you – your intentions towards the other will be perceived. Positive/negative, the choice is yours.

Oh well. Plus:
[Some light’s also good for the inside; Utrecht (1924 ..!)]

Gee… DPR on Profiling

This again about that pesky new legislation that just won’t go away, not even before it becomes legally-effectively enforced. [As you know, the thing has been around already for a year and a half, but will only be enforceable, in pure theory, per upcoming May 25th. Your mileage may (huh) vary greatly – when Risk = Impact x Chance [don’t get me started on the idiocy of that, as here, of 2013, Dec 5th – gift time!], the Chance is Low of Low and the Impact can easily be managed down. Legally, yes; don’t FUD me, that will be the truth, the whole and nothing but it. So it will be legally effective but not in any other sense, let alone practically.]
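For what it’s worth, a toy sketch (Python; figures entirely made up by me, not from any regulator) of why that formula invites exactly such managing-down – two utterly different exposures land on the very same score:

```python
# A toy sketch (hypothetical figures) of the objection: multiplying
# point estimates flattens very different risks onto one number.

def risk_score(impact: float, chance: float) -> float:
    """The classic Risk = Impact x Chance point-estimate formula."""
    return impact * chance

# A near-certain nuisance versus a near-impossible catastrophe:
nuisance = risk_score(impact=1.0, chance=0.9)
catastrophe = risk_score(impact=9_000.0, chance=0.0001)

print(round(nuisance, 6), round(catastrophe, 6))  # 0.9 0.9 -- same score
# Identical scores, so whoever 'manages down' either factor on paper
# (Chance: Low of Low; Impact: easily managed down) wins the argument
# without the underlying exposure changing at all.
```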

For those interested, there’s this piece on Profiling. That has, on p.16 last full para (‘systems’ that audit ..!?), p.19 3rd para from the bottom (“Controllers need to introduce robust measures to verify and ensure on an ongoing basis that data reused or obtained indirectly is accurate and up to date.”), p.30 in full, and many other places, pointers towards … tadaaa,

Auditing AI

with, here, AI as systems that process data – as close to ‘systems’ in the cybernetic sense as one may get, even when it takes the full-swing, wormhole-distance turn of the universe-consisting-not-of-energy-but-of-information to abstract away the difference between info and data.

Where I am developing that: auditing of AI systems as a methodologically sound thing. And I do invite you to join me, and bring forward your materials and ideas on how to go about that. Yes, I do have a clue already, just not the time yet to write it all up. Will do soon [contra Fermat’s marginal remark].

Oh, and then there’s the tons of materials on how anyone (incl. corporate persons) will have to be able to explain in plain terms (i.e., addressing the average or even less clever) how their AI system works…

So, inviting you, and leaving you with:
[What corks are good for, well after having preserved good wine – decoration. Recycle raw materials, don’t re-use data! Ribeauville]

Aïe! Missing the Point

Yet again, some seem not to understand what they’re talking about when it comes to transparency in AI…
Like, here. Worse, this person seems to be a rapporteur to the European Economic and Social Committee, advising the European Commission. If that sounds vague – yes, it does, even for Europeans.

For the ‘worse’ part: the umpteenth Error, to consider that the secrecy of algorithms is the only thing that would need to change to get transparency about the functioning of a complete system.
1. The algorithm is just a part of the system, and the behaviour of the system is not determined in anything close to any majority part by the algorithm – the data fed to it, and the intransparent patterns learned from it, are. The transparency needs to be about the algorithm, but much more about the eventual parameters as learned throughout the training time and the training/tuning after that; see the sketch after this list. [Update before press release: there seems to be an erroneous assumption, by some way too deep into EC affairs, that the parameters are part of the ‘algorithm’ – which is Newspeak at its worst, counterproductive certainly here, and hence dangerous.]
2. The algorithm can just be printed out … if anyone would need that. One can just as easily run an AI code analyser (how good would that be? They exist already, exponentially increasing their quality and savviness) over the source or decompiled code.
3. The eventual parameters … not so much; they’re just there in a live system; unsure how well they are written out into any file or so (they should be, for backup purposes – when, not if, AI systems get legal personhood eventually (excepting the latter-day hoaxes re that), will a power switch-off be the same as attempted murder, and/or what would the status of a backup AI ‘person’ be ..?).
4. Bias, etc., will be in the parameters. The algorithms, mostly-almost-exclusively, will be blank slates. No-one yet knows how to tackle that sufficiently robustly, since even if the system is programmed (algorithm..!) to cough up its parameters, the cleverer systems will know (?) how to produce innocent-looking parameters instead of the possibly culpable actual ones. Leads into this sort of trickery have been demonstrated to develop (unintentionally) in AI systems in actual tests.
5. How to trick AI pattern recognition systems … the newest of class breaks has just been demonstrated in practice – its theoretical viability had been proven long before – e.g., in this here piece as pointed out only yesterday [qua release; scheduled last week ;-]. Class break = systemically unrepairable … [ ? | ! ].
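To make point 1 (and, in passing, point 3) concrete: a minimal sketch, assuming nothing but a made-up toy linear model – no real system’s code or parameters, obviously:

```python
# A minimal sketch of point 1: the 'algorithm' is a few printable
# lines; the behaviour sits in the learned parameters, which is
# where the transparency would actually be needed.

def decide(x, weights, bias):
    """The entire 'algorithm' -- a linear threshold rule, trivially printable."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return "approve" if s > 0 else "decline"

applicant = [1.0, 0.5]  # some feature vector

# Two parameter sets, as if from two training runs on different data:
params_a = ([0.9, 0.4], 0.1)     # (weights, bias) learned in run A
params_b = ([-0.9, -0.4], -0.1)  # (weights, bias) learned in run B

print(decide(applicant, *params_a))  # approve
print(decide(applicant, *params_b))  # decline
# Same published algorithm, opposite decisions; and per point 3, nothing
# guarantees these parameters are ever written out for inspection.
```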

Let’s hope the EC might get irrelevant just that little bit less quickly, by being provided with sound advice. Not the bumbling little boys’ and girls’ type of happy-the-preppy too-dim-wits. [Pejorative, yes, but not degrading: they should, maybe not could, have known better ..!]

Oh, and:
[Already at a slight distance, it gets hazy what goes on there; from the Cathédrale]

Stop dads

When you read too much (ahead) into it…
By means of this court ruling, where a father was forbidden to post pics of his (was it 2- or 3-yr-old) son on Facebook or any other socmed platform, at the request of the kid’s mom, since even when the father posted in quite tightly closed circles, Fubbuck has in its terms and conditions that it might use the pics for commercial purposes. Since the latter cannot be ruled out, and in the interest of protecting the child’s interests, the court ruled such, advising the dad to show the pics to friends on his home compu if he’d really want to.
[What need would the father have to do that? one can ask. Benign or perverted?]

From which we learn – if, very very big if – that indeed we should consider the need and purpose of posting on socmed in the first place. If it is content that one wants the world to see, it’s OK. If some part of that content, or the purpose of the post, would not be OK → get out. If the purpose of the post would be to show off (e.g., one’s cool-dadness – pitiful! but see how the other 99.999% of posts anywhere are for that purpose and that alone…), really nothing may cure you (sic).

So now, what about this post …? And:
[Since it’s no longer the site banner: Rightfully and intentionally out in the open; Barça]

Loss of memory

Recently, I was reminded (huh) that our memories are … maybe still better than we think, compared to the systems of record that we keep outside of our heads. Maybe not in ‘integrity’ terms, but in ‘availability’ terms. Though there, too, some unclarity whether availability regards the quick recall, the notice-shortness of ‘at short notice’, or the long-run thing where the recall is ‘eventually’ – under multivariate optimisation with ‘integrity’ again. How ‘accurate’ is your memory? Have ‘we’ in information management / infosec done enough definition-savvy work to drop the inaccurately (huh) and multi-interpreted ‘integrity’ in favour of ‘accuracy’ – a thing we actually can achieve with technical means, whereas the other intention oft given, of data being exactly what was intended at the outset (compare ‘correct and complete’), we cannot – or do I need to finish this line that has run on for far too long now …?
Or have I used waaay too many ‘’s ..?
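To pin that down – a minimal sketch (hypothetical record; Python’s hashlib) of the gap between the ‘integrity’ we can achieve with technical means and the accuracy that was intended at the outset:

```python
# A minimal sketch (hypothetical record) of the distinction: a hash
# check proves the bytes didn't change, not that they were accurate
# -- exactly what was intended -- in the first place.

import hashlib

record = b"year_of_birth: 1942"   # entered wrongly; should have been 1924
stored_digest = hashlib.sha256(record).hexdigest()

# Any time later, the technical integrity check passes with flying colours...
assert hashlib.sha256(record).hexdigest() == stored_digest
print("integrity: intact")
# ...while 'accuracy' (compare 'correct and complete') was never there,
# and no amount of hashing will surface that.
```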

Anyway, Part II of the above is the realisation that integrity is a personal thing, towards one’s web of allegiances, as per this; and that in infosec we really need to switch to accuracy. And Part I is this XKCD:

The dullness of infosec ..?

And you thought fraud detection was about bank transactions, or even counterfeiting physical stuff. Boh-ring, when you read this. Takes it to another level, eh?
Which brings me to an important issue: are we not still studying and practising infosec from the wrong angle, doing a middle-out sort of development in many directions but starting at a very mundane ‘CIA’ sort of point ..? Which is of course core, but there is so much to cover that some outside-onto view(point) might be beneficial. We’re in the thick of the fight, and no matter in which direction you go, when you wade through the thicket with your control-measures machete, you achieve little – when you then turn around to try to clear some area in another direction, all has grown dense with state-of-the-art arms’-race bush again already.
And yes, of course one can educate, etc., in some form of hierarchical approach, top-down. But that leaves us with many, all too many, that float comfortably on the canopy where the view … isn’t that great, as one’s very certainly in the thick fog of the monsoon rain. And nothing is being directed (ugch) deeper down. Or controlled (?). Just more, and most partial, world views, unconnected and behaving erratically.

The e.g. in this is that link above. A tiny subset of situational scenarios. Not solved pervasively, once and for all. Now think about the hugely, vastly, enormously wider scope of ‘all’ of infosec that would need to be covered, to a. arrive at sub-universes of control, b. achieve overview.

The latter remains Open.
Me not happy.

Solutions, anyone ..?

Oh, plus:
[Ah! The days when this sort of ‘defence’ was enough to conquer! Alésie of course]

Less-than-containerload shipping

When one would be interested in keeping up with what’s happening, and where future class breaks might be, a nice intro would be this little book. Like, when virtual machines came to the fore, it was declared that this would be a solution because of course the VMs would be impenetrable. Declared by the utterly clueless, since it was the stupidest thing possible in infosec to say. Though it took some time to show the real value (positive) net of the risks (that indeed showed up…). With this subject, the same will happen. Future fact.

Oh and the post title just refers to shipping single pallets across the big pond, e.g., for these. Groupage, degroupage, forwarders, stewards, you know. The old, still there. And:
[Pro question: Beaune or Dijon ..?]

Trust ⊻ Verify

You get that. Since Verify → ¬Trust. When you verify, you engender the loss of trust. And since Trust is a two-way street (either both sides trust each other, or one will lose initial trust and both will end up in distrust), verification leads to distrust all around – linked to individualism and experience [we’re on the slope to less-than-formal-logic semantics here], this will result in fear all around. And cf. Michael Porter’s two books, not to mention Ulrich Beck’s important one. So, if you’d still come across any type that hoots ‘Trust, but verify’, you know you’ve met him.
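For the formally inclined – a minimal truth-table sketch (Python; my rendering of the claim, not a full formalisation) of that implication:

```python
# A minimal sketch of the logic above: given Verify -> ¬Trust, the
# 'Trust, but verify' row (both true) is exactly the one ruled out.

from itertools import product

print("trust  verify  Verify->¬Trust  trust⊻verify")
for trust, verify in product([True, False], repeat=2):
    implication = (not verify) or (not trust)  # Verify -> ¬Trust
    print(f"{trust!s:6} {verify!s:7} {implication!s:15} {trust ^ verify}")

# Only (True, True) violates the implication: trust and verification
# cannot coexist. Strictly, the implication alone gives ¬(Trust ∧ Verify);
# the title's ⊻ adds that at least one of the two obtains.
```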

Since the above is so dense I’ll lighten up a bit with:
Part of the $10 million I spent on gambling, part on booze and part on women. The rest I spent foolishly. (George Raft)

Which is exactly the sort of response we need against the totalitarian bureaucracy (i.e., the complete dehumanisation of totalitarian control – which approaches a pleonasm) that the world is sliding into. Where previously humanity had escapes, like emigrating to lands far, far away – but that option is no more. Hopefully, ASI will come in time not to be co-opted by the One Superpower; but then, two avenues remain: a. ASI itself is worse, and undoes humanity for its stubborn stupidity and malevolence (the latter two being a fact); b. ASI will bring the eternal Elysium.
Why would it b. ..?
And if it doesn’t arrive in time, a. will prevail, since the inner circle will shrink asymptotically, which is unsustainable biologically.

Anyway, on this bleak note I’ll leave you with:

[Escape from the bureaucrats; you-know-where]

Maverisk / Étoiles du Nord