Now you read me, now you don’t

As a pointer to what this is about…
You know, like the oldest tricks in the book, still going strong while all the world’s (worlds’?) arms races go nowhere. As predicted. Not that the title would, of course, reference a major part of the sec controls: stego.
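For the curious, a minimal sketch of the trick itself – least-significant-bit steganography, the payload riding in bits nobody reads. All names and the byte-array carrier are mine; an illustration of the principle, not any specific tool:

```python
# Minimal LSB steganography sketch: hide a message in the lowest bit of
# each carrier byte (think: raw pixel data). Illustrative only.

def embed(carrier: bytes, message: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(carrier), "carrier too small for message"
    out = bytearray(carrier)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit  # overwrite the LSB only
    return out

def extract(carrier: bytes, length: int) -> bytes:
    bits = [carrier[i] & 1 for i in range(length * 8)]
    return bytes(sum(bits[b * 8 + i] << i for i in range(8)) for b in range(length))

if __name__ == "__main__":
    pixels = bytes(range(256))            # stand-in for innocuous image data
    stego = embed(pixels, b"now you don't")
    print(extract(stego, 13))             # b"now you don't" – now you read me
```

You see the carrier; the message, you don’t – unless you know where, and how, to look.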

But that’s a finesse point. Let’s be happy that research into faster horses continues, with results.

CU!
[Stylish; what’s hiding here ..? Even when you know where]

Aïe! Missing the Point

Yet again, some seem to not understand what they’re talking about when it comes to transparency in AI…
Like, here. Worse, this person seems to be a rapporteur to the European Economic and Social Committee, advising the European Commission. If that sounds vague – yes, it does, even for Europeans.

For the ‘worse’ part: the umpteenth Error – to consider that the secrecy of algorithms is the only thing that would need to change to get transparency about the functioning of a complete system.
1. The algorithm is just a part of the system, and the system’s behaviour is determined not in any major part by the algorithm – it is determined by the data fed to it and by the intransparent patterns learned from that data. The transparency needs to be about the algorithm, but much more about the eventual parameters as learned throughout the training time and the training/tuning after that (see the first sketch after this list). [Update before press release: There seems to be an erroneous assumption, by some way too deep into EC affairs, that the parameters are part of the ‘algorithm’, which is Newspeak at its worst, counterproductive certainly here, and hence dangerous.]
2. The algorithm can just be printed out … should anyone need that. One can just as easily run an AI code analyser (how good would that be? they exist already, exponentially increasing in quality and savviness) over the source or decompiled code.
3. The eventual parameters … not so much; they’re just there in a live system, and it’s unsure how well they are written out into any file or so. (They should be, for backup purposes – when, not if, AI systems eventually get legal personhood (excepting the latter-day hoaxes re that), will a power switch-off be the same as attempted murder, and/or what would the status of a backup AI ‘person’ be ..?)
4. Bias, etc., will be in the parameters. The algorithms will mostly-almost-exclusively be blank slates. No-one yet knows how to tackle that sufficiently robustly, since even if the system is programmed (algorithm..!) to cough up its parameters, the cleverer systems may know (?) how to produce innocent-looking parameters instead of the possibly culpable actual ones. Leads into this kind of trickery have been demonstrated to develop (unintentionally) in AI systems in actual tests.
5. How to trick AI pattern-recognition systems … the newest of class breaks have just been demonstrated in practice – their theoretical viability had been proven long before – e.g., in this here piece as pointed out only yesterday [qua release; scheduled last week ;-]; see the second sketch below. Class break = systemically unrepairable … [ ? | ! ].
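To make points 1. and 3. concrete – a toy sketch of my own, not anyone’s production system: the whole ‘algorithm’ below is a handful of printable lines, while everything the model actually does sits in the learned parameters, which only become inspectable (or backup-able) if someone deliberately writes them out:

```python
# Toy illustration: the algorithm is trivially printable; the behaviour
# lives in the learned parameters, which need explicit exporting to be
# transparent at all.
import json
import random

def train(data, epochs=200, lr=0.1):
    """The entire 'algorithm': a perceptron update rule. Fully transparent."""
    w = [random.uniform(-1, 1) for _ in range(3)]  # two weights + a bias
    for _ in range(epochs):
        for x1, x2, target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            for i, xi in enumerate((x1, x2, 1)):
                w[i] += lr * (target - out) * xi
    return w

# Identical algorithm, different training data -> different behaviour.
w_and = train([(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)])  # learns AND
w_or  = train([(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)])  # learns OR

# Point 3: the parameters are 'just there' in the live system unless
# deliberately written out, e.g. for backup or audit purposes.
with open("parameters.json", "w") as f:
    json.dump({"and_gate": w_and, "or_gate": w_or}, f, indent=2)
```

Publishing the train() function tells you next to nothing about whether the system ends up an AND-gate or an OR-gate; parameters.json does.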
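And point 5 in miniature – a sketch of the general idea only, not of any specific published attack: nudge an input along the (sign of the) gradient of the model’s own decision function and the verdict flips, while the input barely changes. The toy linear model and all numbers here are mine:

```python
# Miniature 'adversarial example': flip a linear classifier's verdict
# with a small, bounded perturbation of the input.
import math

# A fixed, already-'trained' logistic model: score = w . x + b
w, b = [4.0, -3.0], 0.2

def predict(x):
    """Probability of the 'positive' class."""
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

x = [1.0, 1.2]                       # honestly classified positive (~0.65)
eps = 0.1                            # perturbation budget per feature
# FGSM-style step: move each feature against the sign of its weight,
# i.e. down the gradient of the score with respect to the input.
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

print(predict(x))     # ~0.65 -> 'positive'
print(predict(x_adv)) # ~0.48 -> flipped, by a mere 0.1 shift per feature
```

The repair problem is structural: any fix shifts the decision boundary, and the same recipe finds a new nudge across the new boundary – hence, class break.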

Let’s hope the EC might become irrelevant just that little bit less quickly, by being provided with sound advice. Not the bumbling little boys’ and girls’ type of happy-the-preppy too-dim-wits. [Pejorative, yes, but not degrading: they should, maybe not could, have known better ..!]

Oh, and:
[Already at a slight distance, it gets hazy what goes on there; from the Cathédrale]

Loss of memory

Recently, was reminded (huh) that our memories are … maybe still better than we think, compared to the systems of record that we keep outside of our heads. Maybe not in ‘integrity’ terms, but in ‘availability’ terms. Though there, too, there is some unclarity over whether availability regards the quick recall, the notice-shortness of ‘at short notice’, or the long-run thing where the recall is ‘eventually’ – under multivariate optimisation with ‘integrity’ again. How ‘accurate’ is your memory? Have ‘we’ in information management / infosec done enough definition-savvy work to drop the inaccurately (huh) and multiply-interpreted ‘integrity’ in favour of ‘accuracy’, which is a thing we actually can achieve with technical means, whereas the other intention oft given – of data being exactly what was intended at the outset (compare ‘correct and complete’) – is not; or do I need to finish this line that has run on for far too long now …?
Or have I used waaay too many ‘scare quotes’ ..?

Anyway, Part II of the above is the realisation that integrity is a personal thing, towards one’s web of allegiances as per this, and that in infosec we really need to switch to accuracy; Part I is this XKCD:

Culpably deaf

All who work for a private-sector organisation and take (wrong) decisions based on false information – or, essentially, dismiss accurate, helpful information that would have steered towards other decision alternative(s) – will be fired when the truth of the bad decision comes out.
Which would be helpful if applied to sectors where people’s money is so abjectly abused, too. E.g., this one. Or this one (in English, some info here; and the whole idea that having more and more data is useful is debunked endlessly everywhere (you search)). Or this one, completely debunked here. The list is endless.

All of which points to a serious problem. The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt (Bertrand Russell) AND the stupid (to which, add ‘immensely’) seem to be masters at picking the wrong advice. Once ‘immensely’ is indeed added, one recognises the ‘politician’. Playing the role of the Fool (not the Jester), unsurpassably perfectly.

But how now can we get those stupidest ideas to go off to die, sooner rather than later ..?

The fact that an opinion has been widely held is no evidence whatever that it is not utterly absurd; indeed in view of the silliness of the majority of mankind, a widespread belief is more likely to be foolish than sensible (BR again). That’s completely true; moreover, it’s Maverisk’s motto.

Oh, and:

[To defend the Truth; Châteauneuf not the -du-‘ish or so]

The dullness of infosec ..?

And you thought fraud detection was about bank transactions, or even about counterfeiting physical stuff. Boh-ring, once you read this. Takes it to another level, eh?
Which brings me to an important issue: are we not still studying and practicing infosec from the wrong angle, doing a middle-out sort of development in many directions but starting at a very mundane ‘CIA’ sort of point? Which is of course core, but there is so much to cover that some outside-onto view(point) might be beneficial. We’re in the thick of the fight, and no matter in which direction you go, when you wade through the thicket with your control-measures machete, you achieve little – when you then turn around to try to clear some area in another direction, all has grown dense with state-of-the-art arms-race bush again already.
And yes, of course one can educate, etc., in some form of hierarchical approach, top-down. But that leaves us with many, all too many, that float comfortably on the canopy where the view … isn’t that great, as one is very certainly in the thick fog of the monsoon rain. And nothing is being directed (ugch) deeper down. Or controlled (?). Just more, mostly partial world views, unconnected and behaving erratically.

The e.g. in this is that link above. A tiny subset of situational scenarios. Not solved pervasively, once and for all. Now think about the hugely, vastly, enormously wider scope of ‘all’ of infosec that would need to be covered to a. arrive at sub-universes of control, b. gain overview.

The latter remains Open.
Me not happy.

Solutions, anyone ..?

Oh, plus:
[Ah! The days when this sort of ‘defence’ was enough to conquer! Alésie of course]