The Pursuit of Triviality

Today being the day of the real Sinterklaas (as here), we also learn (not!) to cherish the small gifts. As in this effect, but otherwise. [That sounds weird…]
But the effect is one, too, all the same. And so ubiquitous that we may even lose sensitivity towards, or against [here we go again], it. And that is a problem. Because darn, how can we even think to train AI in rational business decision making, when all learning examples, and/or actual practical deployment, will be tainted by / rife with such irrational biases? We so commonly sweep them softly under the rug that right here we have the kind of AI-applicability-wrecking issue we hear so much about lately. [No you don't; compared to what would be enough, hardly anything at all…]

Another example of knowing your classics, eh? Oh well. Plus:
[Noooo, not those classics again! Saltzb’]

Eternal Life

Remember Castronova's Synthetic Worlds? You should. If only because this is, still, in The Atlantic [edited to add: with an intelligent comment/reply here] – because that is still around, it allows for comparison of origins, and of the utopian futures as described in the book, with the current state of the world. Where we have both the 'hardly any change noticeable in current-state affairs when compared with the moonshot promises of yesterday' and the 'look what has already changed; the whole Mobile-world change wasn't even in those rosy pictures' and the 'hey, don't critique yet, we may only be halfway through any major tectonic humanity shifts'.
Where the latter of course ties in with the revitalised [well, there's a question mark attached, like: what do you mean by that; is it 'life' and would we even want that?] version in the 'singularity', the extreme nirvana of uploaded, eternal minds. As if the concept of 'mind' or 'intelligence' would make any sense in that scenario. And also, since this (pronounced 'Glick', one guru told me lately; the importance of continued education), where the distinction between 'eternal' and 'forever in time' clearly plays up, as in this (same), against …

In circles, or Minkowski's / Penrose's doodles [wormholing their value], the issue comes back to flatlander (i.e., screen) reality, if there is such a thing…
Oh well; leaving you with:
[Warping perspectives, in ‘meaning’, too; Salzburg]

Noble Black edition

When, first, Santa's (the real one !!!) helpers (hey, not 'her' helpers – isn't that wrong by not being utterly iconoclast, or is gross distortion of History by such destruction of artefacts a crime against humanity?) supposedly shouldn't be blackened chimney sweepers (which is what they have been known to be, not people of African descent, for at least 50 years) because, even in their fine dress, with important President&CEO-plus-1 level jobs and as maintainers of the kindred-laws, certainly over parents too, they were by definition slaves or so. Oh? Weren't they the ones with rods to enforce the rule of naughty-or-nice laws? Didn't they have the nice jobs, with obvious jealousy-inciting quality of work and management culture, being all gay (sic) and joyous, whereas most of the children they would have met, and their parents, would be (sometimes literally starving) poor and probably without steady jobs ..?
And now they are changed to Spanish noblemen (qua clothes; zero ladies around because they would be unfit to work ..?) of the 16th century. Because now they're (claimed to be… yes really, to fit the crime-against-humanity PC of only a handful of people, almost exclusively from one little (yes) city (literally), whereas even those of colour (any) don't see any problem with the original) completely unrelated to slavery. Of course. Would one be allowed to influence public-anything, let alone children's festivities, when being so utterly stupid? Where does open democracy break down for terrorists? Here.
Moreover, as noblemen, the new helpers are white. But of course. Otherwise, one would again distort History. And then, what jobs are there now for those of African descent, in the fun occasion? None.

Whatever. Don’t get mad, get even …? Plus:
[White factory – paint should be the only distinction and not implicate anything; Rotterdam of course]

Gee… DPR on Profiling

This, again, about that pesky new legislation that just won't go away, not even before it becomes legally-effectively enforced [as you know, the thing has been around already for a year and a half, but will only be enforceable, in pure theory, per the upcoming May 25th, though your mileage may (huh) vary greatly – when Risk = Impact × Chance [don't get me started on the idiocy of that, as here of 2013, Dec 5th – Gift time!], the Chance is Low-of-Low and the Impact can be easily managed down; legally yes, don't FUD me, that will be the truth, the whole and nothing but it. So it will be legally effective but not in any other sense, let alone practically].
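
For illustration only – a minimal sketch, with made-up ordinal scores, of the Risk = Impact × Chance arithmetic under discussion here; the function name and numbers are mine, not anything from the legislation or any standard:

```python
# A minimal sketch, with made-up ordinal scores (1 = low, 5 = high), of the
# Risk = Impact x Chance arithmetic under discussion; nothing here comes from
# the legislation or any standard.

def naive_risk(impact: int, chance: int) -> int:
    """The textbook multiplication this post takes issue with."""
    return impact * chance

# Chance of enforcement assessed as Low-of-Low, Impact 'managed down':
print(naive_risk(impact=2, chance=1))   # -> 2: 'nothing to see here'

# Same formula, less convenient inputs, very different conclusion:
print(naive_risk(impact=5, chance=3))   # -> 15
```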

For those interested, there's this piece on Profiling. That has, on p.16 last full para ('systems' that audit ..!?), p.19 3rd para from the bottom ("Controllers need to introduce robust measures to verify and ensure on an ongoing basis that data reused or obtained indirectly is accurate and up to date."), p.30 in full, and many other places, pointers towards … tadaaa,

Auditing AI

with, here, AI as systems that process data – as close to 'systems' in the cybernetic sense as one may get, even when needing the full-swing wormhole-distance turn of the universe consisting not of energy but of information, to abstract from the difference between info and data.

Where I am developing such auditing of AI systems into a methodologically sound thing. And do invite you to join me, and bring forward your materials and ideas on how to go about that. Yes, I do have a clue already, just not the time yet to write it all up. Will do soon [contra Fermat's marginal remark].

Oh, and then there are the tons of materials on how anyone (incl. corporate persons) will have to be able to explain in non-complex terms (i.e., addressing the average or even less clever reader) how your AI system works…
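
To make that a bit less abstract: one (hypothetical, heavily simplified) way of grounding such a plain-language explanation is something like permutation feature importance. The model, feature names and data below are entirely made up for the sketch; nothing here claims to be the method those materials prescribe.

```python
# A hypothetical, heavily simplified sketch of grounding a plain-language
# explanation via permutation importance. Model, feature names and data are
# all made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # toy features: income, age, tenure
y = (X[:, 0] + 0.2 * X[:, 2] > 0).astype(int)  # toy 'decision', mostly income-driven

def model(X):
    # Stand-in for the trained system whose behaviour we must explain.
    return (X[:, 0] + 0.2 * X[:, 2] > 0).astype(int)

baseline = (model(X) == y).mean()
for i, name in enumerate(["income", "age", "tenure"]):
    Xp = X.copy()
    Xp[:, i] = rng.permutation(Xp[:, i])       # destroy this feature's information
    drop = baseline - (model(Xp) == y).mean()
    print(f"{name}: accuracy drop {drop:.2f}")
```

The point being: the output ("income dominates the decision, age plays no role") is the kind of sentence an average reader can follow, while still being derived from the system itself.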

So, inviting you, and leaving you with:
[What corks are good for, well after having preserved good wine – decoration. Recycle raw materials, don’t re-use data! Ribeauville]

Aïe! Missing the Point

Yet again, some seem to not understand what they’re talking about when it comes to transparency in AI…
Like, here. Worse, this person seems to be a rapporteur to the European Economic and Social Committee, advising the European Commission. If that sounds vague – yes it does, even for Europeans.

For the 'worse' part: the umpteenth Error, to consider that the secrecy of algorithms is the only thing that would need to change to get transparency about the functioning of a complete system.
1. The algorithm is just a part of the system, and the behaviour of the system is not determined, in anything close to a majority part, by the algorithm – the data fed to it, and the intransparent patterns learned from it, are. The transparency needs to be about the algorithm, but much more about the eventual parameters as learned throughout the training time and the training/tuning after that (see the toy sketch after this list). [Update before press release: there seems to be an erroneous assumption by some way too deep into EC affairs that the parameters are part of the 'algorithm', which is Newspeak at its worst, counterproductive certainly here, and hence dangerous.]
2. The algorithm can just be printed out … if anyone would need that. One can just as easily run an AI code analyser (how good would that be? They exist already, exponentially increasing their quality and savviness) over the source- or decompiled code.
3. The eventual parameters … not so much; they're just there in a live system; unsure how well they are written out into any file or so (they should be, for backup purposes – when, not if, AI systems get legal personhood eventually (excepting the latter-day hoaxes re that), will a power switch-off be the same as attempted murder, and/or what would the status of a backup AI 'person' be ..?).
4. Bias, etc. will be in the parameters. The algorithms, mostly to almost exclusively, will be blank slates. No-one yet knows how to tackle that sufficiently robustly, since even if the system is programmed (algorithm..!) to cough up its parameters, the cleverer systems will know (?) how to produce innocent-looking parameters instead of the possibly culpable actual ones. Leads into this: trickery by AI systems has been demonstrated to develop (unintentionally) in actual tests.
5. How to trick AI pattern recognition systems … the newest of class breaks has just been demonstrated in practice – its theoretical viability had been proven long before – e.g., in this here piece as pointed out only yesterday [qua release; scheduled last week ;-]. Class break = systemically unrepairable … [ ? | ! ].
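
As promised above, a toy sketch of points 1–3: the 'algorithm' is completely public, yet its behaviour is determined entirely by the learned parameters. The logistic scorer, the numbers and the file name are all mine, purely illustrative:

```python
# Toy illustration of points 1-3: the 'algorithm' below is fully public, yet
# its behaviour is entirely set by learned parameters. All numbers and the
# file name are made up.
import numpy as np

def score(x, w, b):
    """The whole published 'algorithm': a logistic score. Tells you little."""
    return 1 / (1 + np.exp(-(x @ w + b)))

x = np.array([1.0, 0.5])                      # one applicant, say

# Two parameter sets, e.g. learned from differently (de)biased training data:
w_a, b_a = np.array([0.1, 0.1]), 0.0
w_b, b_b = np.array([3.0, -4.0]), 1.5

print(score(x, w_a, b_a))   # ~0.54 -> borderline
print(score(x, w_b, b_b))   # ~0.92 -> strong accept, same 'algorithm'

# Point 3: writing the decisive part out to a file is trivial -- if one bothers.
np.savez("model_params.npz", w=w_b, b=b_b)
```

Publishing score() alone tells a regulator next to nothing; auditing w, b and their provenance might.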

Let's hope the EC could get irrelevant just that little bit less quickly, by providing it with sound advice. Not the bumbling little boys' and girls' type of happy-the-preppy too-dim-wits. [Pejorative, yes, but not degrading: they should, maybe not could, have known better ..!]

Oh, and:
[Already at a slight distance, it gets hazy what goes on there; from the Cathédrale]

Gödel pics

May it be that research has stumbled onto a find of fundamental significance ..?
I'm not overly overstating this; on the surface this looks like a regular weaving error in the AI development fabric: one or a few pixels changed can completely throw off some learned pattern recognition net. Yes, this means that (most probably) neural nets are 'brittle' for these things.
But does this perhaps moreover mean it may be a #first qua Gödel numbers …? I mean not the regular Hofstadter type that, even when syntactically perfectly normal, will, when fed to a machine able to parse syntactically correct form, nevertheless wreck the machine. The halting problem, but more severe qua damage.
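
For a feel of the brittleness (only that; this is not the cited research, and real one-pixel attacks on deep nets use far more involved search), here is a toy linear 'classifier' that flips its label when a single, well-chosen component of the input is nudged:

```python
# Toy illustration only -- not the cited research. Real one-pixel attacks on
# deep nets use optimisation/search; this just shows the brittleness idea on
# a made-up linear 'classifier' over a made-up 8x8 'image'.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)            # the 'trained' classifier's weights
img = rng.normal(size=64)          # the input 'image', flattened

def predict(x):
    return "cat" if x @ w > 0 else "dog"

print(predict(img))

# Nudge the single most influential 'pixel' just past the decision boundary:
i = int(np.argmax(np.abs(w)))
adv = img.copy()
adv[i] -= (img @ w) / w[i] * 1.01  # change confined to one component
print(predict(adv))                # the label flips
```

The deep-net versions differ in mechanics, not in the unsettling conclusion.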

Just guessing away here. But still…

Oh, and (of course another pic; no worries not dangerous):
[Yup, dingy car (license plate!) being overtaken by actual productivity; Canada]

Stop dads

When you read too much (ahead) into it…
By means of this court ruling. Where a father was forbidden to post pics of his (was it 2- or 3-yr-old?) son on Facebook or any other socmed platform, by request of the kid's mom, since even when the father posted in quite tightly closed circles, Fubbuck has in its terms and conditions that it might use the pics for commercial purposes. Since the latter cannot be ruled out, and in the interest of protecting the child's interests, the court ruled such, advising the dad to show the pics to friends on his home computer if he'd really want to.
[What need would the father have to do that? one can ask. Benign or perverted?]

From which we learn, if – very very big if – that indeed we should consider the need and purpose of posting on socmed in the first place. If it is content that one wants the world to see, it's OK. If some part of that content, or the purpose of the post, would not be OK → get out. If the purpose of the post would be to show off (e.g., one's cool-dadness – pitiful! but see how the other 99.999% of posts anywhere are for that purpose and that alone…), really nothing may cure you (sic).

So now, what about this post …? And:
[Since it’s no longer the site banner: Rightfully and intentionally out in the open; Barça]

Loss of memory

Recently, I was reminded (huh) that our memories are … maybe still better than we think, compared to the systems of record that we keep outside of our heads. Maybe not in 'integrity' of them, but in 'availability' terms. Though there, too, some unclarity whether availability regards the quick recall, the notice-shortness of 'at short notice', or the long-run thing, where the recall is 'eventually' – under multivariate optimisation with 'integrity' again. How 'accurate' is your memory? Have 'we' in information management / infosec done enough definition-savvy work to drop the inaccurately (huh) and multi-interpreted 'integrity' in favour of 'accuracy', which is a thing we actually can achieve with technical means, whereas the other intention oft given, of data being exactly what was intended at the outset (compare 'correct and complete'), or do I need to finish this line that has run on for far too long now …?
Or have I used waaay too many ‘ ’s ..?

Anyway, Part II of the above is the realisation that integrity is a personal thing, towards one's web of allegiances as per this, and that in infosec we really need to switch to accuracy; and Part I is this XKCD:

Awaiting Asibot

All Are Ardently Awaiting – stop, semantics go over syntactic alli – the release of Asibot, as here.
Because we all need such a system. The inverse of Dragon NaturallySpeaking (absorbed into Nuance; too little heard of as well!) combined with a ghost writer, as it were / is / will be. When prepped with one's own set of texts, it should be able to generate useful groundwork for all those books you have been wanting to write for a long time but couldn't get started on.
Now, would such a system be able to extract hidden layers, stego-type themes, that are in your texts and that you aren't even aware of ..? What kind of services would be interested most? Oh, that one's answered before I finished typing; the three-letter-abbrev (uh?) kind, of course.
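
Meanwhile, the underlying idea – generating text 'in the voice of' one's own corpus – can be sketched in a few lines; a word-level Markov chain is the crudest possible stand-in here, emphatically not Asibot's actual method, and the corpus string is just a placeholder for one's own texts:

```python
# A deliberately crude stand-in for the idea only (word-level Markov chain);
# emphatically not Asibot's actual method. The corpus string is a placeholder
# for one's own set of texts.
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    words = corpus.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)          # which words have followed which, in 'your' texts
    return chain

def generate(chain: dict, seed: str, length: int = 30) -> str:
    out = [seed]
    for _ in range(length):
        nxt = chain.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

my_texts = "all those books you have been wanting to write for a long time"
print(generate(train(my_texts), seed="all"))
```

Swap the toy chain for a proper language model trained on your own writings and you get the ghost writer – and, indeed, the stylometric fingerprint those three-letter services would love.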

Still, would very much want to meddle with the system… Plus:

[If applicable to music, sunny Spring days in Salzburg, too for …]

Culpably deaf

All who work for a private-sector organisation and take (wrong) decisions based on false information – or, essentially, dismiss accurate, helpful information that would have steered towards other decision alternative(s) – will be fired when the truth of the bad decision comes out.
Which would be helpful if applied to sectors where people’s money is so abjectly abused, too. E.g., like this one. Or this one (in English, some info here, and the whole idea of usefulness of having more and more data is debunked endlessly everywhere (you search)). Or this one, completely debunked here. The list is endless.

All of which points to a serious problem. "The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt" (Bertrand Russell), AND the stupid (to which, add 'immensely') seem to be masters at picking the wrong advice. Once 'immensely' is indeed added, one recognises the 'politician'. Playing the role of the Fool (not the Jester), unsurpassably perfectly.

But how now can we get those stupidestest ideas to go die, sooner rather than later ..?

"The fact that an opinion has been widely held is no evidence whatever that it is not utterly absurd; indeed in view of the silliness of the majority of mankind, a widespread belief is more likely to be foolish than sensible" (BR again). That's completely true; moreover, it's Maverisk's motto.

Oh, and:

[To defend the Truth; Châteauneuf not the -du-‘ish or so]
