’corn down, times 10

After the many lists of what went well this year, with AI, bitcoin, etc. etc., we wonder: how much of that is plugged fake news, or ditto overblown ..?
Still, we have the likes of this: a list of some ten unicorns that went down (or will, soon) despite funding to dream of. When you look into it, we seem to be back in 2001 and somewhat later, when the idea of drafting a two-pager business plan seemed to be enough to get VC / angel / whathavewe funding. OK, maybe this time around (and for the co’s mentioned) it’s more like a ten-pager requirement but hey, why wait to throw money into a wormhole, right ..?
To remind us that maybe, not all went so well in ’17.

And maybe despite all the hopes we have for 17++, we should again, still, reckon with downside risks a little bit more, please?
But you’re not gonna listen to me, are you?

Mewwy Cwistmas & happy new year anyway! Plus:
[Heck, this has nothing to do with festive fireworks or so but is pretty still; Valencia]

Not sure … about the mix of AI, privacy values, ethics, BD

Recently, I was informed about this. With the blueish table sparking a recall (the way the brain does) of this and in particular, this [downloadable here].
This, the latter in particular, being about how ‘privacy’ as an issue(s), depends on its definitions – both formally, and emotionally.

I   s u g g e s t   y o u   s t u d y   i t   i n   f u l l   d e t a i l  yes that’s a lot of   but definitely worth it. The study, I mean.

Now, with the inroads made by Big Data (i.e., mundane profiling now with greater tools for [towards] greater fools), and this being turned into ‘AI’ quod non, we need clarity more than ever.
The Internet has just too small a margin to scribble down my proof – I’d say, my rambling ideas – but I have a paper coming up in Jan about just this subject …
Yes, the promised Quantifying Privacy’s just around the corner, of sorts.

Do read on, here, though. And:

[No empty glasses, please, but a muid will do; Haut Koenigsbourg of course]

Unsider threats

How is it that we tend to hear over and over again about ‘insider’ threats ..?
Even when it’s not the Board that is meant here, as the pinnacle of … the ability to drive a company into the ground, those pesky ‘insiders’ really are a pain in the place where you like that or the sun doesn’t shine.
Better get rid of any and all of those ‘insiders’ then, eh ..? AI, here you come. But if AI system(s) were a replacement for humans, wouldn’t they commit the same temporary, small, innocuous and unconscious lapses of judgement ..?
And what about off-boarding the biggest threats first ..? [Where I do mean the above committee]

Maybe better to recall that we’re about to celebrate the fifteenth birthday [was there ..!] of deperimetrisation – with an s, once you recognise its country of birth and disclaim an all-out stupid Jabba the Hutt style claim of origin, so no z’s anywhere – who’s an -sider when there’s no way to tell ..?

Also, it vilifies the underlings that make your salaries and bonuses. So if you punish them (by giving them less reward than yourself), they don’t get mad. They get even. Simple. You gave them the tools – you made them build their own tools, ostensibly to keep you afloat – but you wouldn’t recognise a buoy from an anchor, so guess what you get… And when you’ve lost them, they aren’t much of the insider you’d want, right? Morally, they’d be on the outside again already.
Case in point: This miss by the venerable @HarvardBiz … Though the solutions offered, are valid – as very-starting points…

So Part I – Ω is to treat your underlings like you care. If, big if, you actually mean it (hence will not in an instant be found out to be a fraud at this), you’re saved for now. Otherwise, any fight against <whatever>sider threats will be futile. Remember this ..? You get treated like you treat; it starts with you – your intentions towards the other will be perceived. Positive/negative, the choice is yours.

Oh well. Plus:
[Some light’s also good for the inside; Utrecht (1924 ..!)]

A different take on fireworks

Yes dear people, it may be unbelievable to some, but there are some local areas, like the EU, where in some spots/countries fireworks are still allowed to be lit by just about anyone [age limits for buying being much more easily circumvented than, e.g., those on alcohol sales], on Dec 31 – and no-one seems to care about the earlier (days-in-advance) occasional severity-max hindrance to the elderly, dogs, and the generally phobic/gravely-disturbed-by-fireworks public. “Tolerance” never seems to go the way of the Meek.
But, when societal discussions go to maybe, possibly impinge on these ridiculously-lax liberties, there’s hope. Of a replacement of sorts. Not (only) by means of public fireworks displays – which are, admit it, always much more beautiful than your own, and just noise doesn’t impress anyone but prepuerile boys – but also by, tadaaa:
This here idea of Drone-on-Drone contests. Should be fun! If only we could attach the equivalent of reactive stuff, just for the light show effects.

That was all, folks. Out on a bang with:
[Hey those things are still quite prevalent in Knightsbridge; are they anti-drone security devices or how backward can one be ..? Good riddance from the EU ..?]

Perception

The Doors to Which, you understood. That were delivered by all the other posts of this here blog you’re on.
Though here, I meant it to refer to the cliché that recently rose up again from the dullness of repeat, being stated as “If a door closes before you, there will be another door open to go through” or something similar.

Which of course is very wrong.

When a door closes before you, you open it again. That’s how doors work.

Somewhat different, eh? Even when you’d have to kick it in. Or, reality being too complex to capture in such a short bon mot, indeed you take another door; (but only) when you’re tired of kicking. Whatever your perception of reality, the choice is still yours. Either take any flimsy, tiny hair of a threshold as an excuse to run away (from difficulty that you’ll find eve-ry-where, in particular when you run away so easily – ever more easily run, ever less easily overcome), or you stand for what you set out to achieve.

On that festive note, I’ll leave you with:

[Dropped the true idea of Christmas, being thankful for having survived another year and with a glimmer of hope that next year might be a sliver better, now just called Winter Wonderland and being the worst (or over-the-top) consumerist-commercialised dump of rubbish packaged as Christmas market. With stalls from everywhere, in particular Germany, Belgium, Holland. With merchandise from everywhere though mostly from China. Now in Hyde Park]

Lagging, nagging ‘Coin feeling

Somehow, the many (?) Cassandras’ messages of late about Bitcoin being a hyped-up bubble about to burst, and how it’s the laggards (only) that now want to jump in, drive up prices but will be the Greater Fools in the game, sound a bit … false bottom.
[Unsure what the exchange rate is, today. Has the bubble burst already, or ..?]

I’d say, how many countries, and moreover how great a many financial organisations (private or public, or regulatory or inter/supranational) may have an interest to blow up Bitcoin, as a demonstrator that all that blockchain stuff isn’t any good after all ..? Because the promise of blockchain was, and is, that it will not need any, traditionally somehow still geography-bound, standing governing body avalanched-under by politics, gross misunderstanding of the core concepts ruled over, etc. In short, power struggles.
And that, of course, may be a future that some will not allow. And fight to their deaths – inevitably, not by nature but by cause, and earlier. But still they simply won’t have it, that blockchain/Bitcoin/smart-contract/whatevva shazam. With the only way to get rid of it in an inconspicuous manner being to … inflate it till it goes borsht. Risky, but possibly profiting hugely during the trip ..? Case in point: this one [hope the link is still valid, and it’s in Dutch which may mix semantic levels to be tauto]
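For the promise mentioned above – that a blockchain needs no standing governing body – a minimal sketch of the core idea, stripped of everything real chains add (consensus, signatures, proof-of-work): every block commits to the hash of its predecessor, so anyone can verify the whole history independently. All names and the toy payloads here are my own illustration, not any actual protocol.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over the block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, payload: str) -> None:
    """Append a block that commits to the hash of its predecessor."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload})

def verify(chain: list) -> bool:
    """Anyone can re-check the whole chain; no central party needed."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
add_block(chain, "alice pays bob 1")
add_block(chain, "bob pays carol 1")
assert verify(chain)

chain[0]["payload"] = "alice pays bob 1000"  # tamper with history
assert not verify(chain)                     # every later block now disagrees
```

Which is exactly why blowing up the price is a more inconspicuous attack than rewriting the ledger.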

Plus:
[You get a golden bullet-like thing on your pillow. With compliments and filled with chocolate, that is… In Salzburg of course]

The Pursuit of Triviality

Today being the day of the real Sinterklaas (as here), we also learn (not!) to cherish the small gifts. As in this effect, but otherwise. [That sounds weird…]
But the effect is too, all the same. And so ubiquitous that we may even lose sensitivity towards, or against [here we go again] it. And that is a problem. Because darn, how can we even think to train AI in rational business decision making, when all learning examples, and/or actual practical deployment, will be tainted / rife with such irrational biases? We so commonly sweep, shuffle them softly, under the rug – and here we have just such an AI-applicability-wrecking issue that we hear so much about lately. [No you don’t; compared to what would be enough, hardly anything at all…]

Another example of knowing your classics, eh? Oh well. Plus:
[Noooo, not those classics again! Saltzb’]

Gee… DPR on Profiling

This again about that pesky new legislation that just won’t go away, not even before it will be legally-effectively enforced [as you know, the thing has been around already for a year and a half, but will only be enforceable, in pure theory, per upcoming May 25th – though your mileage may (huh) vary greatly. When Risk = Impact x Chance [don’t get me started on the idiocy of that, as here, of 2013, Dec 5th – Gift time!], the Chance is Low of Low and the Impact can be easily managed down – legally, yes, don’t FUD me, that will be the truth, the whole and nothing but it. So it will be legally effective but not in any other sense, let alone practically].
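The gripe in that bracket, made concrete (all figures purely illustrative, mine): with the multiplicative risk model, lawyering either factor down drives the product to negligible, however severe the other factor is.

```python
def risk(impact: float, chance: float) -> float:
    # The classic (and, as argued above, much-derided) multiplicative model.
    return impact * chance

# Severe impact, honest chance estimate: a big number on the register.
assert abs(risk(impact=9.0, chance=0.9) - 8.1) < 1e-9

# Same severe impact, chance argued down to "Low of Low":
# the score becomes negligible -- the risk has been "managed" away on paper.
assert risk(impact=9.0, chance=0.01) < 0.1
```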

For those interested, there’s this piece on Profiling. That has, on p.16 last full para (‘systems’ that audit ..!?), p.19 3rd para from the bottom “Controllers need to introduce robust measures to verify and ensure on an ongoing basis that data reused or obtained indirectly is accurate and up to date.“, p.30 in full, and many other places, pointers towards … tadaaa,

Auditing AI

with, here, AI as systems that process data – as close to ‘systems’ in the cybernetic sense as one may get, even when needing the full-swing wormhole-distance turn of the universe consisting not of energy but of information, to abstract from the difference between info and data.

Where I am developing the auditing of AI systems into a methodologically sound thing. And do invite you to join me, and bring forward your materials and ideas on how to go about that. Yes, I do have a clue already, just not the time yet to write it all up. Will do soon [contra Fermat’s marginal remark].
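As one tiny, illustrative building block for such an audit methodology – a sketch under my own assumptions, not any standard or the method-to-come: fingerprint the learned parameters at approval time, then re-check the deployed system against that baseline, so silent retraining or tuning shows up immediately.

```python
import hashlib
import json

def parameter_fingerprint(params: dict) -> str:
    """SHA-256 over a canonical serialisation of the learned parameters.

    An audit records this at approval time and re-checks it in production:
    same algorithm + same fingerprint => same behaviour, for a
    deterministic model. (Toy parameter names/values below are made up.)
    """
    canonical = json.dumps(params, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

approved = {"w": [0.4, -1.2, 3.3], "b": 0.1}
baseline = parameter_fingerprint(approved)

# Later, in production: a quietly tuned weight no longer matches.
deployed = {"w": [0.4, -1.2, 3.35], "b": 0.1}
assert parameter_fingerprint(deployed) != baseline
```

Which says nothing yet about whether the approved parameters were any good – that’s the harder, methodological part.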

Oh and then there’s the tons of materials on how anyone (incl corporate persons) will have to be able to explain in no complex terms (i.e., addressing the average or even less clever) how your AI system works…

So, inviting you, and leaving you with:
[What corks are good for, well after having preserved good wine – decoration. Recycle raw materials, don’t re-use data! Ribeauville]

Aïe! Missing the Point

Yet again, some seem to not understand what they’re talking about when it comes to transparency in AI…
Like, here. Worse, this person seems to be a rapporteur to the European Economic and Social Committee advising the European Commission. If that sounds vague – yes it does, even for Europeans.

For the ‘worse’ part: the umpteenth Error, to consider that the secrecy of algorithms is the only thing that would need to change to get transparency about the functioning of a complete system.
1. The algorithm is just a part of the system, and the behaviour of the system is not determined in anything close to any majority part by the algorithm – the data fed to it, and the intransparent patterns learned by it, are. The transparency needs to be about the algorithm but much more about the eventual parameters as learned throughout the training time and the training/tuning after that. [Update before press release: There seems to be an erroneous assumption by some way too deep into EC affairs that the parameters are part of the ‘algorithm’ which is Newspeak at its worst, and counterproductive certainly here, and hence dangerous.]
2. The algorithm can just be printed out … If anyone would need that. One can just as easily run an AI code analyser (how good would that be? They exist already, exponentially increasing their quality, savviness) over the source- or decompiled code.
3. The eventual parameters … not so much; they’re just there in a live system; unsure how well they are written out into any file or so (they should be, for backup purposes – when, not if, AI systems get legal personhood eventually (excepting the latter-day hoaxes re that), will a power switch-off be the same as attempted murder, and/or what would the status of a backup AI ‘person’ be ..?).
4. Bias, etc., will be in the parameters. The algorithms, mostly-almost-exclusively, will be blank slates. No-one yet knows how to tackle that sufficiently robustly, since even if the system is programmed (algorithm ..!) to cough up its parameters, the cleverer systems will know (?) how to produce innocent-looking parameters instead of the possibly culpable actual ones. Leads toward such trickery by AI systems have been demonstrated to develop (unintentionally) in actual tests.
5. How to trick AI pattern recognition systems … the newest of class breaks have just been demonstrated in practice – their theoretical viability had been proven long before – e.g., in this here piece as pointed out only yesterday [qua release; scheduled last week ;-]. Class break = systemically unrepairable … [ ? | ! ].
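To make points 1, 3 and 4 above concrete, a toy sketch – the feature vector, weights and the fair/biased labels are entirely my own hypothetical illustration: the published ‘algorithm’ (plain logistic-regression inference) is byte-for-byte identical in both cases, and all the behaviour, including any bias, sits in the learned parameters.

```python
import math

def predict(x: list, w: list, b: float) -> float:
    """One fixed 'algorithm': logistic regression inference.

    Publishing this function tells you nothing about how the
    system will actually decide -- that is in (w, b)."""
    z = sum(xi * wi for xi, wi in zip(x, w)) + b
    return 1.0 / (1.0 + math.exp(-z))

applicant = [1.0, 0.5]  # some hypothetical feature vector

# Identical algorithm, two different learned parameter sets:
fair_w, fair_b = [0.5, 0.5], 0.0
biased_w, biased_b = [-4.0, 0.5], 0.0  # the 'bias' hides entirely in the weights

assert predict(applicant, fair_w, fair_b) > 0.5      # accepted
assert predict(applicant, biased_w, biased_b) < 0.5  # rejected
```

So transparency about the source code alone, however mandated, inspects the wrong artefact.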

Let’s hope the EC could get irrelevant just that little less quickly, by providing it with sound advice. Not the bumbling little boys’ and girls’ type of happythepreppy too-dim-wits. [Pejorative, yes, but not degrading: should, maybe not could, have known better ..!]

Oh, and:
[Already at a slight distance, it gets hazy what goes on there; from the Cathédrale]