Blog

Doing it RAIght

Following up on Tuesday’s post; a piece on how to do AI right.
No, it’s not technical. It’s more like a Do and Don’t… In this piece, you’ll read a lot about how AI can be deployed – until you realise that some of the systems mentioned, intended to steer (nudge) employees towards ethical behaviour, are actually very, very creepy in themselves, and possibly illegal in a number of places / the majority of where you’d want to be if you’re ethically inclined.

Oh well. The Law of Conservation of Trouble, right? Only good for a Newtonian world view, by the way.

And one can also study this piece; much better… Or this – which apparently has been out (not ~ there) since Nov 2017 ..? And please also add this into the mix.
Not this one, terribly vacuous.

Which is a positive note to end with, plus:

[Trippy pic by accident, location no accident these days; side street in Toronto]

Intermezzo: Culture eats Strategy for hors d’oeuvres and then some

Don’t you ‘amuse-gueule/bouche’ me …! Why change a winning team name, right?

Also, Drucker’s thing on strategy isn’t the only thing. There are other things, e.g., qua regulations and ‘control’ here.
TL;DR incl my elucidation: Controls stifle, lead to stagnation – and

  • Are ineffective, as they almost by definition don’t make the ‘safest’ procedures the path of least resistance, to be followed naturally, but coax users away from them – introducing severe friction, against which users will direct their energy rather than against actual downside risks, in pursuit of the upside potentials. So any control that you deem necessary will be rendered useless, as a first priority.
  • Were never intended as a management ‘control’ thing! Management control is about steering and giving out orders [with a huge, huge bandwidth for Commander’s Intent …!!] and it definitely has zero to do with micromanagement and enslavement. The latter, not too strong a word…
  • Are the exact opposite of what you need to succeed. Having a hierarchical ‘framework’ of controls, preferably without ‘risk’ in every line, might have once suited vast army control. But your organisation isn’t an army; on the contrary, qua success it’s an inverted pyramid! And even armies nowadays have to deal with guerrillas and IEDs etc. in every outpost corner of ‘business’; the ‘one step forward across the entire continent-wide front’ no longer applies, does it? Edited to add: I’m not even making this up: ‘If you dislike change, you’re going to dislike irrelevance even more.’ [Eric Shinseki, retired US Army General]

Only the most trivial of (e.g., anti-fraud) controls may need to be in place – but if you need those, you’ve already lost, since you have demonstrably lost your employees’ primary motivation to work with the company; at best they no longer care and you merely pay for presence. You need guide rails, not train tracks. The outside world, and your inside world, will derail you (all) in an instant.

So please stop the fellow travellers of the GRC world. Get to the real work in that. But then, ‘culture’ can’t be controlled per checklist. Oh my.
Plus: This.

And:

[Use one of these; Twente AFB a long time ago, when analog pics/slides (this one a scan from _ even…) were the B/A or rather only/A]

Ai Tool looking for a Tool

In line with various posts (like, this one of this morning, and another one somewhere this week – Thursday; of course I know, it had been scheduled two months ago),
an in-between remark of sorts: The biggest bottleneck in AIland nowadays is the translation of some unknown business need into AI/ML sprint objectives.

See? One can have a lot of data, and randomly figure out the patterns (‘correlations’ of various kinds; note the quotes) OR go for outlier detection. Then what? Is there a business purpose in turning the found patterns (while it keeps on trucking, learning – using the same stats [!] engine) into some system module or so? Is the business case sufficient, and do all involved know enough about each other’s, their team mates’, territories to get it together? How much work needs to be done to turn the stats-engine part into something efficient, compiled, and adaptable through continuous learning? This wrangling with the system’s logic at (a?) meta-level(s) may render the system less effective, i.e., less business-case worthwhile; what’s the minimum, and can we achieve that?
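The ‘go for outlier detection’ branch can be sketched in a few lines. A minimal, purely illustrative Python example using a median-absolute-deviation rule; the data values are made up, and real pipelines would use proper methods (isolation forests, LOF, and such):

```python
# A median-absolute-deviation (MAD) outlier rule - robust, unlike a plain
# z-score, because one wild value doesn't inflate the spread estimate.
# Data values below are made up for illustration.
from statistics import median

def mad_outliers(values, threshold=3.5):
    """Return values whose scaled deviation from the median exceeds threshold."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []
    # 0.6745 rescales the MAD to be comparable to a standard deviation
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

data = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 55.0]  # one obvious outlier
print(mad_outliers(data))  # flags 55.0
```

Which is exactly where the ‘then what?’ starts: the flagging is the easy part; the business purpose of the flag isn’t in the code.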

Or will the point application remain that, and not even be turned into a fully developed little app ..? [Where has the original ‘applet’ gone ..?]

So, what would happen if, after a lab pilot, a full business case were developed ..? I could go on and on about how a demand pull should drive development, not a shy little supply-push’let from some breaking-voiced nerd… [no offence intended]

Oh well, this:

[A design trick looking for an application; Zuid-As Amsterdam]

Fact, not AIfuzz

Very readable it is, and useful for your information, on the general state of affairs. This here report.
Already – ah! – months old, but still very, very good for a Summertime read-up. To be a better guru (link ..?), not follow the (dumb) hype.

And it may be incomplete, and somewhat outdated. E.g., add this – in itself also (very) incomplete, as it has zero reference to Canada even when it mentions old hand US in this… Hey, when I checked back on the site halfway through April, it had ‘upcoming events’ till January. Wow.

But it’s better than relying on hearsay like case-by-case ‘news’. If you’re interested, ensure to be at the right conferences also, e.g., this one.

Cheers, with:

[Was it Strasbourg or Colmar? Who cares; Alsace it is.
 I care (at least my pic is undoctored…), as here:]

Nononymous

Tech journalism isn’t really what it used to be – the means for breaking news that’s real. Now, it mostly is down to breaking news that a. isn’t, b. isn’t real.

As over and over again we see ‘solutions’ presented as if they aren’t already either outdated or simply outdone – on the border of misleading. No claim on which side of the border.
This in particular when songs are sung, mostly badly, about anonymisation of data for e.g., mining purposes. Medical stuff, mostly. There’s even some logic there as medical data a. seems to be the flavour du jour in under-the-radar trading circles [prices are much higher per data point than e.g., verified platinum credit card numbers], b. is indeed useful for mining by e.g., ML-to-automated-flagging.

Yes, the addition was necessary; ML is a means to some ends, and without the ends, what and how to learn ..? In particular when mulling over false positives/negatives.
Yes, one does need anonymisation, desperately, as the previous a. in societal costs outdoes much of the benefits of the b.
Yes, anonymisation by smart encryption is an idea.

But No it doesn’t work as intended; as is shown time and time again. In the mass, and in particular – how many extortion-through-doxing-threats do we not know about ..?
This, as the ‘journalism’ of today [sort-of]. From Wired, no less. Sic transit gloria mundi.
And this, the lowdown on the counterpoint. With this, in addition. Too many scientific articles that demonstrate this works, to link here. Another rebuke…
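The ‘doesn’t work as intended’ claim is easy to demonstrate at the toy level: pseudonymising a low-entropy identifier with a one-way hash is reversible by simply enumerating the identifier space. A minimal Python sketch – the 4-digit code is an illustrative stand-in for any small identifier space (birth dates, postcodes, and the like):

```python
# Sketch: why hashing a low-entropy identifier is not anonymisation.
# If an attacker can enumerate the identifier space (here: 4-digit codes,
# chosen purely for illustration), every "one-way" hash maps straight back.
import hashlib

def pseudonymise(code: str) -> str:
    return hashlib.sha256(code.encode()).hexdigest()

# The "anonymised" value as it might appear in a released dataset
released = pseudonymise("4271")

# The attacker hashes all 10,000 candidates once and matches
lookup = {pseudonymise(f"{i:04d}"): f"{i:04d}" for i in range(10_000)}
print(lookup[released])  # recovers "4271"
```

Salting raises the cost but, for spaces this small, not nearly enough; hence ‘beyond mere de-identification’ below.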

So, next time: Be sure to go beyond mere de-identification, please. And improve your critical double-check journalism standards.

OK?
Well then:

[Your big gun of one-way hashing or whatever simpleton encryption doesn’t count for much…; Edinburgh]

Friday’s Sobering Thoughts – VIII – recovery of the squeamish ossifrage

You may not have heard too much about developments in the crypto arena. Not like ‘Oh hype! hype! cryptocurrency!’ but like ‘Elliptic-curve cryptography may beat quantum computing’s power’ or ‘Homomorphic encryption can be cracked into deanonymisation of personal data’.
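For anyone wondering what ‘homomorphic’ even means here: textbook, unpadded RSA already has a (multiplicative) homomorphic property – one can compute on ciphertexts without decrypting. A toy Python sketch with deliberately tiny, insecure parameters (the three-argument pow for the modular inverse needs Python 3.8+):

```python
# Textbook RSA with tiny, insecure parameters - enough to show the
# multiplicative homomorphic property: E(a) * E(b) mod n decrypts to a*b.
p, q = 61, 53
n = p * q                 # 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17
d = pow(e, -1, phi)       # modular inverse of e mod phi (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 11
product_cipher = enc(a) * enc(b) % n   # computed without ever decrypting
print(dec(product_cipher))  # 77, i.e. a*b
```

Fully homomorphic schemes extend this to arbitrary computation; whether that helps or hurts anonymisation is exactly the kind of development the question above is after.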

Anyone who has an overview of how developments stand? And of whether the squeamish ossifrage has recovered?

Now then,


[Just some after shopping-at-5th drinks. Of the right kind, hopefully]

If you missed Friday’s Part VII, you are right; it doesn’t exist. I don’t like 100% consistency/perfection, that’s why. It’s just too boring and excludes flexibility, room for renewal/freshening/innovation, you know.

About you

Aww, how romantic that may sound. But just a reminder: “Intelligence is what people with less intelligence than I have, don’t have”. Somewhat less r of romance, huh. Maybe a bit more R of R, then. But still valid.
Did you notice, by the way, that R was ‘invented’ (conceived – by nerds, yeah right) in 1992 already? What took it so long?
On the more serious side: the boundary between [assuming no overlap then, by language] statistics and ML processing (either plain or NN-like) – where is it? What is the character of the divide ..? There certainly is one, and one had better consider it when trying to move from, e.g., predictive analytics to true continuous-learning systems. My guess: when things get complex, one is over the boundary already. Linear separability in n dimensions is probably still on the stats side.
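The linear-separability guess can be made concrete with a classic toy: a single linear unit (a perceptron, firmly on the ‘stats side’) learns AND, which is linearly separable, but can never represent XOR, which is not – roughly where the layered NN machinery becomes necessary. A minimal Python sketch, with illustrative learning rate and epoch count:

```python
# A single linear unit (perceptron): enough for linearly separable AND,
# structurally unable to represent XOR. Learning rate and epoch count are
# illustrative choices.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

f_and = train_perceptron(AND)
f_xor = train_perceptron(XOR)
print([f_and(x1, x2) for (x1, x2), _ in AND])  # [0, 0, 0, 1] - learned
print([f_xor(x1, x2) for (x1, x2), _ in XOR])  # can never match [0, 1, 1, 0]
```

No amount of extra training fixes the XOR case; only adding a hidden layer (i.e., crossing the boundary) does.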

Where this is going, I have no idea – never mind, let’s return. To Kansas if possible. Or to Intelligenceland, also good. Is ‘intelligence’ like ‘reality’ for physicists? Better go study Gadamer. For the rest of you: just quit the discussion and be no less wise or happy or effective. The latter should soothe.

Plus:

[Someone needed to compensate …; Riga]

Ownership

Of an organisation – or being the organisation.

The former, the idea of the last millennium. Apparently still there, hopefully not here. Like, “You have to know the individual. Skills are your renewable asset, and you need to treat them like that.” Really. If there’s anything that should be renewed, it’s the ‘leadership’ quod non. Last time we looked, slavery had been abolished in the US. If you treat people as a renewable asset, no wonder the ones with sane minds will want to leave.
Unsure whether ‘moronic blurbs by the Board’ was part of the training set – guess not; the blind don’t see – as that would be the hidden variable par excellence, one that would obviate the need for any training set. Not even considering whether using actual employee data would be allowed for this, qua privacy.
And cause and effect of employee treatment and retention ..? When you treat them as drones, drones are all you’ll end up with.
See? The tools are taking over their own users, brainwashing them into fellow travellers. In-company variants of this.

The latter, of this century. Leadership is subservient to the organisation. People join a band of professionals in order to achieve goals that they can’t achieve on their own: achievement of the goals of the organisation and, through that, of their own. If their own goals aren’t met/achieved sufficiently [yes, yes, allowing for some bartering along the way between the org’s interests and your own, nothing for nothing], they’ll look elsewhere indeed. The organisation has no inherent right of existence but to serve its constituents. That ‘capital’ would have a say, OK – but one say among many. If ‘capital’ takes over [e.g., through weaving errors in the system, ref. Adam Smith himself!], in the end capital will possibly have no need for humans (as per the above in-company take-overs) and become DACs (or DAOs even), with maybe some puppets mumbling the overlords’ fake truths.
See? That’s not even the origin of this century’s organisations, it’s the very foundation of the theory of the firm.

Your choice. Dystopia, or join towards Utopia.
[That will not be achievable, and better for it, as it is a nightmare! – as in the original intention]

Also, read up on this piece.
And, edited to add before publication: This on Big G’s change-to-come, especially the paragraph block below Ortega… [not y Gasset, though.]

Leaving you with:

[The somewhat-pastiche of the small-scale world looks appetising! Niagara-on-the-Lake]

The Model

Yes, hopefully, you were thinking of some Great music; here. Otherwise: I wanted to ask, again, whether anyone would have pointers to some easy intro on modelling – on how, in neural networks, the weights are the model. In terms of parametrising a SAP system, but learning by training and doing; changing the parametrisation as you go, in production, to accountants’ delight [maybe not so much]. This, compared to Expert Systems and evolutionary algorithms. The latter so much underestimated – but hey, let’s get something going in that area and first try to get some catch-all explanation-for-the-masses on neural networks out, right?
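As a possible starting point for that ‘the weights are the model’ intro: fit a single linear unit by gradient descent and note that the learned pair (w, b) is the entire model – ship those two numbers and you’ve shipped the behaviour; keep updating them in production and you have the continuous learning (and the accountants’ headache). A minimal sketch with made-up data:

```python
# Fit y = 2x + 1 with one linear unit; afterwards the pair (w, b) IS the
# model - no other state is needed to reproduce its behaviour.
def train(points, epochs=2000, lr=0.05):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in points:
            grad = (w * x + b) - y   # derivative of squared error w.r.t. prediction
            w -= lr * grad * x
            b -= lr * grad
    return w, b

data = [(x, 2 * x + 1) for x in (-2, -1, 0, 1, 2)]  # made-up training data
w, b = train(data)
print(round(w, 3), round(b, 3))  # converges to approximately (2.0, 1.0)

# "Shipping the model" is shipping the weights:
predict = lambda x: w * x + b
print(predict(10))  # close to 21
```

A neural network is the same story with many more weights and a non-linearity in between; the parametrisation-that-changes-itself is the whole point.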

Then again:

[Emerging-from-software library; Dublin]

BughuntAIng ..?

“Asking for a friend” whether anyone would have a lead on how things stand with AI doing bug hunting ..?

As [hey, by sheer coincidence, exactly] three years ago I asked already. And got no serious response then. And since, there has been more talk about AI on either side of the infosec FLOT, but not much on How. Or where the FLOT is. Throw in some Adversarial Examples and there you go; arms races, and falling behind in them…

By now, it should be easy to do such a thing: have an ML system learn what sloppy coding looks like, even when that is hiding in plain sight, i.e., OK on first (syntax/compile) impressions but functionally tricky ..? A (data/input) edge-case quality-breakdown severity tester would also help.
Then again, I suspect that some masters of invention haven’t mastered the solution yet… As per DARPA’s request for your proposal to help solve a tangent issue; had they solved the above, they would’ve been able to translate the results to the latter.
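The ‘learn what sloppy coding looks like’ idea can at least be caricatured: score snippets by token frequencies learned from labelled examples. Everything below – the snippets, the whitespace tokenisation, the scoring – is a deliberately crude illustration; real approaches work on ASTs, data flow, or learned embeddings:

```python
# Toy sketch of "learn what sloppy coding looks like": score snippets by
# token frequencies from labelled examples. Entirely illustrative - the
# training snippets and tokenisation are stand-ins, not a real method.
from collections import Counter

def token_counts(snippets):
    c = Counter()
    for s in snippets:
        c.update(s.split())
    return c

sloppy = ["except: pass", "eval ( user_input )", "password = 'admin'"]
clean = ["except ValueError: raise", "int ( user_input )", "password = getpass()"]

sloppy_c, clean_c = token_counts(sloppy), token_counts(clean)

def sloppiness(snippet):
    """Crude log-odds-style score: positive leans sloppy, negative clean."""
    return sum(sloppy_c[tok] - clean_c[tok] for tok in snippet.split())

print(sloppiness("eval ( data )"))  # positive: leans sloppy
print(sloppiness("int ( data )"))   # negative: leans clean
```

The gap between this caricature and ‘functionally tricky but syntactically fine’ is, presumably, exactly why DARPA is still asking.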

In return, for your viewing pleasure:

[Just a pretty picture – when you’re an engineering-inclined mind; Binckhorst Den Haag Voorburg..?]