Friday’s Sobering Thoughts – III

“A human must turn information into intelligence or knowledge.” — Grace Hopper

Indeed, too much of the world these days seems to run on information. At most. We blogged about this earlier, when We were even more optimistic.

No wonder all the craze is to get more data. Data isn’t the new oil, but you’d heard that already, hadn’t you? When one can’t go deeper, when one can’t move up, one moves sideways at best, trying desperately to get higher but slipping, sliding, failing. So many data lakes, seas, oceans; so little bare intelligence, let alone anything Higher.

Can we fix this ..? Is there hope ..? Well, there’s this diamond.


[Amazing maze grace space; Mezquita of Córdoba]

Valjoow

I’ve been known to opine, sometimes in words that were not the most diplomatic of euphemisms, on the idea of Value in, e.g., goal setting for IT departments. To name just one example: those who should know better want ROI, but for the R have nothing but ‘Value’, which apparently is what you deliver when you deliver whatever The Business wants.

Now, “As the business innovator W. Edwards Deming taught, if management sets only quantitative targets and makes people’s jobs depend on meeting them, “They will likely meet the targets — even if they have to destroy the enterprise to do it.” In Deming’s estimation, more than 90 percent of the conditions that affect a company’s performance can’t be easily tracked and measured. Yet managers spend more than 90 percent of their time monitoring and analyzing some form of measures; most notably, time tracking” (here).
Also, “The perception that the world is quantitative and that business is therefore mechanistic has for the past fifty years shaped all the variants of strategic planning, financial analysis, budgeting, cost management, and management accounting that have been taught by graduate business schools and practiced in large organizations. Executives versed in such practices and who believe that reality is defined by quantitative measurements are like the puppies who believe that the fence defines reality” (same).

What can be measured is of no value; what is of value cannot be measured.

Yeah, I know a few, a precious few, circumstances where Value is in the measurable, and there, sometimes the milliseconds are the long run. The short run is shorter, indeed… But for most: ditch the idea that you know what Value is about. Re-engineer your organisation indeed [even if that might be painful, as you’ll probably have to transform unrecognisably], and conform to the original theory of the firm. Then, durable success may be yours. The times, they are a-changin’ ..!

Cheers, and:

[Art, as a prime example; Barça]

Complexicated it ain’t

When dealing with complex problems…

Since you can call anything you don’t understand ‘complex’, but most probably you’re just in over your head and can’t even see that the issue is merely complicated, no more than that; merely complicated being more than you can handle already, but even that you don’t see. I.e.:

Complicated is a problem that needs a complicated algorithm to solve. Mostly by dumbing down the situation, a.k.a. ‘modelling’. Which is OK if the problem at hand is indeed complicated, meaning that a step-wise analysis/detailing actually works and one can prune the hierarchical problem tree of insignificant branches until one arrives at a model that can be understood (by you) and tackled with a known problem-solving strategy/algo. Most people, though, just strip down a problem’s structure by slashing away at its parts in- or too-little-discriminately, as they don’t see what they lose in the process, or couldn’t care less; they just need a model they can understand, not a model that is appropriate for problem resolution. Which leads to Dunning-Kruger extremities. You know, along the lines of ‘any crippled model is better than no model at all’.
< Failure Level: Maximum+ >
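For the ‘complicated’ case, here’s a minimal sketch, in Python, of that step-wise pruning. The tree, the weights and the threshold are made-up illustrations of the idea, not a general method:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical problem tree: each node carries an assumed significance weight.
# 'Complicated' means this decomposition is valid, so pruning loses little.
@dataclass
class Node:
    name: str
    weight: float                      # assumed significance of this branch
    children: list["Node"] = field(default_factory=list)

def prune(node: Node, threshold: float) -> Optional[Node]:
    """Drop branches below the significance threshold; keep the rest."""
    if node.weight < threshold:
        return None                    # the 'slashing away' step
    kept = [c for c in (prune(ch, threshold) for ch in node.children) if c]
    return Node(node.name, node.weight, kept)

tree = Node("problem", 1.0, [
    Node("core mechanism", 0.8, [Node("rare edge case", 0.05)]),
    Node("cosmetics", 0.1),
])
model = prune(tree, 0.2)  # a tractable model: 'cosmetics' and the edge case gone
# The Dunning-Kruger failure mode: a threshold set so high that what remains is
# understandable to you but no longer represents the problem.
```

The point of the sketch: for a genuinely complicated problem the weights are meaningful and pruning is safe; for a complex one, no such weighting of separable branches exists in the first place.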

Complex is where all this doesn’t help. Even when you do an intelligent complication-stripping … you end up with interrelations that are just too much to cope with, conceptually, and no algorithm for (linear …!) problem reduction can help. Nor can any complex thought process: a., we may not be capable of that to the degree necessary, nor to the degree we think we’re capable of … [D-K again]; b., no progress may be made.

So … per “Wicked problem: [A] social or cultural problem that is difficult or impossible to solve for as many as four reasons: incomplete or contradictory knowledge, the number of people and opinions involved, the large economic burden, and the interconnected nature of these problems with other problems.” – here – the latter reason, the interconnectedness, applies here.
Even when we’re in sub-wicked, just complex territory.

To consider, when you face a ‘problem’…: If it reeks of non-triviality, go Design mode. If it truly is merely complicated: go ahead and solve it; what are you waiting for?

By the way: read more on the subject here, and here again.
Shall we stop throwing ‘wicked’ around? It applies far less often than it’s invoked, and overuse inflates problems beyond their true nature.

Now then, …:

[Great design; Rijksmuseum]

New ethics angles

Or, should I say in typical AI style, a new linear separation ..?

The latter, not too literally. But in the whole thing, I suddenly [referring to, say, ‘over a quarter’ ago] see new insights. Moving from the old “oh, I have no clue, let’s take an AI system as a black box; I don’t want to have anything to do with the content” stance of ‘ethicists’, over to some who have a clue and make progress. And even, from a country not known to be interested, there’s serious legal back-up [attempt] on that explainability thing. But I digress… back to the Clue part.

In particular, the distinction between perception and conceptualisation. Harking back to neuro/psycho/functional left-right brain differences [and how they’re connected into hopefully integrated wholes!], but also to e.g., Kant’s vision (pun intended, for the initiate) of how the brain works – not far off the percept-concept divide, actually – and deeper down to e.g., Zukav‘s interpretation of absolute SotA theoretical physics.
[ Note though, that claims about having to go to ‘first principles’ are a pastiche of a pop-sci variant of Kant’s apriorism only; don’t trust those who rarely progress beyond merely meekly bleating about those fp’s and never get anywhere but lost … ]

Yes, let’s work from such fundamental differences when aligning the full scale of linear-logic algorithmic / Expert-System [ / Evolutionary as an intermission ] / Fuzzy-Logic / ML / AI, to see that what we’ll want in the end is a bit of all, surf ‘n turf,
but also that, as we hoomans now stand, we usually have a bit of all as well (mainly towards the right [zero endorsement v.v. ‘left’ or ‘wrong’] of the scale); so how do we assess the efficacy of ‘our’ operations, qua ethics, correctness, efficiency ..?

We want to hold ‘AI’ accountable before allocating agency. Do we do that with humans, or is it a matter of being around and then being allowed to run for president ..? [As a pointer, not to anyone in particular, again]

Much work, very interesting:

[Still no clue how ‘they’ could’ve let Hermes be there at the entrance, but somehow relevant; Siena]

Noli me auto

… Yes, again on auto cars [for over a century known as auto-mobiles or just “autos”]. This time, a possibly new subject: Who has the self-driving capabilities ..? In a sense close to ‘ownership’, that is. As training is so hard that “As a result, most companies presently guard their machine learning data as trade secrets. But trade secret protection has its drawbacks, one of which is to society at large. Unlike technology taught in patented disclosures, which could allow a new entrant to catch up (provided it licenses or designs around the patent), an AV data set of an unwilling licensor is not obtainable absent a trade secret violation or duplicating the considerable miles driven.”[1] From a distance, it reads a bit like this. Not (only) re the Old Guard, but moreover referring to the stand-offish pose of AI gone beyond AGI… Think that one through.

In practice, for now, it leads to “Keeping AV data secret also creates a “black box” where consumers and authorities are unable to fairly and completely evaluate the proficiency/safety of the AI systems guiding the vehicles. At most, consumers will likely have to rely on publicly compiled data regarding car crashes and other reported incidents, which fail to adequately assess the underlying AI or even isolate AI as the cause (as opposed to other factors). As it is, AV developers’ “disengagement reports” — those tallying incidents where the human attendant must take over for the AI — vary widely, depending on how the developer chooses to interpret the reporting requirement. Without comparable data, consumers are often left with nothing more than anecdotal evidence as to which AV system is the safest or most advanced.” — yet again, intransparency.

Intransparency, as one great blocker of agency … allocatability, in normal human life. Here, we may have a problem ..? Wrong Quote alert ..! as here.
Nevertheless, this brings back the discussion about driver’s licenses. Currently issued car-driver by car-driver.
Soon, both!
As the cars will have to be certified categorically, per software release per physical platform! As the interplay (and compatibility) may vary wildly, and e.g., software may bloat and hence spoil response times even where the hardware’s capabilities don’t make up for, or are degraded by, the software update(s). Every time, all the time. These failure phenomena creep in, too; they don’t only turn up stepwise. [how’zat for brevity? A sketch of this certification keying follows after the next point.]
As the drivers will have to handle all sorts of exceptions across all sorts of platforms. Yes, they’ll still need drivers’ licenses, and be able to demonstrate much more competency, experience and skill in many more of the exceptional circumstances in which they might have to take over from the autos. Driving round the block is useless; shitstorm/clusterfuck traffic is what you get your license for in the first place! So many people will fail; so many will never be able to get the experience (from where, would you suggest ..?).
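To make ‘per software release per physical platform’ concrete, a minimal sketch (all names and fields hypothetical; no actual regulator’s schema implied) of what such a certification register would have to key on:

```python
from dataclasses import dataclass

# Hypothetical record: validity is tied to the exact combination of
# physical platform and software release, never to either alone.
@dataclass(frozen=True)
class AvCertification:
    platform: str      # physical vehicle platform (chassis, sensor fit, ...)
    sw_release: str    # exact software build; any update voids the certificate
    certified: bool

# The register is keyed on the (platform, sw_release) pair.
register: dict[tuple[str, str], AvCertification] = {}

def certify(platform: str, sw_release: str) -> None:
    register[(platform, sw_release)] = AvCertification(platform, sw_release, True)

def is_certified(platform: str, sw_release: str) -> bool:
    # A new software release on an already-certified platform (or the same
    # release on a new platform) misses the register: re-certify every time.
    rec = register.get((platform, sw_release))
    return rec is not None and rec.certified

certify("platform-A-2019", "drive-sw-1.0.3")
assert is_certified("platform-A-2019", "drive-sw-1.0.3")
assert not is_certified("platform-A-2019", "drive-sw-1.0.4")  # update: not certified
```

Every software update, however minor, lands on a new key; which is exactly the ‘Every time, all the time’ of the point above.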

So, agency … Plus:

[Ah, and there’s NY traffic …]

[1] Quoted from this great introduction.

Friday’s Sobering Thoughts – II

Yup, another one, actually from the same series: a useful read on the tribulations of fake news, in a way. In another [way], the scope is much wider. Last Friday’s Friday’s So(m)bering already referred to Ortega y Gasset. This one does, too.

Again: how do we fight these things …? One human at a time; if only we knew what ‘time’ is … Even Kant couldn’t make much of it [re: Kritik der reinen Vernunft – no really, all it has is a derivative notion, not an operational definition] – anyway, that takes too long.

OK then, leaving you with:

Flooding (news on global warming) versus stochastics

Where the headline reads like something to do with, e.g., vintners having to look forward to ever riper, less acidic grapes, or moving North to be able to keep making light-enough styles.
Which is a correct reading.
But this post is more on a current-day issue.

Vines need really, really long to mature into anything useful, you know. Then again: think of the vast expanses of Canada that will open up to winemaking… O Canada, your promise …
And the issue at hand, being of Today. Where already we had ‘Stochastic Terrorism‘ – and now it’s to be linked to ‘Flooding‘.

Well, at least, that’s my idea. The latter, dumbing down the masses so Dunning-Kruger may take effect, and overly affect; in between, sowing the wind and reaping the storm with the former, ST.

See? It’s a double whammy. [I mean that in a culturally-neutral sense, excluding suggestive senses oh here I might have gone again]

Now what?
And:

[the bite always looms under the surface, either from a local or from an ages-term outside view; Baltimore]

Your AI Project

As a brief intermission: Your AI project is truly similar to the highest-tech car around.

In the glossy brochure and in the specs. Das AIuto it is (Truth In AI Engineering), The Ultimate AI-Driven Machine. The Best AI or Nothing. There Is No non-AI Substitute.

Then, grand delivery:

There you go! You have your shiny toy!
Unsure when the AI world will mature…

Yes, personally I’d prefer this brand (CD or SC), or this.

Good AI

…implementations. Now the hubbub [in lieu of, a., using ‘in lieu of’ improperly, as so often is the case; b., ‘hype’] is settling down a bit, though some laggards are still at it instead of keeping mum and just doing it properly.

The proper way being what the real talk is about, now:

  1. On the one hand, there’s an enormous lack of use of the tools that are available now – not widely available, but they’re there, if one would take a look and see their building-‘tool’ status. No, I don’t mean the glitzy stuff with fancy screens. I mean the ‘advanced’ stuff, that captures the forefront of what was available in theory, in functional descriptions and ready-made demonstrators of … yes, twenty-five years ago.
    Those have been transformed by scraping off the (then) practical drawbacks: lack of training data, lack of compute power, and the comparative economic advantage of human labour. The latter having come down not in cost [your mileage may vary when comparing purchasing power ..!!] but in intelligence. Yes. Read my many posts on ‘management’, cooperative efforts, et al. The fact (sic) is that now, the marginal out-performance of ‘systems’ gains interest.
  2. On the other, there’s the realisation that, as with any hype, the proof is in the long, arduous, classic systems-implementation drudgery of building and rolling out actual ‘systems’. Not only the software/hardware, as that is to be somewhere in a cloud [even if private, localised, i.e., a data centre of yours] or you’re doing IT wrong, but also the assembly-line operators duly called ‘managers’ (them again!) who think they still control stuff but are merely allowed to tighten a screw here and there, and certainly no more, as they shouldn’t interfere with the system, since they only can/will do wrong then. The system being the assembly line of information flows, of bits and bytes, that should be untouched, as that’s where the money is, not in the mere auxiliaries called humans. For lack of a better pejorative word.
    This then includes Change Management of the organisational kind, sometimes jokingly called ‘process‘ change. Which is harder to ‘get’ than understanding (true; sic) AI technology; the latter being much beyond mere theoretical physics, Sheldon, let alone rocket science, but still quite easy when one puts one’s mind to it. Unless you don’t even believe in yourself, probably for a very good reason.
  3. On the third hand [don’t even bother to smallmindcorrect me], there’s more serious stuff to do around the system engineering. Take it as a normal business investment project, like here. Ah, now there‘s the clue to this post. But without the above intro, it wouldn’t have been as much fun, wouldit’vebeen ..?

So, finally again:

[Now go contemplate, e.g., where this is. Any half-decent AI implementation will render ‘Convento Corpus Christi, V.N. de Gaia’ in seconds; what about you ..?]

No, you’re wrong. Just pay appropriately.

Again, the issue of the ‘cyber’security skill gap (#ditchcyber, and skill ..?) came up, in this thing. Which seems like a logical piece. But it isn’t. It falls into the same trap as a great many other articles, et al.: when discussing rising pay, do they apply economics 101 ..? No, they don’t. They see the rising pay, but not the enormous chasm between the risen pay and the market-clearing pay. The former, paltry. The latter …:

Why on earth should people who, apparently, have so much more impact on the results of any company than the CEO or Board has, not be paid correspondingly? If, if, if the company were rationally managed, the ‘cyber’punks that protect the company would earn five to fifty times more than the CEO does, as that would reflect the value-to-impact ratio. No, let the CEOs, Boards, do their babble double full time anywhere; it’s still babble without much, if any, positive contribution to the bottom line. [The very rare exceptions noted. Fun fact: you’re not one of them.]
So, where CEOs claim [corollary fun fact: falsely!] to have such an impact that they deserve 500-5000+ times the income of workers who actually do have an impact, the ‘cyber’punks to be hired should command such a multiple above that again; or are you illogical, irrational on the verge of insanity ..? [not saying on which side of the verge.] The back-of-the-envelope arithmetic is sketched below.
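To spell out that economics-101 arithmetic with made-up round numbers (the multiples are this post’s argument; the baseline salary is purely illustrative):

```python
# Purely illustrative figures; only the multiples come from the argument above.
worker_salary = 50_000                  # assumed baseline worker income
ceo_multiple = 500                      # lower end of the claimed 500-5000+ range

ceo_pay = worker_salary * ceo_multiple  # 25,000,000

# If pay were to track impact, and the security staff's impact exceeds the
# CEO's, consistency would demand at least the same multiple again:
cyberpunk_pay = ceo_pay * ceo_multiple  # 12,500,000,000; a reductio ad absurdum

print(f"CEO: {ceo_pay:,}  'cyber'punk: {cyberpunk_pay:,}")
```

The absurd result is the point: either the impact-justifies-pay logic holds and security staff are grotesquely underpaid, or it doesn’t and the CEO multiple collapses with it.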

Another option is to not pay the in-demands (?) so ridiculously … and also not the Boards. Oh, how huge the pools of money would be that would then flow to the employees that actually matter …?

Also, the ‘skills’ gap is mainly a supply gap, made so most certainly by the demand side (as in this; in Dutch but G Translate will do some, possibly mediocre, trick).
So, for now:

[Yet another Board member (uh-huh, you-know-what-I-mean) compensating design (though Good); London]

Maverisk / Étoiles du Nord