Understanding a model, you know ..?

No you don’t.
To be more precise: … read this. Which links back to my posts, and many others’, about modelling being all about dumbing down complication or complexity until you ‘understand’ what’s going on. Here, that idea is taken to its full consequence.

Also, tracking back and seeing how ‘we’ may think we’re of above-average intelligence, not only does Dunning-Kruger kick in big time, but also … where do we draw the line; where is ‘our’ or others’ thinking so dumbed-down that we cannot speak of ‘intelligence’ anymore? Oh for sure, the line is somewhere between your intelligence and whatever it is that’s going on in the brains of your co-workers (let alone bosses), but how far on from there would we need to concede that machine learning is off-limits (as well) ..!?
Because it’s a scale, and a hard cut-off will fail [for the rhyme] – AI can drop the I. How are your A-systems coming along ..?

Cheers,

[Ugly, or stylish? At least, it’s consistent and it has a great bar downstairs; Amsterdam]

Advertisial AI

To throw in a new phrase. Since ‘AI’ in itself no longer has the buzz it once had, we seem to need ever more interesting-looking mumbo jumbo to attract the attention of the not-so-insiders.

And since ‘adversarial AI’ has been taken. In a somewhat-right way, though grossly incomplete [the doc behind it, here], since the most pernicious, class-breaking sub-fields have been omitted [from that linked piece]. Like, adversarial examples… Yes, yes, I know, I read the darn thing; I know adversarial examples are in there, but not in the way that phrase used to be used (like, ages, i.e., a mere couple of months, ago). Diluting the term.
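
For reference, the classic sense of the phrase: a minimal sketch, assuming a differentiable PyTorch-style classifier, of the fast gradient sign method that made ‘adversarial examples’ a thing in the first place (model, image and label are placeholders, not anything from the linked doc):

```python
# Minimal sketch of the classic 'adversarial example' sense: the fast
# gradient sign method (FGSM). Assumes a differentiable PyTorch classifier;
# `model`, `image` and `label` are placeholders, not from any specific doc.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.01):
    """Return a perturbed copy of `image` that nudges the model toward error."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```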

When some dilute the term for their own benefit, profit,
the conclusion is: we need a new term.
Hence, to the linked doc: Advertisial AI. Offering no solutions, just scaremongering you into contacting them so they can deliver yet another powerpoint-cum-ridiculous-billing.

… Then again, the doc has some insights, like giving an overview of current-day pressing issues, to be tackled [note: ‘resolve’ them you can’t!] before they become insurmountable.
Hey, I just want a serious piece of that action, capisce ..?

Edited to add: the true use, with the true definition, of adversarial AI would of course be against this. And:


[No relevance (huh, like, the object being just a little incomplete..?), for a pretty sight; Baltimore]

The body is the missing link for truly intelligent machines

A full repost, with permission:

It’s tempting to think of the mind as a layer that sits on top of more primitive cognitive structures. We experience ourselves as conscious beings, after all, in a way that feels different to the rhythm of our heartbeat or the rumblings of our stomach. If the operations of the brain can be separated out and stratified, then perhaps we can construct something akin to just the top layer, and achieve human-like artificial intelligence (AI) while bypassing the messy flesh that characterises organic life.

I understand the appeal of this view, because I co-founded SwiftKey, a predictive-language software company that was bought by Microsoft. Our goal is to emulate the remarkable processes by which human beings can understand and manipulate language. We’ve made some decent progress: I was pretty proud of the elegant new communication system we built for the physicist Stephen Hawking between 2012 and 2014. But despite encouraging results, most of the time I’m reminded that we’re nowhere near achieving human-like AI. Why? Because the layered model of cognition is wrong. Most AI researchers are currently missing a central piece of the puzzle: embodiment.

Things took a wrong turn at the beginning of modern AI, back in the 1950s. Computer scientists decided to try to imitate conscious reasoning by building logical systems based on symbols. The method involves associating real-world entities with digital codes to create virtual models of the environment, which could then be projected back onto the world itself. For instance, using symbolic logic, you could instruct a machine to ‘learn’ that a cat is an animal by encoding a specific piece of knowledge using a mathematical formula such as ‘cat > is > animal’. Such formulae can be rolled up into more complex statements that allow the system to manipulate and test propositions – such as whether your average cat is as big as a horse, or likely to chase a mouse.
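
To make that concrete: a minimal, purely illustrative sketch (not from the article) of what such a symbolic knowledge base boils down to:

```python
# Illustrative only: a toy symbolic knowledge base of the 'cat > is > animal'
# kind, with a simple rule for testing propositions by following links.
facts = {
    ("cat", "is"): "animal",
    ("cat", "chases"): "mouse",
    ("animal", "is"): "organism",
}

def holds(subject, relation, obj):
    """Check a proposition, following 'is' links transitively."""
    if facts.get((subject, relation)) == obj:
        return True
    parent = facts.get((subject, "is"))
    return parent is not None and holds(parent, relation, obj)

print(holds("cat", "is", "organism"))   # True, via cat -> animal -> organism
print(holds("cat", "chases", "horse"))  # False
```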

This method found some early success in simple contrived environments: in ‘SHRDLU’, a virtual world created by the computer scientist Terry Winograd at MIT between 1968 and 1970, users could talk to the computer in order to move around simple block shapes such as cones and balls. But symbolic logic proved hopelessly inadequate when faced with real-world problems, where fine-tuned symbols broke down in the face of ambiguous definitions and myriad shades of interpretation.

In later decades, as computing power grew, researchers switched to using statistics to extract patterns from massive quantities of data. These methods are often referred to as ‘machine learning’. Rather than trying to encode high-level knowledge and logical reasoning, machine learning employs a bottom-up approach in which algorithms discern relationships by repeating tasks, such as classifying the visual objects in images or transcribing recorded speech into text. Such a system might learn to identify images of cats, for example, by looking at millions of cat photos, or to make a connection between cats and mice based on the way they are referred to throughout large bodies of text.
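
By way of contrast, a minimal sketch (illustrative only, using scikit-learn’s public API on random stand-in data) of that bottom-up, learn-from-examples approach:

```python
# Illustrative only: the bottom-up approach learns the 'cat vs. not-cat'
# boundary from labelled examples rather than from hand-coded rules.
# Uses scikit-learn's public API; the data here is random stand-in pixels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((200, 64))                   # 200 tiny 'images' of 64 pixel features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # stand-in label: 'cat' or not

model = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("held-out accuracy:", model.score(X[150:], y[150:]))
```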

Machine learning has produced many tremendous practical applications in recent years. We’ve built systems that surpass us at speech recognition, image processing and lip reading; that can beat us at chess, Jeopardy! and Go; and that are learning to create visual art, compose pop music and write their own software programs. To a degree, these self-teaching algorithms mimic what we know about the subconscious processes of organic brains. Machine-learning algorithms start with simple ‘features’ (individual letters or pixels, for instance) and combine them into more complex ‘categories’, taking into account the inherent uncertainty and ambiguity in real-world data. This is somewhat analogous to the visual cortex, which receives electrical signals from the eye and interprets them as identifiable patterns and objects.

But algorithms are a long way from being able to think like us. The biggest distinction lies in our evolved biology, and how that biology processes information. Humans are made up of trillions of eukaryotic cells, which first appeared in the fossil record around 2.5 billion years ago. A human cell is a remarkable piece of networked machinery that has about the same number of components as a modern jumbo jet – all of which arose out of a longstanding, embedded encounter with the natural world. In Basin and Range (1981), the writer John McPhee observed that, if you stand with your arms outstretched to represent the whole history of the Earth, complex organisms began evolving only at the far wrist, while ‘in a single stroke with a medium-grained nail file you could eradicate human history’.

The traditional view of evolution suggests that our cellular complexity evolved from early eukaryotes via random genetic mutation and selection. But in 2005 the biologist James Shapiro at the University of Chicago outlined a radical new narrative. He argued that eukaryotic cells work ‘intelligently’ to adapt a host organism to its environment by manipulating their own DNA in response to environmental stimuli. Recent microbiological findings lend weight to this idea. For example, mammals’ immune systems have the tendency to duplicate sequences of DNA in order to generate effective antibodies to attack disease, and we now know that at least 43 per cent of the human genome is made up of DNA that can be moved from one location to another, through a process of natural ‘genetic engineering’.

Now, it’s a bit of a leap to go from smart, self-organising cells to the brainy sort of intelligence that concerns us here. But the point is that long before we were conscious, thinking beings, our cells were reading data from the environment and working together to mould us into robust, self-sustaining agents. What we take as intelligence, then, is not simply about using symbols to represent the world as it objectively is. Rather, we only have the world as it is revealed to us, which is rooted in our evolved, embodied needs as an organism. Nature ‘has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’, wrote the neuroscientist Antonio Damasio in Descartes’ Error (1994), his seminal book on cognition. In other words, we think with our whole body, not just with the brain.

I suspect that this basic imperative of bodily survival in an uncertain world is the basis of the flexibility and power of human intelligence. But few AI researchers have really embraced the implications of these insights. The motivating drive of most AI algorithms is to infer patterns from vast sets of training data – so it might require millions or even billions of individual cat photos to gain a high degree of accuracy in recognising cats. By contrast, thanks to our needs as an organism, human beings carry with them extraordinarily rich models of the body in its broader environment. We draw on experiences and expectations to predict likely outcomes from a relatively small number of observed samples. So when a human thinks about a cat, she can probably picture the way it moves, hear the sound of purring, feel the impending scratch from an unsheathed claw. She has a rich store of sensory information at her disposal to understand the idea of a ‘cat’, and other related concepts that might help her interact with such a creature.

This means that when a human approaches a new problem, most of the hard work has already been done. In ways that we’re only just beginning to understand, our body and brain, from the cellular level upwards, have already built a model of the world that we can apply almost instantly to a wide array of challenges. But for an AI algorithm, the process begins from scratch each time. There is an active and important line of research, known as ‘inductive transfer’, focused on using prior machine-learned knowledge to inform new solutions. However, as things stand, it’s questionable whether this approach will be able to capture anything like the richness of our own bodily models.

On the same day that SwiftKey unveiled Hawking’s new communications system in 2014, he gave an interview to the BBC in which he warned that intelligent machines could end mankind. You can imagine which story ended up dominating the headlines. I agree with Hawking that we should take the risks of rogue AI seriously. But I believe we’re still very far from needing to worry about anything approaching human intelligence – and we have little hope of achieving this goal unless we think carefully about how to give algorithms some kind of long-term, embodied relationship with their environment.

Ben Medlock

This article was originally published at Aeon and has been republished under Creative Commons. [CC Attribution No-Derivatives]

For your reading patience:

[Your body is not ‘intelligence-transparent’; Barça]

The ,0% mirage

That’s not a Unicode error. That’s a reference to the 100,0% no-error that suddenly [?] seems to be required of “auto”s, as here – get the full 100% explainability as a bonus.

The [?] was there because the initiative/newsflash was from a couple of months ago already. Nevertheless: apparently the clueless have taken over. I will not comment on how that has been the case since human society emerged, almost always. But if the ‘net has done anything for the world (except greatly speeding up its impending complete destruction), it is the outing of this.

How could you achieve such a thing; requiring 100,0% perfection in predicting the future ..!? Because that is the requirement we’re talking about. Which is logically impossible in physics (a subset of mathematics, itself a subset of rigorous human thought), and even more so in practice, where the world is über-chaotic (in a systems-theoretical sense) and we don’t know a thing about the starting conditions [which might only theoretically have helped, if we were able to map all of the complexity of the universe into a model … that itself would be in the universe; bottomless recursion, and even if the model turned out to be fractal, that would already be much more structured than Reality].
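
To make the chaos point tangible: a minimal, purely illustrative sketch of how a starting-condition error of one part in a billion wrecks any prediction, using the logistic map as the standard toy example:

```python
# Illustrative only: the logistic map as a toy chaotic system. Two starting
# points that differ by one part in a billion diverge completely within a
# few dozen steps, so '100,0% perfect prediction' is off the table.
def logistic_trajectory(x0, r=3.9, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.500000000)
b = logistic_trajectory(0.500000001)   # 'known' starting condition, off by 1e-9
for t in (10, 30, 50):
    print(t, abs(a[t] - b[t]))
# The gap grows from ~1e-9 to order 1: the prediction becomes worthless.
```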

And, why ..? Certainly, the ones who require such an impossibility by that very fact prove they shouldn’t be allowed to commandeer any piece of machinery [let alone any human group], but last time we looked, such a test wasn’t a requirement for obtaining a driver’s license. So humans, with all their craziness and unpredictability, don’t need to attain the perfect 100,0%, but machines, once programmed (incl. ‘learned’, but that goes without saying) and hence with easily, strictly maintainable behaviour, do ..? ‘The world on its head’…
And as we commented before, the aim seems to be to be able to blame any humans in the vicinity for all mishaps, since it couldn’t have been the machines ..? Nice … in your delusional dreams.

I intended not to get angry (or angrier) and just grin’ace, but I think that to achieve that I’d better stop now… with:

[On its head, the world is getting to be; Voorburg]

Danger zone

Just a warning: the world altogether is moving into the danger zone now. Not qua global warming – the threshold on that has been passed already … Not qua plastic soup – that’s approaching insurmountability-before-collapse (in this style).
But of a more encompassing kind, as typified by the lack of real response to this: ‘predictive’ profiling creeping (sic) on. As posted earlier, we may need to make haste with anti-profiling human rights, before it’s too late. Which is, quite soon.

Yes, the benign purpose is there. But it is also ridiculously superficial window dressing, with the sinister effects lurking in the background. With all data about abuse – my guesstimate: everywhere such systems are deployed – suppressed. With the moronic claim that yes, sometimes the Good (the false positives) will have to suffer along with the Bad; if you think that, please remove yourself from the gene pool. Certainly since the false negatives figure nowhere. Anyone surprised we never hear of the false positives ..? Nor of the true positives; do they exist, or are they enrolled in the management talent pool since they seem to know what’s up ..?
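
For scale, a minimal worked example (numbers purely illustrative) of why the false positives swamp everything at realistic base rates:

```python
# Purely illustrative numbers: even a '99% accurate' profiling system drowns
# in false positives when the thing it looks for is rare.
population = 1_000_000
base_rate = 0.0001            # 1 in 10,000 actually 'of interest'
sensitivity = 0.99            # true positives caught
false_positive_rate = 0.01    # innocents flagged anyway

actual = population * base_rate
true_pos = actual * sensitivity
false_pos = (population - actual) * false_positive_rate

print(f"flagged in total : {true_pos + false_pos:.0f}")
print(f"of which innocent: {false_pos:.0f} "
      f"({false_pos / (true_pos + false_pos):.1%})")
# Roughly 99% of everyone flagged is a false positive.
```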

And, what negative societal effects of total surveillance ..? Already, our society is much more tightly controlled than any of the totalitarian regimes of <give or take a century ago>, so vilified, could ever manage. We’re living in the tyrant’s dream world. And, as researched, the effect is that we complain all the less.

To end with a Blue Pill (?):

Account Home lockout for cheap thrills

Remember when it only took three ‘failed’ attempts at pwd guessing and one’s co-worker’s account was locked out and needed to be reset to Welcome1 by the helpdesk ..?

  • How stupid, since delayed auto-unlock had been available for so much longer already – but then, have a bot just try a million times and you’d have to wait till the Singularity to be let in again (see the sketch below this list);
  • Now, there’s a new flavour [I’ll keep spelling proper English till the K isn’t U and about half of it is back in the EU, welcome Edinburgh! or is it Dunedin] by means of smart quod non home systems that have electronic widgets instead of a normal door lock.
    Which leaves ample room for neighbourhood [there we go again] children [not kids] to lock you out, possibly, by trying to gain entry the wrong way. No, cameras won’t save you; the children will know better than you how to approach those without being seen, soon enough. More children have Guy Fawkes masks than there are fans of him (hopefully: yet).
    Yes, not all locks have such failed-attempts counters (I guess), but if not they definitely will have some form of them in the near future and someone will stumble upon some everyday means (cell phone based or otherwise, i.e., ubiquitous and easily anonymisable) that does the trick.
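
A minimal sketch (illustrative only; the class and parameter names are invented) of the difference: delayed auto-unlock with exponential backoff instead of a hard three-strikes lockout:

```python
# Illustrative only: hard lockout vs. delayed auto-unlock. A fixed
# 'three strikes and call the helpdesk' rule hands any prankster a
# denial-of-service button; exponential backoff merely slows them down.
import time

class BackoffLock:
    def __init__(self, base_delay=1.0):
        self.failures = 0
        self.base_delay = base_delay
        self.locked_until = 0.0

    def attempt(self, password, real_password):
        now = time.monotonic()
        if now < self.locked_until:
            return "try again later"
        if password == real_password:
            self.failures = 0
            return "open"
        self.failures += 1
        # Delay doubles with every failure instead of locking out for good.
        self.locked_until = now + self.base_delay * 2 ** self.failures
        return "wrong"
```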

Already, I saw somewhere something on “How to recover after you’re locked out of your own ‘smart’ house” so apparently, it is starting to happen.
Cheers with all that. Though it may also serve as a test of how fast your security co. responds and shows up at your house, if (big if) the you-lockout triggers an alarm. On the flip side: not much of an advantage either, since the neighbourhood children will trigger many on-edge, during-workday drives home b/c Computer Says Alarm.

Can’t have it all, can we? [… there’s more below the pic]
This you can:

[For the little perps: Al’s cell; Philadelphia PA – much worth a visit]

And not even referring to this:

Intermission: A/B – the Net is broken

Just an intermission: The Internet is broken.
Stark claim. But, some proof: I conducted a little A/B test with my posts. Yes, there are many possible biases and systematic deviations, but most probably their effect is neutral.

The test being: which posts would get the most views – short texts or long(er) ones; many links, a few, or none; a long title or a short one; a title that’s descriptive or terse/lapidary.
The result: high links/text ratio ⇒ low hit count, ceteris paribus, is the only significant finding.
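
For anyone wanting to repeat this: a minimal sketch (view counts invented, using scipy’s public API) of how such a links-versus-views comparison could be tested for significance:

```python
# Illustrative only: comparing view counts of high- vs. low-link-ratio posts.
# The numbers are invented stand-ins; scipy's Mann-Whitney U test is used
# because view counts are rarely anywhere near normally distributed.
from scipy.stats import mannwhitneyu

views_low_ratio  = [120, 98, 143, 110, 95, 160, 130]   # few links per word
views_high_ratio = [60, 75, 55, 80, 70, 65, 90]        # many links per word

stat, p_value = mannwhitneyu(views_low_ratio, views_high_ratio,
                             alternative="greater")
print(f"U = {stat}, p = {p_value:.4f}")
# A small p would support 'high links/text ratio => low hit count'.
```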

But wasn’t Tim Berners-Lee’s original idea that ‘hypertext’, text linked throughout, should be what the Web – and with it the Internet – would be about ..? Not cat pics, or NSFW stuff, or streaming et al. as the latter-day circuses; WiFi vid of any kind as the primordial necessity of life.
And the true knowledge-, nay intelligence-extending linking of far-flung ideas and concepts … declared unwanted, apparently.
[This test may be repeated with e.g., wikipedia; I predict similar results]

No wonder we’ll have to re-invent the ‘Net.
Now, back to serious [??] content with:

[Transparent glass art at the Noord-Brabants Museum some time ago]

Suity McSuitcasebot

Question: Why don’t we have robot suitcase (un)loaders at airports yet ..?

The physical robot tech has been there for a number of years already, including the touchy-feely right-pressurising of the grippers. And it’s not like there’s a need for sensitively treating eggs or such; today’s manual grippers don’t seem to be all that delicate with your luggage, as you can see from the humans’ holding pens, er, waiting areas, or from your plane seat.
The piece-by-piece visual-ID part has also been solved long ago, and today’s improvements in image processing (think: cars) offer more, vastly more, subtlety than required. With a huge margin, even, as all luggage has a somewhat-similar format – or it wouldn’t be allowed through luggage self-check-in.

Add in the possibilities of automatic weight balancing (the auto-picker-upper can locate the barcode tag, and can decide where in the plane any piece should go), lowering imbalance. And add in the memory of this: when some moron passenger thinks they’re so Important that they can go on shopping till the gate has closed, the Machine will know somewhat-exactly where the luggage to be unloaded sits. Thus lowering wait times and delays.
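
A minimal sketch (illustrative only; a greedy heuristic with invented weights – real load planning also has to handle moment arms, containers and limits) of what such automatic weight balancing could look like:

```python
# Illustrative only: a greedy heuristic that assigns each scanned piece of
# luggage to the fore or aft hold so as to keep the running weight imbalance
# small. Real load planning also handles moment arms, containers, etc.
def assign_holds(weights_kg):
    fore, aft = [], []
    fore_total = aft_total = 0.0
    # Heaviest pieces first, each into the currently lighter hold.
    for w in sorted(weights_kg, reverse=True):
        if fore_total <= aft_total:
            fore.append(w)
            fore_total += w
        else:
            aft.append(w)
            aft_total += w
    return fore, aft, abs(fore_total - aft_total)

fore, aft, imbalance = assign_holds([23, 18, 31, 12, 27, 20, 25, 16])
print(f"fore {sum(fore)} kg, aft {sum(aft)} kg, imbalance {imbalance} kg")
```
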
The Machine would also know exactly which luggage did or did not go into/out of a plane, thus providing an extra check on lost/mis-routed luggage. And when some piece does get lost (elsewhere?), a cursory description could be run through the visual-recognition system to spot its last-seen location. Both increase travellers’ airport satisfaction ratings.

Yes, today’s luggage handlers are cheap. But they act like it, unsurprisingly – very low pay with high work-related-injury-risk-induced stress – decreasing the non-financial returns of airports (i.e., customer satisfaction). And they care not about speed, anyway. Not in the way that a Suity McSuitcasebot would be able to work considerably faster (thus potentially lowering turn-around times!) than a human handler that complains, has a sore back (average airport-deployed employee health increase!), doesn’t care about precision (careless negligence) and possibly maltreats luggage (throwing it around, and who cares when something accidentally drops).
I also see qualitative improvements in security: fewer hands on the luggage, more per-piece control.

And qua overall handling: for one, linking up the distribution/collection of luggage with the central storage/handling under the airport may save layover costs (space/time). For another, why not have a continuous distribution system, with holding facilities much closer to every gate ..? Shortening turn-around times even further. Also since the self-checked-in luggage under the airport already sits in its individual trays, easing robot visuals/handling later on.

So, probably robots would take some investment to develop and deploy. And some handler staff would have to be retained, for on-site supervision and troubleshooting I guess; no categorical redundancies but some, with retained staff having less hauling, slightly more interesting work left. Like outsize luggage or precious cargo for which the high-speed machines can’t field enough tenderness.
Returns are both non-financial and financial. Add the possibility to let robots haul (a bit) heavier luggage – probably, in the plane’s cargo hold humans may need to schlepp still – but one can allow variation, and charge per kg over some (low) standard weight ..? [Pre-registered otherwise the plane might get too heavy for take-off…]
All improvements require upfront investment – for a serious and lasting return, in time.

Edited to add: this may be thrown into the mix. Slightly after ‘It’s five o’clock somewhere’, so you can figure out where on your own, you smartypants.

Leaving you with:

[This one wouldn’t need help with ‘unloading’, I guess; an old analog pic from … near Racine, WI, at some impromptu(ly found) local air show]

Correcting the pyramids

By and large, the world is slowing down a bit [sic] already again towards the Summer lull. Which gives us plenty of time to get rid of old habits. And of misconceptions.
The famous pyramids are one (huh) of them, definitely.
Compare the standard picture you’ve all become accustomed to, to ‘For instance, he wrote that a man who struggles to feed himself “may fairly be said to live by bread alone.” Maslow, however, “was quick to point out that such situations are rare,” the authors explain. Instead, he felt that most people “are partially satisfied in all their basic needs and partially unsatisfied in all their basic needs at the same time.”‘ from this.

Yes. I didn’t say that I referenced those predecessors of the picture – predating it by a handful of centuries – like, in Egypt, Central America, and elsewhere: idealised forms in stone of said picture, quite a bit avant la lettre; where did you get that idea ..??

Two more then, with the Why I write this all up in the first place:
‘As Bridgman and his fellow researchers note, the hierarchy concept “captured the prevailing [post-war] ideologies of individualism, nationalism and capitalism in America and justified a growing managerialism in bureaucratic (i.e., layered triangular) formats.”’ and
Fowler finds that many executives use the pyramid as “an excuse to not have to deal with people’s psychological issues,” and to set their more complex needs aside. Says Fowler, “It’s kind of like, ‘Well, we can’t afford to give this to everybody at the bottom of the pyramid, so we’ll just assume they’re not going to be motivated by higher-order reasons.’”

Another ill-effect she links to Maslow’s hierarchy is the belief that: “If I made more money, I would be happier. Or, if I had those security needs taking care of, I would be happier, or more motivated to do my job,” says Fowler. “People get trapped in that,” she argues, “and yet if money was the solution, we wouldn’t see any rich people who suffer. They wouldn’t have divorces or become drug addicts or commit crimes.”
Most damning, perhaps, is the pyramid’s insinuation that there is a finite amount of space at “the top,” and we are all competing against each other to get there. “Really,” says Fowler, “people are ‘self-actualizing’ all over the place.”’
Indeed, when you read the whole darn piece – it isn’t difficult at all once you gain elementary-school proficiency in reading [oh hey, may that take some time still? You an executive?] – you’ll see that an idea warped to suit consultants’ needs [infinite recursion then, qua satisfaction] will almost invariably lead to enormities qua externalities.

Now there’s something to fight against…

Oh and:

[Triangles, anyone? Not here! Porto, obviously]

Phishing for ‘clients’; AI-style

Just a warning that the same thing that is still going on, and on, and on, and on, … in information-security/infosec/ITsec/Cyberrrr…! – the exasperating hype-panting over the latest threats and the ‘here’s now, finally, a single (our, the only) tool/method to solve all your troubles’ – is now flowing over into ethical/transparent/explainable-AI auditing.

Not.
Show me evidence – actual source code [even if of the written-out-procedures kind] – of how one would go about forming such an opinion after obtaining reasonable evidence… That stuff just does not happen; it doesn’t exist.
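
By way of contrast, a minimal sketch (illustrative only, not anyone’s audit methodology) of one concrete, written-out procedure such an ‘explainable AI’ check could actually contain – permutation feature importance over a trained model:

```python
# Illustrative only: one concrete, reproducible step an 'explainability'
# audit could actually document - permutation feature importance.
# The model and data here are stand-ins, built with scikit-learn's public API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.random((300, 5))
y = (X[:, 2] > 0.5).astype(int)        # only feature 2 really matters

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
# An auditable claim: which inputs the model's decisions actually depend on.
```

Not a solution to the vapourware problem; just the level of concreteness one should demand before the billing starts.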

It’s only vapourware! Be alert! [If only because the world needs more lerts!] It’s just foot-in-the-door-style fake-it-till-you-make-it, at the client’s considerable expense.

For the rest:

[Was: The ethics content of bankers at the Zuid-As Amsterdam; Is: The algo transparency of fat promises]

Maverisk / Étoiles du Nord