Q: Are the fools still out there ..?

Would anyone have an update on whether these kinds of people are still onto this ..?

1. ‘These kinds of people’: When displaying either such a lack of logical understanding or such a blatant urge for blabbering about (actually) ‘fake news’ a.k.a. ‘lies’, the choice is theirs and ours: Should society allow such persons to walk freely, or (xor) should they be given placement in sheltered workshops only ..? Their current positions aren’t the latter; overpaid and with the extreme negative impact they have on civil (and civilised) society at large… Yeah I know, their deceit (if that’s what it is) is fed by other objectives – but these other objectives go against the objectives they were supposed/tasked to aim for; in case you or they don’t see that, you’re in illogic territory again.

2. The update, on this: The discussion is age-old, and has for just as long favoured the survival of the anti-this over the pro-this, by the latter removing themselves, through this, from the gene pool. But at what cost to others …?

3. Hence: We’ll have to encourage the pro-this to continue at speed, so they’re the ones being removed. E.g., by ‘counterfactual’ evidence, i.e., use their backdoors into their back openings and out the results in full view. That seems to be the only way to minimise the externalities…

X: Russell’s question re Dunning-Kruger. And:


[Their brain. Or the Amersfoortse Kei. I always keep mixing these up…]

Firmly frustrated

Was thinking – which at least I do, what about you ..? – on the subject of theories of the firm, and on transaction costs et al. that seem to be all the rage over the last couple of decades, to be gotten rid of by ‘disruption’ and the ‘net.

Where originally, people banded together to form a firm [for the alliteration, again], so that all their individual interests could be taken just a bit further than they would have been able to on their own, sacrificing a little bit of their independence in return for saving on the transaction costs that would have bugged the network-of-purely-independents that was the alternative.
But at what cost? Nothing in life is free.
The coordination costs may be smaller, and the sacrifice may appear small at first, but when firms grow, both are back big [same] time.

Just re-re-re-read J.K. Galbraith (as referenced here) on the four solutions. Of which a number may, might, reflect onto latter-day disruptive business models — that mostly aren’t, at least not in a sustainable way: At some point in the (near!) future, the lack of sustainable profits comes home to roost. Still, read up on the (lower part of) this, for a glimpse of a better future.

On the other (?) hand; large, stable, incumbent organisations may just be too big to still have the right balance between extra productivity vis-à-vis individual goals (certainly when employees don’t see any benefit, bonus/malus, of extra effort; just coast and wing it so at least you don’t lose too much) and, on the other hand, feeling lost among the anonymous mass and thus not seeing any personal gains. Beyond mere pay checks, that is; but there, Maslow comes in [not with a pyramid, you …; read here].

Now, is there some sort of Dunbar’ish number one can ballpark-refer to for an ideal, optimal organisation size ..? Where the lost and lonely are engaged back into giving just a little extra and receiving more back, outdoing the Overhead by small enough size and lower internal coordination costs? Or is that where the Dunbar-150 actually comes from …??
You probably know more research than I do [0] that has been done on this already. Glad to receive your pointers…
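Pending those pointers, a back-of-the-envelope toy below – every functional form and constant in it is made up, purely to show the shape of the trade-off: pooled benefits that saturate, coordination overhead that keeps growing, hence a finite sweet spot for size. Whether that spot has anything to do with Dunbar’s 150 is exactly the open question.

```python
import math

# Toy model, all functional forms and constants made up: per-head benefit
# of pooling grows with headcount but saturates (log), while the per-head
# share of pairwise coordination overhead keeps growing roughly linearly.
# Net value per head then peaks at some finite "optimal" size.
def net_per_head(n, b=10.0, c=0.13):
    pooled = b * math.log(1 + n)     # saturating gains from banding together
    coord = c * (n - 1) / 2          # per-head share of pairwise coordination cost
    return pooled - coord

best = max(range(1, 1000), key=net_per_head)
print(best, round(net_per_head(best), 2))  # with these made-up constants: ~150-ish
```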

For the TIA:

[Your org design; National Gallery, Sculpture Garden DC]

Haircuts

In this renewed age of privacy [attention, not actual], regulators seem not to be of this age. They still issue fines in the paltry single-digit billion order of magnitude, if at all.

Whereas this age is also one of this. Whaddif the common 4%-of-turnover were replaced by 4%-of-market-cap ..?

Would be easily implemented; take the average market cap over the last year, or over the period of breach of regulation(s), whichever is higher.
No need anymore for int’l tax reform on global dodgers, right..? B/c with the above, only ethics would remain as an argument for such reform – hah hah, in today’s age.
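A minimal sketch of how trivially that ‘easily implemented’ bit could be computed – hypothetical numbers and names, assuming one has the daily market-cap figures at hand:

```python
from statistics import mean

def proposed_fine(caps_last_year, caps_breach_period, rate=0.04):
    """Toy version of the 4%-of-market-cap idea: take the average market cap
    over the last year or over the breach period, whichever is higher,
    and apply the rate. All inputs are hypothetical daily cap figures."""
    base = max(mean(caps_last_year), mean(caps_breach_period))
    return rate * base

# Hypothetical caps in billions: 4% of the higher average (510) -> ~20.4
print(proposed_fine([500, 520, 510], [480, 495]))
```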

Why not ..?
And:

[Reality is that warped already anyway; Toronto]

…bbatical you want ..?

Hey, just a question: Would any of you – yes, yes I know that’s a plural where a (much rounded up) single would suffice – have actual experience with working via this ..?
I posted about it some time ago [like, 39 months ago; oh how time flies like an arrow, like here, and here in particular] and had lost sight of it. But I see that they seem to be quite alive still, or am I looking at some internet artifact ..?

As a TIA for any reply…:

[Darn! Yes they do have cubicles (‘cells’ even here…) and here’s that pic again; Segovia]

Understanding a model, you know ..?

No you don’t.
To be more precise, … read this. Which links back to my, and many others’, posts about modelling being all about dumbing down complication or complexity, until you ‘understand’ what’s going on. Here, taken to its full consequence.

Also, tracking back and seeing how ‘we’ may think we’re above-average intelligent, not only does Dunning-Kruger kick in big time, but also … Where do we draw the line, where is ‘our’ or others’ thinking so dumbed-down that we cannot speak of ‘intelligence’ anymore? Oh for sure, the line is somewhere between your intelligence and whatever it is that’s going on in the brains of your co-workers (let alone bosses), but how far on from there would we need to concede that machine learning is off-limits (as well) ..!?
Because it’s a scale, and a hard cut-off will fail [for the rhyme] – AI can drop the I. How are your A-systems coming along ..?

Cheers,

[Ugly, or stylish? At least, it’s consistent and it has a great bar downstairs; Amsterdam]

Advertisial AI

To throw in a new phrase. Since ‘AI’ in itself hasn’t the buzz anymore it once had, we seem to need even more interesting-looking mumbo jumbo to attract the attention of the not-so-insiders.

And since ‘adversarial AI’ has been taken. In a somewhat-right way, though grossly incomplete [the doc behind it, here], since the most pernicious, class-breaking sub-fields have been omitted [from that linked piece]. Like, adversarial examples… Yes, yes, I know, I read the darn thing; I know adversarial examples are in there, but not in the way that phrase used to be used (like, ages, i.e., a mere couple of months, ago). Diluting the term.
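For reference, what the phrase used to mean: inputs nudged, in a structured way, just enough to flip a model’s output. A toy sketch only – made-up weights, a linear ‘model’, an exaggerated epsilon – of the sign-of-gradient (FGSM-style) idea:

```python
import numpy as np

# Toy logistic "classifier" with made-up weights; no real model or dataset.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)  # probability of class 1

x = np.array([0.2, -0.4, 0.9])   # benign input, confidently class 1
# For this linear score the gradient w.r.t. the input is simply w, so an
# FGSM-style perturbation steps against sign(w) to push the score down.
eps = 0.6                        # exaggerated here, to keep the toy readable
x_adv = x - eps * np.sign(w)

print(predict(x))      # ~0.84
print(predict(x_adv))  # ~0.32 -- same kind of input, opposite verdict
```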

When some dilute the term for their own benefit, profit,
the conclusion is: we need a new term.
Hence, to the linked doc: Advertisial AI. Offering no solutions, just scaremongering you into contacting them so they can deliver yet another powerpoint-cum-ridiculous-billing.

… Yet again, the doc has some insights, like giving an overview of current-day pressing issues, to be tackled [note: ‘resolved’ you can’t!] before they become insurmountable.
Hey, I just want a serious piece of that action, capisce..?

Edited to add: The true use with the true definition of adversarial AI, to be used against this, of course. And:


[No relevance (huh, like, the object being just a little incomplete..?), for a pretty sight; Baltimore]

The body is the missing link for truly intelligent machines

A full repost, with permission:

It’s tempting to think of the mind as a layer that sits on top of more primitive cognitive structures. We experience ourselves as conscious beings, after all, in a way that feels different to the rhythm of our heartbeat or the rumblings of our stomach. If the operations of the brain can be separated out and stratified, then perhaps we can construct something akin to just the top layer, and achieve human-like artificial intelligence (AI) while bypassing the messy flesh that characterises organic life.

I understand the appeal of this view, because I co-founded SwiftKey, a predictive-language software company that was bought by Microsoft. Our goal is to emulate the remarkable processes by which human beings can understand and manipulate language. We’ve made some decent progress: I was pretty proud of the elegant new communication system we built for the physicist Stephen Hawking between 2012 and 2014. But despite encouraging results, most of the time I’m reminded that we’re nowhere near achieving human-like AI. Why? Because the layered model of cognition is wrong. Most AI researchers are currently missing a central piece of the puzzle: embodiment.

Things took a wrong turn at the beginning of modern AI, back in the 1950s. Computer scientists decided to try to imitate conscious reasoning by building logical systems based on symbols. The method involves associating real-world entities with digital codes to create virtual models of the environment, which could then be projected back onto the world itself. For instance, using symbolic logic, you could instruct a machine to ‘learn’ that a cat is an animal by encoding a specific piece of knowledge using a mathematical formula such as ‘cat > is > animal’. Such formulae can be rolled up into more complex statements that allow the system to manipulate and test propositions – such as whether your average cat is as big as a horse, or likely to chase a mouse.
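Purely as an illustration – a toy with hypothetical facts, not from the article or any real system of the era – such symbol manipulation might look like this, with knowledge stored as triples and simple propositions tested against them:

```python
# Hypothetical facts as (subject, relation, object) triples; a tiny,
# hand-coded knowledge base in the symbolic-logic spirit described above.
facts = {
    ("cat", "is", "mammal"),
    ("mammal", "is", "animal"),
    ("cat", "chases", "mouse"),
}

def is_a(thing, category):
    """Follow 'is' links transitively, e.g. cat -> mammal -> animal."""
    if (thing, "is", category) in facts:
        return True
    return any(is_a(mid, category)
               for (s, rel, mid) in facts if s == thing and rel == "is")

print(is_a("cat", "animal"))                 # True, via mammal
print(("cat", "chases", "mouse") in facts)   # True: cats chase mice
print(is_a("cat", "horse"))                  # False: no such chain of symbols
```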

This method found some early success in simple contrived environments: in ‘SHRDLU’, a virtual world created by the computer scientist Terry Winograd at MIT between 1968 and 1970, users could talk to the computer in order to move around simple block shapes such as cones and balls. But symbolic logic proved hopelessly inadequate when faced with real-world problems, where fine-tuned symbols broke down in the face of ambiguous definitions and myriad shades of interpretation.

In later decades, as computing power grew, researchers switched to using statistics to extract patterns from massive quantities of data. These methods are often referred to as ‘machine learning’. Rather than trying to encode high-level knowledge and logical reasoning, machine learning employs a bottom-up approach in which algorithms discern relationships by repeating tasks, such as classifying the visual objects in images or transcribing recorded speech into text. Such a system might learn to identify images of cats, for example, by looking at millions of cat photos, or to make a connection between cats and mice based on the way they are referred to throughout large bodies of text.
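Again purely illustrative – synthetic points standing in for ‘millions of cat photos’, not any system mentioned in the article – the bottom-up flavour of this: no hand-coded rules, just parameters estimated from labelled examples.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "cat" vs "not-cat" feature vectors: two noisy clusters of points.
cats     = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(200, 2))
not_cats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))

# "Learning" here is nothing more than estimating each class's centroid from
# the examples; prediction assigns a new point to the nearest centroid.
centroids = {"cat": cats.mean(axis=0), "not-cat": not_cats.mean(axis=0)}

def classify(x):
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

print(classify(np.array([1.8, 2.1])))   # "cat"
print(classify(np.array([0.2, -0.3])))  # "not-cat"
```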

Machine learning has produced many tremendous practical applications in recent years. We’ve built systems that surpass us at speech recognition, image processing and lip reading; that can beat us at chess, Jeopardy! and Go; and that are learning to create visual art, compose pop music and write their own software programs. To a degree, these self-teaching algorithms mimic what we know about the subconscious processes of organic brains. Machine-learning algorithms start with simple ‘features’ (individual letters or pixels, for instance) and combine them into more complex ‘categories’, taking into account the inherent uncertainty and ambiguity in real-world data. This is somewhat analogous to the visual cortex, which receives electrical signals from the eye and interprets them as identifiable patterns and objects.

But algorithms are a long way from being able to think like us. The biggest distinction lies in our evolved biology, and how that biology processes information. Humans are made up of trillions of eukaryotic cells, which first appeared in the fossil record around 2.5 billion years ago. A human cell is a remarkable piece of networked machinery that has about the same number of components as a modern jumbo jet – all of which arose out of a longstanding, embedded encounter with the natural world. In Basin and Range (1981), the writer John McPhee observed that, if you stand with your arms outstretched to represent the whole history of the Earth, complex organisms began evolving only at the far wrist, while ‘in a single stroke with a medium-grained nail file you could eradicate human history’.

The traditional view of evolution suggests that our cellular complexity evolved from early eukaryotes via random genetic mutation and selection. But in 2005 the biologist James Shapiro at the University of Chicago outlined a radical new narrative. He argued that eukaryotic cells work ‘intelligently’ to adapt a host organism to its environment by manipulating their own DNA in response to environmental stimuli. Recent microbiological findings lend weight to this idea. For example, mammals’ immune systems have the tendency to duplicate sequences of DNA in order to generate effective antibodies to attack disease, and we now know that at least 43 per cent of the human genome is made up of DNA that can be moved from one location to another, through a process of natural ‘genetic engineering’.

Now, it’s a bit of a leap to go from smart, self-organising cells to the brainy sort of intelligence that concerns us here. But the point is that long before we were conscious, thinking beings, our cells were reading data from the environment and working together to mould us into robust, self-sustaining agents. What we take as intelligence, then, is not simply about using symbols to represent the world as it objectively is. Rather, we only have the world as it is revealed to us, which is rooted in our evolved, embodied needs as an organism. Nature ‘has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’, wrote the neuroscientist Antonio Damasio in Descartes’ Error (1994), his seminal book on cognition. In other words, we think with our whole body, not just with the brain.

I suspect that this basic imperative of bodily survival in an uncertain world is the basis of the flexibility and power of human intelligence. But few AI researchers have really embraced the implications of these insights. The motivating drive of most AI algorithms is to infer patterns from vast sets of training data – so it might require millions or even billions of individual cat photos to gain a high degree of accuracy in recognising cats. By contrast, thanks to our needs as an organism, human beings carry with them extraordinarily rich models of the body in its broader environment. We draw on experiences and expectations to predict likely outcomes from a relatively small number of observed samples. So when a human thinks about a cat, she can probably picture the way it moves, hear the sound of purring, feel the impending scratch from an unsheathed claw. She has a rich store of sensory information at her disposal to understand the idea of a ‘cat’, and other related concepts that might help her interact with such a creature.

This means that when a human approaches a new problem, most of the hard work has already been done. In ways that we’re only just beginning to understand, our body and brain, from the cellular level upwards, have already built a model of the world that we can apply almost instantly to a wide array of challenges. But for an AI algorithm, the process begins from scratch each time. There is an active and important line of research, known as ‘inductive transfer’, focused on using prior machine-learned knowledge to inform new solutions. However, as things stand, it’s questionable whether this approach will be able to capture anything like the richness of our own bodily models.

On the same day that SwiftKey unveiled Hawking’s new communications system in 2014, he gave an interview to the BBC in which he warned that intelligent machines could end mankind. You can imagine which story ended up dominating the headlines. I agree with Hawking that we should take the risks of rogue AI seriously. But I believe we’re still very far from needing to worry about anything approaching human intelligence – and we have little hope of achieving this goal unless we think carefully about how to give algorithms some kind of long-term, embodied relationship with their environment.

Ben Medlock

This article was originally published at Aeon and has been republished under Creative Commons. [CC Attribution No-Derivatives]

For your reading patience:

[Your body is not ‘intelligence-transparent’; Barça]

The ,0% mirage

That’s not a Unicode error. That’s a reference to the 100,0% no-error that suddenly [?] seems to be required of “auto”s, as here – get the full 100% explainability as a bonus.

The [?] was there since the initiative/newsflash was from a couple of months ago already. Nevertheless: apparently the clueless have taken over. I will not comment on how that has been the case since human society emerged, almost always. But if the ‘net has done anything for the world (except greatly speeding up its impending complete destruction), it is the outing of this.

How could you achieve such a thing; requiring 100,0% perfection in predicting the future ..!? Because that is the requirement that we’re talking of. Which is logically impossible, in physics, the subset of mathematics, the subset of rigorous human thought, and even more so in practice, where the world is über-chaotic (in a systems-theoretical sense) and we don’t know a thing about the starting conditions [which might only theoretically have helped, if we were able to map all of the complexity of the universe into a model … that itself would be in the universe; bottomless recursion – and if the model would turn out to be fractalian, already much more structured than Reality].
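To make the ‘über-chaotic, unknown starting conditions’ point concrete, a minimal sketch – the standard logistic map as a stand-in, nothing to do with any actual ‘auto’ system: two starting points that differ by one part in a billion have diverged completely within a few dozen steps, so 100,0%-perfect prediction would require 100,0%-perfect knowledge of the start.

```python
# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4.0), used purely
# as a stand-in for sensitive dependence on initial conditions.
r = 4.0

def trajectory(x0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.200000000)   # the "known" starting condition
b = trajectory(0.200000001)   # off by roughly one part in a billion
for step in (0, 10, 30, 50):
    print(step, abs(a[step] - b[step]))
# The gap grows from ~1e-9 at step 0 to order 1 well before step 50.
```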

And, why ..? Certainly, the ones who require such an impossibility by that very fact prove they shouldn’t be allowed to commandeer any piece of machinery [let alone any human group], but last time we looked, this test of fact wasn’t a requirement to obtain a driver’s licence. So humans, with all their craziness and unpredictability, don’t need to attain the perfect 100,0%, but machines, once programmed (incl. ‘learned’, but that goes without saying) and hence with easily, strictly maintainable behaviour, do ..? ‘The world on its head’…
And we commented before: the aim seems to be to be able to blame any humans in the vicinity for all mishaps, since it couldn’t have been the machines ..? Nice … in your delusional dreams.

I intended not to get angry(er) and just grin’ace, but I think to achieve that I’d better stop now… with:

[On its head, the world is getting to be; Voorburg]

Danger zone

Just a warning: the world altogether is moving into the danger zone now. Not qua global warming; the threshold on that has been passed already … Not on plastic soup; that’s approaching insurmountability-before-collapse (in this style).
But of a more encompassing style, as typified by the lack of real response to this: ‘predictive’ profiling creeping (sic) on. As posted earlier, we may need to make haste with anti-profiling human rights, before it’s too late. Which is quite soon.

Yes, the benign purpose is there. But it is also ridiculously superficial window dressing. The sinister effects lurk in the background. With all data about abuse – my guesstimate: everywhere such systems are deployed – suppressed. With the moronic claim that yes, sometimes the Good (false positives) will have to suffer from the Bad; if you think that, please remove yourself from the gene pool. Certainly since the false negatives figure nowhere. Anyone surprised we never hear of the false positives ..? Nor of the right positives; do they exist, or are they enrolled in the management talent pool since they seem to know what’s up ..?
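A back-of-the-envelope sketch of why those false positives dominate – all numbers hypothetical, just to show the base-rate arithmetic: even a 99%-accurate profiler applied to a 1-in-10,000 prevalence flags almost exclusively the Good.

```python
# Hypothetical numbers, purely to illustrate the base-rate arithmetic.
population   = 1_000_000
prevalence   = 1 / 10_000   # actual "Bad" per capita
sensitivity  = 0.99         # chance a Bad one gets flagged (true positives)
false_pos_rt = 0.01         # chance a Good one gets flagged anyway

bad  = population * prevalence   # 100 actually Bad
good = population - bad          # 999,900 Good

true_pos  = bad * sensitivity    # ~99 flagged and actually Bad
false_pos = good * false_pos_rt  # ~9,999 flagged but Good

flagged = true_pos + false_pos
print(f"flagged: {flagged:.0f}, of which actually Bad: {true_pos / flagged:.1%}")
# -> roughly 1 in 100 of the flagged are Bad; the other ~99% are Good, 'suffering'.
```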

And, what negative societal effects of total surveillance ..? Already, our society is much more tightly controlled than the so-vilified totalitarian regimes of <give or take a century ago> ever could manage. We’re living in the tyrant’s dream world. And, as researched, the effect is that we complain all the less.

To end with a Blue Pill (?):

Account Home lockout for cheap thrills

Remember when it only took three ‘failed’ attempts at pwd guessing and one’s coworker’s account was locked out and needed to be reset to Welcome1 by the helpdesk ..?

  • How stupid, since delayed auto-unlock had been available for so much longer already – but then, just try a million times by bot and have to wait till Singularity to be let in again (a toy sketch of that kind of throttling follows below this list);
  • Now, there’s a new flavour [I’ll keep spelling proper English till the K isn’t U and about half of it is back in the EU, welcome Edinburgh! or is it Dunedin] by means of smart quod non home systems that have electronic widgets instead of a normal door lock.
    Which leaves ample room for neighbourhood [there we go again] children [not kids] to lock you out, possibly, by trying to gain entry in a wrong way. No, cameras won’t save you; the children will know better than you how to approach those without being seen, soon enough. More children have Guy Fawkes masks than there are fans of him (hopefully: yet).
    Yes, not all locks have such failed-attempts counters (I guess), but if not, they definitely will have some form of them in the near future, and someone will stumble upon some everyday means (cell-phone-based or otherwise, i.e., ubiquitous and easily anonymisable) that does the trick.
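As promised above, a minimal sketch of that delayed auto-unlock (exponential back-off) idea – policy numbers invented, no particular product: no hard lockout to abuse, failed guesses just get exponentially slower.

```python
# Hypothetical back-off policy: no permanent lockout, just an exponentially
# growing wait after each failed attempt. A bot hammering a million guesses
# ends up waiting essentially forever; a fat-fingering human waits seconds.
BASE_DELAY = 1.0          # seconds after the first failure
MAX_DELAY  = 24 * 3600    # cap the wait at a day

failures = {}

def delay_for(account):
    """Seconds to wait before the next attempt on this account is accepted."""
    n = failures.get(account, 0)
    return 0.0 if n == 0 else min(BASE_DELAY * 2 ** (n - 1), MAX_DELAY)

def attempt(account, password, real_password):
    if password == real_password:
        failures[account] = 0
        return True
    failures[account] = failures.get(account, 0) + 1
    return False

# Three wrong guesses: waits of 1s, 2s, 4s -- no helpdesk, no Welcome1 reset.
for guess in ("hunter2", "letmein", "Welcome1"):
    attempt("coworker", guess, "correct horse battery staple")
    print(f"next attempt allowed after {delay_for('coworker'):.0f}s")
```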

Already, I saw somewhere something on “How to recover after you’re locked out of your own ‘smart’ house”, so apparently it is starting to happen.
Cheers with all that. Though it may also serve as a test of how fast your security co. responds and might be present at your house, if (big one) the you-lockout triggers an alarm. On the flip side: not too much of an advantage, since the neighbourhood children will trigger many during-workday drives home on edge b/c Computer Says Alarm.

Can’t have it all, can we? [… there’s more below the pic]
This you can:

[For the little perps: Al’s cell; Philadelphia PA – much worth a visit]

And not even referring to this:

Maverisk / Étoiles du Nord