Not just Q, IQ

Well, yesterday’s post was about just a quote; this one should be a full cross-post, but hey, I’m no wizard, so I’ll just blockquote it here because it’s so good (again, qua author):

Society in the Loop Artificial Intelligence

Jun 23, 2016 – 20:37 UTC

Iyad Rahwan was the first person I heard use the term society-in-the-loop machine learning. He was describing his work, just published in Science, on polling the public through an online test to find out how they felt about various decisions people would want a self-driving car to make – a modern version of what philosophers call “The Trolley Problem.” The idea was that by understanding the priorities and values of the public, we could train machines to behave in ways that society would consider ethical. We might also make a system to allow people to interact with the Artificial Intelligence (AI) and test the ethics by asking questions or watching it behave.

Society-in-the-loop is a scaled-up version of human-in-the-loop machine learning – something that Karthik Dinakar at the Media Lab has been working on, and which is emerging as an important part of AI research.

Typically, machines are “trained” by AI engineers using huge amounts of data. The engineers tweak what data is used, how it’s weighted, the type of learning algorithm used, and a variety of parameters to try to create a model that is accurate and efficient, makes the right decisions, and provides accurate insights. One of the problems is that because AI, or more specifically machine learning, is still very difficult to do, the people who are training the machines are usually not domain experts. The training is done by machine learning experts, and the completed model after the machine is trained is often tested by experts. A significant problem is that any biases or errors in the data will create models that reflect those biases and errors. An example of this would be data from regions that allow stop and frisk – obviously targeted communities will appear to have more crime.
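The stop-and-frisk point can be made concrete with a toy sketch (hypothetical numbers and names, not from the post): if two regions have identical underlying rates but one is observed ten times more intensively, a model that learns from the recorded incidents will rank that region as far more “criminal”.

```python
from collections import Counter

# Hypothetical records. Region A is patrolled (stop-and-frisk style)
# ten times more heavily than region B, so far more incidents get
# *recorded* there even though the underlying behaviour is identical.
true_rate = {"A": 0.05, "B": 0.05}      # identical underlying rates
patrol_intensity = {"A": 10, "B": 1}    # observation effort, not behaviour

recorded = Counter()
for region, rate in true_rate.items():
    population = 10_000
    # Recorded incidents scale with how hard you look, not with behaviour.
    recorded[region] = int(population * rate * patrol_intensity[region] * 0.01)

# A naive model "trained" on the records scores A as far more criminal,
# purely because of how the data was collected.
model_scores = {r: recorded[r] / sum(recorded.values()) for r in recorded}
print(model_scores)
```

The bias enters before any training starts, which is exactly the post’s point: no tweak to the learning algorithm alone can see it.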

Human-in-the-loop machine learning is work that is trying to create systems that either allow domain experts to do the training or at least let them be involved in the training, by creating machines that learn through interactions with experts. At the heart of human-in-the-loop computation is the idea of building models not just from data, but also from the human perspective of the data. Karthik calls this process ‘lensing’: extracting the human perspective, or lens, of a domain expert and fitting it to algorithms that learn from both the data and the extracted lens, all during training time. We believe this has implications for making tools for probabilistic programming and for the democratization of machine learning.

At a recent meeting with philosophers, clergy and AI and technology experts, we discussed the possibility of machines taking over the job of judges. We have evidence that machines can make very accurate assessments of things that involve data, and it’s quite reasonable to assume that decisions that judges make, such as bail amounts or parole, could be made much more accurately by machines than by humans. In addition, there is research that shows expert humans are not very good at setting bail or granting parole appropriately. Whether you get a hearing by the parole board before or after their lunch has a significant effect on the outcome, for instance.

In the discussion, some of us proposed the idea of replacing judges for certain kinds of decisions, bail and parole as examples, with machines. The philosopher and several clergy explained that while it might feel right from a utilitarian perspective, for society it was important that the judges were human – even more important than getting the “correct” answer. Putting aside the argument about whether we should be solving for utility or not, having the buy-in of the public would be important for the acceptance of any machine learning system, and it would be essential to address this perspective.

There are two ways that we could address this concern. One way would be to put a “human in the loop” and use machines to assist or extend the capacity of the human judges. It is possible that this would work. On the other hand, experience in several other fields, such as medicine or flying airplanes, has shown that humans may overrule machines with the wrong decision often enough that it would make sense to prevent humans from overruling machines in some cases. It’s also possible that a human would become complacent or conditioned to trust the results and just let the machine run the system.

The second way would be for the machine to be trained by the public – society in the loop – in a way that the people felt that the machine reliably and fairly represented their, most likely, diverse set of values. This isn’t unprecedented – in many ways, the ideal government would be one where the people felt sufficiently informed and engaged that they would allow the government to exercise power, believe that it represented them, and accept that they were also ultimately responsible for the actions of the government. Maybe there is a way to design a machine that could garner the support and the proxy of the public by being trainable by the public and transparent enough that the public could trust it. Governments deal with competing and conflicting interests, as will machines. There are obvious complex obstacles, including the fact that unlike traditional software, where the code is like a series of rules, a machine learning model is more like a brain – it’s impossible to look at the bits and understand exactly what it does or would do. There would need to be a way for the public to test and audit the values and behavior of the machines.
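A crude sketch of what “trained by the public” could mean at its simplest (hypothetical dilemmas and votes, Moral Machine-style, not anything from the post): poll responses are aggregated into an auditable policy table that records not just the majority choice but the level of support behind it, which is what the public would need to inspect.

```python
from collections import Counter

# Hypothetical poll: each respondent picks an outcome for a dilemma.
responses = {
    "swerve_vs_stay": ["swerve", "stay", "swerve", "swerve", "stay"],
    "passenger_vs_pedestrian": ["pedestrian", "pedestrian", "passenger",
                                "pedestrian", "pedestrian"],
}

# Aggregate votes into a policy table a planner could be trained against,
# keeping the support fraction so the mandate itself stays auditable.
society_policy = {}
for dilemma, votes in responses.items():
    choice, count = Counter(votes).most_common(1)[0]
    society_policy[dilemma] = {"choice": choice,
                               "support": count / len(votes)}

print(society_policy)
```

The support fraction matters as much as the choice: a 3/5 split is a very different mandate from 4/5, and exposing such numbers is one small way of keeping society “in the loop”.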

If we were able to figure out how to take the input from and then gain the buy-in of the public as the ultimate creator and controller of this machine, it might solve the other side of this judicial problem – the case of a machine made by humans that commits a crime. If, for instance, the public felt that they had sufficient input into and control over the behavior of a self-driving car, could the public also feel that the public, or the government representing the public, was responsible for the behavior and the potential damage caused by a self-driving car, and help us get around the product liability problem that any company developing self-driving cars will face?

How machines will take input from, and be audited and controlled by, the public may be one of the most important areas that need to be developed in order to deploy artificial intelligence in decision making that might save lives and advance justice. This will most likely require making the tools of machine learning available to everyone, having a very open and inclusive dialog, and redistributing the power that will come from advances in artificial intelligence, not just figuring out ways to train it to appear ethical.

Credits

• Iyad Rahwan – The phrase “society in the loop” and many ideas.
• Karthik Dinakar – Teaching me about “human in the loop” machine learning, being my AI tutor, and many ideas.
• Andrew McAfee – Citation and thinking on parole boards.
• Natalie Saltiel – Editing.

And, of course, for your viewing pleasure:
DSC_0370
[Would AI recognise this, an aside in the Carnegie Library; Reims]

DAUSA

Maybe we should just push for a swift implementation of the megasystem that will be the Digitally Autonomous USA. No more need for things like a ‘POTUS’ or ‘Congress’ and so on. Since we already have such fine quality in both, with renewal on the way to perfection (right?), and things like personal independence and privacy are a sham anyway, the alternative isn’t even that crazy.

But then, there’s a risk (really?): not all the world yet conforms to, is yet within, the DAUSA remit. Though geographical mapping starts to make less and less sense, there are hold-outs (hence: everywhere) that resist even when that is futile. The Galactic Empire hasn’t convinced all to drop the Force irrationality and take the blue pill, though even Elon Musk is suspected of being an alien who warns us we’re living in a mind fantasy [this, true, actually; the story, not the content so much].
But do you hope for a Sarah Connor ..? Irrationality again, paining yourself with such pipe dreams.

On the other hand … Fearing the Big Boss seems to be a deep-brain psychology trick, sublimating the fear of large predators from times immemorial (in this case: apparently not) when ‘we’ (huh, maybe you, by the looks of your character and ethics) roamed the plains as hunter-gatherers. So if we drop the fear, we can ‘live’ happily ever after; once the perfect bureaucracy has been established. Which might be quite some time from now, you’d say, given the dismal idio…cracy of today’s societal Control, or maybe soon, when ASI improves that in a blink, to 100.0% satisfaction. Tons of Kafka’s Prozesses be damned.

Wrapping up, hence, with the always good advice to live fearlessly ..! 😉

20160529_135303
[Some Door of Perception! (and entry); De Haar castle]

Save a few

Just a reminder: Dutch lower gov’t agencies are struggling with storage formats … (Here, in Dutch, but Alphabet Translate (heh, that still doesn’t ring well!) may help)

There may be hope for (!) privacy. And:
DSCN1053
[Nice, functional (as / where it is), and certainly will look Old before you know it; La Défense]

Human / Not

Of course Cerf is right. But so is … the opposite side; human error would be harmless (save the Almost part) if vulnerabilities weren’t attacked. As long as they exist, they will be. And human error will exist; that’s just the way our genes, and memes, and all of Nature, play it out. The instability of Nature (here and here!) means evolution happens, works. On the Changing-environment side and on the trial-through-error side.

Hence, you’re still where you started. Still pursuing maximum fault-freedom but sure not to achieve it. I.e., in danger: the Condition Humaine since the dawn of Time (more on that in a PhD thesis, some other time), dismissing Hegelian progress fantasies forever.

Well then, to leave in a positive tone:
DSCN0487
[No time ?? for R&R; outside Siena]

Overwhelmed by ‘friendly’ engineers

The rage lately seems to be chat bots. I haven’t met any, but that may just be me, not being interesting enough to be overwhelmed by their calls.
Which will happen, in particular, to those in society who already have less than perfect resistance against the various modes of telesales and other forms of social engineering (for phishing and other nefarious purposes). Including all sorts of otherwise-possibly-bright-and-genius-intelligent-but (??)-having-washed-up-in-InfoSec-for-lack-of-genuine-societal-intelligence types like us. But these, of all stripes, are the ones ‘we’ need to protect, rather than the ones apparently already so heavily loaded that they can spare the dime for the development of such hyper-scaling, ultra-travelling foot-in-the-door salesmen. Is this the end stage, where no one has a clue as to which precious little interaction is still actually human-to-human, and the rest may be discarded ..?

As for the latter … It raises the question of Why, in communications as a human endeavor… Quite a thought.

But for the time being, you’re hosed, anti-phishing-through-social-engineeringwise.

Just sayin’. Plus:
DSCN0408
[Retreat, a.k.a. Run to the hills / Run for your life; but meant positively! Monte Olivieto Maggiore near Siena]

Last night a … saved me (updated)

Just a repost, as due to some obvious glitch, the below didn’t get the chart-storming airplay it deserves (hence, it must’ve been a glitch).
[To the music of Indeep — somewhat outdated lyrics but what style …]

Last night an information risk / security management professional saved my desktop PC life
Last night an information risk / security management professional saved my desktop PC life
Cause I was sittin’ there screen of death’d ~~bored to death~~
And in just one breath he chatted ~~said~~

You gotta reboot ~~get up~~
You gotta reload ~~get on~~
You gotta restore ~~get down girl~~
You know you drive me #DIV/0! ~~crazy~~ baby
You’ve got me turning to another OS ~~man~~
Called you on the VOIP phone

No one’s pinging back home
Baby why ya leave me all >/dev/null alone
And if it wasn’t for the endless GitHub surfing ~~music~~
I don’t know what I’d do

Last night an information risk / security management professional saved my desktop PC life
Last night an information risk / security management professional saved my desktop PC life from a broken pipe ~~heart~~
Last night an information risk / security management professional saved my desktop PC life
Last night an information risk / security management professional saved my desktop PC life with a patch ~~song~~

You know I hopped into my notepad ~~car~~
Didn’t need to leave the coffee shop ~~get very far~~ no
Because I had you on my stash of data-breached X-rated pics ~~mind~~
Why only give you a ticket and secretly close it immediately ~~be so unkind~~?

You’ve got your hapless users ~~women~~ all around
All around this AD town, boy
But I was trapped in the SLA from hell ~~love~~ with you
And I didn’t know what to do
But when I turned on my RTFManual ~~radio~~
I found out all I needed to key in ~~know~~
Run the diagnostics kit ~~Check it out~~

Last night an information risk / security management professional saved my desktop PC life
Last night an information risk / security management professional saved my desktop PC life from a broken pipe ~~heart~~
Last night an information risk / security management professional saved my desktop PC life
Last night an information risk / security management professional saved my desktop PC life with a patch ~~song~~

Last night an information risk / security management professional saved my desktop PC life
Last night an information risk / security management professional saved my desktop PC life from a broken pipe ~~heart~~
Last night an information risk / security management professional saved my desktop PC life
Last night an information risk / security management professional saved my desktop PC life with a patch ~~song~~

Hey listen up to your local information risk / security management professional
You better hear what he’s got to type so fast you can’t keep track ~~say~~
There’s not a problem that I can’t fix
Cause I can do it in the rogue exploit suite ~~mix~~
And if your crappy ol’ XP machine ~~man~~ gives you trouble
Just you step away from the keyboard ~~move out~~ on the double
And you don’t let it trouble your ‘brain’ ~~brain~~
Cause away goes PEBKAC troubles
Down the drain
I said away goes PEBKAC troubles
Down the drain

Last night an information risk / security management professional saved my desktop PC life
There’s not a problem that I can’t fix
Cause I can do it in the rogue exploit suite ~~mix~~
There’s not a problem that I can’t fix
Cause I can do it in the rogue exploit suite ~~mix~~

Last night an information risk / security management professional saved my desktop PC life
There’s not a problem that I can’t fix
Cause I can do it in the rogue exploit suite ~~mix~~
There’s not a problem that I can’t fix
Cause I can do it in the rogue exploit suite ~~mix~~

Quite an improvement indeedp … And leaving you with, of course, a new pic:

DSC_0109
[Old and new in one pic … Haut K again]

Plusquote: Critique of the Pure Reasonlessness

This episode, by reference to the excellent Future Crimes (Marc Goodman, as here), one originally by G.K. Chesterton (The Blue Cross):

The criminal is the creative artist; the detective only the critic

To which we would want to add: And the auditor, only the disgruntled desk-bound traffic cop.
Since the checker (and penaliser) of the trivial petty little rules should remain in the third line, right ..?

Where by the way, the creativity of the artist is required to make the art work that sells — and hence all make their living off straightforward crime or would perish. The more you bureaucratise into totalitarianism, the more you see life wither, till death. Even if the crime keeps on being perpetrated — by laxity of the second and particularly third lines, in cahoots with the profiteers. … Maybe that’s a bit deep-but-overly-lapidary …
Hence, just:
DSC_0247
[Panopticon Central, Strasbourg]

Big Data as a sin

Not just any sin, the Original one. Eating from the ultimate source of Knowledge that Big, Totalitarian, All-Thinkable Data is, in the ideal (quod non).
We WEIRDs (Western, Educated, Industrialised, Rich, Democratic people), a.k.a. Westerners, know what that leads to. Forever we will toil on spurious correlations…

5ff77c8f-a5a4-4a23-b585-06acdec85a84-original

Miss(ed), almost ..?

One might easily have missed one of the most valuable annual reports … but whether you trust it (you can) or want to dismiss it (you can, for various reasons, like the management babble leading to a great many missed threats and threat levels, as here; always, of course, but still), it is an important item when you’re in InfoSec, despite #ditchcyber! So you’d better study it.
Oh, yeah, this being the thing.

OK now. Plus:
DSC_0113
[In “cyber”space (#ditchcyber once more), easily scaled. Haut Koenigsbourg again.]

Maverisk / Étoiles du Nord