You're So Smart

In a reference to a song about me:
Most ab-original humans wouldn’t pass a serious Turing test.
Most serious AI trying to pass, would.

You, the select elite of my blog readers … Well, elite by numbers, mostly, or ..? And select, as in ‘clicked by error’..?

Just kidding, of course, off course.
What I meant was: the Turing test, as originally intended, has become a hypothetical mind game ‘only’. Now that we’re approaching ‘intelligence’ of machines, like, graduating from ANI to AGI and on to ASI without a blink — society will not move from one level to the next in a single instant, all together, after some preparation. Much more creepy stuff is out there, without the general public’s knowledge, than you can imagine (if anything) — and suddenly we return to the thought experiment.
Acknowledging that we have never been able to give anything like an extensional definition of ‘intelligence’, only an intensional one. Which may indeed suffice. Now that we’re accustomed to it, and into ethics discussions rather than did/did-not disputes (excepting the laggards who still can’t stand being surpassed by ‘dumb’ machines — calling them that places yourself ‘below’ them (quod non)…), we seem to have made the question irrelevant. While a few still say that this sort of thing is impossible, others are already doing it, and hardly anyone seems to care.

The latter part being the scary bit. Wait-and-see just will not be enough here, in particular regarding settling the Ethics elements. It’s not only self-driving cars where momentum has passed out of human hands and into Technology’s… It’s everywhere.
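
To make the thought experiment concrete again, here’s a minimal sketch of the imitation-game protocol in Python; the two respondent functions are hypothetical stand-ins, not any real chat API:

```python
import random

def human_respondent(prompt: str) -> str:
    # A person types the reply over some channel; stdin here, for the sketch.
    return input(f"[to human] {prompt}\n> ")

def machine_respondent(prompt: str) -> str:
    # Placeholder ANI; wire in any model you like.
    return "Interesting question. What do you think?"

def imitation_game(questions, judge) -> bool:
    """Run one blinded round; return True if the judge found the human."""
    players = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:                      # blind the ordering
        players = {"A": machine_respondent, "B": human_respondent}
    transcripts = {label: [(q, ask(q)) for q in questions]
                   for label, ask in players.items()}
    verdict = judge(transcripts)                   # judge returns "A" or "B"
    return players[verdict] is human_respondent

# e.g. imitation_game(["What does coffee taste like?"], judge=my_judge)
```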

To not be afraid — or to be, but be brave, conquer your fears and Act — this:
[Still recognisable as VR trompe l’oeil; Rijks Amsterdam]

Contra Bruce, for once

For once, Bruce is not at the right end. Maybe not at the opposite end, but still.
As per this here blog post of his — a repeat of one of his, and others’, threads.

The argument: We make things, like, security, too difficult for users and hence (?) we shouldn’t try to change them into secure behaviour.
The contra: ‘Guns kill people’, or was it rather that the men (mostly) firing the guns kill people? And the many toddlers shooting their next of kin since, being at the approximate maturity level of the Original gun pwner, they have no clue.

The Contra, too, and much more to the point when it comes to ‘information’ ‘security’: we should make cars run at a maximum of 5mph … since ‘users’ are waaaay too stupid to drive carefully.
Just don’t mention that ‘security’ is a quality, not an absolute pass-or-fail thing, and that ‘information’ could not be more vague. [Except ‘cyber’, which is so vacated of any meaning that it’s a black hole.] And don’t mention that we still seem to let cars be used by any moron who once, possibly literally decades ago before ‘chips’ were invented, passed some formal test — the American idea of the test coming very, dangerously, close to … was it the Belgian? system where one could pick up one’s driver’s license at the post office. Able, allowed, to buy cars that drive not just 5 but 250mph, on busy roads, without protection against using socmed mid-traffic… One thing to do could be to introduce Finnish-style booking for unsafe behaviour (if caught, not when, as per the next paragraph [think that through…]), and/or huge fines for the producers of bad equipment (hw/sw), comparable to fines on car makers, or outright laws to build airbags in, etc.
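
For the curious: Finnish-style fines are income-linked ‘day fines’. A toy calculator, with the statutory constants replaced by illustrative assumptions:

```python
# Toy day-fine calculator in the Finnish style: the fine scales with
# income, not just with the offence. Constants are illustrative
# assumptions, not the actual Finnish statute.
def day_fine(monthly_net_income: float, day_count: int,
             deduction: float = 255.0, divisor: int = 60,
             minimum: float = 6.0) -> float:
    per_day = max((monthly_net_income - deduction) / divisor, minimum)
    return per_day * day_count

# Same offence, very different bills:
print(day_fine(1_500, 12))   # modest income:    249.0
print(day_fine(30_000, 12))  # executive income: 5949.0
```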

And then, if we’d design ‘secure’ systems, e.g., the Apple way, we’d end up with even worse Shallows-style sheeple that have so much less clue than before… And all in the hands of … even in ultra-liberal countries one would suggest either Big Corp or Big Gov’t, both options being, literally, Big Brother in such an atrocious Dystopia of humanity.

So, you want safe systems? You get the loss of humanity before actual safety.

[Yes, I get the Humans Are The Cause Of Much Infosec Failure thing (including Human Flexibility Can (still!) Solve More Than Machines Can, Against System (!) Malfunction), but I am also completely in favour of both the Humans Must Through Tech Be Completely Shielded From Being Able To Do Anything Wrong and the Humans Should Retain All Freedom To Act Responsibly solutions.]

Pick your stand. And:

[Use G Translate if you have to, from Dutch. Typifying the driver, probably, if only for picking the brand/car…; London]

Weird infosec science

Who would have thought — that total surveillance would reach into the house, with no or hardly any backdoors even needing to be built in.
As explained here, and here in closer-to-humanly-readable form.

If such are the Tempest inroads, who needs the newest-of-highest-tech solutions, as they will all succumb either to trivial, complexity-induced, unavoidable sloppiness of implementation, or to circumvention in the above way…?

Of course all of it is an atrocity in ethics but … I won’t be utterly negative about humanity’s future so I’ll stop now. With:
[Art imitating life; Stedelijk Amsterdam]

Said, not enough

Here’s a trope worth repeating: Humans are / aren’t the weakest link in your InfoSec.

Are, because they are fickle, demotivated, unwilling, lazy, careless, (sometimes! but that suffices) inattentive, uninterested in InfoSec but interested in (apparently…) incompatible goals.

Are, because you make them a single point of failure, or the one link still vulnerable, and through their own actual, acute risk management and weighing they decide to evade the behavioural limitations set by you, with your myopic, non-business-objectives-aligned view on how the (totalitarian, dehumanised, inhumane) organisation should function.

Aren’t, because the human mind (sometimes) picks up the slightest cues of deviations, is inquisitive and resourceful, flexible.

Aren’t, because there are so many other equally weak or weaker links to take care of first. Taking care of the human factor may be the icing, but the cake had better be very good to perfect before the icing is worthwhile…!

Any other aspects ..? Feel free to add.

If you want to control ‘all’ of information security, humans should be taken out of the (your!) loop, and you should steer clear of theirs (to avoid accusations of interference with business-objectives achievement, or actually interfering without noticing it, since your viewpoint is so narrow).

That being said, how ’bout we all join hands and reach for the rainbow ..? Or so, relatively speaking. And:
[Where all the people are; old Reims opera (?)]

Walnuts, brain size and you

Combining some recent news, some really old news, and your place in between. Or not.

The recent news: birds might have tiny brains, but they may still be very intelligent (as animals go). Now, on a related note, discoveries show that the brain cells of birds may be smaller and/or much more densely packed than they are in, e.g., humans and family.
The really before-stone-age news: the dinosaurs’ picture is bleak.

Combined: birds have a separate line of descent from their quite-close dinosaur-era equivalents. Having survived some dino extinction rounds while remaining quite similar in body and operation, might they have kept the same lightweight, small-package brain structure too?
Then maybe the dinosaurs weren’t so stupid either, with their small but possibly also very densely packed neurons; they just had a bad hair day (that’s what you get when a comet strikes your coiffure — footballers beware).
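
To see why packing density can matter more than raw brain size, a toy calculation; every figure below is purely illustrative, not measured data:

```python
# Toy comparison of neuron packing. All figures are purely illustrative,
# standing in for the reported finding that bird forebrains pack neurons
# far more densely than primate brains do.
brains = {
    "songbird (tiny, dense)":  {"mass_g": 10.0, "neurons_per_g": 150e6},
    "primate (large, sparse)": {"mass_g": 90.0, "neurons_per_g": 50e6},
}
for species, b in brains.items():
    total = b["mass_g"] * b["neurons_per_g"]
    print(f"{species}: {total:.1e} neurons")
# 1.5e9 vs 4.5e9 here: a brain a ninth of the size gets within a factor
# of three on neuron count. Dial density up further, and it catches up.
```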
Just a very, very, very after-the-facts hypothesis… And:
[For wine making; isn’t that obvious !?!?!? Quinta do Vallado; Douro]

AId

To start, an introduction — how unusual:

René Descartes walks into a bar and sits down for dinner. The waiter comes over and asks if he’d like an appetizer.
“No thank you,” says Descartes, “I’d just like to order dinner.”
“Would you like to hear our daily specials?” asks the waiter.
“No,” says Descartes, getting impatient.
“Would you like a drink before dinner?” the waiter asks.
Descartes is insulted, since he’s a teetotaler. “I think not!” he says indignantly, and POOF! he disappears.

As recalled by YouByNowKnowWho from David Chalmers.

Which demonstrates quite a bit about identity, and artificial intelligence.

The identity part: To quote YBNKW, “… that identity is preserved through continuity of the pattern of information that makes us. Continuity allows for continual change, so whereas I am somewhat different than I was yesterday, I nonetheless have the same identity.” — thus, thinking (both the directed, problem solving way and the massively concurrent undirected, associative and ‘unconscious’ way) is what both constitutes and preserves Identity.
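
A minimal sketch of that continuity-of-pattern idea (the Jaccard measure and all numbers are my own illustrative choices, not Chalmers’): compare each day’s ‘pattern’ with yesterday’s, and the day-to-day overlap stays near-perfect even though the endpoints barely match.

```python
def overlap(a: set, b: set) -> float:
    return len(a & b) / len(a | b)        # Jaccard similarity

state = set(range(100))                   # today's 'pattern', stylised
yesterday = state
for day in range(365):
    # Gradual change: lose one old element, gain one new one per day.
    state = (state - {min(state)}) | {100 + day}
    assert overlap(state, yesterday) > 0.95   # continuity preserved
    yesterday = state
print(overlap(set(range(100)), state))    # endpoints: 0.0, yet same identity
```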

The AI part: being the part where ‘intelligence’, the I to the A (or human ~, whatever; after Ray you may not care about a hypothetical difference), is the thinking (or not) of René.

So, whether A or not, the I makes the Id. Not the Es in a mother’s-darling-child sense! There, it is the (‘super’?)ego, but that’s another story.

Now, how to translate that to the latest developments in the IAM, blockchain-trust, and ANI/ASI arenas ..?
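
One possible translation for the IAM and blockchain-trust corner: a hand-rolled hash-chain sketch (no real blockchain API involved; all names mine), where ‘identity’ is the verifiable continuity of state updates rather than any single snapshot.

```python
import hashlib, json

def link(prev_hash: str, state: dict) -> str:
    # Each update commits to the previous one, chaining the 'pattern'.
    payload = json.dumps({"prev": prev_hash, "state": state}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

chain = [("genesis", {"name": "René", "beliefs": ["cogito"]})]
head = link(*chain[0])
for update in [{"name": "René", "beliefs": ["cogito", "dualism"]},
               {"name": "René", "beliefs": ["dualism"]}]:
    chain.append((head, update))
    head = link(head, update)
# Anyone can replay the chain and confirm the continuity of the identity,
# without a central registry vouching for 'René' at any single moment.
```

Plus: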
[Nuclear shelter, a.k.a. know your building history; Casa da Musica Porto but you surely knew that]

Not just Q, IQ

Well, yesterday’s post was about just a quote; this one’s about what should be a full cross-post, but hey, I’m no wizard, so I’ll just blockquote it from here, because it’s so good (again, qua author):

Society in the Loop Artificial Intelligence

Jun 23, 2016 – 20:37 UTC

Iyad Rahwan was the first person I heard use the term society-in-the-loop machine learning. He was describing his work which was just published in Science, on polling the public through an online test to find out how they felt about various decisions people would want a self-driving car to make – a modern version of what philosophers call “The Trolley Problem.” The idea was that by understanding the priorities and values of the public, we could train machines to behave in ways that the society would consider ethical. We might also make a system to allow people to interact with the Artificial Intelligence (AI) and test the ethics by asking questions or watching it behave.

Society-in-the-loop is a scaled up version of human-in-the-loop machine learning – something that Karthik Dinakar at the Media Lab has been working on and is emerging as an important part of AI research.

Typically, machines are “trained” by AI engineers using huge amounts of data. The engineers tweak what data is used, how it’s weighted, the type of learning algorithm used and a variety of parameters to try to create a model that is accurate and efficient, making the right decisions and providing accurate insights. One of the problems is that because AI, or more specifically, machine learning, is still very difficult to do, the people who are training the machines are usually not domain experts. The training is done by machine learning experts and the completed model after the machine is trained is often tested by experts. A significant problem is that any biases or errors in the data will create models that reflect those biases and errors. An example of this would be data from regions that allow stop and frisk – obviously targeted communities will appear to have more crime.

Human-in-the-loop machine learning is work that is trying to create systems to either allow domain experts to do the training or at least be involved in the training by creating machines that learn through interactions with experts. At the heart of human-in-the-loop computation is the idea of building models not just from data, but also from the human perspective of the data. Karthik calls this process ‘lensing’: extracting the human perspective, or lens, of a domain expert and fitting it to algorithms that learn from both the data and the extracted lens, all during training time. We believe this has implications for making tools for probabilistic programming and for the democratization of machine learning.

At a recent meeting with philosophers, clergy and AI and technology experts, we discussed the possibility of machines taking over the job of judges. We have evidence that machines can make very accurate assessments of things that involve data, and it’s quite reasonable to assume that decisions judges make, such as bail amounts or parole, could be made much more accurately by machines than by humans. In addition, there is research that shows expert humans are not very good at setting bail or granting parole appropriately. Whether you get a hearing by the parole board before or after their lunch has a significant effect on the outcome, for instance.

In the discussion, some of us proposed the idea of replacing judges for certain kinds of decisions, bail and parole as examples, with machines. The philosopher and several clergy explained that while it might feel right from a utilitarian perspective, for society it was important that the judges were human – even more important than getting the “correct” answer. Putting aside the argument about whether we should be solving for utility or not, having the buy-in of the public would be important for the acceptance of any machine learning system, and it would be essential to address this perspective.

There are two ways that we could address this concern. One way would be to put a “human in the loop” and use machines to assist or extend the capacity of the human judges. It is possible that this would work. On the other hand, experience in several other fields, such as medicine or flying airplanes, has shown that humans overrule machines with the wrong decision often enough that it would make sense to prevent humans from overruling machines in some cases. It’s also possible that a human would become complacent or conditioned to trust the results and just let the machine run the system.

The second way would be for the machine to be trained by the public – society in the loop – in a way that the people felt that the machine reliably and fairly represented their, most likely, diverse set of values. This isn’t unprecedented – in many ways, the ideal government would be one where the people felt sufficiently informed and engaged that they would allow the government to exercise power, believe that it represented them, and accept that they were also ultimately responsible for the actions of the government. Maybe there is a way to design a machine that could garner the support and the proxy of the public by being able to be trained by the public and being transparent enough that the public could trust it. Governments deal with competing and conflicting interests, as will machines. There are obvious complex obstacles, including the fact that unlike traditional software, where the code is like a series of rules, a machine learning model is more like a brain – it’s impossible to look at the bits and understand exactly what it does or would do. There would need to be a way for the public to test and audit the values and behavior of the machines.

If we were able to figure out how to take the input from and then gain the buy-in of the public as the ultimate creator and controller of this machine, it might solve the other side of this judicial problem – the case of a machine made by humans that commits a crime. If, for instance, the public felt that they had sufficient input into and control over the behavior of a self-driving car, could the public also feel that the public, or the government representing the public, was responsible for the behavior and the potential damage caused by a self-driving car, and help us get around the product liability problem that any company developing self-driving cars will face?

How machines will take input from, and be audited and controlled by, the public may be one of the most important areas that need to be developed in order to deploy artificial intelligence in decision making that might save lives and advance justice. This will most likely require making the tools of machine learning available to everyone, having a very open and inclusive dialog, and redistributing the power that will come from advances in artificial intelligence, not just figuring out ways to train it to appear ethical.

Credits

•Iyad Rahwan – The phrase “society in the loop” and many ideas.
•Karthik Dinakar – Teaching me about “human in the loop” machine learning and being my AI tutor and many ideas.
•Andrew McAfee – Citation and thinking on parole boards.
•Natalie Saltiel – Editing.
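
To make the quoted idea a bit more tangible: a toy sketch of society-in-the-loop label aggregation, where public votes on a trolley-style dilemma become a soft label a model could be trained against. The scenario and option names are hypothetical, not from Rahwan’s actual survey.

```python
from collections import Counter

# Public poll on one dilemma; option names are made up for illustration.
votes = ["swerve", "stay", "swerve", "swerve", "stay"]
tally = Counter(votes)
total = sum(tally.values())
soft_label = {option: n / total for option, n in tally.items()}
print(soft_label)   # {'swerve': 0.6, 'stay': 0.4}

# A learner trained against soft_label (e.g. with cross-entropy) inherits
# the public's split values instead of one engineer's judgement call.
```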

And, of course for your viewing pleasure:
[Would AI recognise this, an aside in the Carnegie Library; Reims]

DAUSA

Maybe we should just push for a swift implementation of the megasystem that will be the Digitally Autonomous USA. No more need for things like a ‘POTUS’, or ‘Congress’ or so. When we already have such fine quality of both, with renewal on the way into perfection (right?), and when things like personal independence and privacy are a sham anyway, the alternative isn’t even that crazy.

But then, there’s a risk (really?): not all the world conforms yet to, or is yet within, the DAUSA remit. Though geographical mapping is starting to make less and less sense, there are hold-outs (hence: everywhere) that resist even when that is futile. The Galactic Empire hasn’t convinced all to drop the Force irrationality and take the blue pill, though even Elon Musk is suspected of being an alien who warns us we’re living in a mind fantasy [this, true, actually — the story, not so much the content].
But do you hope for a Sarah Connor ..? Irrationality again, paining yourself with such pipe dreams.

On the other hand … Fearing the Big Boss seems to be a deep-brain psychology trick, sublimating the fear of large predators from the times immemorial (in this case: apparently not) when ‘we’ (huh, maybe you, by the looks of your character and ethics) roamed the plains as hunter-gatherers. So if we drop the fear, we can ‘live’ happily ever after, once the perfect bureaucracy has been established. Which might be quite some time from now, you’d say, given the dismal idio…cracy of today’s societal Control; or maybe soon, when ASI improves that in a blink, to 100.0% satisfaction. Tons of Kafka’s Prozesse be damned.

Wrapping up, hence, with the always good advice to live fearlessly ..! 😉

[Some Door of Perception! (and entry); De Haar castle]

Print Goodbye World

Somehow, I got triggered that there’s a near future where 100 print “Hello world” would meet with a “Sorry Dave, I can’t compile that”, not even with warnings (what; no 200 End ..!?) — because one’s not supposed to be able to influence the Machine. No red pills allowed.

Oh the things that keep me awake at night [they don’t]. Soon, baby, soon. Plus:
[Just Lotharingen things; Nancy]
