Right. Explain.

Well, well. There we were, having almost swallowed all of the new EU General Data Protection Regulation to the, hardly, letter yet, and seeing that there’s still much interpretation as to how the principles will play out, let alone in the long term (I mean, you’re capable of discussing 10+ years ahead, aren’t you; or take a walk on the wild side), and then there’s this:

Late last week, though, academic researchers laid out some potentially exciting news when it comes to algorithmic transparency: citizens of EU member states might soon have a way to demand explanations of the decisions algorithms make about them. … In a new paper, sexily titled “EU regulations on algorithmic decision-making and a ‘right to explanation,’” Bryce Goodman of the Oxford Internet Institute and Seth Flaxman at Oxford’s Department of Statistics explain how a couple of subsections of the new law, which govern computer programs making decisions on their own, could create this new right. … These sections of the GDPR do a couple of things: they ban decisions “based solely on automated processing, including profiling, which produces an adverse legal effect concerning the data subject or significantly affects him or her.” In other words, algorithms and other programs aren’t allowed to make negative decisions about people on their own.

The news article being here, the original being tucked away here.
Including the serious, as yet very serious, caveats. But also offering glimpses of a better future (contra the title and some parts of the content of this). So, let’s all start the lobbies, there and elsewhere. And:
20141019_150840 (3)
[The classical way to protect one’s independence and privacy; Muiderslot]

Turner in infosec

‘Cause you’re simply the best

I ~~call~~ ping you when I ~~need~~ want you, my ~~heart~~ exploit toolkit‘s on fire
You ~~come~~ open your ports to me, ~~come to me wild and wired~~ open them one by one
When you ~~come~~ open to m~~e~~y APT
Give me ~~everything~~ access I need
Give me a lifetime of ~~promises~~ covert access and a world of ~~dream~~ org secrets
Speak a language of ~~love~~ hapless victims like you~~ know what it means~~r worst dreams
And it can’t be ~~wrong~~ stopped
Take my ~~heart~~ packets and make ~~it strong~~ them hit baby

You’~~re~~ve simply ~~the best~~ been hacked, ~~better~~ deeper than all the rest
~~Better~~ Deeper than anyone, anyone I’ve ever ~~met~~ hacked
I’~~m~~ve struck ~~on~~ in ~~your~~ the heart of your infra, and ~~hang~~ root on every ~~word~~ server you ~~say~~ owned
Tearing them ~~us~~ apart, baby ~~I~~ CISO you would rather be dead

In your ~~heart~~ systems I see the ~~star of every night and every day~~ clueless SOC underling chasin’ me
~~In your eyes~~ On your monitors I ~~get lost~~’m invisible, ~~I get washed away~~ no-one sees me
Just as long as I’m here I want to be in your ~~arms~~ systems
I could be in no ~~better~~ easier place

You’~~re~~ve simply ~~the best~~ been hacked, ~~better~~ deeper than all the rest
~~Better~~ Deeper than anyone, anyone I’ve ever ~~met~~ hacked
I’~~m~~ve struck ~~on~~ in ~~your~~ the heart of your infra, and ~~hang~~ root on every ~~word~~ server you ~~say~~ owned
Tearing them ~~us~~ apart, baby ~~I~~ CISO you would rather be dead

Each time you ~~leave~~ try to trace me I ~~start losing control~~ morph out of sight
You’re ~~walking away~~ bumbling through systems with ~~my heart and my soul~~ all the rights
I can ~~feel~~ see you even when ~~I’m alone~~ you can’t see me
Oh baby, don’t ~~let go~~ brick your entire infra for me

Hm. Maybe an improvement over this and certainly this … Maybe not.
Well, there:
20150917_155757
[Simple phone pic, don’t even know where. Ams, probably]

Local jargon

… Suddenly it struck me. In my usual rants against ‘governance’, as in many of my earlier blog posts (those non-existent, airheaded formalities that stand in the way of real management, not the deflated style of it), I forgot one piece:
The phrase makes sense only in (totalitarian [here I go again], calcifying) bureaucracies. There, the shuffling of empty labels has replaced actual management, and ‘governance’ may be used as a placeholder (as said: empty, otherwise the label doesn’t fully apply) for the ‘management’ pastiche that is expected. Outside of those dinosaur organisations (heh, see this), no-one has any need for, place for, or understanding of ‘governance’, and anything sycophanting towards it will fail to achieve anything close to a positive contribution (though negative contributions, in stifling, overhead, and disturbance of good business, may be widespread). See a business/organisation dabbling in ‘governance’ babble, and you see failure ahead.

I’ll leave it there for you. With:
20150911_173937
[For no apparent reason, a whopping crazy car park; Amsterdam]

Plusquote, again

Well yes, another episode in the Plusquote saga:

“Now you’re accusing me of optimism”

Which works well in these times of stale bureaucracies; it is sought after for disruptive value and renewal. And, in general, it is something one might aim for, as a sort of Summer motto: weather be nice, weather be rain spells, one can attach a positive edge, mode, or conclusion.

Also, for the latter:

[Unedited phone pic; giving light in a Larkin Building style (not -referenced!) atrium; Gemeentemuseum Den Haag]

Two EV extras

Whether it would be @ElonMusk himself or @BoredElonMusk or any of you to pick these up… Just putting them out (t)here, for your consideration:

First: there ‘was’ this CVT (continuously variable transmission) thing, back in the 80s/90s or so, that didn’t really take off the way it might have.
In particular since it sounded so ridiculous when revving up/down beyond ‘normal’. Now, with much improved electronic/near-AI engine/car control (and forward-looking (AI) travel/congestion car/engine management), wouldn’t it be possible to apply CVT in a more sensible way, leading to (much) extended range on EVs ..?
We’re thinking small cars first, since the fully-automatic gearbox thing will still not catch on with aficionados, except the few that keep silent and cannot stand the discomfort of switching gears in the traffic jams they’re invariably caught in (thus disclosing themselves as pitiful mediocre-management ‘staff’). But then, with smaller cars and such ‘fuel’ savings, smaller batteries may suffice, making the total package viable..?
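No idea whether the mechanics would pan out, but for a feel of the orders of magnitude, a back-of-envelope sketch (Python; every number in it is an assumption of mine, purely illustrative):

```python
# Back-of-envelope only: what a CVT-like transmission might buy in EV range
# by holding the motor nearer its peak-efficiency operating region.
# Every figure below is an assumption for illustration, not measured data.

battery_kwh = 40.0             # assumed usable pack of a small EV
consumption_wh_per_km = 160.0  # assumed average consumption, single-speed drive

# Assume overall drivetrain losses drop a few percent when the motor
# is kept near its sweet spot across the whole speed range.
assumed_gain = 0.05

range_fixed_km = battery_kwh * 1000 / consumption_wh_per_km
range_cvt_km = battery_kwh * 1000 / (consumption_wh_per_km * (1 - assumed_gain))

print(f"Single-speed range      : {range_fixed_km:.0f} km")
print(f"With assumed 5% CVT gain: {range_cvt_km:.0f} km "
      f"(+{range_cvt_km - range_fixed_km:.0f} km)")
```

A few percent doesn’t sound like much, but on a small pack it is exactly the margin that might make the smaller-battery package viable.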

Second: I just learned that Teslas and other EVs are lousy caravan pullers ’cause, though the torque might make them perfectly suited, the acceleration slurps (huh?) the batteries empty way too quickly, leading to wholly insufficient range; in particular when the caravaners hook up their cabins for day-long travel…
Yes, this may not be a Tesla thing per se, as caravaners and T owners/drivers may be near-completely disjunct groups, but it goes for other, less-suspiciously electric vehicles as well. And caravanning may not be a big thing (anymore!) over in the USofA, but it still very much is over here in Europe [disclaimer: I’m most certainly not into it].
Also, T is ‘rumoured’ to have this battery pack thing going on ..? So I wondered what the merits would be of building such packs in a way that they could be fitted onto caravans, e.g., onto the adzes (is that the right word …!?) that have some standardization, or making them easy fits onto the most common caravan brands, and then either feeding straight into the EV or serving as replenishments at stops/stop-overs. Or, just for caravan e-juice during stays anywhere, e.g. at ‘campings’…?
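To see why such an add-on pack might move the needle, a toy calculation (Python again; all figures are assumptions, merely illustrative):

```python
# Toy arithmetic: how much towing range a caravan-mounted add-on pack buys back.
# All numbers are assumptions for illustration only.

base_pack_kwh = 75.0      # assumed EV pack
caravan_pack_kwh = 30.0   # assumed add-on pack fitted to the caravan
solo_wh_per_km = 175.0    # assumed consumption without the caravan
towing_wh_per_km = 350.0  # assumed consumption while towing (roughly double)

solo_range_km = base_pack_kwh * 1000 / solo_wh_per_km
towing_range_km = base_pack_kwh * 1000 / towing_wh_per_km
towing_plus_km = (base_pack_kwh + caravan_pack_kwh) * 1000 / towing_wh_per_km

print(f"Solo                : {solo_range_km:.0f} km")
print(f"Towing              : {towing_range_km:.0f} km")
print(f"Towing + add-on pack: {towing_plus_km:.0f} km")
```

Under these made-up numbers, towing halves the range and the add-on pack buys a good chunk of it back; plus, it doubles as the caravan’s e-juice at the ‘camping’.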

Well, no thirds here. Whatever it is that’s breaking out. Plus:
DSC_0460
[When garages were meant to be beautiful; Porto — oh wait they still can be]

The Learning-from-error Error

[Thread development; under ~ ]

Tell me, did you go to school somewhere? Did you finish it, and/or complete assignments and exams to somewhat satisfactory degrees?

Congratulations… By the ‘common’ wisdom that one only learns from error, you have failed. In life.

Because, according to too many, fail fast, fail often is the best way to gain knowledge about what doesn’t work, automatically leading to the assurance that doing things differently will work. If you tried and the result didn’t fail, you haven’t learned. So, if you just learned what centuries of the most learned men (plus women…) brought to you and achieved, and acquired any compound body of knowledge, you may have knowledge but be useless otherwise, like an encyclopedia without a reader? Like some millennial that can google anything but doesn’t know (sic) how to apply the search results (let alone qualify them against the tremendous bias that’s in there)? Or did you learn about process and application along the way …?

Thus, all that human culture is, transferred knowledge of facts, process and application, is denied. Where even Neanderthals had culture and knew how to learn from what had (positively) been shown to work in practice (i.e., application, intelligence), you, the fail-oriented stumbler, don’t reach up to their level of survivability.

Which leads to both this and this, with a large dose of this. ‘Traditional’ learning, and building enterprises that can last for centuries (or, until the wisdom is lost due to ‘CEOs’ and ‘managers’ quae non), as an antidote and sensible path.

Now, if you can just leave us sanes, the rest of the world, to actually be successful in the long and short runs …? Plus:
DSC_0497
[Just two boats, or an Atlantic Ocean of knowledge ..? Off Foz]

Another Q

Yet another, relatively (sic) random, quote with a kicker in the tail:

In support of this distinction, Chalmers introduces a thought experiment involving what he calls zombies. A zombie is an entity that acts just like a person but simply does not have subjective experience — that is, a zombie is not conscious. Chalmers argues that since we can conceive of zombies, they are at least logically possible. If you were at a cocktail party and there were both “normal” humans and zombies, how would you tell the difference? Perhaps this sounds like a cocktail party you have attended.

Again, from Ray Kurzweil’s How to Create a Mind (p.202).
And, of course:
DSC_0018
[Just like that; Aachen]

Not just Q, IQ

Well, yesterday’s post was about just a quote; this one’s about what should be a full cross-post, but hey, I’m no wizard, so I’ll just blockquote it from here because it’s so good (again, qua author):

Society in the Loop Artificial Intelligence

Jun 23, 2016 – 20:37 UTC

Iyad Rahwan was the first person I heard use the term society-in-the-loop machine learning. He was describing his work which was just published in Science, on polling the public through an online test to find out how they felt about various decisions people would want a self-driving car to make – a modern version of what philosophers call “The Trolley Problem.” The idea was that by understanding the priorities and values of the public, we could train machines to behave in ways that the society would consider ethical. We might also make a system to allow people to interact with the Artificial Intelligence (AI) and test the ethics by asking questions or watching it behave.

Society-in-the-loop is a scaled up version of human-in-the-loop machine learning – something that Karthik Dinakar at the Media Lab has been working on and is emerging as an important part of AI research.

Typically, machines are “trained” by AI engineers using huge amounts of data. The engineers tweak what data is used, how it’s weighted, the type of learning algorithm used and a variety of parameters to try to create a model that is accurate and efficient, making the right decisions and providing accurate insights. One of the problems is that because AI, or more specifically, machine learning is still very difficult to do, the people who are training the machines are usually not domain experts. The training is done by machine learning experts and the completed model after the machine is trained is often tested by experts. A significant problem is that any biases or errors in the data will create models that reflect those biases and errors. An example of this would be data from regions that allow stop and frisk – obviously targeted communities will appear to have more crime.

Human-in-the-loop machine learning is work that is trying to create systems to either allow domain experts to do the training or at least be involved in the training by creating machines that learn through interactions with experts. At the heart of human-in-the-loop computation is the idea of building models not just from data, but also from the human perspective of the data. Karthik calls this process ‘lensing’: extracting the human perspective, or lens, of a domain expert and fitting it to algorithms that learn from both the data and the extracted lens, all during training time. We believe this has implications for making tools for probabilistic programming and for the democratization of machine learning.

At a recent meeting with philosophers, clergy and AI and technology experts, we discussed the possibility of machines taking over the job of judges. We have evidence that machines can make very accurate assessments of things that involve data and it’s quite reasonable to assume that decisions that judges make such as bail amounts or parole could be done much more accurately by machines than by humans. In addition, there is research that shows expert humans are not very good at setting bail or granting parole appropriately. Whether you get a hearing by the parole board before or after their lunch has a significant effect on the outcome, for instance.

In the discussion, some of us proposed the idea of replacing judges for certain kinds of decisions, bail and parole as examples, with machines. The philosopher and several clergy explained that while it might feel right from a utilitarian perspective, for society it was important that the judges were human – it was even more important than getting the “correct” answer. Putting aside the argument about whether we should be solving for utility or not, having the buy-in of the public would be important for the acceptance of any machine learning system and it would be essential to address this perspective.

There are two ways that we could address this concern. One way would be to put a “human in the loop” and use machines to assist or extend the capacity of the human judges. It is possible that this would work. On the other hand, experiences in several other fields such as medicine or flying airplanes have shown evidence that humans may overrule machines with the wrong decision enough that it would make sense to prevent humans from overruling machines in some cases. It’s also possible that a human would become complacent or conditioned to trust the results and just let the machine run the system.

The second way would be for the machine to be trained by the public – society in the loop – in a way that the people felt that the machine reliably and fairly represented their, most likely, diverse set of values. This isn’t unprecedented – in many ways, the ideal government would be one where the people felt sufficiently informed and engaged that they would allow the government to exercise power and believe that it represented them and that they were also ultimately responsible for the actions of the government. Maybe there is a way to design a machine that could garner the support and the proxy of the public by being able to be trained by the public and being transparent enough that the public could trust it. Governments deal with competing and conflicting interests, as will machines. There are obvious complex obstacles including the fact that unlike traditional software, where the code is like a series of rules, a machine learning model is more like a brain – it’s impossible to look at the bits and understand exactly what it does or would do. There would need to be a way for the public to test and audit the values and behavior of the machines.

If we were able to figure out how to take the input from and then gain the buy-in of the public as the ultimate creator and controller of this machine, it might solve the other side of this judicial problem – the case of a machine made by humans that commits a crime. If, for instance, the public felt that they had sufficient input into and control over the behavior of a self-driving car, could the public also feel that the public, or the government representing the public, was responsible for the behavior and the potential damage caused by a self-driving car, and help us get around the product liability problem that any company developing self-driving cars will face?

How machines will take input from and be audited and controlled by the public may be one of the most important areas that need to be developed in order to deploy artificial intelligence in decision making that might save lives and advance justice. This will most likely require making the tools of machine learning available to everyone, having a very open and inclusive dialog, and redistributing the power that will come from advances in artificial intelligence, not just figuring out ways to train it to appear ethical.

Credits

•Iyad Rahwan – The phrase “society in the loop” and many ideas.
•Karthik Dinakar – Teaching me about “human in the loop” machine learning and being my AI tutor and many ideas.
•Andrew McAfee – Citation and thinking on parole boards.
•Natalie Saltiel – Editing.
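To make the human-in-the-loop idea quoted above a notch more concrete: below, a minimal sketch of uncertainty-based querying of an expert, in Python with scikit-learn. Everything in it (names, numbers, and the stand-in ‘expert’, simulated by the hidden ground truth) is my own assumption, not from the quoted post:

```python
# Minimal human-in-the-loop sketch: the model asks an "expert" to label the
# examples it is least certain about, then retrains on the growing labelled set.
# Here the expert is simulated by the hidden ground truth; in reality a domain
# expert (or, scaled up, society) would supply the labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
truth = (X[:, 0] + X[:, 1] > 0).astype(int)  # hidden ground truth

# Seed the labelled pool with one example of each class.
labelled = [int(np.argmax(truth == 0)), int(np.argmax(truth == 1))]
model = LogisticRegression()

for _ in range(5):
    model.fit(X[labelled], truth[labelled])
    proba = model.predict_proba(X)[:, 1]
    certainty = np.abs(proba - 0.5)    # near 0.0 = model is least sure
    ranked = np.argsort(certainty)     # most uncertain examples first
    new = [i for i in ranked if i not in set(labelled)][:10]
    labelled.extend(new)               # the "expert" labels the hard cases

model.fit(X[labelled], truth[labelled])  # final round with all labels
accuracy = (model.predict(X) == truth).mean()
print(f"{len(labelled)} expert labels, accuracy {accuracy:.2f}")
```

The lensing and society-in-the-loop versions would replace that single simulated oracle with actual domain experts, or with the polled values of the public.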

And, of course for your viewing pleasure:
DSC_0370
[Would AI recognise this, an aside in the Carnegie Library; Reims]

Nopsrisk, Irisk

When it’s time, it’s time. Of course, meaning that the tough get going.
Lately, there has been a resurgence in Risk Management, in particular in Operational Risk Management. That discipline had been outclassed due to, among other things: its calimero-style hanging-on at the tails of financial risk management, where it failed to gain traction because the latter’s models were wholly inapplicable and outright unusable for ops risk; its having no clothes of its own (still, the upstart little peasant kid wanted to be emperor); and its having been outflanked by its little nephew, Information / IT Risk Management. The latter took on the coat of ‘cyber’ (#ditchcyber!) and gained prominence on all the vast wastelands that were left for the picking, and is now overwhelming the heartland with its successes in actual, frontline, FLOT hand-to-hand combat and battles (won).

Time, maybe, to give IRM the prominence it deserves, and forgo the subsumption under ops risk ..?

It’s nothing personal…
DSCN9405
[Soon again: Serralves]

Not your clients!

An outcry: Stop calling ‘clients’ those who are just mass tools to make a profit (incl. public sector…) for your actual clients…!

When, and why, did the non-politically yet grossly incorrect usage of ‘clients’ arise, where not only the Facebooks of this world will serve you crumbs and deliver your value to others ..? Because all sorts of public sector organisations, yes the dullest of the dull too, or in particular, fall prey to the emptiest of sympathies when they denote their fully captive populations as ‘clients’, or at best ‘civilians’, as if they themselves are not the most average, mediocre, and irrelevant of those denominations themselves ..? ‘Clients’ of a social services organisation ARE NOT (apologies for the shout); they are captives, with no alternative to turn to (like actual clients would have). The actual client is some politician(s) with just enough brains to be the last one standing / clinging to their seats, while everyone with anything approaching intelligence, even at a great distance, will have left or been pushed out for actually caring about the ‘clients’’ interests.
‘Clients’ are just mass fodder, nothing (sic) more, despite all the efforts to paint a social, relatable picture.
Get real. Stop the outright lying.

Oh well.
DSCN0544
[Actual palace of the People; of course this is Pistoia]
