Rio per capita

… Is the medal list per capita out already ..?
[Spoiler: next Thursday’s post has some results for the below…]
For surely, just adding up medals per ‘country’ is ridiculous. When some country may send two athletes (four?) to some contest and can pull from, e.g., 10M citizens, how much infrastructure (economically, culturally etc.) can it muster, compared to some country that has a potential pool of, e.g., 300M ..?
[Including that some form of compensation should be available for the very fact that population- and surface-wise smaller countries have a much lower ‘pyramid’ of local contestants challenging each other for better performance, and less physical room for training/contest facilities, uniform marketing hence sponsoring, and societal recognition to be had — if at all, see the following.]

Bragging about some idiotic sort of ‘we’ that has collected 1000 medals over the decades, is double nonsense. How many of the medal winners were allowed to procreate so prolifically that, genetically, the ‘we’ is now justified, gene pool wise? Or rather, how many of the medal winners were neglected by society so that they died in ignominy and often even poverty ..!? That’s quite contrary to the ‘we’, those medals should be discounted from any total …

So, where is it, the Per Capita medals list of, e.g., Rio’16 ..?

[No, the Netherlands wouldn’t climb very much higher; close to median in population as it is, and same qua performance (?).]

Next, what would a handicap system look like ..?
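The per-capita list asked for above is a trivial computation once you have the numbers; a minimal sketch, with made-up medal counts and rough population figures purely for illustration (not actual Rio ’16 data):

```python
# Illustrative placeholder data, NOT actual Rio 2016 results.
medals = {          # country: (gold, silver, bronze)
    "Grenada":     (0, 1, 0),
    "Netherlands": (8, 7, 4),
    "USA":         (46, 37, 38),
}
population = {      # millions, rough illustrative values
    "Grenada": 0.1,
    "Netherlands": 17.0,
    "USA": 323.0,
}

def per_capita_ranking(medals, population):
    """Rank countries by total medals per million inhabitants."""
    rows = []
    for country, (g, s, b) in medals.items():
        total = g + s + b
        rows.append((country, total, total / population[country]))
    return sorted(rows, key=lambda row: row[2], reverse=True)

for country, total, rate in per_capita_ranking(medals, population):
    print(f"{country}: {total} medals, {rate:.2f} per million")
```

A handicap system would simply swap the `total / population` line for some other weighting, e.g. discounting by GDP or training infrastructure, which is exactly where the arguing starts.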

And:
20150311_122327_HDR[1]
[a.k.a. ‘The Medal Race’ — or is it a commentary on the financial industry in the midst of which it lies beached ..? [spoiler: yes it is]; Zuid-As Amsterdam]

ChainWASP

… With all the blockchain app(lication)s, in all senses, sizes and seriousnesses if that is a word, growing (expo of course) everywhere,
wouldn’t it be time to think about some form of OWASP-style programming quality upgrading initiative,

now that the ‘chain world is still young and hasn’t yet encountered its full-blown sobering-up trust crash through sloppy implementation. But, with Ethereum’s and others’ efforts to spread the API / Word (no, no, not the linear-text app…) as fast and far and wide as possible, chances of such a sloppy implem leading to distrust in the whole concept, may rise significantly.

Which might, possibly, hypothetically, be mitigated by an early adoption of … central … Oh No! control mechanism of e.g., code reviews by trusted (huh?) third parties (swarms!) where the code might still remain proprietary and copyrighted.
Or at least, the very least, have some enforceable set of coding quality standards. Is that too much asked …??
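To make the idea concrete: even the crudest enforceable standard could start as an automated gate that flags well-known risky smart-contract patterns before review. A toy sketch below, scanning Solidity source for a few genuinely documented hazards (`tx.origin` for authorization, timestamp dependence, `delegatecall`); a real ChainWASP-style standard would of course cover far more, and far more rigorously:

```python
import re

# A few well-documented risky Solidity patterns; illustrative,
# nowhere near a complete review standard.
RISKY_PATTERNS = {
    r"\btx\.origin\b": "tx.origin used for authorization",
    r"\bblock\.timestamp\b": "timestamp dependence",
    r"\.delegatecall\s*\(": "delegatecall to possibly untrusted code",
}

def review(source: str) -> list:
    """Return findings (line number + message) for one source string."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"line {lineno}: {message}")
    return findings

# Hypothetical contract fragment for demonstration.
contract = """
function withdraw() public {
    require(tx.origin == owner);
    ...
}
"""
for finding in review(contract):
    print(finding)
```

The point is not this script, but that such checks could be run by trusted third parties (swarms!) without the code ever leaving proprietary hands.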

I know; that’s a Yes. So I’ll leave you with the thought of a better near-future, and:
20150109_145839
[Horizontal until compile-time errors made adjustments necessary (pic); beautiful concept — other than Clean Code, actually executed to marvelous effect]

The carrot won’t stick

Almost as an intermission, on my way to a full-length post on behavioral change and InfoSec: A shortie on Compliance.

Having realised that classical compliance is a hygiene thing: Nothing happens, until some factor sinks below the surface / zero; then, all heck breaks loose.
I.e., no carrot, many many sticks. Not your average well-balanced incentive scheme, right?

Classical awareness / behavioral change programs, then. Where only the winner, Employee of the Month, or less, will receive some recognition. Often, recognized among peers and colleagues ‘for being a d.ck’. The rest, that tagged along without doing anything particularly bad, or even only just arriving at the #2 spot: Not much, often Nothing.
A tiny carrot, possibly up some unsunshined place or used as pick, and not much by way of sticks.

Where is the scheme with a lot of carrots (but not for all, especially not as guaranteed sign-on bonus…!!) and a few sticks-in-private (as they should be!) …?
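Such a scheme is easy enough to state mechanically; a minimal sketch, with invented names, scores and thresholds purely as illustration:

```python
# Many public carrots, few private sticks. Scores and thresholds
# are made-up illustrations, not any real appraisal scale.
def incentives(scores, carrot_floor=60, stick_ceiling=30):
    """Split people into public praise and private follow-up."""
    carrots = [name for name, s in scores.items() if s >= carrot_floor]
    sticks = [name for name, s in scores.items() if s < stick_ceiling]
    # Everyone in between is left alone: no naming either way.
    return sorted(carrots), sorted(sticks)

scores = {"Ann": 85, "Bob": 70, "Cas": 50, "Dee": 20}
public_praise, private_talks = incentives(scores)
```

Note the deliberate asymmetry: the carrot list is broad (everyone above the floor, not just the #1), the stick list is short and stays private.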

Just asking, maybe for an impossible thing but your considerate responses are very much welcomed… and:
DSC_0700 (2)
[‘Dagpauwoog’ i.e., back yard beauty]

The Learning-from-error Error

[Thread development; under ~ ]

Tell me, did you go to school somewhere? Did you finish it, and/or completed assignments and exams to somewhat satisfactory degrees?

Congratulations… By the ‘common’ wisdom that one only learns from error, you have failed. In life.

Because, according to too many, fail fast fail often is the best way to gain knowledge about what doesn’t work — automatically leading to the assurance that doing things differently, will work. If you tried and the result didn’t fail, you haven’t learned. So, if you just learned what centuries of the most learned men (plus women…) brought to you, and thus acquired some compound body of knowledge, you may have knowledge but are useless otherwise, like an encyclopedia without a reader? Like some millennial that can google anything but doesn’t know (sic) how to apply the search results (let alone qualify them against the tremendous bias that’s in there)? Or did you learn about process and application along the way …?

Thus, all that human culture is, transferred knowledge on facts, process and application, is denied. Where even Neanderthals had culture and knew how to learn from what had, positively, been shown to work in practice (i.e., application, intelligence), you, the fail-oriented stumbler, don’t reach up to their level of survivability.

Which leads to both this and this, with a large dose of this. ‘Traditional’ learning, and building enterprises that can last for centuries (or, until the wisdom is lost due to ‘CEOs’ and ‘managers’ quae non), as an antidote and sensible path.

Now, if you can just leave us sanes, the rest of the world to actually be successful in the long and short runs …? Plus:
DSC_0497
[Just two boats, or an Atlantic Ocean of knowledge ..? Off Foz]

Not just Q, IQ

Well, yesterday’s post was about just a quote, this one’s about what should be a full cross-post but hey, I’m no wizard I’ll just blockquote it from here because it’s so good (again, qua author):

Society in the Loop Artificial Intelligence

Jun 23, 2016 – 20:37 UTC

Iyad Rahwan was the first person I heard use the term society-in-the-loop machine learning. He was describing his work which was just published in Science, on polling the public through an online test to find out how they felt about various decisions people would want a self-driving car to make – a modern version of what philosophers call “The Trolley Problem.” The idea was that by understanding the priorities and values of the public, we could train machines to behave in ways that the society would consider ethical. We might also make a system to allow people to interact with the Artificial Intelligence (AI) and test the ethics by asking questions or watching it behave.

Society-in-the-loop is a scaled up version of human-in-the-loop machine learning – something that Karthik Dinakar at the Media Lab has been working on and is emerging as an important part of AI research.

Typically, machines are “trained” by AI engineers using huge amounts of data. The engineers tweak what data is used, how it’s weighted, the type of learning algorithm used and a variety of parameters to try to create a model that is accurate and efficient, makes the right decisions and provides accurate insights. One of the problems is that because AI, or more specifically, machine learning is still very difficult to do, the people who are training the machines are usually not domain experts. The training is done by machine learning experts and the completed model after the machine is trained is often tested by experts. A significant problem is that any biases or errors in the data will create models that reflect those biases and errors. An example of this would be data from regions that allow stop and frisk – obviously targeted communities will appear to have more crime.

Human-in-the-loop machine learning is work that is trying to create systems to either allow domain experts to do the training or at least be involved in the training by creating machines that learn through interactions with experts. At the heart of human-in-the-loop computation is the idea of building models not just from data, but also from the human perspective of the data. Karthik calls this process ‘lensing’: extracting the human perspective or lens of a domain expert and fitting it to algorithms that learn from both the data and the extracted lens, all during training time. We believe this has implications for making tools for probabilistic programming and for the democratization of machine learning.

At a recent meeting with philosophers, clergy and AI and technology experts, we discussed the possibility of machines taking over the job of judges. We have evidence that machines can make very accurate assessments of things that involve data and it’s quite reasonable to assume that decisions that judges make such as bail amounts or parole could be done much more accurately by machines than by humans. In addition, there is research that shows expert humans are not very good at setting bail or granting parole appropriately. Whether you get a hearing by the parole board before or after their lunch has a significant effect on the outcome, for instance.

In the discussion, some of us proposed the idea of replacing judges for certain kinds of decisions, bail and parole as examples, with machines. The philosopher and several clergy explained that while it might feel right from a utilitarian perspective, that for society, it was important that the judges were human – it was even more important than getting the “correct” answer. Putting aside the argument about whether we should be solving for utility or not, having the buy-in of the public would be important for the acceptance of any machine learning system and it would be essential to address this perspective.

There are two ways that we could address this concern. One way would be to put a “human in the loop” and use machines to assist or extend the capacity of the human judges. It is possible that this would work. On the other hand, experiences in several other fields such as medicine or flying airplanes have shown evidence that humans may overrule machines with the wrong decision enough that it would make sense to prevent humans from overruling machines in some cases. It’s also possible that a human would become complacent or conditioned to trust the results and just let the machine run the system.

The second way would be for the machine to be trained by the public – society in the loop – in a way that the people felt that the machine reliably represented fairly their, most likely, diverse set of values. This isn’t unprecedented – in many ways, the ideal government would be one where the people felt sufficiently informed and engaged that they would allow the government to exercise power and believe that it represented them and that they were also ultimately responsible for the actions of the government. Maybe there is a way to design a machine that could garner the support and the proxy of the public by being able to be trained by the public and being transparent enough that the public could trust it. Governments deal with competing and conflicting interests as will machines. There are obvious complex obstacles including the fact that unlike traditional software, where the code is like a series of rules, a machine learning model is more like a brain – it’s impossible to look at the bits and understand exactly what it does or would do. There would need to be a way for the public to test and audit the values and behavior of the machines.

If we were able to figure out how to take the input from and then gain the buy-in of the public as the ultimate creator and controller of this machine, it might solve the other side of this judicial problem – the case of a machine made by humans that commits a crime. If, for instance, the public felt that they had sufficient input into and control over the behavior of a self-driving car, could the public also feel that the public, or the government representing the public, was responsible for the behavior and the potential damage caused by a self-driving car, and help us get around the product liability problem that any company developing self-driving cars will face?

How machines will take input from and be audited and controlled by the public, may be one of the most important areas that need to be developed in order to deploy artificial intelligence in decision making that might save lives and advance justice. This will most likely require making the tools of machine learning available to everyone, having a very open and inclusive dialog, and redistributing the power that will come from advances in artificial intelligence, not just figuring out ways to train it to appear ethical.

Credits

•Iyad Rahwan – The phrase “society in the loop” and many ideas.
•Karthik Dinakar – Teaching me about “human in the loop” machine learning and being my AI tutor and many ideas.
•Andrew McAfee – Citation and thinking on parole boards.
•Natalie Saltiel – Editing.
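The core mechanic of the quoted piece, polling the public and distilling the responses into training signal, can be sketched very roughly as majority-vote label aggregation. All scenario names and votes below are invented examples, not Rahwan’s actual survey data:

```python
from collections import Counter

# Invented poll responses on driving dilemmas, for illustration only.
polls = {
    "swerve_vs_brake": ["brake", "brake", "swerve", "brake"],
    "one_vs_many":     ["spare_many", "spare_many", "spare_one"],
}

def society_labels(polls):
    """Majority-vote label per scenario, plus the agreement ratio."""
    labels = {}
    for scenario, votes in polls.items():
        (choice, count), = Counter(votes).most_common(1)
        labels[scenario] = (choice, count / len(votes))
    return labels
```

The agreement ratio matters as much as the label: a 51/49 split is exactly the kind of “competing and conflicting interests” the piece says such a machine, like a government, would have to carry.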

And, of course for your viewing pleasure:
DSC_0370
[Would AI recognise this, an aside in the Carnegie Library; Reims]

Short Cross posting

… Not from anyone, not from anywhere. But crossing some book tips, and asking for comments.
Was reading the Good Book, when realizing that it, in conjunction with Bruce, could lead to some form of progress beyond the latter when absolutist totalitarian panopticon control frameworks might seem the only way out. In particular, when including this on the Pikettyan / Elyseym escape or not that serves only some but not the serfs. And then add some Mark Goodman (nomen est omen, qua author, and content?) and you can see where Bruce may have missed exponential crumbling of structures, and said escape might be by others than the current(ly known) 1% … Not all Boy Cried Wolfs will be wrong; on the contrary — Not Yet is very, very different from Never, but rather Soon Baby, Soon.

Not rejoicing, and:
DSC_0097
[Nope, not safe here (Haut Koenigsbourg) either.]

Rien vous ne pouvez plus …?

When business is about betting, hopefully educated guessing, the near and bit further future developments of <somethings>. Educated, of course with a pinch of theory — but then, only the parts that are actually true, and still valid, for the future, too so throw away all (seriously) but a few nuggets of the most absolute que sera, sera of economics / business administration (sic) — and a healthy dose of experience — but not too much as it would lead to a lachrymose same as the (true) theory and we still need Action, don’t we ..?

Then totalitarian bureaucracies, like the banking world (in a suffocatingly tight grip, including the regulatory-captured but also holier-than-thou regulators), will try to squeeze all involved so thoroughly that no business is feasible anymore. But will fail, as the spirit has been out of the bottle since the Apple; Original Sin is about being human, above animalistic sustenance-supporting instinctive compliance with the laws of nature. Again and again, the stupidity of belief that Apollo wins out over Dionysos ..! They’re equal in all respects, certainly in the spirit of Man, and remember that even Zeus was forced to break marital laws (what a player he was, by necessity …:) because at the End of Times, the titans, the powers of Chaos, will almost win out. A trope so powerful that all belief systems have it (but a few exceptions) so to be held for certain; until proven wrong…?

Even more: The higher the pressure (that the straitjackets on subjects can take), the more fluid everything becomes. The more zealous, zealotic, crazy (outright that word, yes) compliance efforts micromanage (let alone the ‘manage’ part, which is utterly ridiculous if used in this sense), the more devious and deceptive the business will be, caused by this very reason of compliance efforts. LIBORgate, anyone ..?

‘Trust, but verify’ is a lie. Since only the slightest hint of the latter, immediately completely destroys the former ..! As the former is a two-way street! Seek out those that support this lie, and you’ll find the true culprits of the above.

So far, so good for a Monday morning’s rant, right ..? I’ll stop now, with:
20151008_123437
[Rigid, but colourful, the max you can achieve; Nieuwegein]

Disciplined away from bureaucracy

After some thought on bureaucracy on either side of the Big Pond, it suddenly dawned on me how to explain the seeming (of course) paradox:

  • At the Western shores, a lot of military with front line battle experience (and some, only a bit less so), possibly out of reserve functions in mundane business, have gone (back) over to the dark side of commercial business, with their discipline and cutthroat ‘competition’ (using not secondhand car salespeople but live ammo) as main assets / gathered experience to bring to bear.
  • On the East oceanboards, not so much, and a love for egalitarian Rhineland ideas might have persisted, giving flexibility and care for customers (‘s souls), and much room for ‘Millennials’ (let’s all drop that most empty of phrases though you get my drift) in the workplace.
  • On the point of competition effectiveness, Westeros beats Essos hands down.

But, the critical points for resolution are:

  • US businesses have been taken away from petty-rule-based (only) bureaucracy that they were in (yes they were, even with the freedom-seeking escapism rampant throughout), by the infusion with serious doses of Mission Command (a.k.a. Commander’s Intent) flexibility in goal achievement over procedural justice / form-over-substance.
  • European corps had nothing to counter Power Corrupts style demise unto totalitarian bureaucracies with their headless-chicken compliance.

So, it really is no contest but we would need a (not present) ref to break it off. Too bad, and:
DSC_0608
[Oh how cutiepie, Doesburg defenses]

Paradise Lost

.. of not the Milton kind, unfortunately.
But of the kind of the Age of Innocence. In crowdsourcing. You remember, from before the days of Mechanical Turk and similar no cure, no pay but the pay’s a rip-off scams. Close to (?) post-slavery slavery by Hobson’s Choice.

But as said; before that, there were the dreams of free agents delivering their best efforts to common problems and getting handsomely rewarded for their solutions (if the best). The Age of Aquarius dawned. No more masters, no servants, all equal.
Or so.
The brevity of happiness…

DSCN5189
[All play and sunny weather now that you have been returned to Consumer status (pejorative). Sure to (have) change(d) …; NY of course]

Maverisk / Étoiles du Nord