MITmoral Machine: Wrong system?

Guess we all noticed the somewhat-groundbreaking results of the worldwide MIT Moral Machine survey on the trolley problem and how you would have reacted.
But apart from the also-noted, possibly severe bias (towards respondents who are male, highly educated, affluent and hence probably more cosmopolitan, and who bothered to respond in the first place), there is another aspect:

The survey allowed, maybe even asked, for the brain’s System II (re: Kahneman) to kick in. You know, the cognitive, deliberate part that includes ethical and moral reasoning. This is demonstrated by the reliance on language, both on the input side and on the output side. Especially the latter would invalidate the results vis-à-vis practice.

Because in practice, human practice that is, with auto-cars (auto-mobile, remember? “autos”!?), only System I is, or will be, fast enough to respond in time. Hence, don’t ask but test human responses! And even in some fancy-dancy VR environment, one cannot preclude a hint of reality-disbelief in the back of one’s mind (where the appropriate processing takes place!), uncanny-valley skews, etc.
And then, “I panicked” is a perfect excuse. Will it be for autos? Legally, too...? Who will have panicked? Etc.

Yes, people panic and respond intuitively, too. But wasn’t the auto trained to produce System II results for both System II and System I occasions? If the latter were captured under IF SystemIPanic() Then HandBackControlToCluelessHuman(); (emphasis mine, because of this, 1st paragraph), then translate the pilots’ experience to autos, please:
The more you rely on the auto aspect of autos (and no-one is not aiming at Level 5 independence), the more only the most distinctly panicky System I decision failures will be pushed to the human, who will still need to be fully equipped in the cockpit (and think of this, too), contra each and every designer’s wishes, at the very latest of the latest moments;
the less that human will be able to generate System I responses, since those come from experience. So when, not if, human override is required, failure will be ever more certain. Can one then still blame the human for a. never having had the opportunity to build up experience properly, b. being called in (almost certainly) too late, c. the actual actions taken, which transpire afterwards to have been an ethical decision? Or will HandBackControlToCluelessHuman(); simply mean GiveUpAlreadyAsTheHumanWillAlsoBeClueless()?
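
For what it’s worth, a minimal sketch of that handback logic, in Python. The function names come straight from the snippet above; the Situation fields and the thresholds are purely hypothetical assumptions of mine, not anyone’s actual autopilot architecture:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Controller(Enum):
    AUTO = auto()   # the car's trained pipeline
    HUMAN = auto()  # the fallback driver in the cockpit


@dataclass
class Situation:
    confidence: float       # planner's confidence in its own response, 0..1 (hypothetical)
    time_to_react_s: float  # seconds left for any response at all (hypothetical)


# Hypothetical thresholds, purely for illustration.
PANIC_CONFIDENCE = 0.2        # below this, the auto's fast loop gives up
HUMAN_TAKEOVER_TIME_S = 5.0   # time a disengaged human needs to get back in the loop


def system_i_panic(s: Situation) -> bool:
    """The auto no longer trusts its own fast ('System I') response."""
    return s.confidence < PANIC_CONFIDENCE


def hand_back_control_to_clueless_human(s: Situation) -> Controller:
    """Fired only for the worst cases, at the last moment,
    to a human who never got the chance to build up experience."""
    if s.time_to_react_s < HUMAN_TAKEOVER_TIME_S:
        # In effect: GiveUpAlreadyAsTheHumanWillAlsoBeClueless()
        print("Handback too late: human nominally in control, practically clueless")
    return Controller.HUMAN


def control_loop(s: Situation) -> Controller:
    # The selection logic quoted above:
    # IF SystemIPanic() Then HandBackControlToCluelessHuman();
    if system_i_panic(s):
        return hand_back_control_to_clueless_human(s)
    return Controller.AUTO


if __name__ == "__main__":
    control_loop(Situation(confidence=0.1, time_to_react_s=1.5))
```

Note how, by construction, the human only ever gets the cases the auto could not handle, with the least time left; which is exactly the point of the paragraph above.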

OK. The future is bright…? And:

[Just your everyday rush hour in Baltimore]
