Bias Time (3 of 9)


[Bastion; inside/outside]

Yes, it’s bias time again. The third in the series of biases that you, yes you, have (previous one here). Even if you are aware of these, and even if you consciously try to correct for them to be, heh, ‘objective’ in the way that e.g. auditors pursue, you will fail.

Social biases

  • Actor-observer bias – the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation (see also fundamental attribution error). However, this is coupled with the opposite tendency for the self in that explanations for our own behaviors overemphasize the influence of our situation and underemphasize the influence of our own personality.
  • Dunning–Kruger effect – a two-fold bias. On one hand, the lack of metacognitive ability deludes people, who overrate their capabilities. On the other hand, skilled people underrate their abilities, as they assume others have a similar understanding.
  • Egocentric bias – occurs when people claim more responsibility for themselves for the results of a joint action than an outside observer would.
  • Forer effect (aka Barnum effect) – the tendency to give high accuracy ratings to descriptions of one’s personality that supposedly are tailored specifically for oneself, but are in fact vague and general enough to apply to a wide range of people. For example, horoscopes.
  • False consensus effect – the tendency for people to overestimate the degree to which others agree with them.
  • Fundamental attribution error – the tendency for people to over-emphasize personality-based explanations for behaviors observed in others while under-emphasizing the role and power of situational influences on the same behavior (see also actor-observer bias, group attribution error, positivity effect, and negativity effect).
  • Halo effect – the tendency for a person’s positive or negative traits to “spill over” from one area of their personality to another in others’ perceptions of them (see also physical attractiveness stereotype).
  • Herd instinct – common tendency to adopt the opinions and follow the behaviors of the majority to feel safer and to avoid conflict.
  • Illusion of asymmetric insight – people perceive their knowledge of their peers to surpass their peers’ knowledge of them.
  • Illusion of transparency – people overestimate others’ ability to know them, and they also overestimate their ability to know others.
  • Illusory superiority – overestimating one’s desirable qualities, and underestimating undesirable qualities, relative to other people (also known as the “Lake Wobegon effect,” “better-than-average effect,” or “superiority bias”).
  • Ingroup bias – the tendency for people to give preferential treatment to others they perceive to be members of their own groups.
  • Just-world phenomenon – the tendency for people to believe that the world is just and therefore people “get what they deserve.”
  • Notational bias – a form of cultural bias in which the notational conventions of recording data bias the appearance of that data toward (or away from) the system upon which the notational schema is based.
  • Outgroup homogeneity bias – individuals see members of their own group as being relatively more varied than members of other groups.
  • Projection bias – the tendency to unconsciously assume that others share the same or similar thoughts, beliefs, values, or positions.
  • Self-serving bias (also called “behavioral confirmation effect”) – the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests (see also group-serving bias).
  • Self-fulfilling prophecy – the tendency to engage in behaviors that elicit results which will (consciously or not) confirm existing attitudes.
  • System justification – the tendency to defend and bolster the status quo. Existing social, economic, and political arrangements tend to be preferred, and alternatives disparaged sometimes even at the expense of individual and collective self-interest. (See also status quo bias.)
  • Trait ascription bias – the tendency for people to view themselves as relatively variable in terms of personality, behavior and mood while viewing others as much more predictable.
  • Ultimate attribution error – similar to the fundamental attribution error, in this error a person is likely to make an internal attribution to an entire group instead of the individuals within the group.

Sealing your site


[What a circus. In the background.]

ENISA recently announced it wants an upgrade of the ‘reliability’ of web seals.

Nice. BUT anything out there will be:

  • Copied and replayed at third-party sites, the differences being unobservable to the average hooman;
  • Placed without (still) being warranted for the info presented, at or after some point in time. The ‘certification’ is there, so why bother keeping up with ever-spiralling security requirements;
  • Valid for some time only, with all sorts of re-‘certification’ / failed-update issues.

All the ENISA talk about automated checking, etc., would be very welcome, but no one would want the accountability when (not if!) the automated checks are subverted, i.e. fail to check at the semantic level as well as all the way down the stack. The ‘net just cannot be trusted per se ..!
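To make the replay and expiry points concrete: below is a minimal sketch, in Python, of a seal token bound to a domain and a re-certification date. Everything here is an assumption for illustration only (the names, the shared demo secret; a real certifier would publish asymmetric signatures instead of sharing a key). It shows merely that a bound token can be rejected when copied to another site or when past its validity window; note that even this checks nothing at the semantic level, i.e. whether the site behind the seal is actually secure.

```python
import hashlib
import hmac
import time

# Hypothetical shared secret of the certifying body (illustration only;
# a real scheme would use asymmetric signatures, e.g. Ed25519).
SECRET = b"certifier-demo-key"

def issue_seal(domain: str, valid_days: int = 365) -> str:
    """Issue a seal token bound to a domain and an expiry timestamp."""
    expiry = int(time.time()) + valid_days * 86400
    payload = f"{domain}|{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_seal(token: str, serving_domain: str) -> bool:
    """Reject forged, copied (wrong domain), or stale (expired) seals."""
    try:
        domain, expiry, sig = token.rsplit("|", 2)
    except ValueError:
        return False
    payload = f"{domain}|{expiry}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    if domain != serving_domain:
        return False  # seal copied and replayed at a third-party site
    if int(expiry) < time.time():
        return False  # past its re-'certification' date
    return True

token = issue_seal("example.org")
print(verify_seal(token, "example.org"))   # True
print(verify_seal(token, "evil.example"))  # False: replayed elsewhere
```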

The principles are nice, and kudos to ENISA for calling out the need for improvement. But the principles will suffer badly when implementation time comes around, and in BAU – between dream and deed stand laws in the way, and practical objections.

So let’s (have someones) pursue this. It’ll take time, and we’ll have to learn from mistakes… but still.

Money is grease; 3D- and time-wise


[Matter, Brussels]

To come back to something I stated before, now put more clearly:

Money is the form that captured purchasing power takes: grease to span the differences of place and time between supply and demand of actual stuff (goods and services).

No more, no less. It’s handy for that purpose. It bridges on-the-spot imbalances of supply and demand, so that, however abstract, markets can operate without perfect one-to-one barters needing to take place on the spot. But it creates the Marxian wage gap… Does it? Studied in more detail and at more length, and at varying abstraction levels, it may bring ‘capital’ as a production factor back into the fold of perfectly balanced theories of the firm.
Quite a thing to pursue. If only I had time…

But think about it… The crazy abstract thing called ‘money’ that might drive some insane when thinking about the futility of its value foundations, may on another plane or polarization of understanding turn out to be quite comprehensible. Relief.

And it points in the direction of the uselessness of hoarding it beyond the amount that can be transferred back (sic) into really valuable assets and services; those that serve satisfaction according to Maslow’s pyramid, rather than mere satisfaction in a secondary or tertiary, abstracted-away sort of fashion. So that’s why having and earning money beyond some point doesn’t raise our happiness anymore: our subconscious is aware of the futility of that ‘more’, and adds in some guilt over not giving the money away elsewhere, to satisfy other Maslow needs…

Books by Quote: Practical Wisdom


[Small, but human in an other environment; AMS]

The third ‘Book By Quote’ then.
(An attempt to subjectively summarise a book by the quotes I found worthwhile to mark, to remember. Be aware that the quotes as such aren’t a real, unbiased ‘objective’ summary; most often, I heartily advise you to read the book yourself..!)

So, this time: Barry Schwartz and Kenneth Sharpe’s Practical Wisdom: The Right Way to Do the Right Thing, Riverhead Books, 2010, ISBN 9781594485435.

The assumption behind carefully constructed rules and procedures, with close oversight, is that even if people do want to do the right thing, they need to be told what it is. (p.4)

Rules cannot substitute for practical wisdom any more than incentives can. We need rules to guide and govern the behaviour of people who are not wise: one reason we suffered the recent financial crisis was that weak and loosely enforced rules and regulations allowed bankers to run amok with shrewd money-making schemes like derivatives. But tighter rules and regulations, however necessary, are pale substitutes for wisdom. Aristotle might say that we need rules to protect us from disaster. But at the same time, rules without wisdom are blind and at best guarantee mediocrity – forcing wise practitioners to become outlaws, rule-breakers pursuing a kind of guerilla war to achieve excellence. (pp.9-10)

We need to see how the current reliance on strict rules and regulation and clever incentives to improve practices like medicine, education, and law risk undermining the very wisdom of practitioners that is needed to make these practices better. Well-meaning reformers are often engaged in a kind of unintended stealth war on wisdom. (p.10)

All too often, the diagnosis of the problems in the institutions that serve us is that people don’t really care about their work; they are blamed for just caring about making money, or gaining status, or amassing more power. … Rules and incentives may improve the behaviour of those who don’t care, though they don’t make them wiser. But in focusing on the people who don’t care – the targets of our rules and incentives – we miss those that do care. (pp.11-12)

Emotion is critical to moral perception in another way. It is a signalling device. The emotion of the father – ‘he just freaked out’ – signalled to Luke that something was wrong. (p.23)

Luke and Judge Forer help us to understand some of the key characteristics of practical wisdom. To summarize:
1. A wise person knows the proper aims of the activity she is engaged in. She wants to do the right thing to achieve these aims – wants to serve the needs of the people she is serving.
2. A wise person knows how to improvise, balancing conflicting aims and interpreting rules and principles in the light of the particularities of each context.
3. A wise person is perceptive, knows how to read a social context, and knows how to move beyond the black-and-white of the rules and see the grey in a situation.
4. A wise person knows how to take on the perspective of another – to see the situation as the other person does and thus to understand how the other person feels. This perspective-taking is what enables a wise person to feel empathy for others and to make decisions that serve the client’s (student’s, patient’s, friend’s) needs.
5. A wise person knows how to make emotion an ally of reason, to rely on emotion to signal what a situation calls for, and to inform judgement without distorting it. He can feel, intuit, or ‘just know’ what the right thing to do is, enabling him to act quickly when timing matters. His emotions and intuitions are well educated.
6. A wise person is an experienced person. Practical wisdom is a craft and craftsmen are trained by having the right experiences. People learn how to be brave, said Aristotle, by doing brave things. So, too, with honesty, justice, loyalty, caring, listening, and counseling. (pp.25-26)

Weick found that the longer the checklists for the wildland firefighters became, the more improvisation was shut down. Rules are aids, allies, guides, and checks. But too much reliance on rules can squeeze out the judgement that is necessary to do our work well. … Better to minimize the number of rules, give up trying to cover every particular circumstance, and instead do more training to encourage skill and practical reasoning and intuition. (p.42)

Rules Talk urges us to consult a text or a code. Wisdom Talk urges us to learn from others who are practically wise. (p.45)

Having the emotional capacity for experiencing empathy doesn’t mean we’ll use it. People can be put in institutional settings – … – that can discourage emotions like empathy and encourage other emotions like fear, embarrassment and anxiety about pleasing supervisors. If the routines of work systematically discourage people from experiencing and using an emotion like empathy – and encourage countervailing emotions instead – there is a danger that our capacity for practical wisdom will be undermined. (p.79)

Even worse, reliance on rules or, more generally, on aspects of a situation that we can describe with words may distort our judgement. (p.86)

The fact that many of the patterns we recognise are not easily captured in language has important implications when it comes to thinking about moral rules as guides to conduct. … If we rely on rules to tell us what to do, then we shut ourselves off from information and understanding we may have that cannot be put into words. And doing that may deprive us of the opportunity to make far more nuanced judgements than any rules would allow. (pp.85-86)

But rules must be used with care. A rule can “entrench” particular patterns of activation in a network, making the network extremely difficult to change, so that even if a person subsequently has a wide range of experience, including experience that throw the rule into question, the experience will not result in subtle and sensitive changes to the networks. (p.103)

[Third,] the features of a moral network help us understand an important source of moral disagreement among people as other than simply a clash of values. Since different experiences will produce different moral networks, we can expect frequent occasions in which good people, with similar values, come to quite different views about what the particular situation before them calls for. (p.103)

[Fourth,] the moral networks model helps us account for the occasions where pattern recognition is not rapid, when practical wisdom demands careful deliberation. For one thing, learning to distinguish and recognise important patterns is a long process; it takes a while for sonar devices looking for mines, let alone children and even adults looking for moral guidance, to tune their networks, and none of us can be experts in all domains. Novices always have to deliberate. And experts do too, when they face a new or strange pattern. (pp.103-104)

Reflective deliberation is a part of moral networks in a second way. We are often faced with situations of moral ambiguity, where moral perception is unclear or conflicted, and the moral network can’t settle on a single answer or output. … In cases like this, small changes in the pattern can tip the balance so that what was ambiguous becomes clear. … In such situations, it may take careful deliberation and moral imagination for us to find a way to choose between alternatives (e.g., kindness and honesty) or craft a path that combines them. (p.104)

As Paul Churchland explains it, moral virtue or excellence, as Aristotle saw it, was not something given by an outside authority and swallowed whole. “It was a matter of developing a set of inarticulate skills, a matter of practical wisdom.” The child is born into a moral community, with detailed social practices already in place. The child’s initiation into that community takes time: … Statable rules are not the basis of one’s moral character. They are merely its pale and partial reflection at the comparatively impotent level of language. (p.105)

As we’ll see, all too often efforts to make things better – with carefully developed rules and methodical procedures – actually make them worse. Instead of nurturing practical wisdom, they end up waging a stealth war on it. (p.106)

We will discuss how our will to achieve the proper aims of our practices gets eroded by an overreliance on incentives. And we will explore how some shrewd and brave individuals – “canny outlaws,” we call them – find ways to exercise wisdom in spite of organizations whose rules and procedures systematically discourage it. (p.111)

Professor Kimberly Kirkland at Franklin Pierce Law Center looked inside large law firms to see what impact this growing specialization and market competition had on the young lawyers coming up in the firm. … What she found was a kind of organization that squeezed out wise counsel and discouraged the experiences that teach wise judgement. (p.149)

The army is creating cooks, says Wong, leaders who are “quite adept at carrying out a recipe,” rather than chefs who can “look at the ingredients available to them and create a meal.” (p.159)

Thus, the de-skilling of teachers produces a kind of “de-willing.” It risks taking the fight out of some good teachers and takes other good teachers out of the fight. The danger here is a downward spiral. Good, experienced teachers leave, and idealistic and talented prospective teachers are discouraged from entering the classroom. Administrators interpret the lack of experience or commitment as evidence that more stringent procedures and rules are needed and ratchet up the standardization, demoralizing and turning away more promising teachers. (p.176)

Incentives may get you what you pay for, but they often will not get you what you want and need. … there are two problems with incentives. First, they are too often too blunt an instrument to get us what we need. In situations that call for scalpels, incentives are sledgehammers. Second, when incentives are introduced into a situation, they can undermine other, better motives to do the right thing. Different kinds of motives can compete, and financial or other material incentives often win the competition. The result, as we’ll see, is that such financial incentives can lead to demoralization – in two senses. First, they take the moral dimension out of our practices; second, they risk demoralizing the practitioners themselves. (pp.180-181)

Aristotle thought that good people do the right thing because it is the right thing. Doing the right thing because it’s the right thing unleashes the nuance, flexibility and improvisation that moral challenges demand and moral skill enables. Doing the right thing for pay shuts down the nuance and flexibility. (p.182)

Why is it that incentives seem to have these perverse effects? We can begin to answer this question when we appreciate that incentives have two distinct components. First, they provide feedback; a bonus or a gold star says, “You’ve got it right. Good job! Keep up the good work.” Second, incentives provide people with something that they want and like – money, status, or glory, for example. That is, they are hedonically positive. It’s the hedonic kick that’s the source of the problem. (p.184)

As Dweck puts it, performance-oriented children want to prove their ability, whereas mastery-oriented children want to improve their ability. Children with performance goals avoid challenges. They prefer tasks that are well within the range of their ability. Children with mastery goals seek challenges. They prefer tasks that strain the limits of their ability. Children with performance goals respond to failure by giving up. Children with mastery goals respond to failure by working harder. Children with performance goals take failure as a sign of their inadequacy and come to view the tasks at which they fail with a mixture of anxiety, boredom, and anger. Children with mastery goals take failure as a sign that their efforts, and not they, are inadequate, and they often come to view the tasks at which they fail with the kind of relish that comes when you encounter a worthy challenge. (pp.185-186)

Detailed scripts and rules may enable us to make contracts that are more complete, but moving in that direction will compromise the quality of the services that doctors, lawyers, teachers, and custodians provide. More complete contracts allow us to incentivize what we think we want … But what we really want is “Make a good-faith effort to do whatever it takes to achieve our objective.” (p.188)

When a dispute with management arose, instead of going out on strike, unions would sometimes resort to “working to rule.” Employees did exactly what was specified in their contracts – and nothing more. Such work-to-rule actions paralyzed production. (p.188)

When we lose confidence that people have the will to do the right thing, and we turn to incentives, we find that we get what we pay for. (p.189)

As economist Fred Hirsch said thirty years ago, “The more is written in contracts, the less can be expected without them; the more you write it down, the less is taken, or expected, on trust.” The solution to incomplete contracts is not more complete ones, it is a nurturing of moral will. (p.189)

And when we find problems with the new, adjusted scheme, we adjust again. What we hope and expect is that over time, incentives that get us ever closer to what we want will evolve. Manipulating incentives seems easier and more reliable than nurturing moral will. And what’s the harm? If incentives can’t do the job by themselves, perhaps they can contribute to improving performance, both by telling people (doctors, teachers) how they’re doing and by motivating them to do better. They can’t hurt. Or can they? As it turns out, there is harm in incentives, and the harm can be quite considerable. (pp.189-190)

Morality is for suckers, the offer of money seemed to be saying, even if only implicitly. (p.193)

It might seem that if you are inclined to do someone a favor, the offer of compensation should only give you a second reason to do what you were inclined to do already. Again, two reasons are better than one. Except that they’re not. The offer of money tells people implicitly that they are operating in the financial/commercial domain, not the social domain. … Thus, social motives and financial ones compete. (pp.194-195)

These examples help us answer the question “What’s the harm in using incentives?” There is potential harm, even if the incentives work. Incentives like money, prizes, or awards can “crowd out” the pleasure people get from an activity… They can crowd out the moral motives that drive an activity… And they can crowd out the inclination people have to be helpful to others. (p.195)

Some doctors, lawyers, and teachers will do what they think is right and be impervious to the financial consequences of their behavior. But many will not. Furthermore, as incentive schemes come to dominate practices, they will reshape what the practices are like, making it increasingly difficult for the handful of canny outlaws to do their work as they think it should be done. Not only will they have to find clever ways to work around the constraints on good practice imposed by the system or its administrators, but they will have to do so in pursuit of a set of objectives that everyone around them seems to have abandoned. … It is hard to maintain the morale, the courage, and the confidence that being a canny outlaw requires in the face of a tidal wave of colleagues and supervisors who are moving in a different direction. (p.196)

This builds temptation into any professional practice. A practitioner can often make more money by doing things that do not serve clients well – the extra medical test, the extra appointment, the extra billable hour – and the client will not know. We trust that the majority of professionals who serve us are not like this. (p.197)

They had created a culture that “came to treat patients the way subprime-mortgage lenders treated home buyers.” (p.209)

Once the making of money inserts itself as a central aim in the daily life of doctors and the institutions where they work, it begins a spiral of ever greater demoralization. (p.210)

In the frequently quoted words of prestigious jurist Roscoe Pound: “the term [profession] refers to a group … pursuing a learned art as a common calling in the spirit of public service – no less a public service because it may incidentally be a means of livelihood. Pursuit of the learned art in the spirit of public service is the primary purpose.” (p.212)

The direction a profession takes is determined, in part, by its participants. If it becomes dominated by people with inappropriate aims, it may be corrupted to the point where it is unrecognizable. People will not possess the will – or the skill – to practice education, medicine, law, or anything else in the way we want to see it practiced. The “good doctor,” the “good lawyer,” and the “good teacher” will become first a dim memory, and then a romantic fantasy, en route to fading out of our collective consciousness altogether. (p.228)

Detailed rules and procedures, however well intended, are undermining the skill that wisdom requires. Incentives, however well meaning, are undermining the will that wisdom requires. Canny outlaws, struggling to be wise in the face of significant obstacles, are not enough to stop the forces arrayed against practical wisdom. (p.231)

“Overcome the desire to tell subordinates how to do it,” Wong advised. “Refrain from detailing how a task is to be accomplished. … Demand a solution, not the solution.” (p.255)

The students, they said, could be taught certain medical knowledge: best practices, the skills needed to do an exam or surgery. But they couldn’t be taught to be perceptive. To care. To get inside the thoughts and feelings of their patients. To balance empathy and detachment. (p.263)

Practical wisdom is not something that can be taught, at least not in the narrow sense of listening to classroom lectures, reading books, and doing exams or papers. And it can’t be learned as an isolated “subject” or even as a general skill that we can go around “applying.” … Moral skill and will, like technical skill, are learned by practicing the craft. That, of course, is why wisdom is associated with experience. But it’s not just by any experience. Experience must be structured in ways that “cause wisdom to be learned.” (pp.271-272)

The system changers know that rules and incentives are a necessary part of the institutions they build. But they also know that they are but initial scaffolds, or sometimes last resorts. The first resort is to create institutions that pursue the right goals and encourage their practitioners to do the same, precisely because they are right. (p.273)

“The Talmud,” says Mogel, “sums up the Jewish perspective on child rearing in a single sentence: ‘A father is obligated to teach his son how to swim.’” … “Real protection,” argues Mogel, “means teaching children to manage risks on their own, not shielding them from every hazard.” (p.277)

Without work all life goes rotten. But when work is soulless, life stifles and dies. (Albert Camus, p.281)

The more people’s behavior at work is controlled by rules and incentives, and the less opportunity they have to exercise – and develop – practical wisdom, the worse their work will be. (p.284)

Mark


[“We will belittle you”, La Défense]

It suddenly occurred to me that when stocks go up or down on whether their underlying company’s earnings performance hits the mark set by the average of analysts’ predictions, those average predictions are the real target. So … the secondary or tertiary derivative of actual strength has now become the real thing ..?
Or should we invest in analysing core strength the opposite way: the way of true organisational resilience, which will result in better, richer even, performance over a much longer time horizon ..?
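A toy calculation, with entirely hypothetical numbers, of that ‘real target’: a company can deliver perfectly solid earnings and still register a ‘miss’, because what the price reacts to is the gap to the analysts’ average prediction, not the performance itself.

```python
# Hypothetical EPS figures, for illustration only: the stock trades on
# the gap to the analysts' consensus, not on the earnings as such.
analyst_estimates = [2.10, 2.15, 2.05, 2.20]     # per-analyst EPS forecasts
consensus = sum(analyst_estimates) / len(analyst_estimates)

actual_eps = 2.08                                 # solid in absolute terms...
surprise = (actual_eps - consensus) / consensus   # ...yet a 'miss' vs. target

print(f"consensus {consensus:.2f}, actual {actual_eps:.2f}, "
      f"surprise {surprise:+.1%}")
# consensus 2.12, actual 2.08, surprise -2.1% -- and the price drops on that
```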

Bias Time (2 of 9)


[See how the metro entrance folds; Valencia of course]

Yes, it’s bias time again. The second in the series of biases that you, yes you, have. [Part one here] Even if you are aware of these, and even if you consciously try to correct for them to be, heh, ‘objective’ in the way that e.g. auditors pursue, you will fail.

Biases in probability and belief

  • Ambiguity effect – the tendency to avoid options for which missing information makes the probability seem “unknown.”
  • Anchoring effect – the tendency to rely too heavily, or “anchor,” on a past reference or on one trait or piece of information when making decisions (also called “insufficient adjustment”).
  • Attentional bias – the tendency to neglect relevant data when making judgments of a correlation or association.
  • Authority bias – the tendency to value an ambiguous stimulus (e.g., an art performance) according to the opinion of someone who is seen as an authority on the topic.
  • Availability heuristic – estimating what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples.
  • Availability cascade – a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse (or “repeat something long enough and it will become true”).
  • Belief bias – an effect where someone’s evaluation of the logical strength of an argument is biased by the believability of the conclusion.
  • Clustering illusion – the tendency to see patterns where actually none exist.
  • Capability bias – the tendency to believe that the closer average performance is to a target, the tighter the distribution of the data set.
  • Choice-supportive bias – the tendency to remember one’s choices as better than they actually were.
  • Conjunction fallacy – the tendency to assume that specific conditions are more probable than general ones.
  • Disposition effect – the tendency to sell assets that have increased in value but hold assets that have decreased in value.
  • Gambler’s fallacy – the tendency to think that future probabilities are altered by past events, when in reality they are unchanged. Results from an erroneous conceptualization of the law of large numbers. For example, “I’ve flipped heads with this coin five times consecutively, so the chance of tails coming out on the sixth flip is much greater than heads.” (A quick simulation after this list shows why that is wrong.)
  • Hawthorne effect – the tendency to perform or perceive differently when one knows they are being observed.
  • Hindsight bias – sometimes called the “I-knew-it-all-along” effect, the tendency to see past events as being predictable.
  • Illusory correlation – beliefs that inaccurately suppose a relationship between a certain type of action and an effect.
  • Last illusion – the belief that someone must know what is going on.
  • Neglect of prior base rates effect – the tendency to neglect known odds when reevaluating odds in light of weak evidence.
  • Observer-expectancy effect – when a researcher expects a given result and therefore unconsciously manipulates an experiment or misinterprets data in order to find it (see also subject-expectancy effect).
  • Optimism bias – the tendency to be over-optimistic about the outcome of planned actions.
  • Ostrich effect – ignoring an obvious (negative) situation.
  • Overconfidence effect – excessive confidence in one’s own answers to questions. For example, for certain types of questions, answers that people rate as “99% certain” turn out to be wrong 40% of the time.
  • Positive outcome bias – the tendency to overestimate the probability of good things happening to oneself (see also wishful thinking, optimism bias, and valence effect).
  • Pareidolia – a vague and random stimulus (often an image or sound) is perceived as significant, e.g., seeing images of animals or faces in clouds, the man in the moon, and hearing hidden messages on records played in reverse.
  • Primacy effect – the tendency to weigh initial events more than subsequent events.
  • Recency effect – the tendency to weigh recent events more than earlier events (see also peak-end rule).
  • Disregard of regression toward the mean – the tendency to expect extreme performance to continue.
  • Selection bias – a distortion of evidence or data that arises from the way that the data are collected.
  • Stereotyping – expecting a member of a group to have certain characteristics without having actual information about that individual.
  • Subadditivity effect – the tendency to judge probability of the whole to be less than the probabilities of the parts.
  • Subjective validation – perception that something is true if a subject’s belief demands it to be true; also the perception of connections between mere coincidences.
  • Survivorship bias – the tendency to concentrate on the people or things that “survived” some process while ignoring those that didn’t, or arguing that a strategy is effective given the winners, while ignoring the large number of losers.
  • Telescoping effect – the effect that recent events appear to have occurred more remotely and remote events appear to have occurred more recently.
  • Texas sharpshooter fallacy – the fallacy of selecting or adjusting a hypothesis after the data is collected, making it impossible to test the hypothesis fairly. Refers to the concept of firing shots at a barn door, drawing a circle around the best group, and declaring that to be the target.
  • Well-travelled road effect – the underestimation of the duration taken to traverse oft-travelled routes and the overestimation of the duration taken to traverse less familiar routes.
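As flagged in the gambler’s-fallacy entry above, a minimal simulation (setup entirely hypothetical) makes the point: condition on a streak of five heads and look at the next flip. The conditional frequency of tails stays at about one half; past flips do not shift future probabilities.

```python
import random

random.seed(42)  # reproducible illustration

def flip() -> str:
    """One fair coin flip: 'H' or 'T'."""
    return random.choice("HT")

# Collect the flip that follows each run of five consecutive heads.
sixth_after_streak = []
while len(sixth_after_streak) < 10_000:
    if all(flip() == "H" for _ in range(5)):   # a five-heads streak...
        sixth_after_streak.append(flip())      # ...then record the next flip

tails_rate = sixth_after_streak.count("T") / len(sixth_after_streak)
print(f"P(tails | five heads in a row) ≈ {tails_rate:.3f}")  # ≈ 0.500
```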