If by Rudyard Kipling

If you can keep your head when all about you   
    Are losing theirs and blaming it on you,   
If you can trust yourself when all men doubt you,
    But make allowance for their doubting too;   
If you can wait and not be tired by waiting,
    Or being lied about, don’t deal in lies,
Or being hated, don’t give way to hating,
    And yet don’t look too good, nor talk too wise:

If you can dream—and not make dreams your master;   
    If you can think—and not make thoughts your aim;   
If you can meet with Triumph and Disaster
    And treat those two impostors just the same;   
If you can bear to hear the truth you’ve spoken
    Twisted by knaves to make a trap for fools,
Or watch the things you gave your life to, broken,
    And stoop and build ’em up with worn-out tools:

If you can make one heap of all your winnings
    And risk it on one turn of pitch-and-toss,
And lose, and start again at your beginnings
    And never breathe a word about your loss;
If you can force your heart and nerve and sinew
    To serve your turn long after they are gone,   
And so hold on when there is nothing in you
    Except the Will which says to them: ‘Hold on!’

If you can talk with crowds and keep your virtue,   
    Or walk with Kings—nor lose the common touch,
If neither foes nor loving friends can hurt you,
    If all men count with you, but none too much;
If you can fill the unforgiving minute
    With sixty seconds’ worth of distance run,   
Yours is the Earth and everything that’s in it,   
    And—which is more—you’ll be a Man, my son!

Excerpts from Chapter VIII (Inner Demons: The Moralization Gap and The Myth of Pure Evil) from The Better Angels of Our Nature by Steven Pinker

I [have] argued that the modern denial of the dark side of human nature — the doctrine of the Noble Savage — was a reaction against the romantic militarism, hydraulic theories of aggression, and glorification of struggle and strife that had been popular in the late 19th and early 20th centuries. Scientists and scholars who question the modern doctrine have been accused of justifying violence and have been subjected to vilification, blood libel, and physical assault. The Noble Savage myth appears to be another instance of an antiviolence movement leaving a cultural legacy of propriety and taboo.

…[In his book, Evil, Roy] Baumeister was moved to study the commonsense understanding of evil when he noticed that the people who perpetrate destructive acts, from everyday peccadilloes to serial murders and genocides, never think they are doing anything wrong. How can there be so much evil in the world with so few evil people doing it?

…Baumeister and his collaborators Arlene Stillwell and Sara Wotman couldn’t very well get people to commit atrocities in the lab, but they reasoned that everyday life has its share of smaller hurts that they could put under the microscope. They asked people to describe one incident in which someone angered them, and one incident in which they angered someone. The order of the two questions was randomly flipped from one participant to the next, and they were separated by a busywork task so the participants wouldn’t answer them in quick succession. Most people get angry at least once a week, and nearly everyone gets angry at least once a month, so there was no shortage of material. Both perpetrators and victims recounted plenty of lies, broken promises, violated rules and obligations, betrayed secrets, unfair acts, and conflicts over money.

But that was all that the perpetrators and victims agreed on. The psychologists pored over the narratives and coded features such as the time span of the events, the culpability of each side, the perpetrator’s motive, and the aftermath of the harm. If one were to weave a composite out of their tallies, they might look something like this:

The Perpetrator’s Narrative: The story begins with the harmful act. At the time I had good reasons for doing it. Perhaps I was responding to an immediate provocation. Or I was just reacting to the situation in a way that any reasonable person would. I had a perfect right to do what I did, and it’s unfair to blame me for it. The harm was minor, and easily repaired, and I apologized. It’s time to get over it, put it behind us, let bygones be bygones.
The Victim’s Narrative: The story begins long before the harmful act, which was just the latest incident in a long history of mistreatment. The perpetrator’s actions were incoherent, senseless, incomprehensible. Either that or he was an abnormal sadist, motivated only by a desire to see me suffer, though I was completely innocent. The harm he did is grievous and irreparable, with effects that will last forever. None of us should ever forget it.

They can’t both be right — or more to the point, neither of them can be right all of the time, since the same participants provided a story in which they were the victim and a story in which they were the perpetrator. Something in human psychology distorts our interpretation and memory of harmful events.

This raises an obvious question. Does our inner perpetrator whitewash our crimes in a campaign to exonerate ourselves? Or does our inner victim nurse our grievances in a campaign to claim the world’s sympathy? Since the psychologists were not flies on the wall at the time of the actual incidents, they had no way of knowing whose retrospective accounts should be trusted.

In an ingenious follow-up, Stillwell and Baumeister controlled the event by writing an ambiguous story in which one college roommate offers to help another with some coursework but reneges for a number of reasons, which leads the student to receive a low grade for the course, change his or her major, and switch to another university. The participants (students themselves) simply had to read the story and then retell it as accurately as possible in the first person, half of them taking the perspective of the perpetrator and half the perspective of the victim. A third group was asked to retell the story in the third person; the details they provided or omitted serve as a baseline for ordinary distortions of human memory that are unaffected by self-serving biases. The psychologists coded the narratives for missing or embellished details that would make either the perpetrator or the victim look better.

The answer to the question “Who should we believe?” turned out to be: neither. Compared to the benchmark of the story itself, and to the recall of the disinterested third-person narrators, both victims and perpetrators distorted the stories to the same extent but in opposite directions, each omitting or embellishing details in a way that made the actions of their character look more reasonable and the other’s less reasonable. Remarkably, nothing was at stake in the exercise. Not only had the participants not taken part in the events, but they were not asked to sympathize with the character or to justify anyone’s behavior, just to read and remember the story from a first-person perspective. That was all it took to recruit their cognitive processes to the cause of self-serving propaganda.

…The Moralization Gap is a part of a larger phenomenon called self-serving biases. People try to look good. “Good” can mean effective, potent, desirable, and competent, or it can mean virtuous, honest, generous, and altruistic. The drive to present the self in a positive light was one of the major findings of 20th-century social psychology…Among the signature phenomena are cognitive dissonance, in which people change their evaluation of something they have been manipulated into doing to preserve the impression that they are in control of their actions, and the Lake Wobegon Effect (named after Garrison Keillor’s fictitious town in which all the children are above average), in which a majority of people rate themselves above average in every desirable talent or trait.

…The problem with trying to convey an exaggerated impression of kindness and skill is that other people are bound to develop the ability to see through it, setting in motion a psychological arms race between better liars and better lie detection…[Robert] Trivers ventured that natural selection may have favored a degree of self-deception…[Thus meaning,] We lie to ourselves so that we’re more believable when we lie to others. At the same time, an unconscious part of the mind registers the truth about our abilities so that we don’t get too far out of touch with reality. Trivers credits George Orwell with an earlier formulation of the idea: “The secret of rulership is to combine a belief in one’s own infallibility with a power to learn from past mistakes.”

Self-deception is an exotic theory, because it makes the paradoxical claim that something called “the self” can be both deceiver and deceived. It’s easy enough to show that people are liable to self-serving biases, like a butcher’s scale that has been miscalibrated in the butcher’s favor. But it’s not so easy to show that people are liable to self-deception, the psychological equivalent of the dual books kept by shady businesses in which a public ledger is made available to prying eyes and a private ledger with the correct information is used to run the business.

A pair of social psychologists, Piercarlo Valdesolo and David DeSteno, have devised an ingenious experiment that catches people in the act of true, dual-book self-deception. They asked the participants to cooperate with them in planning and evaluating a study in which half of them would get a pleasant and easy task, namely looking through photographs for ten minutes, and half would get a tedious and difficult one, namely solving math problems for forty-five minutes. They told the participants that they were being run in pairs, but that the experimenters had not yet settled on the best way to decide who got which task. So they allowed each participant to choose one of two methods to decide who would get the pleasant task and who would get the unpleasant one. The participants could just choose the easy task for themselves, or they could use a random number generator to decide who got which. Human selfishness being what it is, almost everyone kept the pleasant task for themselves. Later they were given an anonymous questionnaire to evaluate the experiment which unobtrusively slipped in a question about whether the participants thought that their decision had been fair. Human hypocrisy being what it is, most of them said it was. Then the experimenters described the selfish choice to another group of participants and asked them how fairly the selfish subject acted. Not surprisingly, they didn’t think it was fair at all. The difference between the way people judge other people’s behavior and the way they judge their own behavior is a classic instance of a self-serving bias.

But now comes the key question. Did the self-servers really, deep down, believe that they were acting fairly? Or did the conscious spin doctor in their brains just say that, while the unconscious reality-checker registered the truth? To find out, the psychologists tied up the conscious mind by forcing a group of participants to keep seven digits in memory while they evaluated the experiment, including the judgment about whether they (or others) had acted fairly. With the conscious mind distracted, the terrible truth came out: the participants judged themselves as harshly as they judged other people. This vindicates Trivers’s theory that the truth was in there all along.

…Though acknowledging a compromising truth about ourselves is among our most painful experiences…it is, at least in principle, possible. It may take ridicule, it may take argument, it may take time, it may take being distracted, but people have the means to recognize that they are not always in the right. Still, we shouldn’t deceive ourselves about self-deception. In the absence of these puncturings, the overwhelming tendency is for people to misjudge the harmful acts they have perpetrated or experienced.

The full book can be purchased here.

Excerpts from Chapter X (Arguments and Logical Fallacies) of The Skeptics’ Guide to the Universe by Steven Novella, et al.

Ad Ignorantiam

The argument from ignorance basically states that a specific belief is true because we don’t know that it isn’t true. Defenders of extra-sensory perception, for example, will often overemphasize how much we don’t know about the human brain. It’s possible, they argue, that the brain may be capable of transmitting signals at a distance. UFO proponents are probably the most frequent committers of this fallacy. Almost all UFO eyewitness evidence is ultimately an argument from ignorance – lights or objects sighted in the sky are unidentified and are therefore alien spacecraft.

Intelligent design is almost entirely based upon this fallacy. The core argument for intelligent design is that there are biological structures that have not been fully explained by evolution, therefore a powerful intelligent designer must have created them. In this context, arguments from ignorance are often referred to as ‘god of the gaps’ arguments, because God is offered as the explanation for any current gap in our knowledge.

Often the argument from ignorance is defended with the adage ‘Absence of evidence is not evidence of absence.’ While this sounds pithy, it’s not strictly true. Absence of evidence is, in fact, evidence of absence. It’s just not absolute proof of absence.

A more scientific way to look at this question is this: How predictive is the absence of evidence for the absence of the phenomenon in question? Well, that depends on how thoroughly we’ve looked and with what sensitivity. You can’t ever prove a negative, but the more you look for something without finding it, the less likely it is to exist. We haven’t found alien signals yet, but it’s a big universe out there and we have only surveyed a tiny slice. On the other hand, we have scoured Loch Ness for decades without any signs of Nessie, so I am not holding my breath that a giant creature is lurking beneath the waves.

In any case, in order to make a positive claim, positive evidence for that specific claim must be presented. The absence of another explanation only means that we don’t know – it doesn’t mean that we get to make up a specific explanation.

Closed-Minded

Perhaps the most routine ad hominem fallacy directed at skeptics is the claim that we are closed-minded (which functions exactly like accusing someone of lacking faith or lacking vision). Using the charge of closed-mindedness to dismiss valid criticism is a fallacy, but it’s also often an incorrect premise.

Skepticism isn’t closed-minded, and the opposite of skepticism is not open-mindedness (it’s gullibility). Scientists, critical thinkers, and skeptics can and should be completely open-minded, which means being open to the evidence and logic, whatever they say. If the evidence supports a view, then we will accept that view in proportion to the evidence.

But being open-minded also means being open to the possibility that a claim is wrong. It doesn’t mean assuming every claim is true or refusing to ever conclude that something is simply false. If the evidence leads to the conclusion that a claim is false or a phenomenon does not exist, then a truly open-minded person accepts that conclusion in proportion to the evidence. Open-mindedness works both ways.

Ironically, it’s usually those accusing their critics of being closed-minded who tend to be the most closed. They are closed to the possibility that they are wrong.

Tautology

A tautology is an argument that utilizes circular reasoning, which means that the conclusion is also its own premise. The structure of such arguments is A=B therefore A=B, although the premise and conclusion might be formulated differently so the tautology is not immediately apparent. For example, saying that therapeutic touch works because it manipulates the life force is a tautology because the definition of therapeutic touch is the alleged manipulation (without touching) of the life force.

This fallacy is often called ‘begging the question,’ meaning that the premise assumes the conclusion, or that an argument assumes an initial point. Perhaps the most common example is to argue that we know the Bible is the literal word of God because the Bible says so.

The Moving Goalpost

The moving goalpost is a method of denial that involves arbitrarily moving the criteria for ‘proof’ or acceptance out of range of whatever evidence currently exists. If new evidence comes to light meeting the prior criteria, the goalpost is pushed back further – keeping it out of range of this new evidence. Sometimes, impossible criteria are set up at the start – moving the goalpost impossibly out of range for the purpose of denying an undesirable conclusion.

Anti-vaxxers claimed that the MMR vaccine caused autism. When scientific studies shot down that claim, they moved on to thimerosal (a mercury-based preservative in some vaccines, but not in the MMR). They predicted that when thimerosal was removed from standard vaccines in the US in 2002, autism rates would plummet – they didn’t. So they claimed that mercury from other sources, like coal factories, made up for the drop in mercury exposure from vaccines. When the evidence did not support that claim, they moved on to aluminum as the cause (nope). And now they just reference vague ‘toxins.’

No amount of safety data on vaccines is ever enough. They just keep moving the goalpost.

The full book can be purchased here.

The authors’ website, including podcast, can also be found here.

Ouija Board: Demystifying the “Mystifying Oracle” by Benjamin Radford


Full article is here.