Self-Deception
Ninety-four percent of university professors think they are
better at their jobs than their colleagues.
Twenty-five percent of college students believe they are in
the top 1% in terms of their ability to get along with others.
Seventy percent of college students think they are above
average in leadership ability. Only two percent think they are below average.
--Thomas Gilovich, How We Know What Isn't So
Eighty-five percent of medical students think it is improper
for politicians to accept gifts from lobbyists. Only 46 percent think it's
improper for physicians to accept gifts from drug companies.
--Dr. Ashley Wazana, JAMA, Vol. 283, No. 3, January 19, 2000
A Princeton University research team asked people to
estimate how susceptible they and "the average person" were to a long
list of judgmental biases; the majority of people claimed to be less biased
than the majority of people.
A 2001 study of medical residents found that 84 percent
thought that their colleagues were influenced by gifts from pharmaceutical
companies, but only 16 percent thought that they were similarly influenced.
--Daniel Gilbert, "I'm OK; you're biased"
People tend to hold overly favorable views of their
abilities in many social and intellectual domains... This overestimation
occurs, in part, because people who are unskilled in these domains suffer a
dual burden: Not only do these people reach erroneous conclusions and make
unfortunate choices, but their incompetence robs them of the metacognitive
ability to realize it. --"Unskilled and Unaware of It: How Difficulties in
Recognizing One's Own Incompetence Lead to Inflated Self-Assessments," by
Justin Kruger and David Dunning, Department of Psychology, Cornell University,
Journal of Personality and Social Psychology, December 1999, Vol. 77, No. 6,
1121-1134.
'Our capacity for self-deception has no known limits.' --
Michael Novak
"We and They"
by Rudyard Kipling
Father, Mother, and Me,
Sister and Auntie say
All the people like us are We,
And everyone else is They.
And They live over the sea
While We live over the way,
But—would you believe it?—They look upon We
As only a sort of They!
We eat pork and beef
With cow-horn-handled knives.
They who gobble Their rice off a leaf
Are horrified out of Their lives;
While They who live up a tree,
And feast on grubs and clay,
(Isn't it scandalous?) look upon We
As a simply disgusting They!
We eat kitcheny food.
We have doors that latch.
They drink milk or blood
Under an open thatch.
We have doctors to fee.
They have wizards to pay.
And (impudent heathen!) They look upon We
As a quite impossible They!
All good people agree,
And all good people say,
All nice people, like us, are We
And everyone else is They:
But if you cross over the sea,
Instead of over the way,
You may end by (think of it!) looking on We
As only a sort of They!
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
“…people tend to have overly favorable and objectively indefensible views of their own abilities. For example, a full 94% of college professors say they do "above average" work, although it is impossible for nearly everyone to be above average. …
‘…people use themselves as the "model of excellence" in judgments of other people. For example, ask people what it takes to be an "effective leader," and they tend to describe someone who resembles themselves: Task-oriented people (e.g., they describe themselves as persistent, ambitious) tend to cite task-skills as important in leadership; People-oriented individuals (e.g., they describe themselves as friendly and tactful) tend to emphasize social skills in their definition of the effective leader.
‘The second phenomenon (using the self as model of excellence) produces the first phenomenon described above (too many people describe themselves as above average).
‘…using the self as the model of excellence in judging others leads to many disagreements in social judgment.
‘…people tend to define excellence so egocentrically.’
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
Researchers at Emory University monitored brain activity while asking staunch party members, from both left and right, to evaluate information that threatened their preferred candidate prior to the 2004 Presidential election. "We did not see any increased activation of the parts of the brain normally engaged during reasoning," said Drew Westen, Emory's director of clinical psychology. "Instead, a network of emotion circuits lit up... reaching biased conclusions by ignoring information that could not rationally be discounted. Significantly, activity spiked in circuits involved in reward, similar to what addicts experience when they get a fix," Westen explained.
Source: http://www.davidbrin.com/addiction.html
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
Oxytocin promotes human ethnocentrism
- Carsten K. W. De Dreu, Lindred L. Greer, Gerben A. Van Kleef, Shaul Shalvi, and Michel J. J. Handgraaf
- Edited by Douglas S. Massey, Princeton University, Princeton, NJ, and approved December 21, 2010 (received for review October 12, 2010)
Abstract
Human ethnocentrism—the tendency to view one's group as centrally important and superior to other groups—creates intergroup bias that fuels prejudice, xenophobia, and intergroup violence. Grounded in the idea that ethnocentrism also facilitates within-group trust, cooperation, and coordination, we conjecture that ethnocentrism may be modulated by brain oxytocin, a peptide shown to promote cooperation among in-group members. In double-blind, placebo-controlled designs, males self-administered oxytocin or placebo and privately performed computer-guided tasks to gauge different manifestations of ethnocentric in-group favoritism as well as out-group derogation. Experiments 1 and 2 used the Implicit Association Test to assess in-group favoritism and out-group derogation. Experiment 3 used the infrahumanization task to assess the extent to which humans ascribe secondary, uniquely human emotions to their in-group and to an out-group. Experiments 4 and 5 confronted participants with the option to save the life of a larger collective by sacrificing one individual, nominated as in-group or as out-group. Results show that oxytocin creates intergroup bias because oxytocin motivates in-group favoritism and, to a lesser extent, out-group derogation. These findings call into question the view of oxytocin as an indiscriminate “love drug” or “cuddle chemical” and suggest that oxytocin has a role in the emergence of intergroup conflict and violence.
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
from http://www.wired.com/wiredscience/2011/10/how-friends-ruin-memory-the-social-conformity-effect/
We tweak our stories so that they become better stories. We bend the facts so that the facts appeal to the group. Because we are social animals, our memory of the past is constantly being revised to fit social pressures.
The power of this phenomenon was demonstrated in a new Science paper by Micah Edelson, Tali Sharot, Raymond Dolan and Yadin Dudai. The neuroscientists were interested in how the opinions of other people can alter our personal memories, even over a relatively short period of time. The experiment itself was straightforward. A few dozen people watched an eyewitness-style documentary about a police arrest in groups of five. Three days later, the subjects returned to the lab and completed a memory test about the documentary. Four days after that, they were brought back once again and asked a variety of questions about the short movie while inside a brain scanner.
This time, though, the subjects were given a “lifeline”: they were shown the answers given by other people in their film-viewing group. Unbeknownst to the subjects, the lifeline was actually composed of false answers to the very questions that the subjects had previously answered correctly and confidently. Remarkably, this false feedback altered the responses of the participants, leading nearly 70 percent to conform to the group and give an incorrect answer. They had revised their stories in light of the social pressure.
The question, of course, is whether their memory of the film had actually undergone a change. (Previous studies have demonstrated that people will knowingly give a false answer just to conform to the group.) To find out, the researchers invited the subjects back to the lab one last time to take the memory test, telling them that the answers they had previously been given were not those of their fellow film watchers, but randomly generated by a computer. Some of the responses reverted to the original, but more than 40 percent remained erroneous, implying that the subjects were relying on false memories implanted by the earlier session. They had come to believe their own revisions.
Here’s where the fMRI data proved useful. By comparing the differences in brain activity between the persistent false memories and the temporary errors of “social compliance,” the scientists were able to detect the neural causes of the misremembering. The main trigger seemed to be a strong co-activation between two brain areas: the hippocampus and the amygdala. The hippocampus is known to play a role in long-term memory formation, while the amygdala is an emotional center in the brain. According to the scientists, the co-activation of these areas can sometimes result in the replacement of an accurate memory with a false one, provided the false memory has a social component. This suggests that the feedback of others has the ability to strongly shape our remembered experience. We are all performers, twisting our stories for strangers.
The scientists briefly speculate on why this effect might exist, given that it leads to such warped recollections of the past:
Altering memory in response to group influence may produce untoward effects. For example, social influence such as false propaganda can deleteriously affect individuals’ memory in political campaigns and commercial advertising and impede justice by influencing eyewitness testimony. However, memory conformity may also serve an adaptive purpose, because social learning is often more efficient and accurate than individual learning. For this reason, humans may be predisposed to trust the judgment of the group, even when it stands in opposition to their own original beliefs.
This research helps explain why a shared narrative can often lead to totally unreliable individual memories. We are so eager to conform to the collective, to fit our little lives into the arc of history, that we end up misleading ourselves. Consider an investigation of flashbulb memories from September 11, 2001. A few days after the tragic attacks, a team of psychologists led by William Hirst and Elizabeth Phelps began interviewing people about their personal experiences. In the years since, the researchers have tracked the steady decay of these personal stories. They’ve shown, for instance, that subjects have dramatically changed their recollection of how they first learned about the attacks. After one year, 37 percent of the details in their original story had changed. By 2004, that number was approaching 50 percent. The scientists have just begun analyzing their ten year follow-up data, but it will almost certainly show that the majority of details from that day are now inventions. Our 9/11 tales are almost certainly better – more entertaining, more dramatic, more reflective of that awful day – but those improvements have come at the expense of the truth. Stories make sense. Life usually doesn’t.
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
This next set of research-focused quotes is from a book:
‘The No Arsehole Rule: Building a Civilized Workplace and Surviving One That Isn’t’ by Robert I. Sutton (2010, Hachette Book Group, Inc., USA)
P.17 ‘A series of controlled experiments and field studies in organizations shows that when teams engage in conflict over ideas in an atmosphere of mutual respect, they develop better ideas and perform better. That is why Intel teaches employees how to fight, requiring all new hires to take classes in ’constructive confrontation.’ The same studies show, however, that when team members engage in personal conflict – when they fight out of spite and anger – creativity, performance, and job satisfaction plummet. In other words, when people act like a bunch of ‘#*!&?%’s, the whole group suffers.’
P.32 ‘European researchers have assembled the best evidence on ripple effects… a British study of more than 5000 employees found that while 25% had been victims of bullying in the past five years, nearly 50% had witnessed bullying incidents. Another British study of more than 700 public sector employees found that 73% of the witnesses to bullying incidents experienced increased stress and 44% worried about becoming targets themselves. A Norwegian study of more than 2000 employees from seven different occupational sectors found that 27% of employees claimed that bullying reduced their productivity, even though fewer than 10% reported being victims. The fear that bullies inject in the workplace appears to explain much of this additional damage; research in the United Kingdom found that more than one third of witnesses wanted to intervene to help victims but were afraid to do so. Bullies drive witnesses and bystanders out of their jobs, just as they do to “firsthand” victims. Research summarized by Charlotte Rayner in the United Kingdom suggests that about 25% of the victims and about 20% of witnesses quit their jobs [compared to a typical rate of about 5%]. So ‘#*!&?%’s don’t just injure the immediate victims; their wicked ways can poison everyone in the workplace, including their own careers and reputations.’
P.36 ‘The damage that ‘#*!&?%’s do to their organizations is seen in the cost of increased turnover, absenteeism, decreased commitment to work, and the distraction and impaired individual performance documented in studies of psychological abuse, bullying, and mobbing. The effects of ‘#*!&?%’s on turnover are obvious and well-documented.’
P.45, 110 ‘Researchers Charlotte Rayner and Loraleigh Keashly demonstrate how to produce estimates of such costs. They start by estimating (based on past studies in the United Kingdom) that 25% of bullying ‘targets’ and 20% of the ‘witnesses’ leave their jobs [compared to a typical rate of about 5%], and that the ‘average’ bullying rate in the UK is 15%. Rayner and Keashly calculate that in an organization of 1000 people, if 25% of the bullied leave, and the replacement cost is $20,000, then the annual cost is $750,000. They add that if there is an average of two witnesses for each victim, and 20% leave, that adds $1.2 million, for total replacement costs just shy of $2 million per year.’
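A minimal sketch of Rayner and Keashly's replacement-cost arithmetic as quoted above, in Python. The function and parameter names are mine, chosen for illustration; the default figures are the ones given in the passage.

# Sketch of the Rayner & Keashly turnover-cost estimate quoted above.
# Names and structure are illustrative, not from the original paper.
def bullying_turnover_cost(headcount=1000, bullying_rate=0.15,
                           victim_quit_rate=0.25, witnesses_per_victim=2,
                           witness_quit_rate=0.20, replacement_cost=20_000):
    """Estimate annual replacement cost from bullying-driven turnover."""
    victims = headcount * bullying_rate                # 150 targets
    witnesses = victims * witnesses_per_victim         # 300 witnesses
    victim_cost = victims * victim_quit_rate * replacement_cost      # $750,000
    witness_cost = witnesses * witness_quit_rate * replacement_cost  # $1,200,000
    return victim_cost + witness_cost                  # $1,950,000 total

print(f"${bullying_turnover_cost():,.0f} per year")    # prints $1,950,000 per year

The output matches the book's "just shy of $2 million per year"; note the estimate ignores quitting that would have happened anyway (the typical 5% base rate), so it is deliberately rough.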
P.69-72 ‘Leaders within most organizations not only get paid more than others, they also enjoy constant deference and fawning flattery. A huge body of research – hundreds of studies – shows that when people are put in positions of power, they start talking more, taking what they want for themselves, ignoring what other people say or want, ignoring how less powerful people react to the behavior, acting rudely, and generally treating any situation or person as a means for satisfying their own needs – and that being put in positions of power blinds them to the fact that they are acting like jerks. My Stanford colleague Deborah Gruenfeld has spent years studying and cataloging the effects of putting people in positions where they can lord power over underlings. The idea that power corrupts people and makes them act as if they were above rules meant ‘for the little people’ is widely accepted. But Gruenfeld shows that it is astounding how rapidly even tiny and trivial power advantages can change how people think and act – and usually for the worse. In one experiment, student groups of three discussed a long list of contentious social issues (things like abortion and pollution). One member was [randomly assigned] to the higher-power position of evaluating the recommendations made by the other two members. After 30 minutes, the experimenter brought in a plate of five cookies. The more ‘powerful’ students were more likely to take a second cookie, chew with their mouths open, and get crumbs on their faces and the table.’
P.72 ‘These actions are consistent with findings that powerful people construe others as a means to one's own ends while simultaneously giving themselves excessive credit for good things that happen to themselves and their organizations.’
P.72 ‘Biologists Robert Sapolsky and Lisa Share have followed a troop of wild baboons in Kenya since 1978. Sapolsky and Share call them “the Garbage Dump Troop” as they got much of their food from a garbage tip at a tourist lodge. But not every baboon was allowed to eat from the pit in the early 1980s: the aggressive, high-status males in the troop refused to allow lower-status males or any females to eat the garbage. Between 1983 and 1986, infected meat from the dump led to the deaths of 46% of the adult males in the troop. The biggest and meanest males died. As in other baboon troops studied, before they died, these high-ranking males routinely beat, bullied, and chased males of similar and lower status, and occasionally directed their aggression at females. But when the top-ranking males died, aggression by the new baboons dropped dramatically, with most aggression occurring between baboons of similar rank, little of it directed toward lower-status males, and none at all directed at females. The members also spent a larger percentage of the time grooming and sat closer together than in the past, and hormone samples indicated that the lower-status males experienced less stress than underlings in other baboon troops. Most interestingly, these effects persisted at least through the late 1990s, well after all the original ’kinder’ males had died. Not only that, when adolescent males who grew up in other troops joined the Garbage Dump Troop, they too engaged in less aggressive behavior than in other baboon troops. As Sapolsky put it, “We don’t understand the mechanism of transmission… but the jerky new guys are obviously learning: ‘we don’t do things like that around here.’” So, at least by baboon standards, the Garbage Dump Troop developed and enforced what I would call a no ‘#*!&?%’ rule.’
P.74 ‘Pay is a vivid sign of power differences, and a host of studies suggest that when the difference between the highest- and lowest-paid people in the company or team is reduced, a host of good things happen, including improved financial performance, better product quality, enhanced research productivity, and, in baseball teams, a better win-loss record.’
P.79 ‘A series of experiments and field studies done at the Kellogg Management School, Wharton Business School, and Stanford show that disruptive conflict is typically ‘emotional,’ ‘interpersonal,’ or ‘relationship based’ when people fight because they despise one another and, in some cases, have a history of trying to harm one another. Groups that fight in these ways are less effective at creative and routine tasks, and their people are constantly upset and demoralized. In contrast, these researchers find that conflict is constructive when people argue over ideas rather than personality or relationship issues, which they call ‘task’ or ‘intellectual’ conflict. Stanford’s Kathleen Eisenhardt and her colleagues, for example, found that constructive conflict results when top management teams “base discussion on current factual information” and “develop multiple alternatives to enrich the debate.”’
P.94 ‘“Emotion contagion” researcher Elaine Hatfield and her colleagues concluded, “In conversation, people tend to automatically and continually mimic and synchronize their movements with the facial expressions, voices, postures, movements, and instrumental behaviors of others.” … Experiments by Leigh Thompson and Caroline Anderson show that even when compassionate people join a group with a leader who is “high energy, aggressive, mean, the classic bully type,” they are “temporarily transformed into carbon copies of the alpha dogs.” … Contagion studies also show that when people ‘catch’ unpleasant expressions from others, like frowning or glaring, it makes them feel grumpier and angrier – even though they don’t realize, or even deny, that it is happening to them.’
P.102 ‘When status differences between people (and baboons) at the top, middle, and bottom of the pecking order are emphasized and magnified, it brings out the worst in everyone. Alpha males and females turn into selfish and insensitive jerks and subject their underlings to abuse; people at the bottom of the heap withdraw, suffer psychological damage, and perform at levels well below their actual abilities. Many organizations amplify these problems by constantly rating and ranking people, giving the spoils to a few stars, and treating the rest as second- and third-class citizens. The unfortunate result is that people who ought to be friends become enemies, cutthroat jerks who run wild as they scramble to push themselves up the ladder and push their rivals down. … Organizational life… is nearly always a blend of cooperation and competition, and those organizations that forbid extreme internal competition not only are more civilized but perform better too – despite social myths to the contrary.’
P.110 ‘Hundreds of studies by psychologists show that nearly all human beings travel through life with distorted, and often inflated, beliefs about how they treat, affect, and are seen by others. If you want to confront the hard facts about yourself rather than wallowing in your protective delusions, try contrasting what you believe about yourself with how others see you.’
P.141 ‘When groups work mostly through e-mail or conference calls (rather than face-to-face), they tend to fight more and trust each other less… members develop incomplete, and often overly negative, opinions of one another… My Stanford colleagues Pamela Hinds and Diane Bailey show that conflict – especially “disagreements characterised by anger and hostility” – is more likely and trust is lower when groups do work that is “mediated” by information technologies than in face-to-face meetings.’
P.156 ‘Studies by Stanford’s Lara Tiedens and her colleagues suggest it is often a ‘kiss-up, slap-down’ world, and strategic use of anger and blame can help push yourself up the hierarchy and knock others down. Tiedens demonstrated this in an experiment conducted during the US Senate debates about whether Bill Clinton should be impeached, in which she showed recent film clips of the then-president. In one clip, Clinton expressed anger about the Monica Lewinsky sex scandal, and in the other, he expressed sadness. Subjects who viewed an angry Clinton were more likely to say he should be allowed to remain in office and not be severely punished, and that ’the impeachment matter should be dropped’ – in short, he should be allowed to keep his power. Tiedens concludes from this experiment, and from a host of related studies, that although angry people are seen as “unlikable and cold,” strategic use of anger – outbursts, snarling expressions, staring straight ahead, and “strong hand gestures” like pointing and jabbing – “creates the impression that the expresser is competent.” More broadly, leadership research shows that subtle nasty moves like glaring and condescending comments, explicit moves like insults or put-downs, and even physical intimidation can be effective paths to power.’
P.158 ‘Harvard’s Teresa Amabile in her Journal of Experimental Social Psychology article “Brilliant but Cruel” … did controlled experiments with book reviews; some reviews were nasty and others were nice. Amabile found that negative and unkind people were seen as less likable but more intelligent, competent, and expert than those who expressed the same messages in kinder and gentler ways.’
P.197 ‘Consider a 2008 national survey by Zogby of 8,000 Americans: 37% of working Americans reported being bullied by others; less than 0.5% reported that they were or had been bullies. In other words, people reported being bullied at roughly 80 times the rate they admitted to bullying others. This finding echoes hundreds of studies showing that most people see themselves in a more positive light than the facts actually warrant and that when things go wrong people usually blame others and outside forces (rather than themselves).’
P.200 ‘In 2008, UC Berkeley’s Dacher Keltner, an authority on power dynamics, wrote in ‘Greater Good,’ “when researchers give people power in scientific experiments, they are more likely to touch others in potentially inappropriate ways, to flirt in a more direct fashion,” to “interrupt others, to speak out of turn, to fail to look at others when they are speaking,” and, “to tease friends and colleagues in a hostile and humiliating fashion.” Experiments by Keltner and his colleagues also show that when people get a little power, they feel less compassion when hearing others talk about painful experiences, such as the death of a friend. Another new twist comes from a 2010 study of 410 employees by Serena Chen and Nathanael Fast, called ‘When the boss feels inadequate’: their feelings of insecurity and incompetence magnify the chances that superiors will bully subordinates. Beware that no matter how wonderful and caring you may be or have been in the past, becoming top dog can transform you into an overbearing, insensitive, and selfish creep. According to Keltner, research on human groups shows they tend to elevate the most unselfish and cooperative members to leadership positions. But Keltner concludes that once these good-hearted people get the power, they often turn nasty and selfish – a problem apparently magnified when they feel insecure or incompetent.’
P.211-212 ‘One of my favorite songs is Johnny Mercer’s 1940s classic ‘Accentuate the Positive.’ Everything I know about human behavior confirms the truth of Mercer’s title. Accentuating the positive and, as he put it, latching “on to the affirmative” are hallmarks of healthy and successful leaders, followers, and organizational cultures. Mercer’s advice to “eliminate the negative” deserves particular attention. Writing this book and being the ‘#*!&?%’ guy taught me many lessons. But one stands above the rest: Eliminating the negative is the first and most important step to take in your work and the rest of your life. Recall that Andrew Miner and his colleagues found that negative interactions in the workplace packed a wallop on employees’ moods five times stronger than positive interactions. This isn’t a fluke. In their 2001 article in the Review of General Psychology, “Bad Is Stronger Than Good,” psychologist Roy Baumeister and his colleagues report that the 5-to-1 rule has been replicated by numerous researchers. Long-term studies of close personal relationships, for example, show that “in order for a relationship to succeed, positive and good interactions must outnumber the negative and bad ones by at least 5 to 1. If the ratio falls below that, the relationship is likely to fail and break up.” This research reinforces my core message that if your workday entails relentless contact with coworkers, bosses, or customers who treat you like dirt, and you can’t get rid of them, then run for the exit as fast as you possibly can. This finding that “bad is stronger than good” is bolstered by a 2006 Research in Organizational Behavior article on group members who are “bad apples.” Will Felps and his colleagues found that when a group had just one bad apple – that person I would call a chronic deadbeat, downer, or ‘#*!&?%’ – the ensuing distraction, direct damage, and contagious bad behavior injected by these destructive characters dragged down performance by 30-40%.’

Robert Sutton is Professor of Management Science and Engineering and a Professor of Organizational Behavior at Stanford. Sutton has been teaching classes on the psychology of business and management at Stanford since 1983. Especially dear to his heart is the Hasso Plattner Institute of Design, which everyone calls “the Stanford d.school.” He is a co-founder of this multi-disciplinary program, which teaches, practices, and spreads “design thinking.”
Sutton studies innovation, leaders and bosses, evidence-based management, the links between knowledge and organizational action, and workplace civility. He has published over 100 articles and chapters on these topics in peer-reviewed journals and the popular press. Sutton’s books include Weird Ideas That Work: 11 ½ Practices for Promoting, Managing, and Sustaining Innovation, The Knowing-Doing Gap: How Smart Firms Turn Knowledge into Action (with Jeffrey Pfeffer), and Hard Facts, Dangerous Half-Truths, and Total Nonsense: Profiting from Evidence-Based Management (also with Jeffrey Pfeffer). His last book The No ‘#*!&?%’ Rule: Building a Civilized Workplace and Surviving One That Isn’t and his current book Good Boss, Bad Boss: How to Be the Best…. and Survive the Worst are both New York Times and Wall Street Journal bestsellers. His current writing project (with Hayagreeva Rao) is (tentatively) called Spreading Excellence: The Art and Science of Spreading Success.
Professor Sutton’s honors include the award for the best paper published in the Academy of Management Journal in 1989, the Eugene L. Grant Award for Excellence in Teaching, selection by Business 2.0 as a leading “management guru” in 2002, and the award for the best article published in the Academy of Management Review in 2005. Hard Facts, Dangerous Half-Truths, and Total Nonsense was selected as the best business book of 2006 by the Toronto Globe and Mail. Sutton was named as one of 10 “B-School All-Stars” by BusinessWeek in 2007, which they described as “professors who are influencing contemporary business thinking far beyond academia.” Sutton is a Fellow at IDEO and a member of the Institute for the Future’s board of directors.
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
The Poison of Power
Jonathan Becher, FORBES BUSINESS 1/05/2011
Researchers have shown that power – even artificial power – causes people to change their behavior. Stanford Professor Deborah Gruenfeld describes:
When people feel powerful, they stop trying to ‘control themselves.’ What we think of as ‘power plays’ aren’t calculated and Machiavellian – they happen at the subconscious level. [When we get power,] many of those internal regulators that hold most of us back from bold or bad behavior diminish or disappear.
While the evidence might be overwhelming, I think that this is a perfect example of the old maxim that correlation does not imply causation. Power doesn’t poison people; it lowers inhibitions. People in power who act poorly probably always had the tendency to do so but suppressed it when they felt they had to conform to social norms. Once in power, they do what they want rather than what they are expected to do.
http://www.forbes.com/sites/sap/2011/01/05/the-poison-of-power/
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
Violence and Enjoyment in Rivalry Contests
Examining Perceived Violence in and Enjoyment of Televised Rivalry Sports Contests
Abstract
The scholarly attention paid to the ways in which viewers perceive sports action as violent, how those perceptions may differ across games, and how those perceptions might impact enjoyment is limited. The current project addresses this oversight through an investigation of intercollegiate (American) football contests between two heated rivals. A total of 554 individuals viewed one of four televised contests featuring the same hometown team: two against heated rivals, two against nonrivals. Results reveal that viewers clearly saw rivalry games as more violent than nonrivalry games. Moreover, games won by the home team were seen as more violent than those lost. Also, those perceiving high levels of violence reported greater enjoyment than those who perceived low levels of violence in all games. Finally, hierarchical regression analyses revealed that perceived violence contributes differently to the enjoyment of games won than games lost.
...Bryant, Comisky, and Zillmann (1981) selected plays from NFL games and coded them according to the level of violence (i.e., low, intermediate, or high) displayed. Males and females then rated each play for enjoyment. A consistent pattern emerged: Enjoyment increased with the degree of violence. Although the trend occurred across the entire sample, the relationship was much stronger for males than females. In a related study, DeNeui and Sachau (1996) examined spectators’ reported enjoyment of 16 amateur games. The researchers then sought to predict spectator enjoyment using a variety of game statistics. They found that only the indicator of aggressive or violent play—the number of penalty minutes assessed, not how competitive the game was or even which team won—predicted reported enjoyment. Most recently, Raney and Depalma (2006) found in an experimental setting that viewers enjoyed clips from a violent sport (i.e., boxing) more so than viewers of clips from a nonviolent sport (e.g., baseball); this trend was particularly pronounced for males and self-reported sports fans.
...One study measured the effect of commentary on audience perceptions and enjoyment of violent hockey action (Comisky, Bryant, & Zillmann, 1977). Participants viewed either normal or unusually rough hockey play. Additionally, participants viewed the action either with or without accompanying commentary, indicating various perceptions about the play and overall enjoyment. The findings demonstrate the power of sports commentary: Normal play accompanied by conflict-centered commentary was perceived to be more intense and violent than identical action without commentary, while rough play without commentary was perceived as less rough. In a similar study, Sullivan (1991) showed participants a version of a televised basketball game that contained either (1) no commentary, (2) neutral commentary, or (3) commentary emphasizing the rough play by one of the teams. Viewers in the third condition perceived the play to be more aggressive than their counterparts in the other two conditions. Furthermore, the audience of the rough-commentary versions reported the highest level of enjoyment; this was particularly the case with the male viewers in the group. Additionally, Bryant, Brown, Comisky, and Zillmann (1982) demonstrated how even non-violent sports can be made to appear more aggressive through commentary. The researchers created three versions of a tennis match varying only in the reported relationship between the two players: they were best friends, they were bitter enemies, or no relationship was indicated. Participants viewing the players-as-enemies version reported more enjoyment, excitement, involvement, and interest than participants viewing the other two versions. The participants also perceived the players themselves to be more hostile, tense, and competitive in the players-as-enemies condition.
...In the U.S., college football rivalries are among the most heralded. From Alabama-Auburn to Washington-Washington State and everywhere in between (e.g., USC-UCLA, Ohio State-Michigan, Texas-Oklahoma, Notre Dame-everyone), rivalry games elicit tremendous media attention, exorbitant ticket prices, and rabid fan reactions. In fact, each fall, ESPN hypes and highlights the importance of “Rivalry Saturday” with week-long analyses and special programming on game day.
http://citation.allacademic.com/meta/p_mla_apa_research_citation/1/7/2/6/7/pages172675/p172675-3.php
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
https://www.russellsage.org/sites/all/files/Fiske_Chap1.pdf
THIS PIECE IS IN PUBLIC DOMAIN:
P16 Although we recognize contempt when the situation suggests it, contempt is a neglected emotion, in that people report expressing and encountering contempt the least of any emotions. Contempt is hardly polite. Its cousins—scorn, disdain, and disrespect—are even more rarely addressed in the social and behavioral sciences.
P17 Scorn Scars the Scornful
Powerful individuals frequently fail to be compassionate in dealing with others. For example, power increases exploitation, teasing, stereotyping, and even sexual harassment. Power-holders treat others instrumentally. In a study by Deborah Gruenfeld and her colleagues, some adults recalled a time when they had had power, and other adults, in a control condition, recalled a time when they had gone to the grocery store (not at all an experience of power). Writing an essay about a time when they had power reliably primed participants to act powerful: they were sensitized to self-interest, regardless of interpersonal concerns. Most people, for example, avoid a jerk (in this study, someone who had neglected to help a handicapped person); control participants avoided the jerk even when they would have made money from performing as a team. When they could make money off their monumentally unkind partner, in contrast, the power-primed participants were willing to tolerate the jerk who could benefit them. When there was no money in it for them, the power-primed participants rejected the jerk, as did the baseline control participants. The power-holders seemed perfectly willing to approach the jerk when it suited their own needs but rejected the person otherwise. Using someone this way is an expression of scorn.

Scores of studies show that power-holders act with self-serving scorn for the needs of others; in experimental games conducted by David De Cremer and Eric van Dijk, for instance, power-holders took more for themselves. Whereas most players split profits about equally, people designated as “leaders” are more likely to appropriate the lion’s share. Leaders do the power grab especially when they feel legitimately entitled to lead, such as when leadership has been determined by a selection test—even if their position as leader actually was randomly determined by the experimenter (see figure 1.5).

Scornful power-holders are not only selfish but willfully clueless. Priming a powerful person has silly but also scary effects. Try this: Draw an upper-case letter “e” on your forehead. (If you are in public, do it in your imagination.) Which way do the “tines” of the e-fork face? Drawing the “e” so that it reads correctly from inside your head—with the tines pointing to your own right—correlates with failing to take the perspective of other people; drawing the “e” so that the tines point to your own left correlates with taking the perspective of other people outside your head. This demonstration and others come from the social psychology laboratory of Adam Galinsky and his colleagues. Their scarier study shows that priming power makes us worse at reading other people’s facial expressions. This attitude of “I couldn’t care less about you” shows scorn. Other power researchers have replicated Galinsky’s priming results with people given actual power over other people.

In our experiments, Stephanie Goodwin, our colleagues, and I recruited students who were expecting to work with other students from various majors. In this scenario, supposedly based on a Harvard Management Aptitude Scale but actually by random assignment, some students got to be “the boss” and others had to be “the assistant” on a joint task that included significant prize money as an incentive. In a preliminary management exercise, they were to judge a series of other students described only by college major and personality traits.
(On campus, majors serve as shared stereotypes; consider the common images of engineers versus artists.) In their ratings, “bosses,” as predicted, used their personal stereotypes about college majors more than “assistants” did. That is, they made superficial judgments. Conversely, assistants used the individually revealing personality traits more than bosses did. As in the other studies, then, these power-holders were less sensitive to others as unique individuals—yet another form of scorn. To be sure, power-holders sometimes can take responsibility for others, under the right circumstances. Power and status are always accompanied, however, by the risk of developing a scornful insensitivity to subordinates as power-holders control them, derogate them, fail to individuate them, and undermine their agency, all the while being self-serving and instrumental. Recent studies show that people induced to feel powerful develop deficits specific to understanding others’ emotions and thoughts. They fail to identify others’ emotional expressions, to consider others’ perspectives, and to appreciate others’ knowledge. Such disregard for people raises the disturbing possibility that power inhibits our ability to see others as fully human entities possessing minds; that is, power may allow scorn.

Consistent with this suggestion, people often view social out-groups as less than human, a scorn-filled judgment if ever there was one. The emotional logic runs like this: we are more human than they are because we have a more complex inner life. As Jacques-Philippe Leyens and his collaborators have shown, we more readily see the in-group as experiencing subtle, complex, uniquely human emotions such as love, hope, grief, and resentment. Out-group members—people unlike us—seem to experience only the same simple, primitive emotions that animals do (such as happiness, fear, anger, or sadness). Viewing “them” as feeling momentarily sad but not deeply grieving over the loss of family members, for example, makes it easier to avoid worrying about their misfortunes. This infrahumanization dynamic dampened empathy in the Hurricane Katrina debacle. Generally, white and black observers reported the other-race victims as experiencing less of the uniquely human emotions (anguish, mourning, remorse). To the extent that observers did perceive those emotions, however, they were more likely to offer help.

Certain forms of social power reduce our ability to understand others’ inner experiences (thoughts and feelings), thereby reducing our capacity for empathy and resulting in scorn directed downward. The social neuroscientist Lasana Harris and I took these ideas into the brain-scanning laboratory. Based on our lab’s previous work, much of it with Amy Cuddy, we predicted that the least sympathetic, lowest-of-the-low out-groups would be homeless people and drug addicts. Look again at the BIAS Map (table 1.1). Homeless people are outliers, located the farthest from the center of society. In our survey data, they were so far away from all other social groups, along both negative dimensions, that, statistically speaking, they differed from all other humans in people’s minds. Brain scans confirmed precisely this pattern. How might brain patterns display scorn? Human brains have adapted beautifully to social life. The brain’s social cognition network reliably activates (comes on line) when we encounter other people, especially when we are thinking about their thoughts and feelings. In particular, a swath of cortex curves vertically just behind the forehead (about where mystics locate the third eye, but I could be booted from social neuroscience for saying so). The medial prefrontal cortex (mPFC) lights up when we encounter people; this is our young field’s most reliable finding. As a social psychologist, I love this result, which points to our neural attunement to other people.
P21 According to an idea popularized in the 1970s, type A personalities, known for their driven styles, are at risk for heart disease; more recent work, however, identifies the hostility of this personality type as the main culprit. Health psychologists now blame a specific kind of hostility that is dominance-oriented. This may be an extreme version of scorn. As Paul Ekman and his colleagues note, facial expressions of contempt (but not anger) relate to hostility in heart patients, a finding that supports a more focused hostility as the risk factor. If borne out, this would fit the idea that contempt or scorn is worse for your health than sheer anger itself.
Moral inferiority. … Importance or centrality of a trait has been identified by several authors (Tesser, 1991; Beach & Tesser, 2000; Major et al., 1991) as one of the necessary preconditions for upward comparison to represent threat, and morality seems to be central to most people’s self-concept. Park, Ybarra, and Stanik (2006) suggest that people’s self-enhancement tendencies fall along two dimensions, a sociomoral one (e.g., honesty, kindness, and helpfulness) and a task-ability one (e.g., intelligence, creativity, and being knowledgeable), and that the former seems to loom larger for most people. Paulhus & John (1998) similarly discuss the prevalence of moralistic self-serving biases over egoistic ones more centered on competence. Allison, Messick and Goethals (1989) showed that morality seems to have a primary place in maintaining and enhancing one’s self-image (the “Muhammad Ali effect”). These data converge to suggest the centrality of morality in people’s self-concept (with some interindividual variability, see Aquino & Reed, 2002), making upward moral social comparison especially likely to lead to self-threat and to trigger defense mechanisms.
P61 In cases where potentially greater virtue is experienced as a threat to the self, individuals may take one of three main courses of action to defuse this threat. Alicke (2000) describes how most social comparison theories assume that people deal defensively with unfavorable upward comparisons by distorting their meaning, derogating the target, or avoiding them. In the moral domain, this triad will take the form of suspicion (denying moral meaning), trivialization (derogating the target on the potency dimension), or resentment (avoiding association with the threatening other). We describe each in turn.
Ybarra (2002) reviewed how the social psychological literature consistently suggests that whereas we see negative behavior as reflective of people’s true personality, we are quick to ascribe agreeable actions to social demands. This may play a defensive role, because morality is such a central and desirable trait in most people’s self-concept that they should be especially sensitive to threats (see above). This is similar to other kinds of defensive attributions typically observed in social comparison research in the case of unfavorable upward comparison (Alicke, 2000). Whereas the typical defensive attribution in ability comparison might be to ascribe an unfair advantage to the superior other, in the case of moral comparison we predict that it takes the form of suspicion, ascribing hypocrisy and ulterior motives instead of recognizing virtue as the true cause of behavior. Intentions are the crux of the argument in moral comparison, and one does not need to ignore the behavior (which may be difficult) as long as one can cast doubts on the purity of the intentions (much easier). Thus I might freely admit that my neighbor spends her weekends helping children with disabilities, but discount her volunteering as self-righteous posturing, as resulting from pressure from an overbearing church, or as a craving for human contact in an otherwise lonely life, rather than ascribing it to her greater human kindness and decency.

Trivialization: Do-gooder derogation. When the virtuous nature of the behavior is too self-evident and cannot be easily brushed off, and the direct route is therefore blocked, a second approach is to remove the threat indirectly by putting down moral others on other traits implying a lack of competence, trivializing their moral gesture, patronizing would-be saints as well-intentioned but naïve fools, weak, unintelligent, with poor common sense and little awareness of the realities of the real world. With this infantilizing and emasculating move, potential threats are rendered into deluded idealists. This is apparent in common derogatory monikers like “do-gooder” and “goody-two-shoes.” It is also reflected in the work on the “might over morality” hypothesis (Liebrand, Jansen, Rijken, & Suhre, 1986), showing that individuals who defect in social dilemmas tend to see cooperators as moral but weak, recasting the situation as one that requires willpower rather than ethical clarity. Mainstream reactions to vegetarians typically exhibit this pattern, and the puzzling mild hostility that they report experiencing (Adams, 2003) can best be understood as defensiveness against an irksome moral claim. Surveys of omnivores reveal that they indeed will readily put down vegetarians, though they do so indirectly (see Monin & Minson, 2007), seeing them as good people (as reflected by higher ratings on Osgood, Suci and Tannenbaum’s 1957 evaluation dimension), while defusing their threat by calling them weak (as reflected by significantly lower ratings on Osgood et al.’s potency dimension).
P63 Resentment: Disliking and distancing. When the behavior is clearly moral and it is hard to call into question the fortitude of the moral other (as in cases of moral rebellion where others take a principled stance against a problematic situation), the previous two routes to self-protection are unavailable. One last resort may be to distance oneself from the threatening other, and to profess little desire to affiliate with him or her (as predicted by the SEM model, Tesser, 1991). This should be reflected in low rankings on sociometric choices, and low rating on liking scales (as in other types of social comparison jealousy, see Salovey, 1991), or other forms of distancing (such as physically moving away from the threatening other, e.g., Pleban & Tesser, 1981). We may realize that it’s difficult (without appearing petty) to question the other’s morality and potency, but still entitled to our preferences (De gustibus non est disputandum), we can decide that we just don’t like the person. This can take the form of outright hostility, rejection, or glee at the superior other’s fall (Schadenfreude, see Smith et al., 1996). In our laboratory (Monin, Sawyer & Marquez, 2007), we have shown that liking for moral rebels depends on the perceiver’s own involvement in the situation. Participants who just saw a confederate refuse to perform a decision task because of its racist undertones liked that rebel, respected him more, and saw him as more moral than a compliant confederate. However, participants randomly assigned to complete the racist task first (which nearly all of them did) actually liked the rebel less than a compliant other. For the latter “actor” participants, the rebel’s stance was an indictment of their own choice, whereas the former “observer” participants had the luxury of appreciating the moral exemplarity of the rebel’s refusal. The fact that this rejection of the rebel involves social comparison was suggested in another study showing that the actor-observer difference was strongest for individuals who scored high on Gibbons & Buunk’s ability subscale of the social comparison orientation scale (INCOM, 1999). The moral nature of the process was reinforced by the finding that the same difference was greatest for individuals who signaled that morality was important to their self-concept in Aquino & Reed’s moral internalization subscale (2002). As with do-gooders, anticipated moral reproach did play a role, as suggested in yet another study showing that the fear of being rejected by the moral rebel mediated the effect of role condition (actor vs. observer) on embracing the rebel. Again we interpret this effect in line with moral social comparison: When faced with a moral other, participants admired him as long as the moral other did not make them look bad, or had the opportunity to look down upon their morality. But as soon as moral others could cast doubt on their own morality, participants denied moral credit, put down others on competence-related dimensions, or simply expressed disliking of the comparison other.
Conclusion
We have come a long way since Festinger’s depiction of social comparison as the selection of standards to understand one’s place in the world. Over the years, the 1954 Human Relations paper sparked vast amounts of research located at the core of the social psychological enterprise to understand the human experience in a social world. We hope that the present paper will make a modest contribution to this literature, by sketching possible specificities of social comparison in the moral domain, and by starting to document the way people react to threatening moral standards. Nadler & Fischer (1986) suggest a possible disjunction in upward social comparison, where negative affective consequences can apparently be accompanied by positive behavioral ones – where the more threatening the other, the unhappier we are, but the better we strive to be. By identifying pettier reactions to moral exemplarity, we are not trying to paint a dark picture of the human soul; rather, we hope in the long run to develop strategies that will help people to stop gnashing their teeth at saints and to be instead inspired to work on their own halos.
Source: http://psych.stanford.edu/~monin/papers/MoninIRSP2007.pdf
From the same paper: ‘Upward social comparison can often be unpleasant (Alicke, 2000; but see Collins, 1996). We argue here that the sting of unflattering comparisons is greatest in the moral domain, because it can lead to three types of experiences that are especially aversive to individuals: Moral inferiority, moral confusion, and/or anticipated moral reproach.’
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
Article|McKinsey Quarterly
Three steps to building a better top team
February 2011 | by Michiel Kruyt, Judy Malan, and Rachel Tuffield
In our work with top teams at more than 100 leading multinational companies, including surveys with 600 senior executives at 30 of them… [achieving] effective team dynamics... is a frequent problem: among the top teams we studied, members reported that only about 30 percent of their time was spent in “productive collaboration”—a figure that dropped even more when teams dealt with high-stakes topics where members had differing, entrenched interests.
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
From scorn to envy
http://www.apa.org/monitor/2010/10/compassion.aspx
Award-winning scientist Susan Fiske explores our lack of compassion for those we deem different from ourselves.
By Michael Price, Monitor Staff, October 2010, Vol 41, No. 9, Print version: page 36
When we meet someone for the first time, we automatically make two judgments: whether they’re a friend and whether they have power. Princeton University social psychologist Susan Fiske, PhD, calls the first variable warmth and the second one competence.
Plot those concepts on a 2-by-2 square and you get the recipe for scorn and envy, said Fiske, who received this year’s APA Distinguished Scientific Contribution Award at the Annual Convention.
Envy, which results from seeing powerful people in a higher social sphere than yours, is dangerous in that it turns people resentful, sometimes even violently so, said Fiske. Scorn results from seeing someone who is powerless and below you socially, and it is just as dangerous, as it implies the scorned person is not even worth your attention. Even though both feelings are automatic and inevitable to some degree, Fiske said, they corrupt our ability to be compassionate.
The chart below shows how our automatic judgments of people are reflected in our emotions:

                  Low competence                  High competence
High warmth       pity (the elderly, disabled)    pride (our in-group)
Low warmth        disgust (the homeless, poor)    envy (the powerful who aren't our friends)

We pity those whom we feel warmly about but who aren’t powerful, such as elderly people or those with disabilities, Fiske said. We take pride in those with whom we share similar life circumstances and those who are competent — what we call our in-group. Toward those for whom we feel neither warmth nor confidence in their power, such as the homeless and poor, we feel disgust. And toward the ultra-powerful who aren’t our friends, we feel envy. Taken together, these concepts form the “stereotype content model,” which underlies much of Fiske’s work.
Americans should better understand this model since they often ignore or disregard harmful stereotypes, Fiske said. “We pretend that everyone is equal, so we don’t acknowledge the serious problems all around us.”
One of these problems, Fiske said, is that we seem to value people based on their social status.
To test whether that holds true in an experimental setting, Fiske and colleagues turned to the “trolley problem,” the philosophical dilemma that asks people whether they would switch a runaway trolley onto a different track, killing a single rail worker in order to save the lives of five rail workers in the trolley’s current path.
It turns out that most people would sacrifice that lone rail worker to save five others. But Fiske wanted to know what would happen if she threw social status into the mix. “We thought, let’s put different kinds of people on the tracks,” she said. In her experiment, published online in February in Social Cognitive and Affective Neuroscience, she found that most people were willing to sacrifice a member of their own in-group to save five homeless or poor people. But when researchers looked at fMRI images taken while people made their decisions, Fiske found higher-than-baseline activation in the medial prefrontal cortex and the occipital frontal cortex — regions associated with negotiating complex tradeoffs — indicating that people had a hard time making that decision.
In a follow-up study, she found that when participants in an fMRI machine looked at images of identifiably poor, homeless people, they had lower activation in their medial prefrontal cortex than when they looked at people with their same economic status.
Fiske suspects that this hesitation to value the lives of those we scorn comes from not fully recognizing members of scorned groups as fellow human beings.
But what if you could somehow reinforce those people’s humanity? In preliminary studies, Fiske has been able to boost participants’ empathy by priming them to relate to historically scorned people in pictures. For example, asking participants to consider whether the person in the picture would like a certain type of vegetable — and therefore asking them to step into the pictured person’s mind — erases the disparity in medial prefrontal cortex activation in fMRI readings.
Unfortunately, said Fiske, empathy seems only to move people up from the “disgust” category to the “pity” category. It doesn’t help participants to see homeless or poor people as any more competent.
Still, it helps move people away from scorn, which is a good thing, she said.
Q: Do we all have this impulse to compare? Is it more prevalent among certain groups?
Fiske: As individuals, we compare because of status ambiguity; it's only natural that we want to know where we stand, and comparison to our immediate neighbors in the status hierarchy provides the best information. Men compare more than women do, except on appearance, where women match men. People feeling uncertain and out-of-control compare more than more settled people do.
Q: You say comparison can be useful -- it's informative, it reduces uncertainty, it's protective. But at what point does it become harmful?
Fiske: Comparison becomes dangerous when we forget that we are all in this together. In the lab, we have observed that Schadenfreude (malicious glee) correlates with harming the envied others. But we can control this. If you compare upward to some prizewinner in your field, you can interpret that as a disparity and feel bad, or you can interpret it as "good for our tribe."
Fiske: Envied groups include high-status people of any kind: rich people and outside entrepreneurs, all over the world. We admit they are competent, but we view them as not on our side, so they seem cold, exploitative, and untrustworthy. In the U.S. at present, Asian and Jewish people are often seen this way, as are female professionals.
We haven't talked about the scorned groups so much because it doesn't bother people so much when they scorn someone lower. Scorn is simply not paying attention and wishing the other away. Groups are scorned especially if they are low-status and not-us, such as homeless people and drug addicts. Poor people (regardless of ethnicity) and Latino immigrants are also seen this way. Scorn dehumanizes them and makes us neglect them.
Table 1.7, taken from p. 23 of Envy Up, Scorn Down, shows the placement of different groups on a Behaviors from Intergroup Affect and Stereotypes (BIAS) map:
[TABLE GOES HERE]
Our studies range from cultural comparisons across a couple dozen countries, to surveys of adults, to lab experiments with undergraduates, including neuro-imaging studies. One of our most depressing studies shows dehumanizing scorn: when people see pictures of homeless people and addicts, the part of the brain that normally activates to pictures of people (even outgroups) simply fails to come online. And people say they are not warm and familiar, not competent and autonomous, and that they would never interact with them. That's the bad news. The good news is that this brain region comes back online with what I consider the soup-kitchen manipulation: when you ask people to imagine what vegetables the homeless guy might eat.
My favorite envy study shows that when people watch investment bankers encounter everyday bad events (sitting in gum, getting splashed by a taxi), they smile. And in our other studies, such Schadenfreude activates reward areas of the brain, which, as I mentioned, predicts harming the outgroup. Red Sox fans do this to Yankees fans when the other team loses (and vice versa). But we can short-circuit envy, too, by getting people to empathize.
http://homepage.psy.utexas.edu/homepage/group/busslab/pdffiles/evolution%20of%20envy.pdf
Researchers have long noted that people reserve their feelings of enviousness for those who are similar to themselves, save for their advantage in the desired domain, and for advantages that are in self-relevant domains (Parrott, 1991; Salovey & Rodin, 1984; Salovey & Rothman, 1991; Schaubroeck & Lam, 2004; Tesser, 1991). That is, a core part of one's self-worth must be linked to doing well in the domain of comparison.
Researchers have demonstrated that women place a greater premium than do men on their potential mates' financial prospects and economic resources, whereas men's mate preferences reflect a preference for those cues most reliably correlated with fertility and reproductive value, namely a woman's youth and attractiveness (Buss, 1989b, 1994; Kenrick & Keefe, 1992; Singh, 1993; Symons, 1979). Applying this evolutionary logic to the exploration of envy predicts, for instance, that women should experience greater envy in response to same-sex peers being more attractive than themselves, whereas a rival's having access to a greater amount of financial resources should be more likely to elicit envy in men. Existing empirical research supports these predictions (Hill & Buss, 2006).
Researchers interested in the behaviors that envy motivates have noted that envy seems to motivate at least three categories of behavior: submission, ambition, and destruction. Submissive reactions to another's superiority may act to prevent one from being harmed by one's competitors (Allan & Gilbert, 2002; Buss, 1999; Campos, Barrett, Lamb, Goldsmith, & Stenberg, 1983). In some contexts, the envy-eliciting event may simply provide the motivation one needs to get to work to achieve the same outcomes for oneself (i.e., "white" or "competitive" envy; Frank & Sunstein, 2001; Matt, 2001; McAdams, 1992; Palaver, 2004). In yet other circumstances, envy motivates attempts to reduce the relative advantage of the envied rival (i.e., "black" or "destructive" envy; Berke, 1988; Elster, 1998; Neu, 1980; Smith, 1991; Zizzo & Oswald, 2001).
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
http://www.nytimes.com/2011/10/11/science/11tierney.html?pagewanted=all&_r=0
Envy May Bear Fruit, but It Also Has an Aftertaste
By JOHN TIERNEY, Published: October 10, 2011
The researchers were looking for quintessential envy, which is distinct from jealousy. Envy involves a longing for what you don’t have, while jealousy is provoked by losing something to someone else. If you crave a wife like Angelina Jolie, you’re envious of Brad Pitt; if you’re upset about losing your wife to him, you’re jealous.
The psychologists in Texas began, mildly enough, with an experiment dredging up past feelings of envy. Some of the students were asked to write about occasions on which they’d envied a friend or acquaintance. Then these students, along with a control group not asked to recall envious experiences, read interviews with a couple of people who were purportedly enrolled at their university. In these two interviews — which were fictions concocted by the researchers — the respondents answered questions about their studies and goals but didn’t say anything that would elicit envy.
Compared with the control group, the students who’d just finished describing their past envy spent more time studying the interviews and were better at recalling details about these two people. Merely reliving their envy of past rivals apparently caused them to pay more attention to current peers, even though there was nothing obviously threatening about these two people.
If past envy sharpened the mind, what would be the effect of brand-new envy? That was the next experiment for the researchers, Sarah E. Hill and Danielle J. DelPriore of Texas Christian, and Phillip W. Vaughan of the University of Texas.
They showed college students a half dozen bogus newspaper interviews and photographs of other purported students at their school. Female students saw photos of other young women, while male students saw photos of other men. Both sexes saw a similar mix of people, including some described by the researchers as “advantaged peers.” In the photographs, some of the fictitious students were hot and some were not. The interviews revealed clear disparities in wealth. One mentioned owning a new BMW; another drove an old clunker. One had a parent on the board of trustees of the school; another received financial aid. As the real students went through each of these profiles, the researchers asked them about their own emotions and measured how long they spent studying each one. Sure enough, they spent more time contemplating the ones toward which they expressed envy: the good-looking students with new BMWs and rich parents. And afterward they were better able to recall the names and other details of these “high-envy targets.”
The results show that envy can “evoke a functionally coordinated cascade of cognitive processes,” as the researchers put it in the October issue of The Journal of Personality and Social Psychology. In an interview, Dr. Hill compared it to another less than lofty cognitive experience: rubbernecking.
“It’s much like a car crash we can’t stop looking at,” she said. “We can’t get our minds off people who have advantages we want for ourselves.”
By paying more attention to these people, we might learn to emulate some of the strategies that yielded their advantages. Or we might notice something that we could use to embarrass and hinder them — again, not a terribly exalted cognitive experience, but potentially useful in winning struggles for status and resources.
To test evolutionary explanations for envy, Dr. Hill and her colleagues looked for differences between men and women in their reactions to the photos and the interviews of peers of their own sex. It turned out that women were more likely than men to be envious of a physically attractive peer, a result that jibed with evolutionary psychologists’ theories about beauty being more important to women for reproductive success.
Wealth is supposed to be more important to men’s reproductive success, but in this experiment there was no gender gap regarding money. Women envied a woman with a BMW as much as men envied a guy driving one.
You might interpret this as evidence that the gender gap on money is narrowing as more women work outside the home. Or it might demonstrate, as the researchers note, that the “fungible nature of money” makes it valuable for both sexes’ reproductive success.
“I wasn’t terribly surprised that women were just as envious as men about wealth,” Dr. Hill said. “After all, from an evolutionary perspective — or any other perspective, for that matter — men wouldn’t be so concerned with resource acquisition if women didn’t like resources so much.”
The new evidence from the Texas experiments is important because it clearly demonstrates that memory and attention are linked with envy, said Richard H. Smith, a psychologist at the University of Kentucky and the editor of “Envy,” a 2008 compendium of research on the subject. The Texas study also supports a very old notion about this vice.
“Traditionally, envy is linked with the eyes,” Dr. Smith said, noting that the word comes from the Latin “invidere,” which means to look at with malice, or cast an “evil eye.” Just as an invidious comparison is by definition bad, so is envy defined by some psychological researchers to be inherently malign.
But other researchers, like the Dutch psychologist Niels Van de Ven, define envy in two different ways. There’s “benign envy,” in which you pay attention to superiors in order to emulate them, so as to raise your own standing. That’s different from “malicious envy,” in which you pay attention to superiors to find weaknesses that will lower them toward your level.
“With benign envy, the eyes are probably wide open and eager,” Dr. Smith said. “With malicious envy, they are squinting and resentful.”
By any name, envy requires mental effort, as the Texas researchers found in yet another experiment testing envious students. This time, after contemplating a wealthy, attractive peer, the students were asked to work on puzzles. Compared with a control group, they gave up sooner.
They were apparently victims of what psychologists call “ego depletion,” a state of mental fatigue originally documented in people whose energy was depleted by performing acts of self-control. Now it looks as if envy depletes that same resource. It may sharpen your eye and improve your memory, but the benefits come at a cost. Coveting thy neighbor’s goods is hard work.
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
Social Dominance: An Intergroup Theory of Social Hierarchy and Oppression
[Paperback] Jim Sidanius (Author), Felicia Pratto (Author)
Less Than Human: Why We Demean, Enslave and Exterminate Others
[Hardcover] David Livingstone Smith (Author)
The Science of Evil: On Empathy and the Origins of Cruelty
[Hardcover] Simon Baron-Cohen (Author)
Paradoxes of Group Life: Understanding Conflict, Paralysis, and Movement in Group Dynamics (Jossey-Bass Business & Management) Kenwyn K. Smith (Author), David N. Berg (Author)
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
Psychologists have been looking into these questions, specifically the idea that we all toggle back and forth constantly between righteousness and immorality. Is it possible that we have a set point for morality, much like we do for body weight? Three Northwestern University psychologists recently explored this question in the laboratory, with some intriguing results.
Sonya Sachdeva, Rumen Iliev and Douglas Medin had the idea that our sense of moral self-worth might serve as a kind of thermostat, tilting us toward moral stricture at one time and moral license at another, but keeping us on a steady track. They tested this by priming volunteers’ feelings of moral superiority—or their sense of guilt—and watching what happened.
In one experiment, for example, they had the volunteers write brief stories about themselves. Some were required to use words like generous, fair, and kind, while others wrote their stories using words such as greedy, mean, and selfish. This was the unconscious prime, well known to activate feelings of either righteousness or regret. Afterward, all the volunteers were given a chance to donate money to a favorite charity, as much as $10 or as little as zero. The volunteers didn’t know their charity was being measured as part of the experiment, and the results were unambiguous. Those who were primed to think of their moral transgressions gave on average $5.30, more than twice that of controls; those who were primed to feel self-righteous gave a piddling $1.07.
These results suggest that when people feel immoral, they “cleanse” their self image by acting unselfishly. But when they have reason to feel a little superior, that positive self image triggers a sense of moral license. That is, the righteous feel they have some latitude to stray a bit in order to compensate. It’s like working in a soup kitchen gives you the right to cheat on your taxes later in the week.
The psychologists wanted to double check these findings, and they did so in the context of the environment. That is, do the same feelings of moral superiority and moral transgression shape the trade-offs we make between self-interest and the health of the planet? They used the same primes, and then had all the volunteers pretend they were managing a manufacturing plant. As managers, they had to choose how much they would pay to operate filters that would control smokestack pollution. They could simply obey the industry standard, or they could do more or less; that is, choose social responsibility or choose to cheat the common good.
The results, reported in the April issue of the journal Psychological Science, were clear. Those who were feeling morally debased were much more communitarian, spending more money for the sake of clean skies. The morally righteous were stingy, and what’s more, they took the view that plant managers should put profits ahead of green concerns. They saw it as a business decision, not an ethical choice.
So it appears that our inner moralist deals in a kind of moral “currency.” We collect chits through our good deeds, and debts through our transgressions, and we spend our chits to pay off our moral debts. That way, we keep the moral ledger balanced.
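Read as a mechanism, the thermostat metaphor is easy to make concrete. The toy model below is our own sketch, not Sachdeva, Iliev, and Medin's analysis: it assumes a single moral self-image value with a set point, and generosity that rises as self-image falls below that set point. The numbers are arbitrary; only the direction of the effects mirrors the $5.30-versus-$1.07 pattern.

# Toy "moral thermostat": a sketch of moral cleansing and licensing.
# All quantities are invented for illustration, not fitted to the study.
SET_POINT = 0.0  # the level of moral self-image people drift back toward

def donation(self_image: float, max_gift: float = 10.0) -> float:
    """Give more when self-image sits below the set point (cleansing),
    less when it sits above it (licensing)."""
    deficit = SET_POINT - self_image  # positive after a "greedy, mean, selfish" prime
    share = min(max(0.5 + 0.4 * deficit, 0.0), 1.0)
    return round(share * max_gift, 2)

print(donation(self_image=-1.0))  # transgression prime -> larger gift (9.0)
print(donation(self_image=+1.0))  # righteousness prime -> smaller gift (1.0)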
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
In that vein, Schnall and her students posed ethical questions to people split into two groups. One consisted of people who'd been told to wash their hands before the session. Those people were more accepting of violations of what I've called common-sense morality (among Schnall's examples were using a kitten for sexual purposes, or taking money found in a lost wallet) than were the unwashed group. The same contrast emerged even if the distinction wasn't about a physical act. A group primed with words like "pure," "washed," and "pristine" also proved more accepting of moral violations than did a group primed with neutral words instead.
Physical cleanliness, of course, protects a person from disgust with himself. Schnall's evidence suggests that this protection can extend from the physical to the psychic realm. Showered, shaved, sweet-smelling in their cologne and blessed shirts, the 9/11 hijackers illustrate exactly why this fact of human nature is no blessing.
A study just published in Psychological Science by Simone Schnall of the University of Plymouth and her colleagues shows that washing with soap and water makes people view unethical activities as more acceptable and reasonable than they would if they had not washed themselves.
Dr Schnall’s study was inspired by some previous work of her own. She had found that when feelings of disgust are instilled in people beforehand, they make decisions that are more ethical than would otherwise be expected. She speculates that the reason for this is that feeling morally unclean (i.e., disgusted) leads to feelings of moral wrongness and thus triggers increased ethical behaviour by instilling a desire to right the wrong.
Feeling Pure and Doing Bad (Part 2)
A while back I linked to an interesting study in which people who made themselves physically cleaner were less leery of being morally dirty. I suggested that rituals of purification and cleanliness can be a means to separate people from their intuitive moral sense -- to make them feel less bad about moral filth because they feel themselves to be clean.
Another bit of evidence that (I would argue) supports this idea: This report on a recent study about self-image and behavior. Two groups of people were asked how much they'd like to donate to their favorite charity, between $0 and $10. Some had been asked to write essays about their moral failings -- they had to use words like "greedy" and "selfish." Others were asked to write about their own goodness -- they had to use words like "generous" and "kind."
People who had been primed to see their ethical failures gave much more money (an average of $5.30) to charity. Those encouraged to feel good about their behavior gave an average of $1.07.
Two possible take-aways: First, there is such a thing as too much self-esteem. Second, when our religious, political and cultural institutions encourage us to see ourselves as "good," they may well be making it easier for us to be bad. Might be worth viewing our feel-good rituals with this skeptical eye.
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
http://psychology.edinboro.edu/rcraig/pdf/self-seving%20bias.pdf
Daniel Kahneman, a Nobel Prize-winning psychologist and the author of the new book “Thinking, Fast and Slow,”
http://www.newyorker.com/online/blogs/books/2011/10/is-self-knowledge-overrated.html
“This same theme applies to practically all of our thinking errors: self-knowledge is surprisingly useless. Teaching people about the hazards of multitasking doesn’t lead to less texting in the car; learning about the weakness of the will doesn’t increase the success of diets; knowing that most people are overconfident about the future doesn’t make us more realistic. The problem isn’t that we’re stupid—it’s that we’re so damn stubborn.
Kahneman, of course, knows all this. One of the most refreshing things about “Thinking, Fast and Slow” is his deep sense of modesty: he is that rare guru who doesn’t promise to change your life. In fact, Kahneman admits that his decades of groundbreaking research have failed to significantly improve his own mental performance. “My intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy”—a tendency to underestimate how long it will take to complete a task—“as it was before I made a study of these issues,” he writes. As a result, his goals for his work are charmingly narrow: he merely hopes to “enrich the vocabulary that people use” when they talk about the mind.
This new book will certainly accomplish that—Kahneman has given us a new set of labels for our shortcomings. But his greatest legacy, perhaps, is also his bleakest: By categorizing our cognitive flaws, documenting not just our errors but also their embarrassing predictability, he has revealed the hollowness of a very ancient aspiration. Knowing thyself is not enough. Not even close.”
http://m.theatlantic.com/life/archive/2011/10/why-more-americans-suffer-from-mental-disorders-than-anyone-else/246035/#slide5
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
Why do humans reason? Arguments for an argumentative theory
Hugo Mercier & Dan Sperber, Behavioral and Brain Sciences (2011) 34, 57–111
Abstract: Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing, but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found.
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
Alice G. Walton | Oct 4, 2011
That mental health disorders are pervasive in the United States is no secret. Americans suffer from all sorts of psychological issues, and the evidence indicates that they're not going anywhere despite (or because of?) an increasing number of treatment options. There are the mood disorders like depression, bipolar disorder, and the less severe dysthymia (low grade depression); anxiety disorders like generalized anxiety disorder, social phobia, agoraphobia, and obsessive-compulsive disorder (OCD); substance abuse; and impulse control disorders (like attention deficit/hyperactivity disorder). Research shows that while we're seeking treatment more, rates have not dropped much, if at all, in recent years. For depression alone, about one in 10 people in America has suffered from it in the last year. Twice that number will be affected over the course of a lifetime.
But how does the U.S. compare to other nations? The World Health Organization (WHO) has spent a good amount of time and resources determining how rates of mental health disorders fluctuate across the globe. It is no small task. The methods of data collection must maintain consistency across widely disparate cultures and languages, not to mention groups' varying willingness to talk about mental health problems in the first place. The WHO has come up with vast catalogues of mental health data, which they are constantly updating. See how the U.S. compares to other countries:
A perpetual problem with carrying out international research into the murky territory of mental health is the willingness (or unwillingness) of certain groups to talk about it candidly. In certain parts of Asia and in less developed countries, for instance, admitting mental health issues is still taboo, so relying on self-reports can be dicey. If, however, you ask a husband or wife if his or her spouse suffers from a mental health disorder, says Kessler, you find much higher rates, even in the countries with lower apparent prevalence. The “ask-the-spouse” method, therefore, makes cross-country variation much less pronounced.
Despite ongoing research, the predictors of mental health disorders remain elusive, even for the most common, like depression. While a nation’s wealth would seem to have an impact, it’s clear from the data that the relationship is complex. Ron Kessler, Ph.D., the Harvard researcher who headed much of the WHO’s mental health research, says that by and large people in less-developed countries are less depressed: After all, he says, when you’re literally trying to survive, who has time for depression? Americans, on the other hand, many of whom lead relatively comfortable lives, blow other nations away in the depression factor, leading some to suggest that depression is a “luxury disorder.”
Another variable that has strong predictive power, at least for mood disorders, has to do with what you’ve got, compared to the people around you. Kessler says, for example, that if your house is worth $500,000 but everyone else in your neighborhood has $1 million homes, this factor alone is one of the best predictors of depression. But when everyone is in the same boat, no matter how humble or lowly the quarters, there’s typically a lot less depression. Therefore, it’s not the objective conditions of life that matter, it’s your subjective perception of how you measure up -– or what you “lack.”
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
Goethals, George R.; Messick, David M.; Allison, Scott T.
Suls, Jerry (Ed); Wills, Thomas Ashby (Ed), (1991). Social comparison: Contemporary theory and research, (pp. 149-176). Hillsdale, NJ, England: Lawrence Erlbaum Associates, Inc, xv, 431 pp.
In this chapter we report a number of studies of what we call the uniqueness bias, the tendency for people to underestimate the proportion of people who can or will perform socially desirable actions / we will show that it [uniqueness bias] is constrained for particular kinds of behavior, specifically where the motivation to see oneself as better than others is low or where one's standing on the behaviors at issue is easily reality-tested (PsycINFO Database Record (c) 2010 APA, all rights reserved)
J Pers Soc Psychol. 1995 Jun;68(6):1152-62. Overly positive self-evaluations and personality: negative implications for mental health.
Colvin CR, Block J, Funder DC. Department of Psychology, Northeastern University, Boston, Massachusetts 02115, USA.
Abstract: The relation between overly positive self-evaluations and psychological adjustment was examined. Three studies, two based on longitudinal data and another on laboratory data, contrasted self-descriptions of personality with observer ratings (trained examiners or friends) to index self-enhancement. In the longitudinal studies, self-enhancement was associated with poor social skills and psychological maladjustment 5 years before and 5 years after the assessment of self-enhancement. In the laboratory study, individuals who exhibited a tendency to self-enhance displayed behaviors, independently judged, that seemed detrimental to positive social interaction. These results indicate there are negative short-term and long-term consequences for individuals who self-enhance and, contrary to some prior formulations, imply that accurate appraisals of self and of the social environment may be essential elements of mental health.
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
Taylor, Shelley E.; Lobel, Marci Psychological Review, Vol 96(4), Oct 1989, 569-575. doi: 10.1037/0033-295X.96.4.569
Social comparison processes include the desire to affiliate with others, the desire for information about others, and explicit self-evaluation against others. Previously these types of comparison activities and their corresponding measures have been treated as interchangeable. We present evidence that in certain groups under threat, these comparison activities diverge, with explicit self-evaluation made against a less fortunate target (downward evaluation), but information and affiliation sought out from more fortunate others (upward contacts). These effects occur because downward evaluation and upward contacts appear to serve different needs, the former ameliorating self-esteem and the latter enabling a person to improve his or her situation and simultaneously increase motivation and hope. Implications for the concept, measurement, and theory of social comparison are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
Collins, Rebecca L. Psychological Bulletin, Vol 119(1), Jan 1996, 51-69. doi: 10.1037/0033-2909.119.1.51
Upward social comparison is generally regarded as ego deflating, yet people often compare themselves with those whose abilities and attributes are better than their own. Upward comparison provides useful information, which may partially account for this behavior. Furthermore, it is proposed that upward comparison only sometimes results in more negative self-evaluations; it can also be self-enhancing. A review of studies testing upward comparison effects on self-evaluations, self-esteem, and affect is consistent with this conclusion. Thus, people may make upward comparisons in hopes of enhancing their self-assessment. It is concluded that upward comparison is not in conflict with the desire for positive self-regard and indeed serves it indirectly (through self-improvement) and sometimes directly (by enhancing the self). (PsycINFO Database Record (c) 2010 APA, all rights reserved)
The Social Comparison Bias - or why we recommend new candidates who don't compete with our own strengths
Whether it's a gift for small talk or a knack for arithmetic, many of us have something we feel we're particularly good at. What happens from an early age is that this strength then becomes important for our self-esteem, which affects our behaviour in various ways. For example, children tend to choose friends who excel on different dimensions than themselves, presumably to protect their self-esteem from threat. A new study reveals another consequence - 'the social comparison bias' - that's relevant to business contexts. Stated simply, when making hiring decisions, people tend to favour potential candidates who don't compete with their own particular strengths.
Stephen Garcia and colleagues first demonstrated this idea in a hypothetical context. Twenty-nine undergrads were asked to imagine that they were a law professor with responsibility for recommending one of two new professorial candidates to join the law faculty. Half had to imagine they were a professor with a particularly high number of mixed-quality journal publications. These participants tended to say they would recommend the imaginary candidate with fewer but higher quality publications. By contrast, the other half of the participants were tasked with imagining that they were a professor with few but particularly high quality publications. You guessed it - they tended to recommend the candidate with the lower quality but more prolific publication record. In each case the participants favoured the candidate who didn't challenge their own particular area of (imaginary) strength, be that publication quality or quantity. The participants had been told that the department had a balanced mix of existing staff so it's unlikely their motive was a selfless one based on achieving a balanced team.
To make things more realistic, a second study involved a real decision. Forty undergrads completed verbal and maths tasks to which they were given false feedback. Next, they were presented with the scores achieved by two other students, one of whom they had to select to join their team for an up-coming group 'coordination task' that would involve throwing a tennis ball around. Participants tricked into thinking they'd excelled at the maths tended to choose the potential team member who was weak at maths but stronger verbally, and vice versa for those participants fed false feedback indicating they'd excelled verbally. Again, the researchers argued that it was unlikely the participants were simply striving for a balanced team because the maths and verbal skills in question weren't relevant to the tennis ball task.
A final study involved 55 employees at a Midwestern university - they were asked to imagine that they were in a company role with either high pay or great decision-making power. Next they had to recommend to their company that it either offer high pay or high decision-making power to a new recruit. The participants tended to advise offering the new recruit the opposite of whatever they had. The participants also said the particular perk of their imaginary post - pay or decision-making - would be the most important to their self-esteem.
'The present analysis introduces the social comparison bias: a social comparison-based bias that taints the recommendation process,' the researchers said. 'At a broader level, the social comparison bias might help partially to explain why some top-notch departments or organisational units lose prestige over time ... Individuals unwittingly fail to reproduce departmental strengths by protecting their personal standing instead of the standing of the broader department.' _________________________________
Garcia, S., Song, H., and Tesser, A. (2010). Tainted recommendations: The social comparison bias. Organizational Behavior and Human Decision Processes, 113 (2), 97-101 DOI:10.1016/j.obhdp.2010.06.002
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
You don't know me, but I know you: The illusion of asymmetric insight.
Pronin, Emily; Kruger, Justin; Savitsky, Kenneth; Ross, Lee. Journal of Personality and Social Psychology, Vol 81(4), Oct 2001, 639-656. doi:10.1037/0022-3514.81.4.639
People, it is hypothesized, show an asymmetry in assessing their own interpersonal and intrapersonal knowledge relative to that of their peers. Six studies suggested that people perceive their knowledge of their peers to surpass their peers' knowledge of them. Several of the studies explored sources of this perceived asymmetry, especially the conviction that while observable behaviors (e.g., interpersonal revelations or idiosyncratic word completions) are more revealing of others than self, private thoughts and feelings are more revealing of self than others. Study 2 also found that college roommates believe they know themselves better than their peers know themselves. Study 6 showed that group members display a similar bias—they believe their groups know and understand relevant out-groups better than vice versa. The relevance of such illusions of asymmetric insight for interpersonal interaction and our understanding of "naive realism" is discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
GUILT v SHAME x LEADERSHIP
We posit that guilt plays a positive role in the workplace. Specifically, we find that the personality trait of guilt-proneness motivates employees to work hard on their tasks, perform well in their jobs, and feel committed to their employers. Although often described as a dysfunctional affective experience, guilt can be highly energizing, particularly for those individuals who are inordinately prone to experience it. To examine the benefits of guilt-proneness, we have collected data from several field sites, including a software development firm, a mid-sized regional bank, and a small healthcare provider, and conducted several laboratory experiments. The evidence clearly suggests that those employees who are more guilt-prone work harder, perform better, and exhibit higher levels of commitment to the firm relative to those who are less guilt-prone. In short, it seems that guilt is good, particularly when one feels guilty on the job.
We propose that guilt proneness is a critical characteristic of leaders and find support for this hypothesis across 3 studies. Participants in the first study rated a set of guilt-prone behaviors as more indicative of leadership potential than a set of less guilt-prone behaviors. In a follow-up study, guilt-prone participants in a leaderless group task engaged in more leadership behaviors than did less guilt-prone participants. In a third, and final, study, we move to the field and analyze 360° feedback from a group of young managers working in a range of industries. The results indicate that highly guilt-prone individuals were rated as more capable leaders than less guilt-prone individuals and that a sense of responsibility for others underlies the positive relationship between guilt proneness and leadership evaluations. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
A case for putting guilt-prone people in charge
Leadership research has gained an appetite for dispositional affect, a person's tendency to feel one way more than another. Individuals who regularly express positive affects like pride or enthusiasm are seen as better leaders and produce better outcomes. Negative affects, meanwhile, are less consistently useful: although bursts of appropriate anger can help to focus efforts, frequent expressions of negative emotions lead to poor outcomes for followers, such as stress and poor coordination. But a recent study may change the conversation, as it suggests that a dispositional tendency toward feeling guilty makes you more suitable for leadership, both in the eyes of others and through your own efforts.
Stanford researchers Rebecca Schaumberg and Francis Flynn began online, asking 243 employed people to review a personality profile full of dummy responses to a set of questions, including some linked to unfortunate scenarios such as running down an animal. Half the participants looked at a fabricated profile with responses to the scenario focusing on guilt-proneness: how true is it that "You’d feel bad you hadn’t been more alert driving down the road"? The researchers believed that participants in this group would rate the profile as having more leadership potential when it contained higher (vs lower) ratings of guilt, an emotion which leads you to review your behaviour and seek to fix things. Meanwhile the other half saw responses to shame-proneness ("You would think ‘I’m terrible’"), shame being another 'self-conscious' emotion but one that lacks the urge to act and involves simply a self-directed negative reaction. As expected, profiles high rather than low in guilt proneness were rated as more capable leaders, but levels of shame-proneness had no effect. People who are emotionally involved in redressing bad situations are seen as better leaders.
In the next study, things got real. 140 university staff and students completed surveys including a measure of guilt-proneness, before meeting in groups to carry out two exercises, one figuring out how to survive in the desert, another marketing chosen products by generating taglines and pitches. Participants then rated each team-mate on the degree of leadership that emerged during the sessions. A neat analysis technique allowed Schaumberg and Flynn to put aside relational effects (I get on best with you) and perceiver biases (I rate everyone high on leadership) to derive a true leadership score for each participant. As before, those scores were highest for the most guilt-prone.
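The article doesn't name the technique, but a standard way to separate perceiver biases and relational effects from what a target actually receives in round-robin ratings is a decomposition in the spirit of Kenny's Social Relations Model. The sketch below is a deliberately simplified, hypothetical version of that idea, not the authors' actual analysis: it estimates each rater's leniency and then scores each target net of it.

# Simplified round-robin decomposition (in the spirit of the Social
# Relations Model; a hypothetical illustration, not the paper's method).
# ratings[i][j] = rater i's leadership rating of target j; the diagonal
# holds None because people do not rate themselves.
def target_effects(ratings):
    n = len(ratings)
    cells = [(i, j) for i in range(n) for j in range(n) if i != j]
    grand = sum(ratings[i][j] for i, j in cells) / len(cells)
    # Perceiver effect: how leniently rater i scores everyone else.
    perceiver = [sum(ratings[i][j] for j in range(n) if j != i) / (n - 1) - grand
                 for i in range(n)]
    # Target effect: what j receives once each rater's leniency is removed.
    return [sum(ratings[i][j] - perceiver[i] for i in range(n) if i != j) / (n - 1) - grand
            for j in range(n)]

ratings = [[None, 6, 7],
           [4, None, 6],
           [3, 5, None]]
print(target_effects(ratings))  # higher value = more emergent leadership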
A final study combined survey data with that from a prior 360-degree feedback process for a group of 139 MBAs. The researchers found that the 360 items relating to leader effectiveness were rated higher for individuals who expressed higher guilt proneness in the survey. This study also suggested that guilt proneness exerts part of its effect through another variable: how much responsibility to lead the participant felt. To reverse the aphorism, with great responsibility can come great power.
The evidence, then, suggests that being driven by guilt to be conscious of and caring about how your actions affect the wellbeing of others can help people to be perceived as leaders, emerge as leaders, and have an impact as leaders. However, Schaumberg and Flynn point out that the guilt-prone may be hesitant to take control, taking seriously the potential impact of their actions, and not wanting to displace others hopeful for the role; in summary, "the kind of people who would make outstanding leaders may, in some cases, be reluctant to occupy leadership roles." It may be the job of organisations to coax out these reluctant leaders and cultivate their responsibility to lead.
Schaumberg, R., & Flynn, F. (2012). Uneasy Lies the Head That Wears the Crown: The Link Between Guilt Proneness and Leadership. Journal of Personality and Social Psychology DOI:10.1037/a0028127
The finding: People who are prone to guilt tend to work harder and perform better than people who are not guilt-prone, and are perceived to be more capable leaders.
The research: Francis Flynn gave a standard psychological test, which measured the tendency to feel guilt, to about 150 workers in the finance department of a Fortune 500 firm and then compared their test results with their performance reviews. People who were more prone to guilt, he found, received higher performance ratings from their bosses. Related studies showed that they also were more committed to their organizations and were seen as stronger leaders by their peers.
The challenge: Is guilt good? Would companies benefit from putting more neurotic people, and fewer ultrarational types, into leadership roles? Professor Flynn, defend your research.
Flynn: From a researcher’s perspective, the correlation is stunning. There’s a lot of “distance” between the TOSCA [Test of Self-Conscious Affect, which assesses guilt-proneness] and the performance measure. They’re completely independent. Yet in the research that my coauthor, Rebecca Schaumberg, and I have done, the link between guilt and performance is clearly there. Not only that—in a follow-up study we found that more guilt equaled more commitment. Those who felt guilty worked harder and were more likely to promote the organization to others. And one surprising finding was that guilt-ridden people were more likely to accept layoffs and carry them out.
HBR: Wouldn’t those people feel too guilty about the fact that other people were losing their jobs to handle layoffs well?
It’s not that they don’t feel guilty about laying people off; it’s that they feel obligated to support their employer, so they accept layoffs as a way to reduce costs. They feel it’s their job to be “good soldiers,” and if that means laying off a few to protect the interests of the many, that’s what they’ll do. In short, they’re more sensitive to the overall goals of the firm. They see the forest for the trees.
So guilt-prone people are hardworking high performers who believe in the organization and see the big picture. In other words, they’re leaders.
Exactly. In another study, we had 200 or so MBA students take the TOSCA survey and had their former coworkers rate them on leadership behaviors, such as the ability to lead teams. The students who were more guilt-prone were considered better leaders. Our take is that guilt activates a keen sense of responsibility for one’s actions. What I wonder is, Does guilt make people better leaders but, at the same time, make them averse to taking on leadership positions because they feel that responsibility? We don’t know yet.
Why study guilt?
I asked myself, “What are the less intuitive characteristics that make someone a good employee?” We’ve researched to death the predictable ones. Everyone knows that people who are conscientious are good workers. But would we assume that guilt is part of the secret sauce for the ideal employee? Probably not.
Is guilt widely studied? In psychology, yes. But in organizational research, it’s noticeably absent, which is kind of surprising because you’ve got all these performance expectations in organizations, and we’re not studying how people respond emotionally when they fail to meet them. That seems to be a real oversight. Maybe organizational researchers just assumed that guilt couldn’t be constructive.
So if organizations start inducing guilt in employees, they’ll end up with harder-working, more-loyal staffs? Inducing guilt can sometimes backfire by eliciting resentment, but it can also be highly effective. If it weren’t, then why is my mother so good at it? That doesn’t mean managers should try to inspire guilt or that the approach would work long-term. One thing we haven’t mentioned is that while there are benefits to guilt, there are probably costs as well.
What costs? Well...we haven’t actually found some of the costs we expected to find. We thought, “They’re working so hard, maybe they have lower job satisfaction.” They don’t. “Are they more stressed?” They’re not. It appears that guilt-prone people are good at alleviating their negative feelings. But we still believe there may be costs associated with guilt. We just need to explore other possible disadvantages. One way might be to look beyond work. OK, so guilty people are harder workers, but does this commitment spill over into their personal life so that they’re less able to relax at home?
Hard workers. Good leaders. Not stressed. Can manage feelings. Guilt-prone people sound like model citizens!
They may be more selfless as well. We see a strong connection between guilt tendencies and altruistic behavior. The guilty are more willing to make charitable contributions and assist colleagues in need. There seems to be a link between guilt and positive social behavior.
You and I both were raised Catholic. How are we not running major corporations with large philanthropic foundations by now?
We purposely stayed away from religion in this research. We don’t have any empirical evidence of a link between guilt and certain religious denominations.
What else do you want to learn about guilt?
Right now we’re looking at guilt and absenteeism. Take the retail sector. Low guilt may be a good predictor of employees’ playing hooky. Conversely, guilt-prone people might practice presenteeism—showing up for work when they’re sick. We’re also interested in how people reconcile multiple foci of guilt. What happens when a manager feels compelled to stay late to ensure that a key project is completed on time but also feels obligated to go home and spend more time with their kids?
The TOSCA survey is designed so it’s hard for subjects to tell what it measures. The finance staffers you used as guinea pigs didn’t know what you were studying. Be honest: You feel a little guilty about that. I was sitting with the CFO of the company, and he was going over the TOSCA questions and looking perplexed. He said, “Do you really think this is going to predict anything?” I felt somewhat guilty because I could tell he thought this was a waste of time. He was just peering at me incredulously, like I was this ridiculous academic. Which, of course, I am. But then he let me do it, and we wound up finding some amazing results, so now I don’t have to feel so guilty anymore.
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
RESEARCH ON SHAME vs. GUILT, with the former but not the latter correlated with psychopathology; great article: http://brown2.alliant.wikispaces.net/file/view/Averill2002.pdf
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
ANOTHER SUMMARY: This was exactly the kind of situation that intrigued Rebecca Schaumberg, a graduate student at Stanford University's School of Business.
"Why would people take charge in a group situation when there's not really an incentive to do so? We started thinking maybe they just feel really guilty about not doing it."
Many psychologists believe we vary widely in the extent to which we are susceptible to feelings of guilt, and that the emotion can be a spur to action. Some people stay late on a Friday night to finish a piece of work, knowing they won't enjoy their weekend unless they do. Other people go home and watch a movie.
"The emotion is really uncomfortable and it's something that we tend to want to get rid of but the drive to reduce our feelings of guilt can actually propel us to act in really positive ways," says Schaumberg.
"It's that sort of anticipation of feeling guilt that might lead individuals to emerge or take on leadership roles."
She and her colleagues conducted experiments with volunteers in the US. The participants were asked to fill out a questionnaire containing a number of hypothetical scenarios, such as "You are driving down the road, and you hit a small animal."
Each scenario was followed by a range of emotional responses and the participants had to select one that matched how they would feel. This allowed Schaumberg and her colleagues to gauge their "guilt proneness".
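Mechanically, a measure like this reduces to scoring how strongly a respondent endorses the guilt-type response to each scenario. The snippet below is a hypothetical scoring routine for illustration only; the 1-to-5 endorsement scale and the averaging rule are our assumptions, not the actual instrument.

# Hypothetical guilt-proneness scoring, for illustration only.
# Assume each scenario pairs a guilt-type response ("I'd feel bad I
# hadn't been more alert") with an endorsement on a 1 (not at all)
# to 5 (very likely) scale; the trait score is the mean endorsement.
def guilt_proneness(endorsements: list[int]) -> float:
    """Average endorsement of guilt-type responses across scenarios."""
    assert all(1 <= e <= 5 for e in endorsements), "1-5 scale by assumption"
    return sum(endorsements) / len(endorsements)

# One respondent's endorsements across five scenarios:
print(guilt_proneness([5, 4, 4, 5, 3]))  # 4.2 -> relatively guilt-prone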
The participants were then asked to complete group tasks in which they had an opportunity, but no real incentive, to take charge. It became clear that the higher someone's level of guilt proneness, the more likely he or she was to step up as a leader in the activity.
The team also examined performance feedback for people in real management positions. They found that those who were more prone to feelings of guilt were more often judged by their colleagues and direct reports to be effective leaders.
The results are striking because guilt is experienced as a negative emotion. Previous studies have suggested that what psychologists call "positive affectivity" - being upbeat - can contribute to effective leadership.
Article: One lazy worker can spoil the team
By Kiri Beilby. Edited down from http://www.abc.net.au/news/2011-06-24/one-lazy-worker-can-spoil-the-team/2770600, updated June 24, 2011.
Is there someone in your workplace always milling by the water cooler, on Facebook, or taking the liberty of a long lunch?
Their lazy disposition is not just reducing your workplace morale, but also reducing your team's effectiveness, according to an Australian psychologist.
"We found that a single lazy person - someone low in proactivity - drags the team down, reducing its satisfaction and performance," says PhD candidate Benjamin Walker, from the Australian School of Business at the University of New South Wales.
In research to be presented this weekend at the ninth Industrial and Organisational Psychology conference in Brisbane, Mr Walker examined the conscientiousness of people working in small teams.
Mr Walker placed 158 undergraduate students into 33 teams of four or five people. He measured the groups' overall ability to perform by providing them with a task that required all hands on deck.
Each member was then asked to rate their own character by agreeing or disagreeing with statements relating to personality.
Teams containing even one person with an unconscientious personality performed poorly in comparison to their conscientious peers.
Previous research has looked at how the personality types within a team impact effectiveness by calculating the group's average personality and relating this value to performance.
Mr Walker's study differs by looking at individual personalities and their contribution to overall outcomes.
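The methodological contrast is easy to state concretely: score the team by its average member or by its weakest member. The sketch below uses made-up scores to show how a single low scorer barely moves the mean yet defines the minimum, which is the weakest-link flavor of team-level predictor the article describes; whether Walker used exactly this statistic isn't stated, so treat it as an illustration.

# Two ways to aggregate member conscientiousness to the team level.
# Scores are invented for illustration (say, on a 1-7 scale).
team = [6.0, 6.0, 6.0, 2.0]  # three conscientious members, one laggard

mean_score = sum(team) / len(team)  # the classic averaging approach
min_score = min(team)               # the weakest-link approach

print(f"mean = {mean_score:.2f}")   # 5.00 -- the laggard is masked
print(f"min  = {min_score:.2f}")    # 2.00 -- the laggard defines the team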
If negative personality traits do not just affect individual performance but instead influence the whole team, can employers afford to turn a blind eye to the odd unconscientious worker? It seems not.
~ ~ ~ ~ # # # ~ ~ ~ ~ ~ ~ ~ ~ ~ * * * * * ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ # # # ~ ~ ~
THE ARTICLE BELOW IS FASCINATING AND INTEGRATIVE PAR EXCELLENCE...
'As Babies, We Knew Morality'
New research supports the understanding that all people are born with a sense of good and bad. What does that say about altruism, community, and the capacity to kill one another?
EMILY ESFAHANI SMITH. NOV 18 2013, 11:00 AM ET
Several years ago, an energetic young mother, Tia, was out and about with her infant Aimee when disaster struck: a group of men, accompanied by vicious dogs, surrounded the pair, snatched up Aimee, and brutalized Tia. They left her helpless and without her daughter.
Aimee was eventually rescued. But Tia was too battered to look after her. While Tia tended to her wounds, her acquaintance Mike offered to take care of baby Aimee. Mike's generous behavior, observers agreed, was the very definition of compassion. In a bygone era, it might even have been called gentlemanly.
"Babies and young children are not just disposed to favor those close to them, they are also prone to hate and fear those outside of their group. This is a tragic limitation in our psychologies."
Mike, a squat and especially hairy fellow, didn't exactly look the part of a knight in shining armor. Like his fellow chimpanzees, Tia and Aimee, he wasn't even human. The trio are research subjects of primatologist Jill Pruetz, whose fellow researchers rescued Aimee from a group of poachers in Senegal several years ago. Mike's altruism was especially remarkable given the violent behavior that male chimps are generally known for. Just last year, an adult male chimp killed a baby chimp at the Los Angeles Zoo in front of a large group of visitors.
Is it correct to say that Mike's actions were "moral"? Where does morality come from? Are human beings born with an innate moral sense, something like a conscience that helps us tell right from wrong? Or are we born as blank slates and learn morality as we make our way through life from infancy to childhood and beyond? If morality is innate, are we born good and corrupted by society, as Jean-Jacques Rousseau thought? Or are we born as brutes and civilized by culture, as “Darwin’s bulldog” T.H. Huxley thought?
Though we share more than 95 percent of our DNA with these apes, many people think that morality is a uniquely human creation. The prevailing and enormously influential view for hundreds of years—championed by intellectual giants from John Locke to Sigmund Freud and Jean Piaget—was that human beings are born as blank slates and acquire knowledge about right and wrong through their parents, teachers, and other civilizing engines of culture.
Another idea, equally influential, is what the primatologist Frans de Waal calls veneer theory. Veneer theory, which arises from a botched understanding of Darwinian natural selection, holds that morality is “a cultural overlay, a thin veneer hiding an otherwise selfish and brutish nature,” as de Waal explains. Nature is red in tooth and claw, so the point of civilization is to tame the inner beast that lurks inside each of us.
Primatologists have shown that our closest kin in the animal kingdom, from chimps to bonobos, treat each other with empathy, compassion, and self-sacrifice.
But over the last decade, a growing body of evidence has challenged both the blank slate view of morality and veneer theory. Morality, it seems, is hard-wired. Chimps, who lack the tools of civilization, have the building blocks of morality and moral goodness. Primatologists like Frans de Waal, Jill Pruetz, and Christopher Boehm have shown that our closest kin in the animal kingdom, from chimps to bonobos, treat each other with empathy, compassion, and self-sacrifice. Macaque monkeys, more distant from us on the evolutionary chain than the great apes, won’t take food if doing so causes another monkey harm. Even rats show empathy. “Faced with a choice between two containers, one with chocolate chips and another with a trapped companion,” writes de Waal in his recent book about the origins of morality, The Bonobo and the Atheist, rats often choose to rescue their companions first.
Through studying the emotions and behaviors of animals, Darwin himself concluded that they are quite capable of sympathy, affection, and altruism. He wrote about one dog who wouldn’t pass by a sick cat without licking it a couple of times. Dogs, like chimps and humans, also follow social rules that keep the peace in the community. Darwin thought that it is from their social instincts that morality arises. “It would be absurd to speak of these instincts as having been developed from selfishness,” he wrote.
Studying animals is one way to learn about the origins of morality, but another, of course, is to look at human babies. Before they learn to speak or can even hold up their own bodies, human babies are capable not only of telling the difference between right and wrong but of making morally fraught decisions, a finding that shocked scientists when it was uncovered about ten years ago.
“It knocked our socks off,” says Yale’s Paul Bloom, one of the psychologists behind a series of groundbreaking studies of infant morality and the author of a fascinating new book, Just Babies: The Origins of Good and Evil. It turns out that babies, who are too young to have learned about morality, have an innate moral sense. On top of that, they show a basic disposition to goodness. They are not the little monsters that veneer theorists thought they were. Without prodding, for instance, infants start sharing after they’re six months old. When they’re a little bit older than that, toddlers will help a stranger in need.
“They can say this is the good guy and this is the bad guy and I want to help the good guy and I want to hurt the bad guy. This blows me away.”
In one study by Felix Warneken and Michael Tomasello, a toddler was in a room with his mother when a stranger walked in with his hands full. The stranger went over to a closet to open the door but couldn’t manage it. As this drama was unfolding, no one looked at the toddler or encouraged him to do anything. Yet about half of the toddlers tested spontaneously got up and walked over to the closet to open the door for the person in need—an all the more remarkable feat when you realize that toddlers are very reluctant to approach adult strangers at all.
“The child is a natural moralist, who gets a huge helping hand from its biological makeup,” writes de Waal in The Bonobo and the Atheist. But that helping hand from nature is rounded out by nurture. From his research on babies, conducted in the Infant Cognition Center at Yale, Bloom has come to see that we are born with this innate moral sense but that it gets fine-tuned over time through learning.
In one experiment, Bloom and his fellow researchers presented 6- and 10-month-olds with a little morality play. The babies watched as a puppet would try to push a ball up a hill. Then, the babies saw one of two things happen. Either another puppet would come along and help the first puppet push the ball up the hill, or another puppet would show up and hinder the first puppet by pushing the ball down the hill.
After the babies watched these scenarios, the researchers presented each puppet to the babies. They wanted to see which puppet the babies would reach for. It turns out that nearly all of the babies, no matter how old they were, reached for the nice helping puppet. But are babies attracted to goodness or are they simply repelled by meanness? To find out, the researchers introduced a third character into the mix—a neutral one who neither helped nor hindered the main puppet. Then, they let the babies choose which puppet they wanted. The babies preferred the neutral character to the mean character, and the good character to the neutral character.
That babies can make moral judgments about scenarios they have never before seen with strangers they’ve never before encountered doing things that they’ve never before seen was surprising. As Bloom said, “They can say this is the good guy and this is the bad guy and I want to help the good guy and I want to hurt the bad guy. This blows me away.”
Bloom was even more surprised when babies as young as three months old showed moral awareness. When Bloom’s research colleagues suggested that they look at babies just twelve weeks out of the mother’s womb, Bloom objected. At that age, babies "really are sluglike," he writes in his book—they’re "mewling and puking in the nurse’s arms," as Shakespeare put it. They can’t reach for puppets the way 6- and 10-month-olds can, and it’s unclear what their awareness of the world is.
But even in their slug-like state, these young babies can control their eyes, which “really are windows into the baby’s soul,” as Bloom writes. You can tell what a baby likes by what it looks at. So the researchers showed the three-month-olds the same morality play with the helping and hindering puppets and afterward placed the puppets in front of them. Most of the babies looked toward the nice puppet.
“Babies,” Bloom writes, “have a general appreciation of good and bad behavior.” Beyond distinguishing between good and bad, young children also have an understanding of fairness and justice. In one version of the helping/hindering study, one of the babies actually reached over to the mean puppet and smacked it on the head.
"Morality does not deny self-interest, yet curbs its pursuit so as to promote a cooperative society.”
But just because human beings are born with a moral sense doesn’t mean they are born good. “There is a moral core,” says Bloom, “but it is limited.” Like chimps, we are capable of extraordinary acts of moral goodness. Like chimps, we are capable of moral evil. “The line dividing good and evil cuts through the heart of every human being,” wrote Aleksandr Solzhenitsyn. “And who is willing to destroy a piece of his own heart?”
From an early age, babies show bias toward their in-group. Babies are quick to separate the social world into “us” versus “them.” For example, if a baby is raised by a woman, it prefers to look at female faces; if it is raised by a man, it prefers looking at male faces; if it is raised by a Caucasian, it prefers looking at white faces rather than black or Asian ones, while an Ethiopian baby prefers looking at Ethiopian faces rather than those of other nationalities.
The in-group bias shows up in language too. Minutes after they are born, babies who are American prefer listening to English speakers, babies who are French prefer listening to French, and babies who are Russian prefer listening to Russian. Babies also prefer interacting with people who don’t have strange accents.
Preferring one language over another or one type of face over another may seem like two minor and innocent details. After all, babies, like the rest of us, prefer what’s familiar. What’s unfamiliar is a threat, especially to a young and vulnerable infant.
But these biases have important implications, good and bad, for morality. Your language and race are markers of your group identity. Preferring members of your in-group can come at the expense of the out-group. In a study conducted at the University of Zurich, men watched as fans of their soccer club and fans of the rival club received electric shocks. When the fans of their own club got shocked, the men felt empathy. But when the rival club’s fans got shocked, they felt something quite different. They felt happiness. Their brains’ pleasure centers lit up.
Fortunately, for most people most of the time, there is a wide chasm between impulse and action.
Humans have evolved to be groupish. But our groupishness raises a puzzle for morality. De Waal says the group is the reason for morality. “Morality,” he explains, “is a system of rules concerning the two H’s: Helping or at least not Hurting fellow human beings. It addresses the well-being of others and puts the community before the individual. It does not deny self-interest, yet curbs its pursuit so as to promote a cooperative society.”
Members of other groups, i.e., strangers, inspire “fear and disgust and hatred,” as Bloom writes. When one group of male chimpanzees comes across a smaller gang, for instance, mayhem ensues. “If there is a baby in the group,” Bloom writes, “they may kill and eat it. If there is a female, they will try to mate with her. If there is a male, they will often mob him, rip flesh from his body, bite off his toes and testicles, and leave him for dead.” Human beings can be equally brutal to members of the out-group, as the history of slavery, genocide, and oppression shows.
“The special bonds we have with family, friends, and community are part of what gives life meaning,” Bloom says. “But our parochial biases are also the source of great suffering—the ugly truth is that even babies and young children are not just disposed to favor those close to them, they are also prone to hate and fear those outside of their group. This is a tragic limitation in our psychologies and anyone hoping to create a better world has to work to suppress and override these nastier aspects of our natures.”
Fortunately, for most people most of the time, there is a wide chasm between impulse and action. Feeling good when a member of your out-group gets hurt, as in the study of soccer fans, is not the same as hurting that person. “In each of us,” wrote poet Robert Louis Stevenson, “two natures are at war—the good and the evil. All our lives the fight goes on between them, and one of them must conquer. But in our own hands lies the power to choose—what we want most to be we are.”
~#~ ~#~ ~#~ ~#~ ~#~ ~#~ ~#~
In “Just Babies,” Paul Bloom argues that humans are in fact hardwired with a sense of morality. Drawing on his research at Yale, Bloom demonstrates that, even before they can speak or walk, babies judge the goodness and badness of others’ actions; feel empathy and compassion; act to soothe those in distress; and have a rudimentary sense of justice. Still, he contends, this innate morality is limited, sometimes tragically: people are naturally hostile to strangers, prone to parochialism and bigotry.
He documents both good and bad news: “Babies are moral animals” who appear to have the ability to judge others’ actions and to prefer both fairness and kindness; but they also are distressed by strangers and “prone toward parochialism and bigotry.”
~#~ ~#~ ~#~ ~#~ ~#~ ~#~ ~#~
GENERAL DISCUSSION
The four studies in this paper add to a growing set of findings that question whether trait assessments are depicted accurately as summaries of trait-relevant behaviors. In the first three studies, participants compared their traits with those of the average college student or a randomly selected peer. By asking participants to evaluate their traits in terms of the percentage of times they exhibited specific behaviors, and then to evaluate the average peer’s (or another person’s) traits in terms of those exact same percentages, we were able to examine differences in trait judgments while equating behavior estimates. The better than myself effect is reflected in the finding that people consistently evaluate themselves more favorably than others even when the behavior estimates upon which they base their ratings of another person (or the average person) are the identical estimates they provided for themselves.
Study 1 showed that participants evaluated themselves more favorably than the average college student based on estimates of average performance that were identical to the ones they had previously provided. In Study 2, participants were given the opportunity to revise their behavior estimates after seeing the average performance of others. Although participants in Study 2 again demonstrated a very pervasive better than myself effect, they did so without consistently altering their behavior estimates. This finding eliminates the possibility that participants mentally altered their behavior estimates after seeing what they believed were the average estimates of others. Instead, participants generally adhered to their original estimates while continuing to elevate their trait ratings.
Previous research has shown that people are much less egocentric when comparing themselves to a real person, even one they have not interacted with and have no specific information about, than when comparing to an average peer (Alicke et al., 1995). Accordingly, the results of Study 3 showed that the better than myself effect was diminished when people compared themselves with randomly selected peers, but was nevertheless significant across the 34 trait comparisons. Thus, the better than myself effect is not limited to abstract comparisons with the average student.
The fourth study circumvented possible limitations of the better than myself paradigm. In this experiment, participants first listed all the behaviors they could think of that were relevant to their standing on one of four trait dimensions and then evaluated themselves on that dimension. Participants then read a randomly selected peer’s behavior listings for the same trait and rated that person on the same trait scale. Despite having primed specific trait-related behavioral exemplars, and assuming no overall differences in the favorableness of the behaviors listed by randomly selected peers, the results of this study show that participants continue to evaluate themselves more favorably on the trait dimension when they are asked to base their evaluations on the behaviors they have listed or read about. ...
Once trait conceptions are established, they develop a degree of autonomy from their behavioral exemplars. This autonomy allows people to maintain their better than average self-images even when confronted with contradictory behavioral information. In fact, the ability to ignore contradictory behavioral data is implied by the better than average effect because of the obvious fact that people cannot all, in reality, be better than average.
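To make the logic of the paradigm concrete, here is a minimal sketch in Python of how such a comparison could be analyzed. This is not the paper's code; the sample size, rating scale, and all numbers are hypothetical illustrations. The point it encodes is that each participant's self-rating and peer-rating rest on identical behavior estimates, so a paired comparison isolates the evaluative gap itself rather than any difference in reported behavior.

```python
# Minimal sketch (hypothetical, not from the paper) of the "better than
# myself" comparison: each participant rates themselves on a trait, then
# rates a peer described by the participant's OWN behavior estimates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n = 100  # hypothetical sample size

# Simulated 9-point trait ratings. The self-ratings are drawn slightly
# higher only to illustrate what the effect would look like in data.
self_ratings = np.clip(rng.normal(7.0, 1.0, n), 1, 9)
peer_ratings = np.clip(rng.normal(6.3, 1.0, n), 1, 9)

# Because both ratings in each pair rest on the same behavior estimates,
# a paired t-test on the difference tests the evaluative gap directly.
t, p = stats.ttest_rel(self_ratings, peer_ratings)
print(f"mean self-minus-peer gap: {np.mean(self_ratings - peer_ratings):.2f}")
print(f"paired t({n - 1}) = {t:.2f}, p = {p:.4g}")
```

A significant positive gap under this design cannot be attributed to people reporting better behaviors for themselves, since the behavioral input to both ratings is held constant; that is what distinguishes the better than myself paradigm from ordinary better-than-average comparisons.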
~#~ ~#~ ~#~ ~#~ ~#~ ~#~ ~#~
'Overconfidence effect' - Wikipedia...