If I may intrude on this thread again, consider the following:

(1)  Remember that people appear to be relying upon popular
media accounts of the research, and we all know how reliable
those can be.  Perhaps the abstract from the Cognitive Neuroscience
Society's program website will help: 

|Slide Session 4
|
|Monday, April 4, 10:00 am - 12:00 pm, Grand Ballroom B
|Emotion and Social Cognition
|Chair: Kevin Ochsner
|
|Speakers: Oriel FeldmanHall, Robert Spunt, Emile Bruneau, Jamil Zaki, 
|Agnes Jasinska, Diana Tamir, Jennifer Silvers, Joan Chiao
|
|Presentation 1: Not What We Say, But What We Do: A Neural Basis 
|for Real Moral Decision-Making
|
|Oriel FeldmanHall1,2, Tim Dalgleish1,2, Russell Thompson1,2, David 
|Evans1,2, Susanne Schweizer1,2, Dean Mobbs1,2; 1Cambridge 
|University, 2Medical Research Council Cognition and Brain Sciences Unit
|
|Some of the most fundamental psychological questions concerning 
|social organisation and human relations centre on morality and altruism. 
|While moral choices in the real world are highly susceptible to pressures 
|of both context and consequence, little research has examined real moral 
|decision-making; instead, research has focused on hypothetical moral 
|reasoning. Consequently, little is known about behavioural responses to 
|real moral challenges and how the brain processes these decisions. Here 
|we show that hypothetical moral decisions do not approximate real moral 
|action and that real moral decisions recruit distinct neural circuitry. Under 
|both real and hypothetical conditions, we measured subjects’ responses 
|when deciding between financial self benefit versus preventing physical 
|harm to a confederate. In a behavioural study, we found that subjects 
|dramatically prioritise their own financial benefit at the expense of harming 
|others, keeping over three times as much for themselves in the real task 
|as compared to the hypothetical. In two functional magnetic resonance 
|studies, we showed that decisions made under hypothetical conditions 
|activated neural networks identified in the existing literature, including 
|the posterior cingulate cortex (PCC)—a region also implicated in 
|imagination. However, decisions made during the real condition activated 
|these networks as well as additional regions in the posterior and middle 
|insular cortex (pINS-mINS)—areas essential in integrating affective body 
|states to create a preliminary neural template of subjective feelings. We 
|conclude that the pINS-mINS activity provides a rudimentary marker 
|for real moral decisions.
http://www.cnsmeeting.org/index.php?page=slide_sessions

NOTE: (a) There is no information about how many participants were
in the study.  News accounts, like the one on ScienceNews, provide 
more details, but certain questions about what happened are left
unanswered -- perhaps when/if the study is published, there will be
additional details.
(b) The ScienceNews story is available at:
http://blog.sciencenews.org/view/generic/id/72278/title/Shocking_experiment_shows_talk_is_cheap
Wired copied this story.  Notice that Milgram's affiliation is corrected.
I wonder who told the author?

(2)  What exactly is new or novel about the FeldmanHall et al.
study?  Well, identifying which brain areas are active during the task
might be one thing.  But is the finding that hypothetical judgments
about one's behavior do not match one's actual behavior all that
surprising, as implied in the following statement from the above abstract:

|Here we show that hypothetical moral decisions do not approximate 
|real moral action and that real moral decisions recruit distinct neural 
|circuitry.

Milgram pointed out in his original "Behavioral Study of Obedience"
that:

|There was considerable agreement among the respondents on the 
|expected behavior of hypothetical subjects. All respondents predicted
|that only an insignificant minority would go through to the end of the 
|shock series. (The estimates ranged from 0 to 3%; i.e., the most 
|"pessimistic" member of the class predicted that of 100 persons, 
|3 would continue through to the most potent shock available on the 
|shock generator—450 volts.) The class mean was 1.2%. The question 
|was also posed informally to colleagues of the author, and the most 
|general feeling was that few if any subjects would go beyond
|the designation Very Strong Shock. (p375)

So, the expectation was that very few participants would go all the way
in using shocks.  FeldmanHall et al. are reported as saying (see the
ScienceNews article):

|When researchers gave a separate group of people a purely 
|hypothetical choice, about 64 percent said they wouldn’t ever 
|deliver a shock — even a mild one  — for money. Overall, 
|people hypothetically judging what their actions would be netted 
|only about four pounds on average. 

Now, this is a different question from the one asked by Milgram, but
the point is that few people in the FeldmanHall et al. study said they
would shock another person for money (though, apparently, more would
relative to Milgram's unpaid participants).

Milgram found that 26 of 40 participants "went all the way," or 
65% of the sample.  The remaining 14 participants used a maximum 
shock of 330 (Intense Shock) to 375 (Danger: Severe Shock; see 
Table 2 on page 376 and discussion).  Clearly, back in 1963 Milgram 
showed that what people think they would do is not consistent with 
what they will actually do.  Old news. FeldmanHall did not really
find anything new on this point.

FeldmanHall et al., however, did use money as an incentive to see what
effect it would have on the highest shock used by the participant.
FeldmanHall et al. report that 96% of the sample chose to administer
shocks for money (see the ScienceNews article, 5th paragraph).  Note that
ALL of the unpaid subjects in the Milgram experiment not only administered
shocks but used levels of Intense Shock and higher.  FeldmanHall et al.
do not identify the levels of shock used or even the total number
of participants (I suspect that it was not close to the 40 participants
in the Milgram study).  So, is the novel finding that if you pay participants
to shock someone, fewer than 100% will use shock, while if they are
unpaid, 100% will use shock (perhaps a cognitive dissonance effect)?
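The percentages being compared here are easy to check; below is a minimal
arithmetic sketch.  Milgram's 26/40 figure comes from the 1963 paper, while
the 96% figure for FeldmanHall et al. comes from the ScienceNews account
(their actual sample size is unreported, so only the proportion can be used):

```python
# Minimal arithmetic sketch of the comparison discussed above.

# Milgram (1963): 26 of 40 participants delivered the maximum shock.
milgram_max = 26 / 40
assert round(milgram_max * 100) == 65  # 65% "went all the way"

# All 40 of Milgram's unpaid participants administered some shock.
milgram_any_shock = 40 / 40  # 100%

# FeldmanHall et al. (per ScienceNews): 96% of an unreported number
# of participants chose to administer shocks for money.
feldmanhall_any_shock = 0.96

print(f"Milgram, any shock:      {milgram_any_shock:.0%}")
print(f"FeldmanHall, any shock:  {feldmanhall_any_shock:.0%}")
print(f"Milgram, maximum shock:  {milgram_max:.0%}")
```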

(3)  Some readers might be thinking "But Mike, what about the difference
in only seeing the shocked person's hand versus seeing the hand and person's
face while being shocked" (NOTE:  although I did participate in a
friend's experiment on the psychophysics of pain and received electric
shocks, I am not in these videos ;-).  Well, maybe that is a unique
finding.  On the other hand, I suggest that one looks at Thomas Blass'
book "The Man Who Shocked the World", specifically pages 94-97,
especially Figure 6.1 which shows the mean maximum shock used
in different experiments/conditions where Milgram varied the "distance"
between the "teacher" and the "learner", from the classic condition where
the learner is in another room to the condition where the teacher had
to place the learner's hand on a shock plate.  Only 30% of the teachers
went to the top shock in this condition (actually having to touch the
learner in order to shock them makes a big difference).  So, do the
FeldmanHall results coincide with what Milgram found, that is, that the
smaller the "distance" between the learner and teacher, the less likely
one is to use the maximum shock?  Apparently.  But we'll have to
wait until the research is published to see.

In summary, does getting paid to apply shocks to a person in a
Milgram-type experiment affect whether a "teacher" will shock
a "learner"?  Yes -- apparently it will make one less likely to use
shock (i.e., FeldmanHall et al 96% vs Milgram's 100%).  The 
only really unique feature of FeldmanHall et al's study is that it
identified which brain areas are "lit up" during this task.  Maybe
one of the more cognitive-neuroscience-oriented Tipsters
will be able to say whether even this result is unique.

-Mike Palij
New York University
[email protected]

