Roger, your final question is spot on:

"Is this the behavior of a community of researchers
that collectively seeking a consensus of reproducible
observations?"

No, it is not. You skewer mainstream psychology most effectively. What, then,
is this behavior?

It is the behavior of a group that is not working toward consensus, and that
is not clear on what the value of specific replicable results would be. It is
the behavior of a group that vies for prestige through popularity contests and
through bean-counting publications, regardless of replicability or actual
progress. It is self-serving behavior, well adapted to the landscape of a
field that lacks a core theory.

A core theory wouldn't change this overnight, but it is likely a key component
of any long-term effort to change the culture.

Eric

P.S. On a related note, my suspicion is that you would find much more
willingness to share data among the sub-disciplines of psychology with more
clearly defined cores. This is because the core provides plausible "positive"
reasons why someone would ask for your data. You would also see less
fabrication of data in these areas, because colleagues would quickly notice
if a lab made a habit of publishing non-replicable data.



On Sun, Nov 13, 2011 07:35 PM, Roger Critchlow <[email protected]> wrote:
>> On Sat, Nov 12, 2011 at 7:29 PM, ERIC P. CHARLES <<#>> wrote:
> Roger,
> You are correct that it might seem like psychology should have other things
> to worry about, but frankly the problems you mention (rampant misuse of
> statistics and the rare forged-data scandals) would be a lot easier to deal
> with if we had a more unified theoretical base.
>> Eric --
>> Well, admittedly, it's been a bad few weeks for psychology in the news,
>> not the sort of run of luck one would want to generalize too far.
>>
>> But I don't see how having a theory helps if the practice doesn't involve
>> sharing observations made under reproducible conditions so they can be
>> independently verified.
>> Forget the statistical faux pas, and look at the PLOS paper: 49 papers
>> from the Journal of Personality and Social Psychology and the Journal of
>> Experimental Psychology: Learning, Memory, and Cognition published in the
>> second half of 2004; "all corresponding authors had signed a statement
>> that they would share their data for such verification purposes"; the data
>> was requested in the summer of 2005; and
>>> Responses to Data Requests
>
>Of the 49 corresponding authors, 21 (42.9%) had shared some data with Wicherts
>et al. Thirteen corresponding authors (26.5%) failed to respond to the request
>or any of the two reminders. Three corresponding authors (6.1%) refused to
>share data either because the data were lost or because they lacked time to
>retrieve the data and write a codebook. Twelve corresponding authors (24.5%)
>promised to share data at a later date, but have not done so in the past six
>years (we did not follow up on it). These authors commonly indicated that the
>data were not readily available or that they first needed to write a
>codebook.


>
>> In more than half of the papers the supporting data effectively doesn't
>> exist? And more than a quarter of the authors don't even feel obliged to
>> make excuses? Is this the behavior of a community of researchers
>> collectively seeking a consensus of reproducible observations?
>>
>> -- rec --

Eric Charles
Professional Student and Assistant Professor of Psychology
Penn State University
Altoona, PA 16601



============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
