In behavioral psychology, the term for the kind of reinforcement
schedule you refer to, the kind that produces long-persisting
behaviors/models/beliefs, is "variable ratio reinforcement."
Pseudoskeptics would, undoubtedly, like to point to that as an
explanation for why cold fusion researchers are irrational.  If we view
cold fusion researchers as mice in a Skinner box pressing a lever for
food pellets, where food pellets are cases of observed nuclear products
such as excess heat, then clearly they would be correct, except for two
things: the mouse isn't irrational, and the implied payoff of a cold
fusion event is far greater than a food pellet is to a mouse.  As
Norman Ramsey pointed out in his preamble to the DOE's original review
of cold fusion: "However, even a single short but valid cold fusion
period would be revolutionary."
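To make the "mouse isn't irrational" point concrete, here is a minimal
toy sketch (my own illustration, not from the behavioral literature;
the function name and the 0.01 threshold are invented for the example):
a rational lever-presser should quit only when a run of unrewarded
presses becomes implausible under the reward rate it learned in
training.

import math

def presses_before_quitting(p_reward, alpha=0.01):
    """Consecutive unrewarded presses after which a run that long has
    probability < alpha under the learned reward rate, i.e. the point
    where a rational agent concludes the environment really changed."""
    if p_reward >= 1.0:
        return 1  # one failure already contradicts "always rewarded"
    return math.ceil(math.log(alpha) / math.log(1.0 - p_reward))

print(presses_before_quitting(1.0))  # continuous reinforcement -> 1
print(presses_before_quitting(0.1))  # variable ratio ~10 -> 44

Under continuous reinforcement a single dry press contradicts the
learned model, so the behavior extinguishes at once; under a lean
variable ratio schedule, dozens of dry presses are unremarkable, so
persistence is the rational inference, not a pathology.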

The payoff from cold fusion, if it is real, is so huge that, once there
was evidence for it, it would be a mistake of monstrous proportions to
invest anything less than an enormous amount of resources in
establishing that it could not be reproduced.
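
The arithmetic behind that claim is easy to sketch.  In a minimal
risk-adjusted model (again my own illustration; the function name and
the numbers are invented for the example), persisting pays whenever
probability times payoff exceeds the cost of one more test:

def persistence_pays(p_true, payoff, cost_of_testing):
    """Risk-adjusted decision rule: keep testing a model whenever the
    expected payoff of it being right exceeds the cost of one more
    test."""
    return p_true * payoff > cost_of_testing

# Illustrative numbers only: at 1-in-10,000 odds, a payoff a million
# times the cost of a test still makes persistence rational.
print(persistence_pays(p_true=1e-4, payoff=1e6, cost_of_testing=1.0))  # True
print(persistence_pays(p_true=1e-4, payoff=1e2, cost_of_testing=1.0))  # False

With a payoff on the order of cheap, abundant energy, the cost/payoff
ratio is so small that even very long odds clear the threshold, which
is Ramsey's point restated as a decision rule.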

PS:  After a brief web search, I find there are competing theories out
there for the evolution of confirmation bias.  One is the "payoff"
theory, which demands taking into account the risk-adjusted value of a
behavior, as in the sketch above.  I tend to go along with that.  There
is another theory that it originates in the social interactions of
advocacy.  The idea that reasoning is primarily social seems
unwarranted and tendentious.  If confirmation bias is adaptive for the
individual interacting with nature, one needn't explain its persistence
in the social setting.  The converse, as presented in the podcast, is
not true.  Individuals interact with nature all the time, even while
they are within a social setting.  It seems, therefore, that not only
William of Ockham but reality itself demands some explanation of
individual confirmation bias.

On Sun, Jan 13, 2013 at 6:21 PM, Abd ul-Rahman Lomax
<a...@lomaxdesign.com> wrote:

> At 05:18 PM 1/13/2013, James Bowery wrote:
>
>> So-called "confirmation bias" must have had some adaptive value.  I
>> wonder what it was or perhaps even is?
>>
>
> Okay, let me guess. We are good at guessing. We might occasionally even
> get it right.
>
> Much human behavior is learned. We need to be able to create models of the
> world, and we don't have a whole lifetime to do that. Without working
> models, we may not survive very long.
>
> At the same time, behavioral models in the wild where we evolved (or to
> which we are adapted), models for finding food, say, are often
> unreliable. So we must persist in the face of evidence that the model
> fails. We keep looking, consistent with the model we have formed.
> Sometimes too long.
>
> With mice, behavior that always finds a reward is extinguished more rapidly
> than behavior that only sometimes finds a reward. In the former case, the
> no-reward conditions may be more likely to represent a *real change* in the
> environment, rather than "just the breaks" for that day.
>
> I'm getting that the phenomenon is related to language, and that it arises
> in language. Without a concept of "truth," we might not have such
> attachment to being "right." So this would apply to models constructed in
> language, and the problem arises when we think we need to find the "truth."
> And, of course, to reject what is "false." So we start to think that models
> are true or false. Actually, they are just models.
>
> The use of language, in spite of this problem, is very powerful; it
> obviously confers survival value. So far, anyway. If "language" takes us
> into global extinction, well, I suppose that idea would have been
> falsified....
>
> An old metaphor for the "ego" is the camel. Very, very useful creature.
> However, they may step on your face if they get the chance. Be careful with
> camels. Be careful with your self.
>