At 05:18 PM 1/13/2013, James Bowery wrote:

So-called "confirmation bias" must have had some adaptive value. I wonder what it was or perhaps even is?

Okay, let me guess. We are good at guessing. We might occasionally even get it right.

Much human behavior is learned. We need to be able to create models of the world, and we don't have a whole lifetime to do that. Without working models, we may not survive very long.

At the same time, behavioral models for, say, finding food in the wild where we evolved (or to which we are adapted) are often unreliable. So we must persist in the face of evidence that the model fails, continuing to search in ways consistent with the model we have formed. Sometimes for too long.

With mice, behavior that always finds a reward is extinguished more rapidly than behavior that only sometimes finds a reward. In the former case, a run of no-reward trials is more likely to represent a *real change* in the environment, rather than "just the breaks" for that day.
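That intuition can be sketched as a toy simulation (my own illustration, not from the post, and not a model of real mouse learning): an agent that has learned a reward probability gives up on a behavior once a run of failures becomes too improbable under that learned model. The function name and the 0.05 cutoff are arbitrary choices for the sketch.

```python
def trials_until_giving_up(learned_p, alpha=0.05):
    """Return how many consecutive unrewarded trials it takes before
    the probability of that run, under the learned reward rate, drops
    below alpha -- i.e. before "no reward" looks like a real change in
    the environment rather than bad luck."""
    k = 0
    run_prob = 1.0
    while run_prob >= alpha:
        k += 1
        run_prob *= (1.0 - learned_p)  # chance of yet another failure
    return k

# An agent that was always rewarded treats a single failure as decisive;
# one rewarded only sometimes persists much longer.
print(trials_until_giving_up(1.0))   # reliable reward: gives up after 1 failure
print(trials_until_giving_up(0.5))   # partial reward: persists for 5 failures
print(trials_until_giving_up(0.2))   # sparse reward: persists for 14 failures
```

The point of the sketch is just the asymmetry: the more reliable the reward history, the stronger the evidence carried by each failure, so extinction is faster.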

I'm getting that the phenomenon is related to language, and that it arises in language. Without a concept of "truth," we might not have such attachment to being "right." So this would apply to models constructed in language, and the problem arises when we think we need to find the "truth." And, of course, to reject what is "false." So we start to think that models are true or false. Actually, they are just models.

The use of language, in spite of this problem, is very powerful; it obviously confers survival value. So far, anyway. If "language" takes us into global extinction, well, I suppose that idea would have been falsified....

An old metaphor for the "ego" is the camel. Very, very useful creature. However, they may step on your face if they get the chance. Be careful with camels. Be careful with your self.
