Ah-ha! I see now my miscommunication, many thanks!

On 23 June 2013 15:14, Kilopascal <[email protected]> wrote:
>> In other words, that we don't form our views from the arguments we hear, but
>> choose the arguments we accept on the basis of their compatibility with our
>> pre-existing views.
>
> The way I interpreted your statement is that metric supporters should choose
> the topic of the argument and not let our detractors choose the argument for
> us.
No, what I meant by that statement was that we (meaning human beings, not the metrication community) have a natural tendency towards selective listening. In other words, people tend to start with their minds already made up, and when faced with arguments for or against something, they simply give more credence to the arguments that fit their already-made-up minds. Throwing more arguments at people is sometimes counter-productive because it just causes them to dig their heels in further.

It doesn't mean that all hope is lost for changing minds, just that if this sort of polarisation is happening, then arguments have to be made in a way that accounts for the selective-listening effect. You have to tailor your arguments so that they are not dismissed out of hand; in other words, meet people where they are.

In the paper Cultural Cognition as a Conception of the Cultural Theory of Risk, Kahan describes a pretty stunning experiment in which people were shown different versions of a newspaper article on climate change, one suggesting pollution controls as a solution and one suggesting removing restrictions on nuclear power… As backwards as this sounds, people with hierarchical-individualist values were more likely to accept facts about climate change being a risk if the proposed solution was compatible with their values (nuclear power) than if it threatened them (pollution controls). Logically it doesn't make sense that whether or not you believe in a problem depends on whether or not you like the solution, but that's people for you!

Of course, I may be mangling the work and way, way overgeneralising from it. It was, after all, one experiment, and it was limited to the domain of risk communication (cf. vaccination, GMOs, etc.) rather than general persuasion. Still, if there is any value in the idea that communication is more likely to succeed when it affirms rather than threatens people's worldviews, and if we can apply that to the metrication effort, maybe we have a bit more of a chance…

--
Eric Kow <http://erickow.com>
