My apologies for this extremely long post.
Michael E. Welker wrote:
> The Precautionary Principle is flawed and can be used inappropriately
> by agenda pushers. Check out this article that explains how and why:
> http://www.reason.com/news/show/30977.html. We do not want to use
Almost any tool, conceptual, rhetorical, or otherwise, can be used
inappropriately.
The article you cite states:
"Anyone who merely raises "threats of harm" with no more evidence than
their fearful imagination gets to invoke precautionary measures.
Precautionists would not need to establish any empirical basis for their
fears; they may simply posit that something might go wrong and thus
stymie any proposed action."
This slippery slope is no worse than the current state of affairs:
someone with a fear has the ability to start a debate. If "data,
science, logic, reason, understanding, reality and fairness" point the
other way, then those people should lose the argument.
This is the problem at the intersection of science and politics, and the
problem both exist to solve. As ugly as that process can be, it is
necessary and hopefully acceptable in the long run.
I'm going to skip critiquing the rest of the linked article, though I
will include two more quotes (last one at the end, first one inflammatory):
"Politics is always win/lose, while market decisions are generally win/win."
> conservation agenda. We want to use data, science, logic, reason,
> understanding, reality and fairness. We also don't want to use it to
> take away the Constitutional rights of American citizens. You can't take
> away the rights of folks because you "feel, think, or believe."
Some people believe that in essence, the reason for advocating caution
in matters affecting the environment is precisely to preserve rights. In
their view and in mine, the environment in general is a form of the
Commons. As usual in these situations, it's easy to miss the target of
the optimum solution because we don't know - and we can't know - that
solution with certainty in advance. We're lucky if we figure it out in
retrospect.
And that brings us to risk management and modeling.
Those of us who have been following the global financial situation might
notice some parallels. One of the problems (recently described in a
relatively short article in the New York Times) was that nearly everyone
in big finance started relying on the same risk management model. The
result was near-monoculture and eventually a gigantic feedback cycle.
Where there had been some degree of balance, widespread acceptance of a
statistically flawed (Gaussian? seriously?) model nudged decision-makers
at competing companies closer and closer to the same tactics.
Ultimately, questioning the model (which minimized the probability of
outliers and threshold-dependent forcings) became a non-starter, because
the model kept succeeding in the short term.
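The monoculture point hinges on a statistical fact: a Gaussian model badly understates the frequency of extreme moves when reality has fat tails. Here is a toy sketch of that effect (all parameters invented for illustration, not a model of any actual firm): it compares the frequency of "4-sigma" losses under a pure Gaussian against a calm/volatile mixture calibrated to the same overall variance.

```python
import math
import random

random.seed(42)
N = 200_000
THRESHOLD = 4.0  # a "4-sigma" loss, in units of the overall std deviation

# What the risk model assumes: returns are N(0, 1).
gauss = [random.gauss(0.0, 1.0) for _ in range(N)]

# A toy stand-in for reality: mostly a calm regime, occasionally a volatile
# one, scaled so the overall variance is still 1. (Illustrative numbers only.)
p_vol, sigma_calm = 0.02, 0.8
sigma_vol = math.sqrt((1.0 - (1.0 - p_vol) * sigma_calm**2) / p_vol)
mixed = [random.gauss(0.0, sigma_vol if random.random() < p_vol else sigma_calm)
         for _ in range(N)]

def tail_freq(xs, k):
    """Fraction of samples showing a loss worse than k standard deviations."""
    return sum(1 for x in xs if x < -k) / len(xs)

print("Gaussian theory says:", 0.5 * math.erfc(THRESHOLD / math.sqrt(2)))
print("Gaussian samples:    ", tail_freq(gauss, THRESHOLD))
print("Mixture samples:     ", tail_freq(mixed, THRESHOLD))
```

With these made-up parameters the mixture produces 4-sigma losses roughly a hundred times more often than the Gaussian predicts, which is the sense in which a model that "minimized the probability of outliers" can look fine right up until it doesn't.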
Questioning environmental models is a good activity. Completely
necessary. But, I'd argue, it's good to the extent that it's done well:
on the basis of "data, science, logic, reason, understanding, reality
and fairness." Done poorly, model-questioning is mere sophistry: flawed
but still potentially convincing to people who aren't critically
assessing the debate, its inputs, and its exclusions. Ripe for bullying.
Unfortunately, we have plenty of lousy critique on all sides of this issue.
All tools are models. Tools are effective and useful to the extent that
they accurately model the reality to which they are applied. This
thesis, as far as I can tell, applies from hammers to rockets to climate
to psychology. Humans, and the other abstract thinkers and experimenters,
will continue to revise their models, scientifically or not, and technology will
continue to improve. But the crucial point is that the technology must
be *used* optimally to achieve the optimum scenario... and that seldom
happens. The best achievable technology probably isn't the same kind of
technology we imagine today. The optimum, the absolute best case
scenario, is not very likely. Neither is the worst case scenario. The
future is likely to be somewhere in between, but - and this is the issue
- we don't know where in between.
So we try to balance the magnitude of the risk with the magnitude of the
reward, and we try to take into account that certain risks carry
unacceptable results. We estimate our opportunity costs and hope that
we can live with the decision. With the environment, we hope that we
can live and indeed prosper indefinitely, for the foreseeable and the
unforeseeable future. We need extremely high availability from the
environment.
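The "unacceptable results" caveat is doing real work here: a plain expected-value calculation can rank a catastrophic gamble as equal to a safe option. A minimal illustration, with payoffs and probabilities invented purely for the example:

```python
# Two hypothetical policies (all numbers invented for illustration),
# each given as a list of (probability, payoff) outcomes.
policy_a = [(1.00, 10.0)]                  # modest, certain gain
policy_b = [(0.99, 11.0), (0.01, -89.0)]   # usually better, rarely ruinous

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

def violates_floor(outcomes, floor=-50.0):
    # A precautionary-style screen: reject any option that can land below
    # a floor we treat as irreversible, regardless of its expected value.
    return any(v < floor for _, v in outcomes)

print("EV:", expected_value(policy_a), expected_value(policy_b))  # both approx. 10.0
print("can cross the floor?", violates_floor(policy_a), violates_floor(policy_b))
# prints: can cross the floor? False True
```

Expected value alone calls the two policies a tie; only a constraint on the worst outcome distinguishes them, which is roughly the job the precautionary principle is trying to do at planetary scale.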
As I see it today, the precautionary principle is an attempt to
encapsulate the most significant complexity we will ever encounter into
an understandable statement of risk management against the unacceptable
outcome: irreversible decline of humanity. (I know some people don't
care about that, but I suspect the vast majority of us do.) Sure, the
precautionary principle is flawed. Almost every tool is flawed.* But
it is extremely difficult to communicate risk management concepts on the
scale required by global environmental issues using few enough words to
carry significant impact. If you can make a better statement, or if you
can reference or provide a critique that holds more water than the link
posted previously, by all means please prove the point!
*Just leaving myself some wiggle room.
And the final quote, the end of the linked article:
"Should we look before we leap? Sure we should. But every utterance of
proverbial wisdom has its counterpart, reflecting both the complexity
and the variety of life's situations and the foolishness involved in
applying a short list of hard rules to them. For some people in some
situations, "Look before you leap" is good advice. Others might be wiser
to heed the equally proverbial, "He who hesitates is lost."
People have understood this maxim for millennia, and the chances are
that its message will eventually reach even Wisconsin's Wingspread
Conference Center. And when it does, I want the Wingspreaders to
understand that the moral equivalent of a Federal Anti-Hesitation
Commission isn't such a good idea, either."
Too bad this final sanity is buried at the end of an otherwise
disappointing exercise in logical fallacies.
Be well,
Jon