On Thu, May 9, 2013 at 1:50 PM, Jed Rothwell <jedrothw...@gmail.com> wrote:

> Cude wrote:
>
>
>> After 24 years, there is still not an experiment that anyone skilled in
>> the art can do, and get quantitatively predictable positive results,
>> whether it's excess heat, tritium, or helium (or an unequivocally positive
>> result).
>>
>
> Yes, there is. It was published in 1996. See:
>
> http://www.lenr-canr.org/acrobat/StormsEhowtoprodu.pdf
>
>
It's not a description of an experiment, it doesn't predict a quantitative
result, and it gives no indication of the likelihood of success.


In 1989, P&F said that if you do electrolysis of Pd in heavy water, and you
have enough patience, you'll see excess heat. The Storms paper is a kind of
collection of observations from many experiments, and he's a little more
specific than P&F, but basically he still says the same thing: follow these
instructions (some of which may not be essential, and there may be other
factors), have enough patience (which he says explicitly), and you'll see
excess heat.


That's no more of a quantitatively predictable result than P&F offered. And
of course, with the benefit of this paper, the quality of the results did
not improve; Storms never claimed the kind of power P&F claimed, for
example. (He also recommends flawless, crack-free palladium in that paper,
whereas now the business is believed to happen in the cracks and flaws.)


That's why, *after* this paper, you wrote "After twelve years of
painstaking replication attempts, most experiments produce a fraction of a
watt of heat, when they work at all. Such low heat is difficult to measure.
It leaves room for honest skeptical doubt that the effect is real."


It's why an executive director at the Office of Naval Research, who had
funded experiments by Miles and others, said (from a New Scientist article
in 2003): "For close to two years, we tried to create one definitive
experiment that produced a result in one lab that you could reproduce in
another," Saalfeld says. "We never could. What China Lake did, NRL couldn't
reproduce. What NRL did, San Diego couldn't reproduce. We took very great
care to do everything right. We tried and tried, but it never worked."


It's why McKubre said in 2008 that there is neither quantitative
reproducibility nor inter-lab reproducibility.


Even if the effect is small, if it is quantitatively reproducible, then it's
possible to use systematic experiments to scale it up, as Curie or Lavoisier
did, and then it becomes credible.


But again, a single really prominent effect (especially from an isolated
device) would suffice if it were reliable enough that it could be widely
demonstrated, or that anyone could follow a prescription and, with suitable
devices, see it in a reasonable amount of time.


But cold fusion has neither a reliable, indisputable demonstration (at any
statistical level) nor a more subtle but statistically reproducible effect
that can be carefully studied, and so credibility eludes the field.



> See also:
>
> http://lenr-canr.org/acrobat/CravensDtheenablin.pdf
>
>
>

This is a statistical analysis of all the experiments up to 2007. That's the
opposite of what I was asking for, namely a single experiment that produces
an expected result.


I have respect for statistics, but this is nonsense. I'm sure Cravens and
Letts could do a Bayesian study of bigfoot sightings and come up with a
vanishingly small probability that it's not real, and it would be taken
about as seriously. Probably someone's done it.
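
To make that objection concrete, here is a toy sketch of the arithmetic (my
own illustration, not anything from the Cravens/Letts paper; every number in
it is made up): if you treat a pile of weak positive reports as independent
and simply assume the false-positive rate is modest, a naive Bayesian update
drives the posterior toward certainty no matter how shaky each individual
report is.

    # Toy illustration only; all numbers are invented, not from any real analysis.
    prior_real = 0.001      # start out very skeptical: P(effect is real)
    p_pos_if_real = 0.5     # assumed chance a trial reports a positive if the effect is real
    p_pos_if_not = 0.3      # assumed false-positive rate (bias, artifact, error)

    posterior = prior_real
    for _ in range(100):    # 100 positive reports, naively treated as independent
        num = p_pos_if_real * posterior
        posterior = num / (num + p_pos_if_not * (1 - posterior))

    print(f"P(real | 100 positive reports) = {posterior:.6f}")  # effectively 1.0

The conclusion is baked into the assumed false-positive rate and the assumed
independence of the reports, which is exactly the problem with applying this
kind of aggregation to a heap of irreproducible experiments (or bigfoot
sightings).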


Statistics have an important place, but I think this is the sort of thing
Rutherford was talking about when he said: if your experiment needs
statistics, you should have done a better experiment.
