At 09:54 AM 6/7/2012, Peter Gluck wrote:
Dear Friends,
I am very pleased that I have found a partner for discussing
the essence of LENR, therefore please see:
http://egooutpeters.blogspot.ro/2012/06/reliability-discussion-continues.html
I hope that soon we will have the opportunity to exchange opinions
and ideas about some paradigm-changing events, not only history.
For now the focus is on the question of how scientific data can be
used if they are not reliable (reproducible). Being an engineer, I
have limited understanding of that.
Briefly, even if the outcome of an individual test is variable,
the outcomes of many tests can be analyzed statistically,
and this can be done to demonstrate the reality of a physical effect
just as it can be done to demonstrate the efficacy of a medicine.
As a practical example from electronics engineering, it has happened
that processes for producing complex integrated circuits have been
quite unreliable, but nevertheless adequate, since the working devices
can be culled from those that do not work. If it were true that a
cold fusion device was unreliable, i.e., that its heat output could
not be predicted, except within certain outlines, it could remain
possible that a collection of a large number of such devices might be
reliable, overall. But we can certainly hope for improved reliability
with improved understanding.
Dr. Gluck, if your theory about contamination being behind CF cell
unreliability is true, then ways might be found to control the contamination.
I happen to think that, while there may be some effect from
contamination, the problem is rather one of solid-state engineering;
it's quite likely that either Storms' crack theory is correct, or
that some similar phenomenon is taking place that, so far, remains
very difficult to control with the gross techniques being applied.
Following Storms' theory, and applying it to what we know reasonably
well, the behavior of PdD electrochemical cells, cracks grow in PdD
with repeated loading and deloading, which stresses the lattice. We
may suspect that a *particular* size of crack sets up a condition
that allows or creates fusion conditions, which might be according to
any of a number of possible proposed mechanisms. Storms is theorizing
electron catalysis in a particular manner which I don't find
plausible, and Takahashi and Kim suggest mechanisms involving the
formation of a Bose-Einstein Condensate, which Storms thinks
implausible for some rather obvious -- and possibly false -- reasons.
But we don't know the mechanism. Storms is on pretty solid ground
with the general suggestion that a normal lattice does not support the
reaction. That cathodes show no effect, then show the effect, then
don't show the effect, with all observed conditions remaining the
same, *except nanostructure, which is shifting,* leads almost
inexorably, we can expect, to the theory that the nanostructure of
the Nuclear Active Environment is controlling. An alternate
hypothesis of contamination (i.e., trace impurities) remains on the
table, but seems unlikely, given the variety of reports.
It is possible that the unreliability is intrinsic, but I consider
that unlikely unless we are limited to electrochemical cold fusion.
The electrochemical approach remains useful as an investigative tool.
At least it works! (And, as I mentioned, the unreliability is a
nuisance but not an intrinsic obstacle, particularly once correlated
effects are sought and found.)
And now, a detailed (and necessarily long) response to Dr. Gluck's blog post.
Thursday, June 7, 2012
RELIABILITY - THE DISCUSSION CONTINUES.
I hope this discussion will continue because it is constructive,
calm, and empathy-laden, and I can learn a lot from it. It seems
both Abd and I have the rare ability not to be angry with people who
hold opinions other than ours. If only a vaccine could be created
for this virtue!
The nearest thing I know to that would be the Landmark Forum
(http://landmarkeducation.com).
(BTW, during my 8.5 years of journalism, writing the INFO KAPPA
newsletter, I found that the most aggressive trolls and Forum
Monsters are not the extremists, not soccer-team fans, but the
anti-vaccine activists, at an unbelievable intensity.)
Well, people do get stuck on ideas. Doesn't mean they are wrong, by
the way. "Stuck" and "right" are not correlated. They exist in
different realms.
Dear Abd, I am very grateful for this opportunity. It is very
difficult to discuss essential problems on our Forums, due to
the epidemics/endemics of Detailitis.
Our discussion will probably not lead to agreement, but let's try
I could disagree on the probability, or I could agree on trying,
i.e., on communicating openly and frankly. In fact, both. So do we
agree or do we disagree?
to generate some Important Questions is more useful for the future
development of the field.
Agreed.
I don't see that we are far from agreement, but maybe Peter sees
something I don't.
Actually it is mainly about the role of reliability in Science and
in Engineering. I am simply not able to believe that:
"We do not need reliability for Science.* It is desirable, that's all.
"Improvement in reliability is desirable, but not necessary." (Quoting you)
Please support this with examples of valuable unreliable scientific
results that have generated valuable science, or of unreliable
products or processes made by engineering that are in use. People need
almost-certainty and safety. But please give priority to Science; I
cannot find an example similar to LENR in the sciences of matter or
energy -- not psychology or sociology, where too many things are possible.
The example that comes to mind first, for me, is medicine. A medicine
may not successfully treat a disorder in all cases. Yet by controlled
research, we may find that it is helpful in enough cases to be
useful. Cold fusion, particularly in electrochemical cells, is a
complex phenomenon, and the necessary conditions are poorly
understood and apparently chaotic. Nevertheless, we can study the
Fleischmann-Pons Heat Effect (in PdD) and can measure helium in the
evolved gases, or we can even go deeper and do a comprehensive
analysis of cell contents. Either way, from what's been done, the
helium found is highly correlated with the anomalous heat generated.
It is not necessary for this that the generation of heat be
reliable; all that is necessary is that a significant number of
cells, where this experiment is done, do show the FPHE. The number
that I have for cells showing the FPHE in the original work is one
out of six. Suppose that 60 cells are run, and 10 show anomalous heat
above noise. Suppose that, for all 60 cells, helium is measured, and
helium is above background levels for 10 cells. And all ten of these
showed anomalous heat, and none of the cells with no anomalous heat
showed helium. This is really enough, but suppose that, as well, the
more anomalous heat found, the more helium is found, within
experimental error. If the study is comprehensive (which might
require taking steps like dissolving the cathode or melting it to
drive off all retained helium), suppose that the ratio of heat to
helium corresponds to 25 +/- 5 MeV/He-4.
(That value is what Storms estimates from the extant studies. I
consider the estimate quite rough, as is obvious. Much more work
could be done, that's part of what I want to encourage.)
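To make the statistics concrete, here is a minimal sketch (all numbers come from the hypothetical 60-cell scenario above, not from any real data set) of how the heat/helium agreement and the heat-to-helium conversion at the assumed Q-value would be quantified:

```python
from math import comb

# Hypothetical scenario from the text: 60 cells, 10 show anomalous
# heat, and the same 10 (and only those) show helium above background.
# If helium detections were scattered at random, the chance of landing
# on exactly the 10 heat-producing cells is 1 in C(60, 10).
ways = comb(60, 10)
print(f"chance of exact heat/helium agreement: 1 in {ways:,}")

# Heat-to-helium conversion at the assumed Q-value of 25 MeV/He-4.
JOULES_PER_MEV = 1.602e-13
Q_MEV = 25.0  # assumed energy released per He-4 atom

def helium_atoms_expected(excess_heat_joules):
    """He-4 atoms implied by a given excess heat at 25 MeV/atom."""
    return excess_heat_joules / (Q_MEV * JOULES_PER_MEV)

# For example, 1 kJ of anomalous heat implies roughly 2.5e14 atoms.
print(f"{helium_atoms_expected(1000):.2e}")
```

The point of the sketch is only that exact agreement between two independently measured effects is combinatorially improbable by chance, which is why correlation carries so much more weight than either measurement alone.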
That result would be conclusive that the heat is nuclear in origin.
It would create a high probability that the nuclear reaction involved
is some kind of fusion that takes deuterium and converts it to
helium. It is not necessarily "d-d fusion." There are other possibilities.
Obviously, though, such an experiment would not establish any kind of
practical possibility. That's what I mean by distinguishing the
science from the engineering of energy generation. We don't need
reliability to extend our scientific knowledge (at least not
reliability in the sense of each experiment producing specifically
predictable results, specific values of energy -- and we don't
necessarily need great reliability with our measurement methods.
Properly done, the experiment I mentioned would adequately establish
the reliability of the calorimetry and the helium measurement!)
This is the power of correlated results. A lot of cold fusion work
has neglected this. An experiment is done that, say, finds tritium in
an electrochemical cell. Heat is not even reported, nor is helium. No
relationship is shown in the experimental series between, say,
loading ratio, H/D ratio, current density, or even total electrolysis
energy. It is considered enough that tritium is reported, yet this
provides us no clue as to the cause.
As an example I came across, consider Oriani's work with measuring
radiation in NiH electrolysis cells. His findings are interesting,
but, in fact, establish almost nothing except that he saw some odd
effects. They were not correlated with input current. Some of the
experimental cells, it seems, had levels of radiation found (i.e.,
CR-39 tracks) that were less than with some of the control cells. The
controls were not single-variable controls, matching the experimental
series. The report claimed a "repeatable" effect, but the report
didn't show that, it simply showed that *some kind of anomaly* could
be found with every cell (or most cells?).
Because of poor control, it's not surprising that Kowalski's
replication project did little more than expand the data set a
little. Kowalski correctly concluded that his results did not confirm
a "reproducible experiment," but the experiment wasn't well-defined
in the first place. It was not disciplined, Kowalski simply followed
an electrolysis protocol, as I recall, that was within the Oriani
history, looking for repeatable results. What results? The existence
of an anomaly, an unexplained effect, is not enough to establish much
of anything. Oriani's original work was weak; it may not have been
the best candidate for a replication project.
(But that Kowalski even attempted this is commendable, and his
attempt to replicate the SPAWAR charged-particle radiation results,
as part of the Galileo project, was also commendable. It's too bad
that military secrecy prevented disclosure, at that point, of the
SPAWAR neutron results, which could have been much more interesting!
Wet CR-39 is pretty much an intrinsic problem, at least on the side
of the CR-39 that is adjacent to the cathode. Scott Little showed
fairly well that the cathode environment, up close and personal, can
damage CR-39. The back side CR-39 results are practically guaranteed
to come from neutrons, through proton knock-on, entirely aside from
those beautiful triple-tracks.)
See, we don't know very well what happens if we take some CR-39 chips
and expose them for a time in the gases of a *normal* electrolysis
cell. If tracks show, we don't know the origin of the tracks. That
there is no dose-response shown is highly suspicious (this is
standard with testing of medications, when there is no dose-response,
artifact is suspected) -- i.e., in this case, no correlation between
electrolysis current and radiation. Just take that to the extreme: no
current. If the effect still exists, then electrolysis is not the
cause. There's lots of radioactive stuff floating around, and some
experiments might concentrate it. (Thus there might even be a
variation of tracks with current, it doesn't necessarily show, by
itself, some kind of nuclear activity other than through the presence
of already-existing radionuclides.)
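The dose-response check described here can be sketched numerically. In this illustration all data values are invented, and `pearson_r` is an ad-hoc helper, not part of any cited analysis; a near-zero correlation between electrolysis current and track count, including a zero-current control, would point toward artifact:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Invented data: electrolysis current (A) vs. CR-39 track counts.
# The first point is the zero-current control suggested in the text.
currents = [0.0, 0.1, 0.2, 0.4, 0.8]
tracks = [52, 49, 55, 47, 53]

r = pearson_r(currents, tracks)
print(f"r = {r:+.2f}")  # near zero here: no dose-response, artifact suspected
```

With a real current-driven effect one would expect r to rise well above zero across the series, and the zero-current cell to sit near background.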
If we find tritium in the heavy water of a CF experiment, it does
not, by itself, demonstrate that a nuclear process took place in
the cell. There are a number of possible origins that would need to
be ruled out by controls. However, completely aside from that,
suppose that excess heat and tritium were correlated, in a particular
series under identical conditions, as far as what could be
controlled, such as H/D ratio in the heavy water. If they were,
across a significant number of experiments, with no data selection
(i.e., all members of the series are reported, treated identically,
and are measured blind), that would show, with a probability that can
be calculated, that the tritium and excess heat are both nuclear effects.
*A great deal can be discovered about an unreliable reaction.*
I'm a writer, so it's my business to be effectively communicative.
I'm still learning, though.
You are a good writer and this is the reason a discussion with you
is both pleasant and instructive.
Yes. Until you identified the cause, it was totally mysterious.
Gremlins. Bad juju. Whatever.
I hope one day you will agree that this poisoning destroys the LENR
experiment -- please re-read Piantelli's patent WO 2010/058288
I have little doubt that the presence of some materials will poison
the reaction. I don't see the point. Poisoning alone is unlikely to
be a generic explanation for cold fusion variability. I can't
absolutely rule it out, though.
Electrochemical PdD experiments are *extremely* complex. With
gas-loading, the complexity may be reduced, but a great deal depends
on the exact structure of the particles or Pd material. And it will
change with loading and deloading.
I just want to add that adapting/scaling up an electrolysis cell to
an energy source is an engineering nightmare.
Yes. Bad idea, mostly, unless no other approach can be verified. We
use the approach only because it's known to produce results,
sometimes even large results. If necessary, a way might be found to
engineer this to create some practical application, but NiH is far
more attractive. If it works.
Gas phase is kind of a must; it seems Rossi and DGT are working at
an active surface temperature of over 600 C.
Gas phase seems likely to be where pay dirt will be found. I'm not
willing to base *anything* on the alleged work of Rossi and
Defkalion. It's equivalent to rumor. And the big question is, as you
know, reliability. If there is a big effect, but it is not reliable
(or a way has not been found to engineer around certain kinds of
unreliability), the basic problem has not been solved.
I'll believe it in that I consider it possible.
This is in reference to the concept of gas-poisoning of LENR.
Why not? However, I don't see this as explaining the difference
between the first, second, and third current excursions in SRI
P13/P14, which was a sealed cell. It's not impossible, though,
because the first and second excursions, showing no heat, may have
cleaned off the cathode.
One of my lab colleagues/friends at the Stable Isotopes Institute
was working with high vacuum, 10^-9 to 10^-11 mmHg, and he has
convinced me that the gases adhere unbelievably strongly to the metals.
Not surprised. However, it should still be possible to use uniform
material, or to create it (as with deposition techniques).
When you and other colleagues eventually believe in my poisoning
idea, I will already be busy smelling the flowers from the side of
the roots; please send a good thought to my memory then
Enjoy the smell. I'll hold that thought.
But why wait until we "believe"? Smell the flowers while you are
alive, I highly recommend it.
What's to believe? You call it an "idea." Is it reduced to specific,
testable hypotheses?
It was crucial to identify the reasons for such variability. The
skeptics did not get the import of variability; they thought that
it meant that the effect was down in the noise. However, that's
what SRI P13/P14 showed so clearly: the effect, when it appears, is
striking, not marginal. Of course, sometimes there is an effect
close to the noise. But a strong, quite visible effect is one of
the characteristics of a successful replication of the FPHE, not
something questionable, where we look at a plot and say, "Well,
see, it's a little bit above the noise there, for a few hours."
Maybe. Or maybe that is just noise a little higher than usual.
Not exactly a good situation for a researcher who has to understand
and solve the problem, is it? However, poisoning, partial or
complete, is uncontrollable and can explain the variability.
My point was that this is quite a stretch in that experimental
series. The hypothesis that the cause of the variability is the
shifting nanostructure of the material is far simpler, and consistent
with all the data on this class of cold fusion experiment. However,
"shifting structure" could include shifts in the chemical composition
of the surface layer of the cathode, that's why I say it cannot be
completely ruled out.
To resolve this would require identifying the contaminant, then
controlling it. Again, the investigation would likely involve running
many cells, probably with multiple cathodes, and individual cathodes
could be withdrawn and the surface analyzed. This much is helpful:
the reaction almost certainly takes place at or very close to the surface.
I'll repeat: the kind of data we need to understand cold fusion is
most likely to come from comprehensive study of the effect *as it
is*. In this investigation, one out of six cells showing excess heat
would be quite adequate! We don't need a "more reliable method," it
is, as I've written over and over, merely desirable, not necessary.
Finding -- and then confirming -- what conditions are associated with
"dead cells" and not live ones, or vice-versa, will lead to the gold.
Ultimately, it appears, reality does play hide and seek, at the
quantum level. But I don't think that's happening here. Regardless,
reality is not "bad." Period. It's just reality. We make up good
and bad. This is not you, but "scientists" who reject experimental
data because they don't see repeatability in it are just fooling
themselves. What they don't see means nothing. Saying "I don't
understand this" is fine. Saying "you must have made a mistake," is
the problem, unless the error can be identified. Not just guessed.
Unfortunately some aspects of reality are not good for us -- cold,
disasters, illness, old age -- so reality is really bad sometimes. I
know nature has no problems, just solutions, but we have problems --
the energy situation is one, and any obstacle to a solution is bad.
You say so. "Bad" is not a condition that exists in reality. We make
that up. And "obstacle" is also made up, out of a concept of a path
to a solution. What might seem an obstacle for one path might be a
stepping-stone on another.
See, my hobby is collecting proverbs, quotations, and aphorisms, and
in my opinion the most false and cruel one is this, by John Ruskin:
"there is really no such thing as bad weather, only different kinds
of good weather"
That is not truer than "bad weather," but it's probably more useful
as an attitude!
I arrived at this idea when, during the terrible winter of
1959/1960, I was trying to defrost a pipe at the top of a very high
distillation column with live steam coming through a rubber hose, as
you could see in the Rossi experiments. Just to mention: I
accomplished the task and did not get pneumonia.
And hopefully you did not get burned either.
Weather can be bad, reality is sometimes hostile, Murphy is a
sadistic techno god.
Take Murphy and remove the story of "wrong": "If something can, it
will." You made up "reality is sometimes hostile." I suspect this is
an anthropomorphic projection. You think that reality is out to get
us? "Hostile"?
We do have the term "hostile" as in "hostile conditions," which
presumes a goal, it exists in relation to the goal, and the goal is
made up. We could say, with equal truth (or the opposite) that
"reality doesn't care if we live or die," or "reality exists to
create life." Those are both stories, but they are likely to arouse
different human responses. We imagine that there is some story that
is "the truth," but the truth is not a story at all.
Since we make up stories -- it's a quintessential human function --
we, once we realize this, can make up stories that *function* to
create what we seek. I'm not claiming some kind of magic, nor am I claiming
that just any story will function this way. There are skillful
stories and there are unskillful ones.
The story that "low-energy nuclear reactions are impossible" was a
disempowering one, one that created blindness to observation and
understanding, besides being obviously contradictory to experimental
fact, beginning with muon-catalyzed fusion, a known LENR. That MCF is
impractical because of the difficulty of generating muons is
*irrelevant* as to scientific possibility. Just as CF unreliability,
or the COP of CF experiments, was ultimately irrelevant as to the
science and only relevant to the practicality issue. MCF is
completely impractical for energy generation, which has nothing to do
with the existence of the effect.
It's not as powerful, and it runs the risk of an enormous waste of
time. Look, it was obvious from the beginning that there *might be*
enormous promise from cold fusion. But it was also obvious, within
a few months, that this was not going to be easy, at least not with
the FP approach. Yet people had done stuff for a long time with no
clear evidence of fusion, and casting about to find a new approach
was probably not so wise, either, in the sense that it was likely
to be obscure itself.
The deepest error that Pons and Fleischmann made was in not
disclosing how difficult it was, with the original announcement,
and, if not there, with the original paper.
For those convinced that LENR was real by the P&F results, and by
other confirmation, including perhaps their own, pursuing more
reliable approaches did make some sense. However, if these people
were convinced it was real, and especially if they had success
replicating P&F, they might consider the value of carefully
studying what they already were able to make happen. Some did that,
perhaps. Some did not.
What else could I say other than that you are right? But it is a bit
late. A long series of hopes and disillusions followed; the
disillusions were the continuous phase, but hope remained indestructible.
Late for what? Twenty years isn't a long time, in the larger scale of
things. Yeah, twenty years of delay, perhaps, had a cost. But what now?
Okay, for the future, the story of cold fusion, what Huizenga called
"the scientific fiasco of the century" -- and he only knew the half
of it -- should be thoroughly explored and told, as an object lesson
for the history of science. If there is some artifact or collection
of artifacts behind cold fusion, why did it take more than twenty
years to demonstrate it? (And it still has not been demonstrated.)
But if what is now passing peer review is actually worthy of that --
as I expect -- then why did it take twenty years for the tide to
turn? After all, replications of the basic effect began well within a
year, and continued to accumulate.
(And the conception that cold fusion was conclusively rejected twenty
years ago is still common, an example of a widespread delusion held
by scientists, especially, whom we might expect to know better. It
can be argued that there are still grounds for skepticism, but the
position that cold fusion was found to be artifact is
*unsupportable.* The most that has ever been concluded was that the
hypothesis of fusion was not proven. And even that would be very,
very difficult to get past peer review now, in any journal worth its
salt, so strong is the evidence.)
Not from that example!!! The correlation there is quite weak, and,
if this is a real CF experimental series, I'd suspect that the heat
is close to the noise. That is, from the expectation d -> He, we'd
expect half as much heat with the first as with the second, but you
have only the second showing heat.
This is too short an experimental series to do more than provide an
indication, and the indication here could be that one of the heat
measurements is punk.
Thank you for the next examples given by you; they are the best ones
possible, emphasis on 'possible'
Real example, one of the two or three best:
Miles' work. Miles did a set of CF experiments and controls. His
full series as reported by Storms involved 33 helium samples taken
and analyzed blind. These were samples of the cell gases. Miles had
data on heat generation from these cells before the samples were
taken. Multiple samples were taken from some cells; I originally thought
this was 33 cells. Not so. A weakness, but not a disaster. (Better if
all cells had been treated equally, all cells were identical, etc.
There were some differences, which actually weakens the result,
i.e., included in the series were some cells where something quite
different was going on, and that makes the work look *less*
conclusive. But I won't go into that here.)
Of the 33 samples, 12 were showing no anomalous heat, and no
anomalous helium was detected. 18 showed heat, and, from them,
helium was detected within an order of magnitude of the helium
expected from d -> He-4. The more heat, the more helium, within
experimental error. (The measurements were rough, unfortunately,
only order-of-magnitude detection.)
That leaves three samples. One experienced a power failure and
deloading, so calorimetry error was suspected; the other two
were from a cerium-palladium alloy. They showed heat, but no helium.
What happened? We don't know. Nobody followed up, the classic story
of cold fusion. Mysterious results, sitting in the record, with no follow-up.
This is a strong correlation, even with those three anomalous
results. Miles calculated one chance in 750,000 of this happening by chance.
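As a rough illustration of how such odds arise (this toy calculation does not reproduce Miles' published figure, which rests on a different statistical treatment; the 18/12 split is taken from the text): if 18 helium detections were assigned at random among 30 heat-measured samples, the chance of their landing on exactly the 18 heat-producing ones is one in C(30, 18).

```python
from math import comb

heat_samples = 18   # samples from cells showing anomalous heat
quiet_samples = 12  # samples from cells showing no anomalous heat
total = heat_samples + quiet_samples

# One favorable arrangement (helium in exactly the heat samples)
# out of all the ways to place 18 helium detections among 30 samples.
ways = comb(total, heat_samples)
print(f"1 in {ways:,}")  # 1 in 86,493,225
```

A real analysis would be less extreme than this, since it must allow for measurement thresholds and partial matches, but the combinatorial core is the same: joint agreement across many samples multiplies improbabilities quickly.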
You could also look at the SRI Case replication, reported in the
2004 DoE review paper. It was poorly explained. When it's fully
understood (I had to read other papers to get it), it shows this
same phenomenon: no heat, no helium. Varying amounts of heat,
varying amounts of helium. SRI also studied the time behavior of
accumulated helium, and did one experiment where they attempted to
recover all the helium (that's the hard part!), finding a ratio of
heat/helium quite close to the theoretical value for d -> He-4.
It could be a very different situation with say, ten times more
results of this kind.
It would be stronger and the Q-value, if the work were well
performed, would be known much more tightly (if we assume it is
constant; it might not be). It would not be a greatly different
situation; that is, the correlation is reasonably established, and
there is very little -- if any -- contrary evidence. Miles was
criticized, but the criticism mostly ignored the real point, the
correlation. It ended up being irrelevant.
It was largely reward-less because many researchers were not
looking at the treasure they had in their hands, if they managed to
occasionally see excess heat. They bought the idea that this was
some kind of failure. No, it was success. It was indeed difficult
to arrange a demonstration of the FPHE. However, it seems that
those who persisted did find it. Indeed, it may have been most
difficult for those who were lucky and found it quickly! -- because
it then disappeared. I can imagine the agony. However, the gold was
in investigating the conditions of appearance and disappearance.
A very complex situation, difficult to appreciate in retrospect.
I want to emphasize that I'm looking at this with the benefit of
hindsight. Some of this could have been anticipated, however, at
least in theory. If Pons and Fleischmann had sat down and considered
the likely effect of their announcement, and the effect of their
allowing it to appear that this was "fusion in a jam-jar," a simple
experiment, they might possibly have foreseen the problem and averted
it. That oversight, together with their error regarding neutron
radiation, made their work appear flawed, not reproducible, etc. It
made an early and famous paper showing that the neutron radiation
could not be above a certain (low) level seem to be a refutation of
cold fusion, while we now recognize the almost-complete lack of
neutron radiation as a characteristic of the FPHE.
In other words, the *basic effect* discovered by Pons and Fleischmann
wasn't even touched by the neutron paper. The basic effect was a heat
effect. They also reported tritium, helium, and neutrons, with only
the neutron report actually being found to be artifact.
(Tritium has been reported by so many investigators that this is
likely to be real, though it clearly is not a major product of the
reaction, that's just about got to be helium, at least in PdD experiments.)
And if a practical application is possible, setting Rossi et al
aside, it will very likely be from theory enabled by the presence of
more data from what should have been done twenty years ago. The idea
that it was necessary to get reliability permeated the field, and
that was an error. Reliability would very likely follow from a
successful theory. Or not.
A beautiful idea, but how does this go in practice? Or not, to cite
you. In the '70s I led many research and development programs,
and one of our slogans was "one experiment is no experiment, one
result is no result"; we always followed it until we were
convinced the method, process, step, whatever, was repeatable,
reproducible, reliable. We scaled up from lab to pilot plants and
to industrial scale.
Yes. That's the way to do real science. One experiment can be
valuable, to be sure, but it's only a start. An indication.
Make your blunders on a small scale and your profits on a grand
scale. And many times scale-up is not a linear process;
you can have surprises of any kind. It is an adventure.
I have to confess that I cannot understand exactly how a good theory
can remove the reliability problem, but that is my limited
imagination here.
A complete theory will include explanation of the variability (which
you call "the reliability problem"). Once we know how and why
something is happening, the possibility of control can open up. Not
necessarily, mind you. But it's probable.
You don't have to imagine the specifics, just recognize the
possibility. I hope I have shown that the variability does not make
investigation impossible -- and it can even facilitate it in some
ways, as long as some minimal level of "success" is obtainable,
through the power of correlation.
For example, if we can identify the exact NAE, as being, say, cracks
of a *certain size*, we may then be able to engineer material with
those precise structures, ab initio. We might be able, for example, to
operate devices with those exact structures at lower D2 pressure and
prevent the destruction of the structures by local overheating.
If some impurity is *necessary*, we may then supply that impurity,
possibly even placed exactly where needed. If, in the other
direction, as per your idea, some impurity poisons the reaction, we
may be able to entirely exclude it from the cell environment.
And on and on. We won't know the specifics until we know, and, my
guess, we are not likely to know until the basic experimental work is
done, work that was only sporadically and incompletely done.
If the U.S. Department of Energy reports are merely followed, as to
their actual recommendations, funding would be available. What those
reports recommended against was a massive special federal program,
and I have to agree with that recommendation, that's prudent *until
the basic research is completed,* and then it might be advisable to
go to a major program *if practical applications appear within reach.*
(That the reports that actually recommended continued research were
framed in the mind of the general scientific public as having
conclusively rejected cold fusion is a really great example of how
public opinion, even among reasonably knowledgeable scientists,
diverged from simple fact. There are some credible claims that this
was a deliberate manipulation, part of the design of the first D.O.E.
review, but I'm happy to leave that to the historians.)
(With Rossi, if that's real, the investigation will follow and
theory will be developed based on that. Rossi, in a sense, got
lucky -- if this is real -- though he "got lucky" from what he says
was a thousand variations he tried. Essentially, he explored the
parameter space, trying lots of combinations. It can work. In fact,
I'm suggesting something like that, only with systematic
exploration, with special focus on answering extant experimental questions.)
With Rossi, if real, a great question arises: what has he changed in
LENR? What new dimension has he added to LENR? I think, BTW, that he
has gone outside the parameter space. LENR+ has added new, unexpected
parameters to those of LENR. (It is a pity that I am not inerrant;
we will have to wait to see whether this is true or completely false.)
Well, trying dopants or structural changes would be part of the
obvious exploration of the parameter space. What Rossi did, if we
take his story at its face value, is try a thousand combinations.
That's the kind of work that could pay off. Whether or not he did
this with maximum efficiency, I don't know. I'd think of making
smaller cells and running many of them at a time. Smaller is also
safer, by the way.
Rossi, though, is not a scientist and seems to have little
appreciation of the scientific method. He has no concept, it seems,
of the value of control experiments. As a result, it's entirely
possible that his calorimetry is seriously out of whack, so much so
that it's even possible that his "massive heat" is all the way down
to no excess heat at all, at least in some of the public demonstrations.
His answer to a suggestion that he run control experiments, by
someone friendly to his work, was that it would be a waste of time,
since he already knows what will happen: nothing. A scientist would
not think this way. A certain level of experimental controls would
remain in place, and a control experiment is all the more necessary
for a demonstration when input power is involved.
What Rossi has added to LENR is massive confusion, as a result of his
secrecy, demonstrations that failed to be conclusive when examined
closely, and the problem is not only with Rossi. Kullander et al.
made major blunders in their analysis and reports.
None of this proves that Rossi doesn't have a real device. However,
if Rossi has a real device, it's become quite clear that he does not
care to allow this to be clearly known.
And then the rest of us waste time and energy speculating. No, at
this point, it appears to me that Rossi may have actually damaged the field.
I worry about fraud, on the one hand (and there is a lot of nonsense
out there on this possibility), but I also worry that he has a real
effect but it isn't yet practical, which then explains the delay. And
that delay could stretch out forever.
A scientist would not announce what is not established or known, and
would properly include in reports the necessary data about
reliability; then there would be no mystery.
Sometimes the situation is presented that "People are demanding that
Rossi do this or that, but he is not obligated...." Which is true,
i.e., he's not obligated, but I should make it clear that I'm not
demanding that Rossi do anything. I'm just saying that the
information has not been provided and demonstrations have not been
arranged that would allow us to make firm conclusions about his work.
It is, essentially, on the level of rumor, not much more than that.
Yes. "Wicked problem." Peter, you caught the disease: you looked at
cold fusion with an eye that only saw value in high COP (which is
very different from reliability, by the way; 10% excess power,
reliably, would be spectacular *for the science*), and you compared
a few thousands of what you called "sick cathodes" with heat less
than 30% with "many thousands" of "dead cathodes". 30% of input
power, with the FPHE, is actually way above noise, more than
adequate for systematic study. Pons and Fleischmann, as I recall,
had a "dead cathode" rate of 5/6. The practical implication of this
is that one must run many cathodes, and, from what I'm seeing
(Letts is graciously allowing me to watch his work-in-progress), a
"dead cathode" can become "live" by continued electrolysis,
sometimes. So it's not the cathode that is dead, but the patience
of the researcher.
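The arithmetic behind "run many cathodes" is simple. Taking the recalled Pons-Fleischmann figure of a 1-in-6 success rate as an illustrative assumption, the chance of at least one live cathode grows quickly with the number run:

```python
# How many cathodes must be run to expect at least one "live" one?
# Assumes an illustrative per-cathode success probability p = 1/6.
import math

p = 1 / 6
for n in (1, 6, 12, 24):
    p_at_least_one = 1 - (1 - p) ** n   # complement of "all dead"
    print(n, round(p_at_least_one, 3))

# Smallest n giving >= 95% chance of at least one live cathode:
n95 = math.ceil(math.log(0.05) / math.log(1 - p))
print(n95)   # -> 17
```

So at the hypothetical 1/6 rate, about 17 cathodes give a 95% chance of at least one success; six cathodes give only about a two-thirds chance, which is why a single run of a handful of cells settles nothing.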
Mea culpa, I have understood Cold Fusion as a future energy source,
not as a system for new scientific discoveries. However, with 5 dead
cathodes vs. 1 working one, it is difficult for it to be either.
So you run many more cathodes. *Many*. You got one working one.
Great! That's one out of six, not bad. Were all these cathodes
apparently identical, or did you keep changing conditions to try to
make it work? You are aware, I presume, that all that variation may
have been for nothing. But perhaps you did learn to run the
experiment more successfully. So great. Run more cathodes as close to
what you did with the successful one as you can.
And, my suggestion, scale down. As long as you don't get so small
that calorimetry can't be sufficiently accurate to detect the effect.
But also look for correlated conditions or effects. Lots of the early
work did not even attempt to measure loading ratio, and probably did
not reach the necessary minimum for the FPHE. So measure loading
ratio. Measure H/D ratio; heavy water absorbs moisture from the air,
if exposed to it, so the H/D ratio will increase with time, and
hydrogen is a known poison for the FPHE.
Helium is the obvious correlated effect to study, where possible. It
takes access to mass spectrometry, which for most researchers will
take finding someone cooperative, or will take money. Storms has two
mass spectrometers in his lab, including one that is designed to
discriminate between D2 and He-4. If you found helium only with the
live cathode, that's an important result. My view is that all results
should be published, by the way, even if only on-line. There is a
fairly strong tendency to only publish results considered
"important." If you look out there, you can easily find reports of
positive results with no disclosure of all the "failures" on the way,
and that then leads to a cogent criticism, that results are being
cherry-picked, which, when outcomes are variable, is a serious
problem. You destroy the power of correlation by cherry-picking results!
Experimentally speaking, some cathodes are hibernating and then
suddenly, without any visible cause or logical/correlational
explanation, start to work. Mystery!
Well, not really. There is an obvious, default "logical" explanation,
that the nanostructure of the material has been altered, which for
PdD is simply a reality. It changes. Cracks form and grow. Perfect
material apparently produces no results, and when the cracks are too
large, high loading, also apparently necessary for significant
results, becomes impossible. The material apparently must be *just so*.
But, yes, no "visible cause." Unless one can examine the cathode with
an SEM. Maybe it can be seen and measured.
The point is that one out of six is actually fine, not terribly
difficult, except for one thing: it can take months to run one of
these experiments. So, if one is serious, one must run many cells
in parallel, which is exactly what Pons and Fleischmann did in
their later work. I've been suggesting expanding this, by making
cells smaller and cheaper; the limit is the smallest cell for which
heat can be measured with reasonable separation from noise. NASA is
apparently exploring cells-on-a-chip, with many cells built on a
substrate, perhaps using techniques common in electronics. I assume
that with the connections through the substrate, individual cells
can be run together with the others, or separately, all being
immersed in the same electrolyte (if this is electrolytic) or in
the same gas (if this is gas-loading).
OK, in the heroic period of the research you can work with many
cells in parallel, trying to understand why some work and why the
others are inert, but later attention has to be focused on the active cells.
Sure. You would tighten the parameter space, looking for what Schwarz
calls the OOP, optimal operating point. You'd still maintain
controls, but they might shift in character. You'd still want a null
control, if possible, i.e., a cell that is not expected to generate
heat, so that you can maintain a check on the calorimetry.
As you get closer to 100% reliability, you do *not* start running
fewer cells, probably. You might even increase the numbers. However,
as more and more data is collected, it becomes more likely that sound
theory will be developed. Theoreticians will suggest tests of theory,
and parameters will be more thoroughly explored.
Letts has apparently found that a magnetic field is necessary for
dual-laser (beat frequency) stimulation to work. Great. How large a
magnetic field? What is the "turn-on" behavior, i.e., at what level of
magnetic field does the stimulation start to work, and is there a
relation between field intensity and XP? Letts has only used a strong
field or no extra field. (It's been pointed out that it is possible
that a small magnetic field is generally necessary, even in
non-laser-stimulation experiments, but that this might be supplied by
the earth's magnetic field. So this could be tested by nulling out
that field, easily done.)
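Nulling the Earth's field really is easy, as suggested above. A standard way is a Helmholtz coil pair driven to cancel the ambient ~50 microtesla; the sketch below estimates the required current for a hypothetical coil geometry (radius and turn count are assumed, not from the source):

```python
# Sketch: current needed in a Helmholtz coil pair to null the Earth's
# magnetic field (~50 uT). Coil radius and turn count are hypothetical.
import math

MU0 = 4 * math.pi * 1e-7      # vacuum permeability, T*m/A
B_EARTH = 50e-6               # T, typical mid-latitude field magnitude
R = 0.15                      # m, coil radius (assumed)
N = 100                       # turns per coil (assumed)

# Field at the center of a Helmholtz pair: B = (4/5)**1.5 * MU0 * N * I / R
I = B_EARTH * R / ((4 / 5) ** 1.5 * MU0 * N)
print(round(I * 1000, 1), "mA")   # roughly 83 mA for these assumptions
```

A few tens of milliamps in a modest coil suffices; for full cancellation of the field vector, three orthogonal pairs (or tilting one pair along the local field direction) would be used, with a magnetometer to verify the null at the cell.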
Letts will, I'm confident, investigate this as time and circumstances allow.
It is difficult to deny that we still are in the prehistory of LENR.
If research can identify markers of the reaction other than heat
and helium, it could be *extremely* useful. For example, suppose
that active PdD produces a characteristic sound. (This is reported
by SPAWAR, by the way). It might then be possible to monitor
instantaneous reaction levels, even more quickly than through
calorimetry. Monitoring IR emission could do this as well. I've
wondered about visible light. There should be some, if palladium is
being melted, as appears in some SEM images of cathodes. (Etc.)
This kind of research would vastly speed up engineering the effect,
even without a sound theory.
I am absolutely enchanted with the idea of "singing cathodes" and
will ask my active experimentalist friend to test the idea.
SPAWAR fabricated a cathode by electroplating a piezoelectric sensor,
and found, as I recall, pulses about 10 usec wide. These may have
been correlated to flashes they see in the infrared, but no
correlation was shown. Look, there is a wide range of possible
investigations, it's all exciting.
Suppose, for example, that the number of such pulses (or shock waves,
as they might be considered) is found to be correlated to XP. Bingo.
An independent measure of reaction rate, creating possible
instrumentation to real-time monitor the reaction more easily and
quickly, thus speeding up research.
A focus on "proving that this is fusion" created an environment where
searching for such effects was not considered so important.
Obviously, a few 10 usec pulses, even dozens or more per second,
doesn't prove anything "nuclear." But so what? If it is found to
correlate well with the nuclear effect (helium and -- we can assume
-- heat), it becomes highly useful.
It may be possible to detect the shock waves without a special
cathode; I intend to look at what I see with piezo mikes on the cell.
I could run into all kinds of problems, but -- this is really cheap
and easy to try.
I also will be set up to watch the cathode under a digital
microscope, with adequate resolution to see 10 micron features.
However, my microscope has automatic level control, and may not
function well under very low light conditions, so I'll also be able
to watch the cathode with an optical long-focus microscope that I
have. The human eye may be quite a good light detector under very
low-light conditions. I'll have to watch out for those N-ray effects,
though! If I'm lucky, the digital microscope will show something.
We play and find stuff. Or don't. But looking is fun. It can
ultimately be profitable, as well. Sometimes. No guarantees.
*Without needing any new approach to be invented.* Of course, if
more reliable methods of triggering LENR are found, great. I expect
the same kind of work can be done with NiH, for example.
It seems NiH (transition-metal hydrogen) systems are somewhat
simpler and more practical than electrochemical cells.
In the long run, yes. I have nothing against NiH work, but I don't
want to see electrochemical PdD abandoned merely because it is not
likely to be practical for power generation. Those who only want to
pursue power, great, I'd agree that NiH is more likely. But for those
who love science, as well as those who want to advance science
(knowing that this ultimately advances engineering), PdD is there,
and can be done fairly cheaply.
(I think reliability in Science, engineering, business,
marriage, musical interpretation is, grosso modo, the same
overall. Statistical reliability in engineering and production is
about a small proportion of under-quality pieces. A minimum is,
say, 98.5% good items.)
Depends on the nature of the application. However, reliability of
an effect is not necessary in science, it is simply one more
characteristic that is measured, by accumulating experience and
quantifying it. X out of 100 cells tested following Protocol Y were
found to exhibit anomalous heat above 5% of input power. Then we
look for associations present with X and not with not-X, or
vice-versa. We try variations, etc. And we also run the *same* series again.
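A statement like "X out of 100 cells tested following Protocol Y showed excess heat" becomes quantitatively useful once an uncertainty is attached, so that two protocols can be compared. A minimal sketch, using the standard Wilson score interval and a hypothetical count of 17 live cells out of 100:

```python
# Sketch: 95% Wilson confidence interval for an observed success
# proportion, so success rates of different protocols can be compared.
# The count 17/100 is hypothetical.
import math

def wilson_95(successes, n):
    """Wilson score interval for a binomial proportion, z = 1.96."""
    z = 1.96
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

lo, hi = wilson_95(17, 100)
print(round(lo, 3), round(hi, 3))
```

For 17/100 the interval is roughly 11% to 26%, which makes the practical point: with only 100 cells, a protocol change that moves the true rate from, say, 17% to 22% cannot be distinguished from noise, and so still more cells are needed.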
The difficulty is that electrolytic cold fusion is extremely
sensitive to seemingly trivial variations in the material. This is
one reason why I think the most productive work will be with
electro-deposited palladium, because it may, particularly with thin
layers, be easier to control that deposit. But there are still many
ways to mess it up, apparently. An advantage of deposited
techniques: generally cheap.
I remember the volcanic eruption of optimism when Stan Szpak
invented the method. It definitely gives improvements, however not
spectacular ones. The liquid phase being saturated with air, the
newly formed surfaces are also poisoned from the start...
There have apparently been problems replicating this; what was
sometimes true codeposition was, at other times, deposition followed
by deuterium evolution. There is no sign in the SPAWAR work that initial
air was excluded. It could be, but if they got results without
excluding it, one would expect others to be able to do so as well.
I think you are probably too focused on your gas-poisoning model.
It's worth investigating, I'm sure. Indeed, any mysterious
variability is worth investigating. Mystery is where the future is
found, where limitations are transcended.
Whenever a reputable group (as SPAWAR certainly is) finds a result
and others have difficulty replicating it, the community as a whole
should, my opinion, see this as an opportunity to learn more.
Instead, it seems, people just shrug it off. I know that a major
scientist in the field attempted to get codeposition to work and
failed, but this was never published. That's a loss. Publishing such
a result does not mean that there is error in the original report,
not at all. It means, though, that it is likely that there are
unspecified but necessary conditions! Or something is present that
was absent in the original work.
Very likely.
Codep is not necessarily simple, and the growth of the deposit could
vary greatly, one of the reasons why I'd probably want to look at
*very early* behavior, while the deposit is fairly simple. Codep is
rumored to produce rapid results. If, however, the cell voltage is
high enough to generate deuterium, for true codeposition, the deposit
tends to fall off the cathode; it does not adhere well. Miles reports
pieces of palladium (probably with attached hydrogen bubbles)
floating about the cell and causing little explosions as they contact
oxygen coming off of the anode.... Nifty, eh? Palladium would do that, eh?
So just look at the very early behavior. That's where seeing those
"pops" could be very useful....
People will develop their own work and approaches. However, I'm
hoping that a community consciousness can develop that will be aware
of the research issues to be resolved. Examples:
1. How do excess heat, H/D ratio, and tritium production correlate in
an FPHE cell?
2. How can a "standard CF cell" and basic protocol be designed, so
that results of researchers who choose to use such a design/protocol
can be compared, allowing wider correlations to be made. Such a cell
should be cheap and easy to make. As improvements are developed, new
"revisions" of such a design would be issued, but speed with this
could defeat the purpose. The lifetime of a "standard cell" design
should probably be at least a few years.
(It's been thought that this second goal involves "herding cats." No,
it doesn't. It is not necessary for all cats to agree, and, in fact,
anyone could develop and propose and offer such a design. It's just
that cooperation will probably be more efficient! It would remain voluntary!)
It becomes possible with experience. One of the big concerns about
CF is that occasionally, heat production has been enormous, cf.
Pons and Fleischmann's cell meltdown. However, if cell performance
becomes reliable, within a few percent, say, such an outlier
becomes quite unlikely. That meltdown cell was bulk palladium, a 1
cm cube. It would be interesting if someone, taking appropriate
precautions, were to run that again. The worry: that the meltdown
was at the low end of what might happen.... but it's unlikely.
We still cannot explain those events -- the cubic cathode, Mizuno's
unquenchable 100-gram cathode, Piantelli's molten rod, cathode 64
of Energetics -- on a causal or rational basis.
Right. Yet it's highly likely that the "cubic cathode" -- the P&F
cell meltdown -- did actually happen and that this was a nuclear
event. It could be investigated, but ... it's also very dangerous!
I've been suggesting scaling down, not scaling up. However, the goal
is scientific knowledge, and that knowledge may eventually be able to
explain those events, and then, rather obviously, it may be possible
to engineer much larger heat release and to scale up with reliable results.
What I imagine for the meltdown is that conditions happened to come
together to set up widespread and fairly sudden NAE formation, so a
lot of reactions happened in very short order, before the heat could
melt/vaporize the palladium, destroying the NAE. It remains hard to
understand. The reaction rate probably increases as the palladium
gets hotter, accelerating the heat release. But I would not expect 1
cm^3 of molten palladium to burn that hole into the concrete after
burning through the lab table (and those tables are usually designed
to handle significant heat, though not molten palladium). If a
continued reaction is possible in molten palladium, all bets are off.
I don't expect it at all, but that could explain the hole in the
concrete floor.
What I'd expect is that molten palladium would release the deuterium
immediately, which would burn, of course, maybe even explode (purely
from pressure, not from oxidation), if released quickly enough, but
that wouldn't release enough heat to do much more. Too little
available oxygen. This would not be an oxy-deuterium torch. Just a flame.
Yes. However, Science makes Engineering more efficient.
Let's discuss engineering later, please! We will indeed learn
something from Defkalion and Rossi about engineering.
We have to explore Science and Reliability first.
We don't yet know if we have anything to learn from them.