The other day I posted the message below, describing recent progress in AI.
An aspect of this may be instructive to cold fusion researchers.

This recent progress has various causes. One of the main ones is a dramatic
improvement in the neural network technique. (See
https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html and
many other articles.)

The neural network AI technique has been around for decades. It did not
work well in the past because the networks were shallow, with only one or
two layers. Nowadays programs use deep networks with many layers, where
each layer feeds its output to the next.
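The layered idea can be sketched in a few lines of Python. This is a toy
illustration of my own, not any particular historical program; all the
names and numbers in it are made up for the example:

```python
import random

def relu(x):
    # simple nonlinearity; without one, stacked layers collapse into one
    return max(0.0, x)

def layer(inputs, weights, biases):
    # one layer: a weighted sum per neuron, passed through the activation
    return [relu(sum(w * v for w, v in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, net):
    # the "deep" extension: each layer's output is the next layer's input
    for weights, biases in net:
        x = layer(x, weights, biases)
    return x

random.seed(0)
sizes = [4, 8, 8, 2]  # input -> two hidden layers -> output
net = [([[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)],
        [0.0] * n_out)
       for n_in, n_out in zip(sizes, sizes[1:])]

y = forward([0.5, -1.0, 2.0, 0.1], net)
print(len(y))  # 2
```

The point of the sketch is structural: going from one layer to a stack of
layers is a small change to the code, which is why in retrospect it looks
like a natural extension.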

Here is the lesson for cold fusion. There may be techniques in cold fusion
that have been abandoned which, with some modification, might work well.
For example, we assume that Pd-D cold fusion has no future because
palladium is so expensive. Perhaps this is not such a limitation. As I
pointed out in the past, thin film Pd is used in catalytic converters,
where it is exposed to a fairly large fraction of all of the heat produced
in the world. If there is enough Pd for this application, perhaps there
would be enough to produce a large fraction of world energy with similar
thin-film Pd.

Many techniques have been described in the literature that worked a few
times spectacularly, but most of the time they do not work. They are
irreproducible. The SuperWave technique once produced "Excess Power of up
to 34 watts; Average ~20 watts for 17 h"
(http://www.lenr-canr.org/acrobat/DardikIexcessheat.pdf). I have heard that
despite strenuous efforts, it has never done that at U. Missouri. Does that
mean the technique is flawed? Hopelessly irreproducible? Maybe. But perhaps
with a modification or extension it will work, just as the neural network
technique began to work when it was extended to multiple layers. Adding
layers to neural networks was not such a big change, conceptually. In
retrospect, it seems like a natural extension of the technique. It may be
how naturally occurring neural networks in the brain work. There might be
some analogous "natural" extension to the SuperWave technique that will
dramatically improve it.

Or there might be something about the earlier, more successful experiments
that has been overlooked, or forgotten. Neural network computing was
denigrated during the long period now called the AI winter, when the
research reached a nadir, around 1990. Techniques that have now been
demonstrated to work were dismissed at that time. Some were not given a
good enough chance. Others may have been ahead of their time, meaning they
could not work without today's massively larger hardware. Along similar
lines, I expect there are many new tools and technologies available now
that would benefit cold fusion, that were not available in the 1990s.

Along the same lines, a technique or a material that cannot work at one
stage in the development of a technology might suddenly come into its own a
short while later. Transistors began with germanium. Silicon would not have
worked at first, because of various limitations. Silicon began to work in
1954 and rapidly replaced germanium.

In aviation, people assume that the propeller is old technology that has
been superseded. That is not true. Modern fan-jet engines incorporate
propellers. Propellers were used for a while, and then put aside, and then
used again. It is a complicated history that I described briefly on p. 2
here:

http://lenr-canr.org/acrobat/RothwellJtransistora.pdf

Quoting an aviation historian:

". . . the commercial development of the turbine passed through some
paradoxical stages before arriving at the present big jet era. Contrary to
one standard illusion, modern technology does not advance with breathtaking
speed along a predictable linear track. Progress goes hesitantly much of
the time, sometimes encountering long fallow periods and often doubling
back unpredictably upon its path."


---------- Forwarded message ----------

Progress in AI seems to be accelerating, according to a paper in *Nature*
from the AI people at Google. See:

http://www.slate.com/blogs/future_tense/2017/10/18/google_s_ai_made_some_pretty_huge_leaps_this_week.html

They developed a new version of their Go-playing program, called AlphaGo
Zero. Features:

Self-training. No use of existing datasets.

Efficient. It uses only 4 processors. The previous version used 48.

Effective. This one beat the old program 100 games to zero. (The old
program beat the world's best Go player last year.)

Quote:

"This version had taught itself how to play the game. All on its own, given
only the basic rules of the game. (The original, by comparison, learned
from a database of 100,000 Go games.) According to Google’s researchers,
AlphaGo Zero has achieved superhuman-level performance: It won 100–0
against its champion predecessor, AlphaGo."

The same technology is being used to develop software modules. In some
cases they work better than human-written modules. Quote:

". . . [R]esearchers announced that Google’s project AutoML had
successfully taught itself to program machine learning software on its own.
While it’s limited to basic programming tasks, the code AutoML created was,
in some cases, better than the code written by its human counterparts. In a
program designed to identify objects in a picture, the AI-created algorithm
achieved a 43 percent success rate at the task. The human-developed code,
by comparison, only scored 39 percent on the task."
