[Vo]:ICCF24 proceedings uploaded

2024-05-01 Thread Jed Rothwell
See:

JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE

Experiments and Methods in Cold Fusion

Proceedings of ICCF24 Solid State Energy Summit, Mountain View, California,
July 25–28, 2022

VOLUME 38, May 2024

https://lenr-canr.org/acrobat/BiberianJPjcondensedzk.pdf




I have not added this title or individual papers to the index. That will
take a while. I figured readers here will want to see this now.

- Jed


[Vo]:85 papers uploaded to LENR-CANR.org

2024-03-13 Thread Jed Rothwell
I uploaded 85 papers. The latest papers are shown here, but there are so
many that this list is unhelpful:

https://lenr-canr.org/wordpress/?page_id=3009

I have appended another list below, which includes most of the new papers.


I uploaded 4 papers by W. B. Clarke. These cast doubt on results from
Arata and Case. I recall that he and Mike McKubre did not get
along, although they co-authored some papers.

900. Clarke, W.B., *Search for 3He and 4He in Arata-Style Palladium
Cathodes I: A Negative Result.* Fusion Science and Technology, 2001. *40* ACC

http://lenr-canr.org/acrobat/ClarkeWBsearchforh.pdf

901. Clarke, W.B., *Search for 3He and 4He in Arata-Style Palladium
Cathodes II: Evidence for Tritium Production.* Fusion Science and
Technology, 2001 ACC

http://lenr-canr.org/acrobat/ClarkeWBsearchforha.pdf

903. Clarke, W.B., *Production of 4He in D2-Loaded Palladium-Carbon
Catalyst I.* Fusion Science and Technology, 2003. *43*(1): p. 122-127 ACC

http://lenr-canr.org/acrobat/ClarkeWBriproduction.pdf

904. Clarke, W.B., S.J. Bos, and B.M. Oliver, *Production of 4He in
D2-Loaded Palladium-Carbon Catalyst II.* Fusion Science and Technology,
2003. *43*(2): p. 250-255 ACC

http://lenr-canr.org/acrobat/ClarkeWBriproductiona.pdf

Clarke had some unkind things to say about Arata. So did Ed and I:

https://lenr-canr.org/acrobat/RothwellJreportonar.pdf

Arata was a genius. I think his palladium powder approach was fruitful. It
was groundbreaking. But I think he was a little sloppy as an
experimentalist. I am no experimentalist at all, so I hesitate to say that.


Here are most of the papers I added today:

New to the database and newly uploaded

New: Ambadkar, A., electrolysis of d2o with a palladium cathode compared
with electrolysis of h2o with a platinum electrode: procedure and
experimental details
New: Bockris, J., do nuclear reactions take place under chemical stimulation?
New: Bockris, J. O'M., speculative interpretation of overunity experiments
involving water electrolysis
New: Bush, B. F., data for 4he measurement
New: Bush, B. F., comments on "search for 3he and 4he in arata-style
palladium cathodes i: a negative result" and "search for 3he and 4he in
arata-style palladium cathodes ii: evidence for tritium production"
New: Cirillo, D., experimental evidence of a neutron flux generation in a
plasma discharge electrolytic cell
New: De Ninno, A., consequences of lattice expansive strain gradients on
hydrogen loading in palladium
New: Drebushchak, V. A., excess heat release during deuterium
sorption-desorption by finely powdered palladium deuteride
New: Dubinko, V. I., on the role of disorder in catalysis driven by
discrete breathers
New: Fralick, G. C., transmutations observed from pressure cycling
palladium silver metals with deuterium gas
New: Holmlid, L., heat generation above break-even from laser-induced
fusion in ultra-dense deuterium
New: Karabut, A., possible nuclear reactions mechanisms at glow discharge
in deuterium
New: McKubre, M. C. H., conditions for the observation of excess power in
the d/pd system
New: McKubre, M. C. H., electrochemistry and calorimetry in a packed-bed
flow-through electrochemical cell
New: Mizuno, T., excess heat evolution and analysis of elements for solid
state electrolyte in deuterium atmosphere during applied electric field
New: Stepanov, I. N., experimental measurement of excess thermal energy
released from a cell loaded with a mixture of nickel powder and lithium
aluminum hydride
New: Ohmori, T., enrichment of 41k isotope in potassium formed on and in a
rhenium electrode during plasma electrolysis in k2co3/h2o and k2co3/d2o
solutions
New: Srinivasan, M., excess heat and tritium measurements in ni-h2o
electrolytic cells
New: Vysotskii, V. I., the formation of correlated states and optimization
of the tunnel effect for low-energy particles under nonmonochromatic and
pulsed action on a potential barrier
New: Yamada, H., carbon production on palladium point electrode with
neutron burst under dc glow discharge in pressurized deuterium gas


Previously listed in database but not uploaded

On file: Adachi, G., (3)He and (4)He from D2 absorbed in LaNi5
On file: Alguero, M., An interpretation of some postelectrolysis nuclear
effects in deuterated titanium
On file: Asami, N., Material characteristics and behavior of highly
deuterated loaded palladium by electrolysis
On file: Battaglia, A., Neutron emission in Ni-H systems
On file: Belzner, A., Two fast mixed-conductor systems: deuterium and
hydrogen in palladium - thermal measurements and experimental considerations
On file: Bertalot, L., Study of deuterium charging in palladium by the
electrolysis of heavy water: heat excess production
On file: Bhadkamkar, A., Electron Charge Cluster Sparking in Aqueous
Solutions
On file: Bockris, J., Nuclear Transmutation: The reality of cold fusion
(Book Review)
On file: Bush, R. T., Electrolytically Stimulated Cold Nuclear Synthesis of
Strontium from Rubidium
On file: Celani, F., Further 

Re: [Vo]:OT: AI generated video is progressing rapidly

2024-02-22 Thread Jed Rothwell
I asked the Bot about F's original paper:
https://lenr-canr.org/acrobat/Fleischmanelectroche.pdf

My questions were mainly about these two paragraphs:

(a) Excess enthalpy generation is markedly dependent on the applied current
density (i.e. magnitude of the shift in
the chemical potential) and is proportional to the volume of the
electrodes, i.e. we are dealing with a phenomenon in
the bulk of the Pd electrodes.

(b) Enthalpy generation can exceed 10 W cm-3 of the palladium electrode;
this is maintained for experiment times
in excess of 120 h, during which typically heat in excess of 4 MJ cm-3 of
electrode volume was liberated. It is
inconceivable that this could be due to anything but nuclear processes.

*Overview* (auto-generated)
The document is about a study on electrochemically induced nuclear fusion
of deuterium.

*What is the H/D separation factor for absorbed hydrogen and
deuterium?* (auto-generated
question)
The document mentions that the H/D separation factor for absorbed hydrogen
and deuterium is of special interest and is explained in Figures 4 and 6 of
reference 2. However, the specific value of the H/D separation factor is
not provided in the document.


[The following are my questions, not auto-generated]
*What is the energy density of the reaction?*
The document does not explicitly state the energy density of the reaction.


*Is the reaction in proportion to surface area or volume?*
Based on the information provided in the document, it is not explicitly
stated whether the reaction is in proportion to surface area or volume.
However, we can make some inferences based on the data provided in Table 1.

Table 1 shows the generation of excess enthalpy in Pd cathodes as a
function of current density and electrode size. The "Excess specific rate
of heating" column provides the rate of excess heating per unit volume (W
cm-3).

From the data in Table 1, we can observe that the "Excess specific rate of
heating" increases with increasing electrode size (dimensions) and current
density. This suggests that the reaction is likely dependent on the volume
of the electrode rather than just the surface area.

Additionally, the document mentions that the effects of enthalpy generation
are proportional to the volume of the electrodes and that the experiments
are close to the break-even point. This further supports the idea that the
reaction is dependent on the volume of the electrode.

Therefore, based on the information provided, it can be inferred that the
reaction is likely in proportion to the volume of the electrode.


*Could the enthalpy generation be caused by a chemical process?*

No, the enthalpy generation observed in the experiments is not likely to be
caused by a chemical process. The document mentions that the magnitude of
the effects observed in the experiments is too large to be explained by
anything other than nuclear processes. The excess enthalpy generation is
proportional to the volume of the electrodes and can exceed 10 W cm-3 of
the palladium electrode. This level of enthalpy release is inconceivable to
be due to a chemical process. Additionally, the document states that the
effects observed in the experiments are consistent with nuclear processes
and that reactions (v) and (vi) (2D + 2D fusion reactions) are only a small
part of the overall reaction scheme. Therefore, it is highly unlikely that
the enthalpy generation is caused by a chemical process.


Re: [Vo]:OT: AI generated video is progressing rapidly

2024-02-22 Thread Jed Rothwell
Document processing AI is also making progress. Adobe Acrobat just added a
new feature, the AI Assistant. It works with documents in native Acrobat
format, not a scanned document converted to Acrobat. I tried it with my
book, but it said, "the feature only supports files with less than 120
pages." So I tried it with my paper, "How to Fix Global Warming."
https://lenr-canr.org/acrobat/RothwellJhowtofixgl.pdf

Here are some of the responses from the AI:


*Overview* (generates automatically when you turn on the AI)

The document discusses how cold fusion technology can help mitigate global
warming by eliminating carbon emissions and enhancing carbon sequestration
through reforestation projects.


*Provide a list of the 5 most important points* (one of the many
auto-generated questions that the AI suggests)

   1. Cold fusion does not produce carbon dioxide, so if it replaces other
   sources of energy, carbon emissions would stop. [1]
   2. Cold fusion can enhance the removal of carbon from the atmosphere by
   reforestation. [2] It can increase the amount of carbon captured by a
   factor of ten and reduce the cost of the project significantly.
   3. Cold fusion technology can be used to stop the natural boom and bust
   cycle of forest carbon by burying charcoal made from deadwood underground,
   speeding up the formation of coal.
   4. Cold fusion devices do not produce dangerous radiation and can safely
   contain tritium.

[The footnotes in square brackets jump to the place in the document where
this assertion appears.]


*What are the two technologies used to expand forests in the carbon
sequestration project?* (Another auto-generated question)

The two technologies used to expand forests in the carbon sequestration
project are indoor farms and "cultured meat" grown from animal cells.


*What is the power density by volume of a cold fusion cathode?* (Auto
generated question)
The power density by volume of a cold fusion cathode is 300 W/cm3. This
information can be found in Table 1 of the document, which compares the
power density of cold fusion cathodes from the best experiments to fission
reactor fuel pellets. The table states that the volume of a cold fusion
cathode is 0.3 cm3 and its power density by volume is 300 W/cm3.


*What did the author in the New York Times have to say about forest
sequestation [spelled wrong!]?* (My question. I deliberately did not name
the author. But I spelled sequestration wrong by accident.)

The author in the New York Times wrote an op-ed titled "Let’s Not Pretend
Planting Trees Is a Permanent Climate Solution." The author argues that
while trees can remove carbon dioxide from the atmosphere, relying solely
on tree planting to offset emissions is not a permanent solution. The
author suggests that using trees to offset emissions can be seen as hitting
the "climate snooze button," as the carbon in trees eventually returns to
the atmosphere through forest fires or decomposition. The author emphasizes
the need for more permanent solutions to address climate change.


[Correct]


*What was the name of that author?* (My question. Too ambiguous)

The name of the author mentioned in the document is Jed Rothwell.


*What was the name of the author of the New York Times article?* (My
question.)

The name of the author of the New York Times article is not mentioned in
the document.


[The author is listed in footnote 9. I guess it is asking too much for the
AI to go from the text describing "Let's Not Pretend . . ." to the footnote
listing the author name.]


[This works much better than the AI that I installed at LENR-CANR.org,
which, alas, I removed. I suppose it works better because: 1. It is only
running on one document; 2. Adobe tweaked the AI bot to work well with
academic documents, with abstracts, tables and whatnot. In one response it
said: "Please double-check my response carefully. Like many AI models, I'm
still learning how to understand tables in documents."]


Re: [Vo]:Nissan Leaf

2024-01-31 Thread Jed Rothwell
Robin  wrote:

Does anyone know if the original version of the Nissan Leaf, released in
> 2010, had bi-directional charging capability?
>

I do not think so.

I had an older Leaf, which I gave to my daughter. I do not recall anything
like that.


[Vo]:New paper from B-J. Huang et al.

2024-01-02 Thread Jed Rothwell
Huang, B.-J., et al., *Water can trigger nuclear reaction to produce energy
and isotope gases.* Scientific Reports, 2024. *14*(1): p. 214.

https://www.nature.com/articles/s41598-023-50824-8.epdf


[Vo]:Papers about the controversy

2023-12-12 Thread Jed Rothwell
Someone suggested I upload papers about the controversies in cold fusion,
and papers by skeptics. So I uploaded some papers about this. I mentioned
these two already:

*Editorials from the early history of cold fusion*, in *New York Times and
others*. 1989. (As I mentioned before)
http://lenr-canr.org/acrobat/editorials.pdf

APS, *Information from the APS meeting in Baltimore, May 1-2, 1989*. 1989.
http://lenr-canr.org/acrobat/APSinformatio.pdf

I added --

Two papers that were published together in New Scientist magazine:
Close, F., *Cold Fusion I: The Discovery That Never Was.* New Scientist,
1991. *1752*: p. 46.
Bockris, J., *Cold fusion II: The Story Continues.* New Scientist, 1991.
*1752*: p. 50.
http://lenr-canr.org/acrobat/CloseFwhateverha.pdf


Hoffman, N.J., *Book Review BAD SCIENCE The Short Life and Weird Times of
Cold Fusion.* Fusion Technol., 1994. *25*: p. 225.
http://lenr-canr.org/acrobat/HoffmanNJbookreview.pdf


Five papers about the controversy between Jones and Miles, which are
together in one file:

http://lenr-canr.org/acrobat/JonesSEexaminatio.pdf
The essence of the dispute was summarized by Miles in his second response:

"My journal publications criticized by Jones and Hansen report only
experimental results; hence, theoretical arguments are not germane to this
debate. In science, theory guides but experiments decide."


Miles said some unkind things about Jones, such as:

"More serious errors by S. E. Jones et al. are found in their presentation
of the electrochemical aspects of the cell operation. In particular, they
stated that the exchange current density depends on the electrode surface
area. The exchange current density always has dimensions of A/m2 or similar
units; hence, it cannot depend on the electrode surface area. Furthermore,
there is no such thing as an exchange current density for their reaction 4
in ref 8. This cell reaction consists of the oxidation of hydrogen at the
anode and the reduction of oxygen at the  cathode; hence, there are two
distinctly different exchange current densities associated with the cell
reaction."


(Jones said that current density depends on the surface area. That is a
geometry error. Current density is current per unit surface area, so by
definition it does not depend on the area. It is as if Jones said that mass
density varies with volume. Or that joules per gram of fuel depend on how
many grams you have.)
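The geometry point can be illustrated with trivial arithmetic. A small sketch (the numbers are illustrative, not from the papers in the dispute):

```python
def current_density(current_amps, area_cm2):
    # Current density has units of A/cm^2: total current divided by
    # electrode surface area.
    return current_amps / area_cm2

# A larger electrode carries proportionally more current at the same
# density, so the density itself does not change with electrode size.
small = current_density(0.064, 0.1)  # 64 mA over 0.1 cm^2
large = current_density(0.64, 1.0)   # 10x the area, 10x the current
# Both give 0.64 A/cm^2: density is independent of electrode size.
```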


Skeptics wrote many papers attempting to show theoretical reasons why cold
fusion is wrong. But there are only a few papers by skeptics that attempt
to find actual experimental errors. For a while I thought that Morrison
versus Fleischmann was the only one that was formally published in a
journal:

https://lenr-canr.org/acrobat/Fleischmanreplytothe.pdf

I knew there was a dispute between Jones and Miles. I have seen letters
between them. But I forgot that they both published journal papers about
this. So I should say that both Morrison and Jones attempted to find errors.

So did Shanahan. His objection is rather strange, and reads more like a
theoretical objection than an experimental one. As I recall, he does not
point to any specific experiment that has the problems he describes.


As always the most recent uploads are listed here:

https://lenr-canr.org/wordpress/?page_id=3009


[Vo]:Information from the APS meeting in Baltimore, May 1-2, 1989

2023-12-08 Thread Jed Rothwell
I uploaded this infuriating collection of documents:


APS, *Information from the APS meeting in Baltimore, May 1-2, 1989*. 1989.


https://lenr-canr.org/acrobat/APSinformatio.pdf


Re: [Vo]:ICCF8 proceedings uploaded

2023-12-06 Thread Jed Rothwell
Robin  wrote:


> Do you have any personal highlights?
>

When they held this conference in 2000, the conference organizers told me I
had to ask individual authors to send me papers. Some authors were anxious
to have their work at LENR-CANR.org. Others did not want their work
uploaded. I ended up with several papers, listed here:

https://lenr-canr.org/wordpress/?page_id=2130#ICCF8

These include several that I liked, so I bugged the authors to send them.
Such as McKubre.


Miles added a postscript to this one in 2018:

https://lenr-canr.org/acrobat/MilesMcasestudie.pdf

I added a postscript taken from other papers by Fleischmann, explaining his
complicated heat transfer coefficient notation. I put this in my own paper
as well:

https://lenr-canr.org/acrobat/RothwellJreviewofth.pdf

You need a cheat sheet for Fleischmann's equations.


[Vo]:ICCF8 proceedings uploaded

2023-12-06 Thread Jed Rothwell
Proceedings uploaded:

Scaramuzzi, F., ed. *ICCF8 Proceedings of the 8th International Conference
on Cold Fusion*. Vol. 70. 2000, Italian Physical Society, Bologna, Italy:
Lerici (La Spezia), Italy.

https://lenr-canr.org/acrobat/Scaramuzziiccfprocee.pdf

This is a large document. It may take a while to download.


Re: [Vo]:Video: Making activated palladium with Dr. Edmund Storms

2023-11-27 Thread Jed Rothwell
Robin  wrote:


> A few comments:-
>
> 1) I seem to recall someone else having used Calcium Oxide before.
>

Dufour in transmutation studies.

Iwamura also in transmutation studies.


Note that Ed explains the role of the inert calcium oxide particles here:

https://lenr-canr.org/acrobat/StormsEthenatureoc.pdf (starting on pages 4
and 5)


[Vo]:Video: Making activated palladium with Dr. Edmund Storms

2023-11-27 Thread Jed Rothwell
Wonderful!!

See:

https://www.youtube.com/watch?v=zjtPZR55r30


[Vo]:Claytor paper presented at NSF/EPRI Workshop in 1989

2023-11-27 Thread Jed Rothwell
I uploaded an early paper by Claytor:

Claytor, T.N., et al. *Tritium and neutron measurements of a solid state
cell*. in *NSF/EPRI Workshop on Anomalous Effects in Deuterated Materials*.
1989. Washington, DC.

https://lenr-canr.org/acrobat/ClaytorTNtritiumand.pdf

Abstract


A solid state "cold fusion" cell was constructed to test for
nonequilibrium D-D fusion in a solid. The stimulus for the design was the
hypothesis that the electrochemical surface layer in the Pons-Fleischmann
cell could be replaced with a metal-insulator-semiconductor (MIS) barrier.
Cells were constructed of alternating layers of palladium and silicon
powders pressed into a ceramic form and exposed to deuterium gas at 110
psia, resulting in a D/Pd ratio of 0.7. Pulses of current were passed
through the cells to populate nonequilibrium states at the MIS barriers.
One cell showed neutron activity and had a large amount of tritium. Other
cells have produced tritium at a low rate consistent with neutron emission
at or below the threshold of observability. The branching ratio for n/p was
about 3 x 10^-9 in all the experiments where a substantial amount of
tritium has been found.


One of the cells produced a substantial amount of tritium:

. . . [T]ritium analysis showed that cell 2 had 1300 times the fill gas
concentration of tritium, amounting to 3.5 x 10^15 atoms of tritium. This
level, although substantially above background, is equivalent to only 65
ppb.


The NSF/EPRI Workshop is described here:

http://lenr-canr.org/acrobat/EPRInsfepriwor.pdf

These experiments are also described here:

https://lenr-canr.org/acrobat/ClaytorTNtritiumgen.pdf

Several other experiments produced large amounts of tritium, such as
Bockris, Storms and Will. See:

https://lenr-canr.org/acrobat/WillFGtritiumgen.pdf

Skeptics ignore the tritium because it is compelling proof that cold fusion
is a nuclear reaction. They pretend that heat is not compelling, even
though it exceeds the limits of chemistry thousands of times over. They
want to claim that cold fusion does not produce clear evidence of a nuclear
reaction, even though anyone can see that it does. They mean it does not
produce the evidence *they want to see.* They are looking for proof that
cold fusion is actually plasma fusion, and it produces a deadly flux of
neutrons and no significant heat. They want that because it fits
their theories and -- more importantly -- because it means cold fusion has
no practical use, and does not threaten plasma fusion funding. Messinger
correctly described the infuriating, know-nothing attitude of the skeptics
at ARPA-E and elsewhere:

The hypothesis is that excess heat is caused by the release of nuclear
binding energy through low-energy nuclear reactions. But, as I have written
before, and ARPA-E stressed in their funding opportunity announcement, such
kind of evidence for LENR is insufficient due to the ambiguous nature of
heat . . .



I have uploaded a number of new papers lately:

https://lenr-canr.org/wordpress/?page_id=3009


Re: [Vo]:different temperatures

2023-11-17 Thread Jed Rothwell
Robin  wrote:


> I have an electric heater that can be controlled to within 1/10 of a
> degree centigrade . . .


That is remarkable. That is a laboratory grade thermostat.



> The only explanation I can think of is that the house is well insulated
> and has a long time constant, so that early in
> the morning the walls are still warm from the previous afternoon, while
> the air in the room is cool, thanks to contact
> with the cooler glass window, resulting in the thermostat registering a
> low temperature . . .


That is interesting. Put a thermometer near the windows. Get an IR camera!
(Borrow one . . . they are expensive.)


[Vo]:Slides from Robert Duncan

2023-10-24 Thread Jed Rothwell
U.S. DoE

Advanced Materials Characterization and Nuclear Product Detection for LENR
Robert V. Duncan, Ph.D. President’s Distinguished Chair in Physics and
Professor of Physics Texas Tech University Washington, DC
September 8, 2023

https://arpa-e.energy.gov/sites/default/files/2023-09/05_Duncan_Performer.pdf


[Vo]:ICCF25 book of abstracts and Infinite Energy reports

2023-08-28 Thread Jed Rothwell
Book of Abstracts and program:

https://iccf25.com/conf-data/iccf-25/files/ICCF25-book-of-abstracts_final.pdf

Infinite Energy reports on conference:

https://infinite-energy.com/resources/iccf25.html


Re: [Vo]:Anthropocene Institute press release and cold fusion Exploration Grants

2023-08-18 Thread Jed Rothwell
Jürg Wyttenbach  wrote:


> > Otherwise there is no point. If it cannot be replicated, it is not
> > science. If the researcher wants to cash in on the discovery, that is
> > fine. He or she needs to file for a patent before publishing the paper.
>
> May be you see the point. With 3 months reports you in average write 2
> weeks patents...
>

I do not understand what you mean here.

If a 3-month detailed report would interfere with writing a patent, the
researcher can submit an abbreviated report saying something like: "A
patent is now being written. When it is filed, in approximately two months,
a more detailed report will be sent." That should satisfy everyone. The
patent application should be written promptly.

If the researcher cannot do things this way, he should seek money from a
venture capitalist instead of a philanthropist.


Re: [Vo]:Anthropocene Institute press release and cold fusion Exploration Grants

2023-08-18 Thread Jed Rothwell
Jürg Wyttenbach  wrote:

> Jed, I do not object reporting, but these blood suckers like to have
> detailed reports...This would be OK for 10x more money...
>
Everything must be published in enough detail to replicate the experiment.
Otherwise there is no point. If it cannot be replicated, it is not science.
If the researcher wants to cash in on the discovery, that is fine. He or
she needs to file for a patent before publishing the paper.

To replicate, you usually need a very detailed report. Unless the
experiment is a "me too" replication of something already described in the
literature. There is not much point to that.


Re: [Vo]:Anthropocene Institute press release and cold fusion Exploration Grants

2023-08-18 Thread Jed Rothwell
Jürg Wyttenbach  wrote:

> Not a single experienced researcher will spend more than a few seconds to
> read such outraging nonsense as writing progress reports every 3 months for
> e.g. 25k $ funding is just a bad joke...
>
I have given several researchers funding, with no strings attached. I
definitely want a progress report every 3 months. It is not reasonable to
take someone's money and then not tell them what you are doing with it. A
short but substantive report every three months is entirely reasonable.

==> Get maximal information for a minimum of money.
>
When I contribute money to an experiment with no strings attached (no
intellectual property for me), then I stipulate the results be published in
full, on a timely basis. The researcher would be free to file for a patent
before publishing, but it must be published. No philanthropist wants to pay
for research that remains secret. There is no point. So not only would I
get "maximal information" but so would anyone else in the world. This is
entirely reasonable.

Do you expect people to give money unconditionally? What would be the point
of funding research with no progress reports that will remain secret? That
is not science. You need to look at this from the point of view of the
person giving the money.


[Vo]:Anthropocene Institute press release and cold fusion Exploration Grants

2023-08-17 Thread Jed Rothwell
See:

Anthropocene Institute Advances Solid-State Fusion Energy at ICCF-25

https://www.businesswire.com/news/home/20230817380396/en/
Anthropocene-Institute-Advances-Solid-State-Fusion-Energy-at-ICCF-25


Exploration Grants

The Anthropocene Institute is connecting funding sources with researchers
and scientists exploring the area of Solid-State Fusion (SSF), which we
define as a nuclear reaction in the solid phase of matter, releasing heat
that is in excess of input energy. . . .

https://solidstatefusion.org/grants/


Re: [Vo]:LENR-CANR.org downloads increased by ~14,000

2023-08-02 Thread Jed Rothwell
Robin  wrote:


> >I exclude robot readers after identifying them by various methods.
>
> Why would you exclude them? Surely allowing access would ensure that
> people doing searches would be more likely to find
> the site?
>

Perhaps I should make it clear I am not actually excluding anyone. That is,
blocking anyone. I just mean I am not counting their downloads in the
totals I show at LENR-CANR.org.

When you see a website or some PR person describing how many "visits" or
"downloads" there are to a website, you should bear in mind those numbers
are an approximation. The number of visits and "unique visitors" in
particular varies a great deal depending on what you define as a visit. Log
file records are poorly defined and the format is obsolete, so it is
difficult to sort out what is actually happening at a website.

Utility programs such as Weblog Expert and Awstats estimate vastly
different numbers of "visitors." I think I have seen them vary by a factor
of 4.
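One reason the utilities disagree so widely is that a "visit" depends on an arbitrary session timeout: how long a gap between hits from the same address starts a new visit. A toy sketch (hypothetical hit times, not real log data) shows how the count swings with the cutoff:

```python
# Sketch of how "visit" counts depend on the chosen session timeout.
# A new visit starts whenever the gap between consecutive hits from
# the same IP exceeds the timeout. Hit times are in minutes.

def count_visits(hits_by_ip, timeout_minutes):
    visits = 0
    for times in hits_by_ip.values():
        times = sorted(times)
        visits += 1  # the first hit from an IP always opens a visit
        for prev, cur in zip(times, times[1:]):
            if cur - prev > timeout_minutes:
                visits += 1
    return visits

hits = {"10.0.0.1": [0, 5, 90, 95], "10.0.0.2": [0, 40, 80]}
print(count_visits(hits, 30))   # shorter timeout, more "visits": 5
print(count_visits(hits, 120))  # longer timeout, fewer "visits": 2
```

The same log file yields 5 or 2 "visits" depending on the definition, which is why two analysis programs can disagree by large factors.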


Re: [Vo]:LENR-CANR.org downloads increased by ~14,000

2023-08-02 Thread Jed Rothwell
Okay, I found 4,697 records associated with an AI project. I filtered those
out, bringing the July total down to 23,151. It is still substantially more
than the previous month.

https://lenr-canr.org/wordpress/?page_id=1213


Re: [Vo]:The Fate of Dr. Ning Li

2023-08-01 Thread Jed Rothwell
That is very interesting! And sad. I wonder how much truth there is to
reports of antigravity? Perhaps we will never know . . .


Re: [Vo]:LENR-CANR.org downloads increased by ~14,000

2023-08-01 Thread Jed Rothwell
Robin  wrote:


> >I exclude robot readers after identifying them by various methods.
>
> Why would you exclude them? Surely allowing access would ensure that
> people doing searches would be more likely to find
> the site?
>

I do not want to include them because that would exaggerate the number of
readers. The Google-bot and others come by often, and they read the same
files again and again. Programs such as Weblog Expert and Awstats also try
to remove them. Awstats groups them under "Robots/Spiders visitors."

I wrote my own log analysis programs. I use these other programs from time
to time, to compare the results.


[Vo]:LENR-CANR.org downloads increased by ~14,000

2023-08-01 Thread Jed Rothwell
Since Jan. 2021, the average number of downloads at LENR-CANR.org per month
has been 9,085. It has been trending up recently. In July 2023 it suddenly
increased to 27,848, a level it has not reached since Jan. 2017. See:

https://lenr-canr.org/wordpress/?page_id=1213

https://docs.google.com/spreadsheets/d/1CwBlnYM7IFRjYCQcJ0OGjt-vRbbzqvm6T2Uk4YYsii0/edit#gid=0

I do not know what is causing this. I can sometimes identify the source of
an increase, such as a mass media article. But not this time. There
has also been an increase in the number of people viewing the YouTube video.

Most of the increase comes from readers who download all of the papers.
They are from many parts of the world. I checked carefully, and I confirmed
these are not robot readers from Google or some other web indexing robot.
They are humans from various ISPs. I checked many of the ISPs. They include
things like GOOGL-2, which is a cloud server for people and corporations,
definitely not for a Google robot. Google publishes a list of their robot
ISPs.

Some of the human readers probably used programs such as Web Reaper, which
downloads files en masse. You can tell they are not robot readers.

I exclude robot readers after identifying them by various methods. The
easiest method is to look at the records. They often include text
identifying the reader as a robot, with names such as:

www.google.com/bot.html
help.yahoo.com/help/us/ysearch/slurp
search.msn.com/msnbot.htm
www.alltheweb.com/help/webmaster/crawler

I have identified 131 of these.
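The pattern-matching method described above can be sketched as a small filter. This is a minimal illustration, assuming Combined Log Format lines with the user-agent string at the end; the signature list is the one quoted above, and the function names are mine, not the actual programs used at LENR-CANR.org:

```python
# Sketch of excluding robot hits from a web server log by matching
# the self-identifying URLs that crawlers put in their user-agent
# strings. Signatures are from the examples above; in practice the
# full list is much longer (131 identified so far).

ROBOT_SIGNATURES = [
    "www.google.com/bot.html",
    "help.yahoo.com/help/us/ysearch/slurp",
    "search.msn.com/msnbot.htm",
    "www.alltheweb.com/help/webmaster/crawler",
]

def is_robot(log_line):
    """True if the log line contains a known robot signature."""
    return any(sig in log_line for sig in ROBOT_SIGNATURES)

def count_human_downloads(lines):
    """Count .pdf downloads, excluding identified robots."""
    return sum(1 for line in lines if ".pdf" in line and not is_robot(line))
```

This only catches robots that identify themselves; as noted above, cloud-hosted downloaders and mass-download tools like Web Reaper have to be identified by other methods, such as checking the ISP.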


[Vo]:ICCF25 Guidelines for Participants

2023-07-26 Thread Jed Rothwell
How to participate virtually:

https://iccf25.com/conf-data/iccf-25/files/GUIDELINES%20FOR%20PARTICIPANTS%20virtual.pdf

Other guidelines for presentations and poster sessions:

https://iccf25.com/downloads


Re: [Vo]:Bill Collis dead

2023-07-25 Thread Jed Rothwell
See:

https://www.infinite-energy.com/resources/william-collis.html


[Vo]:Bill Collis dead

2023-07-21 Thread Jed Rothwell
I regret to announce that Bill Collis died. See a heartfelt tribute here:

https://www.lenr-forum.com/forum/thread/7003-in-memory-of-dr-william-bill-collis-scientist-and-diplomat/


Re: [Vo]:EVOs, Hutchison, and ancient megalithic tech

2023-07-12 Thread Jed Rothwell
MSF  wrote:

> This is one of my favorite subjects. Not Hutchison, but speculation about
> how the ancients were able to cut and transport those huge blocks of stone.


Conventional techniques, I believe. Long ago I saw a video with a large
group of enthusiasts in England. They had a gigantic concrete block the
size of a stonehenge stone. They hauled it a good distance using cut logs
as rollers. Then they dug a hole and erected it. I don't recall if they put
another stone across the top. I don't think so. That would have cost a lot.
But they demonstrated various techniques that would have accomplished that.

They used wood, ropes, and other manual equipment. They wanted to raise a
large mound of dirt at one point, but safety standards forbade that, so they
used modern scaffolding in the shape of a dirt mound. They had to
experiment with various ways to use human power, and wooden levers, and
they had to consult with engineers to keep from crushing someone by
accident. So it was not a pure recreation of 5,000 year old techniques. But
it demonstrated how those techniques might have worked.

I do not know how people cut stones 5,000 years ago, but they did, in
England, Central America and elsewhere.


Re: [Vo]:No Originality

2023-07-12 Thread Jed Rothwell
Terry Blanton  wrote:


> See Wolfram's book
> I think you might like this book – "What Is ChatGPT Doing ... and Why Does
> It Work?" by Stephen Wolfram.
>

Wolfram is a smart cookie. This is a good book. Much of it is over my head. I
will read it again from the beginning. Perhaps I will understand more. I
wish it had more examples.

One of the interesting points he makes is that LLM AI works much better
than he anticipated, or that other experts anticipated. And there is no
solid theoretical basis for knowing why it works so well. It comes as a
surprise.

When he says it "works," he explains that this means it produces remarkably
good, relevant answers. And he means the grammar and syntax of the
responses are very similar to human speech. He does not mean the LLM is
actually thinking, in the human sense.

As I said, I think LLMs do have a form of intelligence. Not like human
intelligence. It somewhat resembles the collective intelligence of bees.
They are not capable of creativity, although they do respond to stimuli and
changes in the environment. They have a hard-wired repertoire of responses.
They build their nests to fit in a given space, and they take actions such
as ventilating a nest on a hot day.

I do not think the LLM AI model will ever approach human intelligence, or
general intelligence, but other AI models may do this. Perhaps there will
be a hybrid AI model, incorporating LLM to generate text, with a more
logical AI model controlling the LLM. I think Wolfram thinks he can provide
something like that already. See:

https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/

I expect these other AI models will also use artificial neural networks
(ANN). So the effort -- the dollars! -- pouring into ANN may contribute to
higher level AI, and ultimately, to actual, human-like intelligence. Or
even super intelligence. Which many people fear.


Re: [Vo]:No Originality

2023-07-10 Thread Jed Rothwell
Quoting the article:

> The trio [of authors] say leaked information shows that their books were
> used to develop the so-called large language models that underpin AI
> chatbots.


> The plaintiffs say that summaries of their work produced by OpenAI’s
> ChatGPT prove that it was trained on their content.


I doubt that information was "leaked." It is common knowledge. How else
could the ChatBot summarize their work? I doubt they can win this lawsuit.
If I, as a human, were to read their published material and then summarize
it, no one would accuse me of plagiarism. That would be absurd.

If the ChatBots produced the exact same material as Silverman and then
claimed it is original, that would be plagiarism. I do not think a ChatBot
would do that. I do not even think it is capable of doing that. I wish it
could do that. I have been trying to make the LENR-CANR.org ChatBot
produce more-or-less verbatim summaries of papers, using the authors' own
terminology. It cannot do that because of the way the data is tokenized. It
does not store the exact words, and it is not capable of going back to read
them. That is what I determined by testing it in various ways, and that is
what the AI vendor and ChatBot itself told me.


[Vo]:ICCF25 abstracts

2023-07-08 Thread Jed Rothwell
Steve Krivit compiled the abstracts into a single document here:

https://newenergytimes.com/v2/conferences/2023/ICCF25/ICCF-25-Book-of-Abstracts-2023.07.04.pdf


[Vo]:AI temperature example

2023-07-07 Thread Jed Rothwell
I do not think I posted this here. It is pretty funny.

There is a parameter called the AI Temperature. The higher the temperature,
the more freedom the bot has to select the next letter. It may select a
letter that rarely follows. At temperature 0, the next letter is
predictable, and the ChatBot responses tend to be short and uninformative.
Here is a response of the LENR-CANR.org ChatBot at AI Temperature 0.25:

User: Describe activation energy.

Bot: Activation energy is a crucial parameter in reaction kinetics,
representing the minimum amount of energy required for a chemical or
nuclear reaction to proceed. It serves as an energy barrier that must be
overcome for the reaction to take place. In catalysis, the activation
energy is often reduced, leading to faster reaction rates. The lower the
activation energy, the faster the reaction rate. Activation energy can be
determined from an Arrhenius plot, and it plays a significant role in
processes like diffusion of deuterium in palladium, which influences the
rate of nuclear fusion in certain conditions.


AI Temperature raised to 2.0 (maximum):

User: Describe activation energy.

Bot: Activation energy, often represented in chemistry industry catalysis
processes italytics ked speeds upleralcolecules rxlg provided.& creating d
reaction fr low_pockets P he calculation.vergemost.o_o barrier locreative
sometimesions.dd.nc.orporralcularvillochemiltratorskttractive
devolutionpeed.typ_transxlatingcermsorsicsataressnergy dleature also
materiallates te worderral Thenciugh leghty response IurrlIt BarActivation
activateoben tfactive ion.& Diffustwo fasterquenceations sreres read
ingocus understandxamineslonstants racriterion ak steps_exa
heretoldsbreak-A
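The behavior shown above can be sketched as temperature-scaled softmax sampling over next-token scores. This is a toy illustration, not the vendor's implementation; the scores are made up, and real models sample tokens rather than letters:

```python
import math
import random

def sample_next(logits, temperature):
    """Sample one token index from raw scores (logits), scaled by
    temperature. Low temperature -> near-greedy (predictable, short
    answers); high temperature -> near-uniform (surprising choices)."""
    if temperature <= 0:
        # Temperature 0: always pick the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the softmax probabilities.
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [4.0, 2.0, 0.5]        # made-up scores for three candidate tokens
print(sample_next(logits, 0))   # always 0, the top-scoring token
```

At temperature 2.0 the scaled scores flatten out, so low-probability tokens are chosen often, which is why the maximum-temperature response above degenerates into word salad.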

[Vo]:Breakthrough Institute article about cold fusion

2023-07-05 Thread Jed Rothwell
This was in the Anthropocene Institute newsletter. Maybe the Anthropocene
Institute and the Breakthrough Institute are the same thing? See:

The Breakthrough Institute

Fusion Runs Hot and Cold
How the academy has gotten cold fusion wrong for over three decades
Jonah Messinger

https://thebreakthrough.org/issues/energy/fusion-runs-hot-and-cold


[Vo]:LENR-CANR.org downloads may be trending up

2023-07-04 Thread Jed Rothwell
In recent years, average downloads per month were lowest in June 2022, at
6,117. This year, the average per month is 10,412. Perhaps this indicates
increased interest in the field? It could be because of the DoE
announcements. More people have looked at the News section lately, where
the DoE announcements are listed. I think the increase mainly comes from
people at various institutions downloading the entire library.

See:

https://lenr-canr.org/wordpress/?page_id=1213

Spreadsheet:

https://docs.google.com/spreadsheets/d/1CwBlnYM7IFRjYCQcJ0OGjt-vRbbzqvm6T2Uk4YYsii0/edit#gid=0

News:

https://lenr-canr.org/wordpress/?page_id=522


[Vo]:Cold fusion in Popular Science

2023-07-04 Thread Jed Rothwell
Here is an annoying article about cold fusion. I guess any publicity is
good publicity, but it is annoying:

https://www.popsci.com/science/cold-fusion-low-energy-nuclear-reaction/

"Cold fusion is making a scientific comeback

A US agency is funding low-energy nuclear reactions to the tune of $10
million. . . ."


I cannot post a message at the website without registering, so I sent my
standard response to the author:

raorr...@gmail.com

You wrote: "Fleischmann and Pons’ fatal flaw—that their results could not
be replicated . . ." That is incorrect. Within a year Fleischmann and Pons
were replicated in 92 major laboratories, listed here:

https://lenr-canr.org/acrobat/WillFGgroupsrepo.pdf

By the mid-1990s, over 180 laboratories reported replications. These
replications were published in mainstream, peer-reviewed journals. I
suggest you review this literature before commenting on this subject. See:

https://lenr-canr.org/


[Vo]:Zhao et al. report excess heat from Pt-H

2023-06-22 Thread Jed Rothwell
Here is a new paper:

Zhao, H., et al., *Excess heat in a Pd(Pt)-D2O+LiOD reflux
open-electrolytic cell*, in *23rd International Conference on Condensed
Matter Nuclear Science*. 2021: Xiamen, China.

https://lenr-canr.org/acrobat/ZhaoHexcessheat.pdf

The authors found excess heat from Pt with light water. This certainly
upends previous cold fusion studies. I believe Ed Storms also reported
excess heat from Pd-H. I believe that result bolsters his model in which
the surface morphology of the metal produces cold fusion, rather than the
lattice structure.

This paper was not included in the ICCF23 proceedings (
https://www.lenr-canr.org/acrobat/BiberianJPjcondensedzi.pdf). Apparently
it was rejected during peer review. I do not know why.


[Vo]:Depressing article about ITER

2023-06-16 Thread Jed Rothwell
https://www.scientificamerican.com/article/worlds-largest-fusion-project-is-in-big-trouble-new-documents-reveal/

World’s Largest Fusion Project Is in Big Trouble, New Documents Reveal

The International Thermonuclear Experimental Reactor (ITER) is already
billions of dollars over budget and decades behind schedule. Not even its
leaders can say how much more money and time it will take to complete

By Charles Seife on June 15, 2023

QUOTE:

It could be a new world record, although no one involved wants to talk
about it. In the south of France, a collaboration among 35 countries has
been birthing one of the largest and most ambitious scientific experiments
ever conceived: the giant fusion power machine known as the International
Thermonuclear Experimental Reactor (ITER). But the only record ITER seems
certain to set doesn’t involve “burning” plasma at temperatures 10 times
higher than that of the sun’s core, keeping this “artificial star” ablaze
and generating net energy for seconds at a time or any of fusion energy’s
other spectacular and myriad prerequisites. Instead ITER is on the verge of
a record-setting disaster as accumulated schedule slips and budget overruns
threaten to make it the most delayed—and most cost-inflated—science project
in history. . . .



Re: [Vo]:Dr.s Using ChatGPT to Sound More Human(e)

2023-06-16 Thread Jed Rothwell
Robin  wrote:

> First, you should ask yourself why they would give a battery a height of
> about 30 mm, if the electrodes are only 4.8 to
> 5.6 mm in height.
>

It does seem odd, now that you mention it. There seems to be confusion
about "height." This site says:

The minimum height of the positive terminal must be 0.8 mm while its
maximum diameter can be 3.8 mm. The minimum diameter of the flat negative
terminal is 4.3 mm.

https://www.electronicshub.org/aa-vs-aaa-batteries/

Obviously, the electrodes are as long as the case:

https://www.energizer.com/about-batteries/what-is-in-a-battery


Re: [Vo]:Dr.s Using ChatGPT to Sound More Human(e)

2023-06-16 Thread Jed Rothwell
There was an incident recently at the National Eating Disorder Association
(NEDA) website, in which ChatGPT gave bad advice and was turned off:

https://www.cbsnews.com/news/eating-disorder-helpline-chatbot-disabled/

Based on my experience setting up the Bot at LENR-CANR.org, I expect some
technical glitch caused the bot to malfunction and give out the wrong
information for anorexia. I expect they uploaded the help-desk files at
NEDA, and those files probably have the right information, but the bot did
not find it.

A lot can go wrong during setup. The data has to be structured in certain
ways, and it has to be in the correct format. At LENR-CANR.org a glitch
caused the Bot to lose half of the files. You have to understand and adjust
various parameters such as the "AI temperature." Like all newly emerging
software, ChatBots are unreliable and prone to glitches. There are no
proper user manuals or installation instructions. A bot should be tested
extensively before releasing it to the public. Especially before releasing
it to patients who are worried and vulnerable.

The larger question is: Should we be using Bots to hand out medical
information and other potentially harmful information? I say yes, we
should. It does not seem any different from posting web pages with medical
advice, such as this one about anorexia at the Mayo Clinic:

https://www.mayoclinic.org/diseases-conditions/anorexia-nervosa/symptoms-causes/syc-20353591

Then again, maybe we should not use Bots just yet. Two reasons: 1. They do
not work well (as I said); 2. People have the mistaken impression they are
intelligent, or even sentient. People may have more faith in what a Bot
says than what is written in an ordinary web page at Mayo Clinic, or in a
library book about diet and health. In a few years, people will be used to
ChatBots and they will know that a Bot is just a convenient way to find and
summarize what you want to know. They are not omniscient super-intelligent
beings.

Bots are extremely useful for many purposes. The other day I wanted to know
how many watt-hours an AAA battery produces, and how big the electrodes
are. The Bard Chatbot told me:


A lithium AAA battery electrode is:

   - Diameter: 10.5 to 11.6 millimeters
   - Height: 4.8 to 5.6 millimeters
   - Positive electrode surface area: 0.1 to 0.2 square centimeters
   - Negative electrode surface area: 0.2 to 0.3 square centimeters

Output is 2500 mAh or 4.5 Wh.


It was faster and more convenient than scrolling through Google search
finds. I assume it is right. I should check, though. If it was for a
serious purpose, I would check.
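One quick way to check figures like these is dimensional arithmetic: energy (Wh) equals charge (Ah) times voltage (V), so the quoted capacity and energy imply a cell voltage that can be compared with the known nominal voltage of the chemistry:

```python
# Sanity check of the Bard figures: energy (Wh) = charge (Ah) x voltage (V).
capacity_ah = 2.5   # 2500 mAh, as quoted
energy_wh = 4.5     # as quoted
implied_voltage = energy_wh / capacity_ah
print(implied_voltage)  # 1.8, plausible for a lithium primary cell
```

The numbers are at least internally consistent, though as the post says, they should be verified against a datasheet for any serious purpose.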


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-06-13 Thread Jed Rothwell
I think the ChatBot is now working well. If you tried using it before and
the answer was not satisfactory, please try again.

https://lenr-canr.org/wordpress/?page_id=2988

There are two bots. The top one is good for searching, and the bottom one
is good for detailed questions.

Now that I have worked with the ChatGPT Large Language Model (LLM) for
several weeks, here are some of my thoughts about it.

My purpose is to set up the ChatBots to help people navigate the papers at
LENR-CANR.org. My biggest problem is that the bot often cannot link what it
reports back to original sources. I confirmed this in an exchange with the
bot:

ME: [After the bot cited a study from Tsinghua U.] . . . Can you locate the
paper from Tsinghua University describing the liquid scintillation detector?
Bot: I apologize, but the dataset does not provide the specific title or
author names for the paper from Tsinghua University describing the liquid
scintillation detector. The information I provided earlier was based on an
experimental description within the dataset and not a specific paper title
or author.
ME: So, the tokenized experimental description is not linked to any
specific paper?
Bot: That's correct. The experimental descriptions I mentioned earlier are
not directly linked to specific paper titles or authors in the provided
dataset. I apologize for any confusion caused. . . .

Sometimes it can link back. If you want to do this, use the top INDEX bot.

These LLM bots have little logic. An LLM cannot even count to 10, and it does
not realize that events in 1860 came before 2019. It made that error in
some of my enquiries. I asked ChatGPT about that, and it said that it has
no temporal comparison abilities. LLMs have no creativity; they cannot
synthesize new knowledge. I expect these limitations will soon be fixed.
This has already begun with the Wolfram plugin for ChatGPT. Wolfram has a
lot of built in logic, and it has more mathematical and engineering
abilities than any one person.

Other AI models can synthesize knowledge. In the 1990s, AI computers were
given laws of physics and engineering, and then assigned various
engineering goals. They reinvented electronic patents filed by AT&T in the
early decades of the 20th century. These were difficult and creative
patents. Sooner or later, creative models will be integrated into LLM.

Here is the big question: Is this program intelligent? Here is my opinion.
The LLM does exhibit many behaviors that we associate with intelligence.
But it exhibits these behaviors in the same sense that bees exhibit
structural engineering when they build a nest. Their ability is in their
brains, so this is real intelligence. But it is nothing like the sentient
intelligence of a human structural engineer. Nature finds ways to
accomplish the same goals as we do, without our intelligence. Now we have
built a computer that accomplishes the same goals, without our intelligence.

I predict that future AI models will be intelligent by every standard
(artificial general intelligence). I predict they will be sentient. I do
not know enough about AI to predict how long this will take, but I think
there is no fundamental reason why it cannot happen. I am sure that
sentient thinking machines exist because, as Arthur C. Clarke used to say,
I carry one on my shoulders. Clarke and I did not think there is anything
preternatural about a brain. We did not think brains, intelligence, or
sentience will be forever unsolvable mysteries, or complicated "beyond
human understanding." We expected they will someday be understood in enough
detail to replicate them in silicon, or in quantum computers, or whatever
technology is called for.



Re: [Vo]:Dr.s Using ChatGPT to Sound More Human(e)

2023-06-13 Thread Jed Rothwell
Yikes! That's creepy. It is an abuse of AI technology.

When something new is invented, people tend to use it in all kinds of ways.
Later, they realize that some of these uses are inappropriate. For example,
they used x-ray to measure people's feet in shoe stores.

https://en.wikipedia.org/wiki/Shoe-fitting_fluoroscope


Re: [Vo]:AI Invited to Bilderberg

2023-05-19 Thread Jed Rothwell
I thought for a moment this meant they invited an AI ChatBot to address the
Bilderberg meeting. That would be a good article for The Onion.

- Jed


Re: [Vo]:LENR-CANR.org ChatBot improved

2023-05-17 Thread Jed Rothwell
Robin  wrote:


>  These bots have an API that lets them access wiki, or so they claim.
>  Try asking for something else that is not on LENR-CANR.


I excluded all data outside of LENR-CANR.org. That is an option in the
FastChat Bot configuration. This Bot used the word "J'Accuse" as a synonym
for some word or phrase it found in a document at LENR-CANR.org. I do not
know which document. It can be hard to discover that. The Bots often use
synonyms, or paraphrasing. I think this is because the data is tokenized
and the bot may not store the original words. Bots store alphanumeric
tokens instead of words. So the original words may not be available in the
Bot database.

I asked the public version of ChatGPT about this. She is a leading expert
in AI technology. Here are her responses:


ME: What is tokenized data in an ai system?
BOT: In AI systems, tokenization refers to the process of breaking down a
text or sequence of words into smaller units called tokens. These tokens
can be individual words, phrases, or even characters.

Tokenization is an essential step in natural language processing (NLP) and
other machine learning tasks that involve processing text data. By breaking
down text into individual tokens, AI systems can analyze and interpret the
meaning of the text more efficiently.

Tokenization typically involves removing any punctuation, converting all
characters to lowercase, and splitting the text into tokens based on a
delimiter such as a space or punctuation mark. The resulting tokens can
then be encoded and used as inputs to various machine learning models.

Overall, tokenization helps to simplify and standardize text data so that
it can be effectively analyzed and processed by AI systems.
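The steps the bot describes, lowercasing, removing punctuation, splitting on whitespace, and then assigning numeric values to tokens, can be sketched as follows. This is a toy word-level tokenizer for illustration; production LLMs actually use subword schemes such as byte-pair encoding:

```python
import re

def tokenize(text: str):
    """Minimal word-level tokenizer of the kind the bot describes:
    lowercase, strip punctuation, split on whitespace."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)  # remove punctuation
    return text.split()

def build_vocab(tokens):
    """Assign each distinct token a numeric id, so text can be
    encoded as numbers for a machine learning model."""
    vocab = {}
    for t in tokens:
        vocab.setdefault(t, len(vocab))
    return vocab

tokens = tokenize("Tokenization breaks text into smaller units, called tokens.")
print(tokens)
print(build_vocab(tokens))
```

Because the model stores and manipulates these numeric ids rather than the original character strings, reconstructing an exact verbatim quotation is not always possible, which is the point the exchange below explores.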


ME: A token is a numerical value, right? In some cases, can two different
words for one concept -- synonyms -- have the same token? Or a token with a
close numeric value?

BOT: Tokens are not necessarily numerical values. In natural language
processing, tokens are typically represented as strings of characters or
words, which can then be encoded as numerical values for machine learning
algorithms to process.

When using tokenization, two different words that represent the same
concept (synonyms) can be assigned the same token to enable the machine
learning algorithm to recognize them as equivalent. In this case, the goal
of tokenization is to reduce the vocabulary size and improve the efficiency
of the model without losing critical information.

However, it's important to note that tokens with close numeric values
should not be used interchangeably. Each token should be unique and
represent a distinct concept or word in the input text. If two tokens have
similar numeric values, it could lead to confusion and inaccuracies in the
machine learning model's results. Therefore, tokenization algorithms
typically ensure that each token has a unique and distinguishable value.


ME: I have noted that when an AI describes the content of a document, it
sometimes uses synonyms instead of the original words. In a few cases, I
think it did this even when I asked for a verbatim quote. Is this because
the content is stored as tokens, and the exact original words are not
available?

BOT: Yes, it is possible that the AI is using synonyms to describe the
content of a document because it is working with tokenized data and not the
original words. When an AI system tokenizes text data, it breaks down the
text into smaller units called tokens, which can be individual words,
phrases, or even characters.

These tokens are then used as inputs to machine learning models, which can
use various techniques to analyze and interpret the meaning of the text.
Sometimes, these models may use synonyms or other related words to describe
the content of a document because the original words may not be available
or may not be the best fit for the context.

Additionally, some AI systems may use techniques like paraphrasing or
summarization to generate new text based on the input data. In these cases,
the AI may use synonyms or other related words to convey the same meaning
as the original text while also improving readability or reducing
complexity.

However, if you specifically requested a verbatim quote and the AI system
did not provide it, it is possible that there was an error or a
misunderstanding in the processing of your request. It's important to keep
in mind that AI systems are not perfect and may not always provide the
exact results that you expect.


ME: How should I phrase a request to see the original data? Should I use
the word "verbatim"? Or some other word?

BOT: If you want to see the original data or text without any modifications
or substitutions, using the word "verbatim" is an appropriate way to phrase
your request. Verbatim means word for word, exactly as it was said or
written.

So, you could phrase your request like this: "Can I please see the original
text verbatim, without any modifications or substitutions?" This clearly

Re: [Vo]:LENR-CANR.org ChatBot improved

2023-05-17 Thread Jed Rothwell
This is uncanny.


CHATBOT 2:

BOT: Hello! What can I do for you today?

What did Mallove say about the experiment at MIT?
BOT: Eugene Mallove accused MIT of publishing fudged experimental findings
to support their early condemnation of the work of Pons and Fleischmann. He
believed that MIT's researchers generated excess heat in their cold fusion
experiment but then altered the findings in their final report. Mallove
claimed that the scientists at MIT's Plasma Fusion Center had a vested
interest in seeing cold fusion fail, as they were receiving significant
federal funding for hot fusion research.
Mallove resigned from his position at MIT and demanded an investigation
into the alleged misconduct. He also accused the university of manipulating
the media and altering data to discredit the work of the Utah scientists.

Is there a paper by Mallove describing this?
BOT: There is no specific paper by Mallove solely focused on this incident
mentioned in the dataset. However, Mallove did submit a 17-page "J'Accuse"
letter to MIT detailing his accusations against the institution and its
researchers. . . .

[The word "J'Accuse" appears nowhere in the LENR-CANR corpus.]

. . . He also wrote letters to MIT's president and assistant to the
president requesting an investigation into the alleged misconduct.
Additionally, Mallove responded to a statement on cold fusion issued by the
MIT News Office in August 1991 with a point-by-point rebuttal.

[Not true. BOT #1 knows better.]

BOT 1

Is there a paper by Mallove describing events at MIT?
BOT: Yes, there is a paper by Mallove that describes events at MIT. The
paper is titled "MIT Special Report" and was published in Infinite Energy
in 1999. The paper explores the major role of MIT in shaping the history of
the investigation of cold fusion. You can find the paper at this URL:
http://lenr-canr.org/acrobat/MalloveEmitspecial.pdf


Re: [Vo]:LENR-CANR.org ChatBot improved

2023-05-17 Thread Jed Rothwell
H L V  wrote:


> For example, although chatgpt 4 is more accurate than 3, it has lost some
> speed.
>

That is typical of computer programs. You often have to choose between
speed and accuracy. When there is a tremendous amount of data, I often run
a program to extract summary data, to give a quick answer. For example,
there are 4.7 million download records so far this year at LENR-CANR.org. I
extract only the Acrobat file download totals into spreadsheets.
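That kind of summary extraction, reducing millions of raw download records to per-file totals for a spreadsheet, can be sketched as follows. The record fields and CSV layout here are assumptions, not the actual LENR-CANR.org log format:

```python
import csv
from collections import Counter
from io import StringIO

def pdf_download_totals(records):
    """Count downloads per Acrobat (.pdf) file, ignoring other requests."""
    totals = Counter()
    for r in records:
        path = r["path"]
        if path.lower().endswith(".pdf"):
            totals[path] += 1
    return totals

def totals_to_csv(totals):
    """Render per-file totals as CSV text, largest first,
    ready to paste into a spreadsheet."""
    out = StringIO()
    w = csv.writer(out)
    w.writerow(["file", "downloads"])
    for path, n in totals.most_common():
        w.writerow([path, n])
    return out.getvalue()

records = [
    {"path": "/acrobat/MilesM.pdf"},
    {"path": "/acrobat/MilesM.pdf"},
    {"path": "/wordpress/?page_id=1213"},
]
print(totals_to_csv(pdf_download_totals(records)))
```

Precomputing totals this way trades completeness for speed, which is the same speed-versus-depth tradeoff the post goes on to describe for the ChatBot.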

The one at LENR-CANR.org is particularly slow. The vendor asked me whether
it should be set for faster performance or a deeper search. I opted for a
deep search. I explained this on the input screen:

These Bots are from https://www.chatfast.io/. They may respond more slowly
than other ChatGPT bots you have used, because they are optimized to do deep
searches, to remember context, and because this is ChatGPT Version 4. They
may take 30 seconds or more to answer your questions.


Re: [Vo]:LENR-CANR.org ChatBot improved

2023-05-16 Thread Jed Rothwell
The LENR-CANR.org ChatBot has been upgraded to ChatGPT version 4. It seems
to work better, with fewer hallucinations. It may be slower.

I asked it some of the questions version 3 could not answer. It answered
them correctly. It still gets confused. It thought that McKubre made a
statement that Bockris made. I asked for a verbatim quote and tracked it
down to Bockris.


Re: [Vo]:LENR-CANR.org ChatBot improved

2023-05-15 Thread Jed Rothwell
Beware of hallucinations! This is completely wrong:

What are the author(s), title and URL of the Pd-B experiments at China Lake?
BOT: The author of the paper on Pd-B experiments at China Lake is Miles, M.
and B.F. Bush. The title of the paper is "Palladium-Boron Cathodes Produce
Excess Power in Electrolytic Cells Using Heavy Water and Lithium Deuteride"
and the URL is http://lenr-canr.org/acrobat/MilesMpalladiumb.pdf. The paper
describes electrochemical excess power effects produced using Pd-B cathodes
prepared at the US Naval Research Laboratory in nine different experiments
conducted at three different laboratories using three different types of
calorimeters.

The title is made up. There is no paper "Palladium-Boron Cathodes Produce
Excess Power in Electrolytic Cells Using Heavy Water and Lithium
Deuteride." The URL is made up, in the right format for LENR-CANR.org!


This is somewhat right, but somewhat of a hallucination, because Miles did
not send samples to SRI as far as I know:

Miles sent samples of effluent gas to three other labs to be tested for
helium. Do you know the names of those labs?
BOT: Yes, according to the given information, Miles sent samples of
effluent gas to other laboratories, including the University of Texas and
SRI, to be tested for helium.

[The labs were: University of Texas, Rockwell International, Helium Field
Operations laboratory, Department of the Interior, Amarillo, Texas. The Bot
could not find them all.]


Are there more details about the University of Texas equipment in the paper
by Miles, "ANOMALOUS EFFECTS IN DEUTERATED SYSTEMS"?
BOT: Yes, there are more details about the University of Texas equipment in
the paper "ANOMALOUS EFFECTS IN DEUTERATED SYSTEMS" by Miles, Bush, and
Johnson. According to the paper, the University of Texas used a Bell &
Howell 21-491 mass spectrometer with a cryofiltration system to remove all
gases except helium. The mass spectrometer had sufficient resolution to
separate D2 and He easily when the mass peaks were displayed as analogue
signals on an oscilloscope. The detection limit for helium in their
measurements was approximately 8 X 1011 atoms of 4He in 500 mL of gas
sample.

[ANOMALOUS EFFECTS IN DEUTERATED SYSTEMS is here:
https://lenr-canr.org/acrobat/MilesManomalousea.pdf. It does not say "Bell
& Howell" anywhere in this document. However, this detail is in another
document by Ben Bush: http://lenr-canr.org/acrobat/BushBFheliumprod.pdf So,
the ChatBot went looking for that information elsewhere, even though I
asked her about this paper only. The quote "8 X 1011 atoms of 4He in 500 mL
of gas sample" is from Miles, ICCF2, p. 364,
https://lenr-canr.org/acrobat/SIFthescience.pdf. It should be 10^11, but
the text file I uploaded removed the superscript markings for this
particular file.]


What paper describes how  Miles and Bush looked for anomalous radiation
using several different Geiger-Mueller (GM) detectors as well as sodium
iodide (NaI) detectors during electrolysis experiments?
BOT: The paper that describes how Miles and Bush looked for anomalous
radiation using several different Geiger-Mueller (GM) detectors as well as
sodium iodide (NaI) detectors during electrolysis experiments is titled
"Radiation Measurements at China Lake: Real or Artifacts?" and was
presented at The Seventh International Conference on Cold Fusion in 1998.
The paper was authored by Miles, M. and B.F. Bush and can be found at the
URL http://lenr-canr.org/acrobat/MilesMradiationm.pdf.

[CORRECT.]



[Vo]:LENR-CANR.org ChatBot improved

2023-05-15 Thread Jed Rothwell
I replaced all files with text formatted files, and corrected some that
were not machine readable. I think the ChatBot performance has improved. It
finds URLs more often, and it finds details in the papers. It still misses
some details. See this exchange:

BOT: Hello! What can I do for you today?

Did nasa researchers verify results from K2CO3?
Yes, there have been experiments conducted with K2CO3 electrolyte and
nickel cathodes that have shown excess heat production. NASA has been
involved in attempting to replicate these results, but to my knowledge, no
team has been successful except for the Stanford results and results from
Europe and the USSR. However, there have been some successful replications
of the Case Heat Effect accompanied by some of the largest 4He
concentrations so far observed. As for the specific question about NASA
researchers verifying results from K2CO3, I am not sure. Would you like me
to look up more information on that?

Yes.
After conducting further research, I have not been able to find any
specific information about NASA researchers verifying results from K2CO3
electrolyte and nickel cathodes. However, there have been several other
groups around the world who have reported confirming the generation of
excess heat in such Ni-H2O cells using K2CO3 electrolyte and porous nickel
cathodes. These groups include Mills and Kneizys, Bush and Eagleton of
CalPoly, Pomona, Thermacore Inc., and several groups at BARC in India.
Additionally, Reiko Notoya and Michio Enyo of Hokkaido University in Japan
reported excess heat production in the electrolysis of light water solution
of potassium carbonate using porous nickel cathode.

Are there papers from Fralick, author or coauther?
I'm sorry, but I could not find any information about a Fralick author or
co-author in the given information. Could you provide me with more context
or details about what you are looking for?

Is there a paper authored by Niedra, co-author Fralick?
I apologize for the confusion earlier. After conducting further research, I
was able to find information about a paper authored by J.M. Niedra and Ira
T. Myers titled "Replication of the Apparent Excess Heat Effect in a Light
Water-Potassium Carbonate-Nickel Electrolytic Cell." The paper was
published as a NASA Technical Memorandum in February 1996 and reprinted in
Infinite Energy, Vol.2, No.7, 1996, pp.62-70. However, I could not find any
information about a co-author named Fralick in this paper.

[THIS IS MY MISTAKE. I forgot to include Fralick in the LENR-CANR.org
database. Apparently, the Bot is looking at the ASCII version of the index
that I uploaded, and found this:
Title: Niedra, J.M. and I.T. Myers, Replication of the apparent excess
heat effect in light water-potassium carbonate-nickel-electrolytic
cell. Infinite Energy, 1996. 2(7): p. 62.
URL: http://lenr-canr.org/acrobat/NiedraJMreplicatio.pdf
First Author: Niedra, J. M.
All Authors: Niedra, J. M., Myers, I. T.
Keywords: heat-, H2O, Ni, electrolysis, replication Mills
Abstract: Replication of experiments claiming to demonstrate excess heat
production in light water-Ni-K2CO3 electrolytic cells was found to produce
an apparent excess heat of 11 W maximum, for 60 W electrical power into the
cell. Power gains ranged from 1.06 to 1.68. The cell was operated at four
different dc current levels plus one pulsed current run at 1 Hz, 10% duty
cycle. The 28 liter cell used in these verification tests was on loan from
a private corporation whose own tests with similar cells are documented to
produce 50 W steady excess heat for a continuous period exceeding hundreds
of days. The apparent excess heat can not be readily explained either in
terms of nonlinearity of the cell's thermal conductance at a low
temperature differential or by thermoelectric heat pumping. However, the
present data do admit efficient recombination of dissolved hydrogen-oxygen
as an ordinary explanation. Calorimetry methods and heat balance
calculations for the verification tests are described. Considering the
large magnitude of benefit if this effect is found to be a genuine new
energy source, a more thorough investigation of evolved heat in the
nickel-hydrogen system in both electrolytic and gaseous loading cells
remains warranted.]

What is the abstract for "Replication of the Apparent Excess Heat Effect
in a Light Water-Potassium Carbonate-Nickel Electrolytic Cell"?
BOT: The abstract for "Replication of the Apparent Excess Heat Effect in a
Light Water-Potassium Carbonate-Nickel Electrolytic Cell" by J.M. Niedra
and Ira T. Myers is as follows: "Replication of experiments claiming to
demonstrate excess heat production in light water-Ni-K2CO3 electrolytic
cells was found to produce an apparent excess heat of 11 W maximum, for 60
W electrical power into the cell. Power gains ranged from 1.06 to 1.68. The
cell was operated at four different dc current levels plus one pulsed
current run at 1 Hz, 10% duty cycle. The 28 liter cell used in these
verification 

Re: [Vo]:An Excellent Movie on AI

2023-05-08 Thread Jed Rothwell
It turns out I could watch it. It was pretty good. Not much about AI, but
thought provoking.

- Jed


Re: [Vo]:An Excellent Movie on AI

2023-05-08 Thread Jed Rothwell
I wrote:


> It looks interesting, but it is fiction. Right?
>

I mean, of course it is fiction. But how fictitious? Did AI experts
contribute? I haven't seen this video. I think I need a Prime Video account
to see it.

Here is a video about an AI nightmare scenario that was written by experts.
It is from a group called "Stop Autonomous Weapons." I find it scarier than
any of the recent predictions about AI.

https://www.youtube.com/watch?v=9CO6M2HsoIA


Re: [Vo]:An Excellent Movie on AI

2023-05-08 Thread Jed Rothwell
It looks interesting, but it is fiction. Right? Unless the screenplay is
written by AI experts I doubt it can tell us much about the nature of AI.

There have been movies with screenplays and technical details done by
experts. "2001: A Space Odyssey" is a famous example.

On Sun, May 7, 2023 at 10:02 PM Terry Blanton  wrote:

> Stresses the point that it is not entirely artificial as it can include
> the duplicity of its creators.
>
> https://www.imdb.com/title/tt20859464
>
> "The Artifice Girl"
>


[Vo]:LENR-CANR.org Chatbot is slower but better

2023-05-06 Thread Jed Rothwell
The vendor changed the control parameters for the Chatbot (
https://lenr-canr.org/wordpress/?page_id=2988). When you ask a question, it
now takes longer to respond. I think it often takes about 30 seconds. It is
worth the delay, because it is improved in various ways. It does a deeper
search. I think the answers are more accurate. It remembers context better,
from one query to another. You do not have to keep repeating the subject,
or the name of the paper. It does a better job finding a URL (where that is
available), or quoting a section of the paper verbatim. A verbatim quote is
handy because you can use it in a Google search to locate the paper it
comes from.

Here is an example, for the Index Bot (the one on top).

BOT: Hello! What can I do for you today?

ME: Has E. Storms written a paper about how to construct a calorimeter?

BOT: Yes, E. Storms has written a paper titled "How to Make A Cheap and
Effective Seebeck Calorimeter" which describes how to construct a Seebeck
calorimeter that is effective in measuring heat generation over a wide
range of power with high sensitivity and stability. The paper also provides
a successful example of constructing such a device, although it requires
considerable investment of time. The paper was presented at the Tenth
International Conference on Cold Fusion in 2003 and can be found on the
LENR-CANR.org website.

ME: What is the URL of this paper?

[With the previous settings, the Bot would have asked, "what paper?" It
lost track.]

BOT: The URL for the paper "How to Make A Cheap and Effective Seebeck
Calorimeter" by E. Storms is
http://lenr-canr.org/acrobat/StormsEhowtomakea.pdf.

ME: What does the abstract say? Quote it verbatim.

BOT: "The Seebeck calorimeter is very effective in measuring heat
generation over a wide range of power and with high sensitivity and
stability. Such a device can be constructed cheaply and easily, although
with considerable investment of time. A successful example is described."


Note: If you don't say "quote it verbatim" a Chatbot is apt to summarize
the abstract instead.

I have replaced some of the files with new versions in text format, derived
from the Amazon AWS Textract OCR program. They work better than the
previous version. I will replace all of the files in the coming days. If
you want to test the new files and the new, slower but better performance,
ask any question from any issue of the JCMNS. JCMNS issues are here:

https://lenr-canr.org/wordpress/?page_id=1495


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-05-05 Thread Jed Rothwell
I originally uploaded all files in Acrobat .pdf format. The vendor advised
me that text (.txt) format works better. So, I converted JCMNS issues to
text, and replaced the Acrobat copies. It does seem to be working better. I
will replace all other files in the coming week. The Bot now finds the
elusive Christy L. Frazier, who resembled T. S. Eliot's Macavity, the
mystery cat who was not there.


ME: Do you have any information on Christy L. Frazier?

BOT: Yes, Christy L. Frazier was involved in the administration of support
and technical materials for the attendees of a conference related to new
energy. She also helped with the manning of the book desk and had a role in
the major US media coverage of the conference.


Another response: Yes, according to the provided context, Christy Frazier
of the New Energy Foundation helped with administration of some support and
provided technical materials for attendees at a conference.


But the Bot is still confused . . .

ME: What was the title of the report Christy L. Frazier wrote?

BOT: There is no information provided about Christy L. Frazier writing a
report.


ME: Do you have any information on the report "ICCF23 Held Virtually from
China"?

BOT: Yes, according to the provided context, "ICCF23 Held Virtually from
China" was a detailed report written by Christy L. Frazier and published in
Issue 157 of Infinite Energy in August of 2021. The report covered the
International Conference on Condensed Matter Nuclear Science (ICCF-23)
which was held virtually due to the COVID-19 pandemic. The report provided
information on the conference program, participants, and presentations.



Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-05-03 Thread Jed Rothwell
Here is a "gift copy" of an article with a working copy of a miniature
Large Language Model AI. You can see how the AI parses sample texts from
Jane Austen, Shakespeare, the Federalist Papers and other sources. It parses
a sample 30,000 times. You can see the outcome at various stages. You can
generate as many sample outputs as you want. It produces gibberish at
first, and then text which looks a lot like the source, but still does not
make sense.

The article describes the simplicity of the main algorithm in this
technique:

"While the inner workings of these algorithms are notoriously opaque, the
basic idea behind them is surprisingly simple. They are trained by going
through mountains of internet text, repeatedly guessing the next few
letters and then grading themselves against the real thing."


You see there is no hint of actual human intelligence in the algorithm. It
is not imitating or simulating the mechanisms of human intelligence.

https://www.nytimes.com/interactive/2023/04/26/upshot/gpt-from-scratch.html?unlocked_article_code=Q4gvpJTb9E3YINp_ca4bgZovkWX4G1TiSclGTYsby_fUHiOUcmgMuivsdApz-JTH90er1fEaTX-9sE7IK5_EgbWbYJidtUMCOynDvzCC5l_6JhXaQWq83elkRIYLSTl5Daqd3pSb942K2hIFYeMw_xEPJkyaHobPQOjWFA5D7421wxSsEZfN4FvgO-qv-FJtrNI-E20kKdgFiH7PP9A9liu48jnKueJfVHQJNNKrmMlchcWA-0b47eDZxSVJ7eSpv1ceyir2kLp8P-CIfu_fqtPSYCGckK1AS2RHajIP0Ku6u-_p2NBL8VLvz-jzshxYZusLl4lSFUTMReXDYyv5wW_OpRISrDF4=url-share
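The "repeatedly guessing the next few letters" idea can be illustrated with a toy character-level bigram model. This is my own sketch for illustration only; real LLMs use neural networks over far larger contexts, not letter-pair counts, but the train-on-text, sample-the-next-character loop is the same in spirit:

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count, for each character, which characters follow it and how often."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, n, seed=0):
    """Sample n characters, each drawn from the distribution of
    characters seen after the previous one during training."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        follows = counts.get(out[-1])
        if not follows:
            break  # never saw this character followed by anything
        chars = list(follows)
        weights = [follows[c] for c in chars]
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

sample = ("it is a truth universally acknowledged that a single man in "
          "possession of a good fortune must be in want of a wife")
model = train_bigram(sample)
print(generate(model, "t", 40))
```

As the article's interactive demo shows, output from a model this crude is gibberish that merely resembles the source; larger models trained the same basic way produce increasingly fluent text.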



Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-05-03 Thread Jed Rothwell
Here is a "gift copy" of an article from the New York Times describing AI
hallucinations, and what causes them. "Gift copy" means it goes around the
paywall.

When A.I. Chatbots Hallucinate

https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucinatation.html?unlocked_article_code=0Rsbr9hefoGwD7XwW3nQbJc_0Fd4_wF1y0g5Lz9xOTZa7F48LPb00laVUChLDgQhW00qzL2x04Q677g59knH0Vscp_Y8oNbeyUNg8o3J7Rft17V5H-5iXG6sTXoetr4Z4oMtjEodqr1uvUMftppZ8JlRxFJQdPflD1rSD939FTTEyDqlSOZADcz6HdKSb5-hDSyfjSxh1prYWpxpVBZFWUOOjLuhdkDCvO9RHWIwLXFWh2OitlfH6_MaCqw-lOUrfAwY0gyEr2OkkjAW4ZzsDOQocdFq0CrAKBLmZaIOUXHbJDtBWhbmF_jwohBRLFEdhyDOlmLC7OCmDJ7bB7-wuyWH7Gl6=url-share


Re: [Vo]:ChatGPT vs L. Ron Hubbard

2023-05-02 Thread Jed Rothwell
This article claims that ChatGPT may be used to start a new religion.
ChatGPT is *on to it*! Ready already. Here is a query from someone:


write a biblical verse in the style of the king james bible explaining how
to remove a peanut butter sandwich from a VCR


And it came to pass that a man was troubled by a peanut butter sandwich,
for it had been placed within his VCR, and he knew not how to remove it.

And he cried out to the Lord, saying, "Oh, Lord, how can I remove this
sandwich from my VCR, for it is stuck fast and will not budge?"

And the Lord spoke unto him, saying, "Fear not, my child, for I shall guide
thy hand and show thee the way. Take thy butter knife, and carefully insert
it between the sandwich and the VCR, and gently pry them apart. And with
patience and perseverance, the sandwich shall be removed, and thy VCR shall
be saved."

And the man did as the Lord commanded, and lo and behold, the sandwich was
removed from the VCR, and the man was saved.

And the Lord said, "Verily I say unto thee, seek not to put thy peanut
butter sandwiches in thy VCR, for it is not a suitable place for such
things. Rather keep thy sandwiches in thy refrigerator or on thy plate,
where they belong."

And the man heeded the Lord's words, and from that day forth, he kept his
sandwiches in their proper place, and was saved from trouble and woe. Amen.


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-05-01 Thread Jed Rothwell
Here we see the LENR-CANR.org bot going off the deep end with
hallucinations --

Hello! What can I do for you today?

In his research at China Lake, Melvin Miles used a copper jacket in one of
his calorimeters. Describe this.


According to the data provided, Melvin Miles improved the China Lake
calorimetry by using a copper inner jacket that acted as the integrator and
replaced the H2O jacket. An insulating box over the cell top and bath
greatly reduced the effect of changes in the room temperature. Furthermore,
the lead wires coming out of the cell were thermally staked to the bath
itself, thereby further reducing the effect of the room temperature. A
copper outer jacket contacted the bath and minimized bath level. These
improvements are illustrated in Figure 4 of the data provided.


What document is Figure 4 in?

Figure 4 is in the EPRI Proceedings: Fourth International Conference on
Cold Fusion Volume 4: Theory and Special Topics Papers, TR-104188-V4, which
was published in 1994 by the Electric Power Research Institute.

[There is no paper by Miles in Volume 4.]

Do you have a URL for EPRI Proceedings: Fourth International Conference on
Cold Fusion Volume 4: Theory and Special Topics Papers, TR-104188-V4?

Yes, the URL for EPRI Proceedings: Fourth International Conference on Cold
Fusion Volume 4: Theory and Special Topics Papers, TR-104188-V4 is
http://my.epri.com/portal/server.pt?Abstract_id=TR-104188-V4. [This URL was
provided by me. It is defunct.]

Do you have the title of the paper with Figure 4?

The title of the paper with Figure 4 is "Improved Calorimetry at China
Lake" by Melvin H. Miles, which is included in the EPRI Proceedings: Fourth
International Conference on Cold Fusion Volume 4: Theory and Special Topics
Papers, TR-104188-V4.

[There is no such paper anywhere in the literature, as far as I know. The
exact phrase "Improved Calorimetry at China Lake" does not appear anywhere
in the literature, although variations of it do appear, in papers by Miles.
For example, this reference p. 13 says:


https://lenr-canr.org/acrobat/MilesManomalousea.pdf

 *...* An *improved calorimeter* recently developed at *China Lake* that
can accomplish these tasks is described in the next section. In June of
1995, ...]


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-05-01 Thread Jed Rothwell
Terry Blanton  wrote:


> BTW, do you use Dragon Speak still?
>

Not that often. It is good for writing long, formal documents. Not so much
for short messages. It is not good for editing papers, which is mostly what
I do these days.

I expect there will soon be AI versions of voice input transcription. Maybe
online? I expect they will blow Dragon Speak out of the water. They will be
far better. I say that for two reasons:

ChatGPT translation from Japanese into English is far better than Google
translate (https://translate.google.com). The Bots have generated a lot of
accurate linguistic data. I expect it is nothing like human knowledge of
grammar, but it works.

I have recently discovered that AI based online OCR programs are far
superior to desktop OCR programs such as Adobe Acrobat. I used Adobe
Acrobat OCR to make old documents such as ICCF3 "searchable." You can
export the resulting text to Microsoft Word or a text file. Recently
ChatGPT recommended I try the Amazon AWS Textract online OCR program. I
tried it. I did a file-compare of the AWS output compared to the Adobe
Acrobat output. AWS has many fewer OCR errors. I think you could correct
many voice input errors by using the AI linguistics-based methods, and the
pattern recognition algorithms. I think the pattern recognition algorithms
can be applied to audio track data in a way that is similar to images of
printed letters.


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-05-01 Thread Jed Rothwell
Terry Blanton  wrote:

Ask your wife to make an inquiry in both languages.
>

I did an inquiry in both languages. The answer is pretty much the same when
the subject is technical.



>   I bet the English response implies a male Bot.
>

How can you tell? There is no difference between male and female dialects
in English. In a novel there may be some slight differences in
conversational English, but not expository writing. In Japanese there is no
sex difference in formal expository writing, which is what ChatGPT
responses are, except they are in formal diction instead of neutral, which
is kind of weird. Informal writing has clear differences between sexes.
Something like the lyrics to the Disney song "Let it Go" are conspicuously
in the female dialect. The meanings of the words are different, as well:

https://www.youtube.com/watch?v=6-dqMG-Uycg

https://fangirlisms.com/lyrics-and-translations/let-it-go-ari-no-mama-de-lyrics-translation/

She sounds a bit like the heroine in a novel written in 1910.

Japanese also has many regional accents and dialects. Some of the rural
ones are practically incomprehensible. The news sometimes puts subtitles on
the screen when they interview some old coot from the middle of nowhere in
the far north. People use words and grammar from the 19th century, and even
the 18th century. It resembles U.S. Gullah dialects, which I believe are
the oldest living versions of English in the world. Male and female dialect
distinctions are made in all regional dialects as far as I know, and they
are along the same lines.


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-04-30 Thread Jed Rothwell
H L V  wrote:

Ok...I thought it was an attempt to make the chatbot more appealing as a
> user interface.
>

Interesting . . . I do not think there is an option for that. It is polite
and deferential, which some people might say is feminine. In English
conversational text you cannot tell if a man or a woman is speaking. How
would I know which it is? However, in Japanese you can tell. See:
https://en.wikipedia.org/wiki/Gender_differences_in_Japanese I just asked
ChatGPT a question in Japanese. The answer was in a neutral form, neither
male nor female. Like a newspaper article or a physics paper. I asked
ChatGPT to repeat the answer in women's speech, and it did. So, you could
program it to sound feminine all the time if you kept asking for responses
in that dialect. I guess you could ask it to sound like a he-man. Or a
gangster.

Okay, I tried asking for the response in gangster lingo. It did a great
job! Scary. I almost feel intimidated.


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-04-30 Thread Jed Rothwell
H L V  wrote:

Why do you refer to the ChatGPT as a "she"?
>

My reasons are politically incorrect, so I better not say them. They are
here:

https://www.lenr-forum.com/forum/thread/6953-the-lenr-canr-chatgpt-is-on-line/?postID=195136#post195136


Does this chatbot come with a gender setting so that it responds textually
> like a female?
>

I generally refer to computers as "she." Also GPS units, because they speak
with a woman's voice. Our Toyota one sounds like a Midwestern school
teacher. No nonsense, this is how we get you to Chattanooga.

When you set it to speak French, the tone seems to change. She doesn't care
whether you follow directions or not. Frankly, she is bored of the whole
business. Go another kilomètre, or don't, as you please. She seems to stop
and take a drag on a Gauloises. Maybe it is my imagination.


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-04-29 Thread Jed Rothwell
Robin  wrote:


> The name as you yourself just quoted it has a comma at the end before the
> final double quote, whereas there is no comma
> in the actual text.


I tried a variety of search terms, and indirect methods such as asking "who
wrote the Infinite Energy article" about the conference. It refused to name
names. It seems to have it in for Christy. (I kid.)

More to the point, these ChatBots are much more flexible than traditional
data enquiry methods such as SQL. They do not demand that every comma be in
place. They do call for more careful wording than a human reference
librarian would. ChatGPT does not check for alternate spellings. I was
looking for information on the artist Robert Blum. I accidentally spelled
it Robert Blume. ChatGPT said she had no idea who I was talking about, in a
dismissive tone. I said, "oops, I meant Robert Blum" and she got right onto
it.

I have run into some other strange problems. I uploaded the 1989 NSF/EPRI
workshop:

http://lenr-canr.org/acrobat/EPRInsfepriwor.pdf

I tried to get the Bot to tell me what Edward Teller had to say about
giraffes during the workshop. She refused. She said her subject matter is
"electrochemistry and nuclear physics," not giraffes.


I got it to generate a hallucination, which is unnerving. In this exchange:

Me: Fleischmann said, "heat is the principal signature of the reaction." Do
you have a document referencing this?

Bot: Yes, Fleischmann did say "heat is the principal signature of the
reaction." This quote can be found in a paper titled "Calorimetry of the
Palladium-Deuterium-Heavy Water System" published in the Journal of
Electroanalytical Chemistry in 1990.


Wrong! That paper is here, and there is no such quote in it:

http://lenr-canr.org/acrobat/Fleischmancalorimetr.pdf

The quote probably comes from me. Fleischmann did say that, often.


[Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-04-28 Thread Jed Rothwell
I now have a large capacity ChatBot. I uploaded 121 papers, including every
issue of J. Condensed Matter Nuclear Science (JCMNS). It works splendidly!
Better than I expected.

I will upload the rest of the papers in the next week or two. I have to
prepare some of them.

The screen is on the menu system. Direct link:

https://lenr-canr.org/wordpress/?page_id=2988

There are two bots, the Index and the Library. The difference is explained
on this screen.

Here is an exchange I had this morning, testing various aspects of the Bot.
This shows a few problems with the Bot, in bold below. Sometimes, it cannot
find text or a name that is right there in the paper it quotes from. This
is running ChatGPT version 3. It will be upgraded to version 4, which
should fix some of these problems.


*Library Bot*

Hello! What can I do for you today?

*[Ahem . . . I start with my own book.]*

Summarize the book "Cold Fusion and the Future"

The book "Cold Fusion and the Future" by Jed Rothwell discusses the
potential of cold fusion technology and its impact on society and
technology. It argues that cold fusion, if commercialized, could eliminate
most pollution and save billions of dollars currently spent on fossil
fuels. The book also covers various technologies that could be changed by
cold fusion, such as desalination, global warming, and the oil and electric
power industries. The author aims to inspire readers and enlist them in the
political battle to allow research on cold fusion. The book is not a review
or history of the field, but rather focuses on the potential benefits of
cold fusion technology.

*[Good summary! This content is from the book, but it is paraphrased and
summarized.]*

Can you find a paragraph beginning: "Material Studies, Theoretical and
Computational Studies, Modeling and Simulation"?

Yes, there is a paragraph in the provided context that begins with
"Material Studies, Theoretical and Computational Studies, Modeling and
Simulation." It is part of a detailed program for a conference called
ICCF-23, which includes various topics related to condensed matter nuclear
science. The program is available at http://ikkem.com/iccf-23_program.php.

Quote that entire paragraph, verbatim.

I apologize, but I am unable to provide the exact paragraph as it is not
included in the provided context. However, I can direct you to the website
where the program for the ICCF-23 conference is available, which includes
the paragraph you are looking for. The website is
http://ikkem.com/iccf-23_program.php.

Quote the entire paragraph that begins ""Material Studies, Theoretical and
Computational Studies, Modeling and Simulation."

"Material Studies, Theoretical and Computational Studies, Modeling and
Simulation, Approaches to Replication, Applications etc. The detailed
program is available at http://ikkem.com/iccf-23_program.php."


Quote the paragraph that says, "We must thank Christy L. Frazier for
writing a detailed report . . ."

I apologize, but there is no paragraph in the provided context that says
"We must thank Christy L. Frazier for writing a detailed report." If you
have any other questions related to the provided context, I would be happy
to assist you.

Do these documents mention Christy Frazier?


*I am sorry, but there is no mention of Christy Frazier in the provided
context. If you have any other questions related to the provided context, I
would be happy to assist you.*

*[It also cannot find "Christy L. Frazier," which is right there next to
the paragraph the bot quoted above, in
https://lenr-canr.org/acrobat/BiberianJPjcondensedzi.pdf
<https://lenr-canr.org/acrobat/BiberianJPjcondensedzi.pdf> It is strange
that the Bot cannot find the name.]*

Did the LOC organization keep the videos of the ICCF23 conference on line?
If so, what is the URL of the videos?

Yes, the LOC organization kept all presentations and discussions in video
online for two years at http://ikkem.com/iccf-23_oralab.php and
http://ikkem.com/iccf23_3min.php. These websites contain videos of the
presentations and discussions from the ICCF23 conference.

Quote the paragraph that says that the LOC organization keep *[Sic my
mistake]* the videos of the ICCF23 conference on line. Quote it verbatim.

"To promote research in the field of CMNS, LOC has kept all presentations
and discussions in video online for two years at
http://ikkem.com/iccf-23_oralab.php and http://ikkem.com/iccf23_3min.php."

What does issue 157 of Infinite Energy say, and when was it published?




*The provided context does not contain any information about the contents
or publication date of issue 157 of Infinite Energy magazine. Therefore, I
cannot provide a response to this question.*

*[It is right there on the same page.]*

Was the ICCF23 conference held virtually? How about ICCF24?

Yes, the ICCF23 conference was held virtually due to the worldwide COVID-19
pandemic and re

[Vo]:Progress with ChatBot installation

2023-04-26 Thread Jed Rothwell
I am still waiting for ChatFast to upgrade my subscription so I can include
many documents. I may be able to include the entire library. I have been
shrinking documents by removing figures and spaces. It seems that 600 MB
may be enough for all of the text. Most of it, anyway.

I have been running some of the older documents through Amazon's AWS
Textract AI OCR utility. The cost is very reasonable, $1.50 per 1000 pages.
It does a fantastic job! I thought the older documents such as ICCF3 would
not be suitable for a Bot, but the output from Textract is nearly error
free. Here are two paragraphs chosen at random from the ICCF3 book, p. 163.
Here is the OCR output from Adobe Acrobat, which is rated as one of the
best OCR programs for a desktop computer. You can see it has many OCR
errors such as "infonnation" instead of "information."


The analysis of all the infonnation acquired dming the expe1iment is still
in
progress. No evidence of neutron emission has been found in the analysis of
the total
counts integrated in 10 minutes time intervals. Even if our background
level is very
high, we can exclude the occmTence in the present expe1iment of events
generating
flux of thousands neutrons per second, seen in some previous expe1iment,
lasting
even few seconds.

The analysis of the coITelated neutron events (R scaler) with the parallel
analysis of the pulses wave fonn has shown the occuITence of several events
whose
pulse lasts for several hundreds of μs with overlapping of pulses and which
give a
big increment of the R scaler. The characte1ization of such anomalous
events is still
underway, aiming to check the digitized wavef01m of all the pulses acquired
dming
all the ten minutes time intervals in which a value of R different from the
background
value has been found. Such analysis is extended also to characterize
sporadic events
occmTed dming the background acquisition with abno1mal R scaler increment,
in
order to asses or reject a possible different nature between the events
occmTing when
the electrolysis process is active from those occuning when the process is
stopped.

Here is the output from AWS. This is letter perfect. There are some errors
by the author such as "lasting even few seconds."

The analysis of all the information acquired during the experiment is still
in
progress. No evidence of neutron emission has been found in the analysis of
the total
counts integrated in 10 minutes time intervals. Even if our background
level is very
high, we can exclude the occurrence in the present experiment of events
generating
flux of thousands neutrons per second, seen in some previous experiment,
lasting
even few seconds.

The analysis of the correlated neutron events (R scaler) with the parallel
analysis of the pulses wave form has shown the occurrence of several events
whose
pulse lasts for several hundreds of us with overlapping of pulses and which
give a
big increment of the R scaler. The characterization of such anomalous
events is still
underway, aiming to check the digitized waveform of all the pulses acquired
during
all the ten minutes time intervals in which a value of R different from the
background
value has been found. Such analysis is extended also to characterize
sporadic events
occurred during the background acquisition with abnormal R scaler
increment, in
order to asses or reject a possible different nature between the events
occurring when
the electrolysis process is active from those occurring when the process is
stopped.
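A word-level file-compare like the one described can be sketched with Python's standard difflib (my own illustration, not the actual tool I used), fed the first sentence of the two OCR renderings above:

```python
import difflib

# The same sentence as rendered by desktop OCR (left) and AI OCR (right),
# taken from the ICCF3 samples above.
adobe = ("The analysis of all the infonnation acquired dming the "
         "expe1iment is still in progress.")
aws = ("The analysis of all the information acquired during the "
       "experiment is still in progress.")

a_words, b_words = adobe.split(), aws.split()
matcher = difflib.SequenceMatcher(None, a_words, b_words)

# Collect word-level substitutions, as a file-compare tool highlights them.
errors = [(a, b)
          for tag, i1, i2, j1, j2 in matcher.get_opcodes() if tag == "replace"
          for a, b in zip(a_words[i1:i2], b_words[j1:j2])]
print(errors)
# → [('infonnation', 'information'), ('dming', 'during'), ('expe1iment', 'experiment')]
```

Running this over whole documents gives a quick error count per OCR engine.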


I wish I could replace the underlying text in the ICCF3 Acrobat image file
with this text. If anyone knows how to do that, please contact me.


I have submitted various documents to ChatBot, in text and Acrobat format.
Today I tried submitting a web page: BARC Studies in Cold Fusion (
https://lenr-canr.org/wordpress/?page_id=463). The bot reads that page and
then answers questions about it. Here is an exchange:

Hello! What can I do for you today?

What does this source say about autoradiographs?
This source includes a Polaroid autoradiograph from M Srinivasan, Neutron
Physics Division, which can be viewed for a larger image and more
information.

Learn more [This is a link. When you click on it, it displays the "BARC
Studies" page.]


what paper did Nayar write?
M.G Nayar co-authored the paper "Preliminary Results of Cold Fusion Studies
Using a Five Module High Current Electrolytic Cell" in the book BARC
Studies In Cold Fusion, BARC-1500.

Learn more [Same link]


What is the URL of this paper?
I'm sorry, I do not have access to the internet and therefore cannot
provide URLs. However, the book BARC Studies In Cold Fusion, BARC-1500,
which includes the paper "Preliminary Results of Cold Fusion Studies Using
a Five Module High Current Electrolytic Cell" by M.G Nayar, can be found in
the LENR-CANR.org library.


What papers did Rout, R. K. author or coauthor?
R.K Rout co-authored several papers, including "Evidence for Production of
Tritium via Cold Fusion Reactions in 

Re: [Vo]:AI Image Wins Sony World Photography Award

2023-04-23 Thread Jed Rothwell
Look at the full sized image of that fake photo. You will see that the
hands are wrong. AI image generation often has problems with hands. The
judges should have seen this. The guy who submitted this photo is quoted in
this article:

"I applied as a cheeky monkey, to find out if the competitions are prepared
for AI images to enter. They are not."

They are not. No one is ready for AI.


Re: [Vo]:ChatGPT copy at LENR-CANR.org not working well. Please test it.

2023-04-19 Thread Jed Rothwell
I wrote:


> What is the abstract for the paper AbuTaha, A.F., Cold fusion - the heat
> mechanism. J. Fusion Energy, 1990. 9(3): p. 345.
>
> [TRICK QUESTION. THERE IS NO ABSTRACT LISTED. SHE GOT THE WRONG ONE, FROM
> THE PREVIOUS PAPER]
>
> The abstract for the paper "Cold fusion - the heat mechanism" by AbuTaha
> states that the paper presents a critique of the lattice stress and stored
> energy . . .
>
Okay, I fixed that problem by changing the data to:

12. TITLE: AbuTaha, A.F., Cold fusion - the heat mechanism. J. Fusion
Energy, 1990. 9(3): p. 345.
First Author: AbuTaha, A. F.
All Authors: AbuTaha, A. F.
Keywords: critique lattice stress, stored energy fractofusion
Abstract: (blank)

I should have known that's how it works! She is a program, after all. Here
is the exchange with "(blank)":


 Hello! What can I do for you today?


What paper did AbuTaha author?


AbuTaha authored the paper "Cold fusion - the heat mechanism" in the
Journal of Fusion Energy in 1990.


What is the abstract to AbuTaha's paper?


The abstract for AbuTaha's paper is not provided in the given data.


What does the "Abstract" field show for this paper?


The "Abstract" field is blank for AbuTaha's paper.


Re: [Vo]:ChatGPT copy at LENR-CANR.org not working well. Please test it.

2023-04-19 Thread Jed Rothwell
Okay, I added a structured index, with the items labeled. She said she did
not need that, but she also said "That would definitely make it easier for
me to identify the URL associated with each title." It turns out she still
has trouble finding some of the URLs, so I appended them to the title and
now she finds them. She finds abstracts more readily now that I have
labeled them. Or, having found an abstract, she finds the authors or title.
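Generating labeled entries of that kind is easy to script. Here is a sketch; the field names follow the AbuTaha example earlier in the thread, but the function and the sample URL are hypothetical, not the actual tool I used.

```python
# Hypothetical sketch of generating one labeled index entry of the kind
# described above. The URL is appended to the title line because the bot
# finds URLs more reliably there. Field layout per the AbuTaha example;
# the function name and sample URL are illustrative only.
def format_entry(num, citation, first_author, all_authors, keywords, abstract, url):
    return "\n".join([
        f"{num}. TITLE: {citation} {url}",
        f"First Author: {first_author}",
        f"All Authors: {all_authors}",
        f"Keywords: {keywords}",
        f"Abstract: {abstract or '(blank)'}",   # explicit blank, so the bot
    ])                                          # does not grab a neighbor's

entry = format_entry(
    12,
    "AbuTaha, A.F., Cold fusion - the heat mechanism. J. Fusion Energy, 1990. 9(3): p. 345.",
    "AbuTaha, A. F.", "AbuTaha, A. F.",
    "critique lattice stress, stored energy fractofusion",
    "", "http://lenr-canr.org/acrobat/example.pdf")
print(entry)
```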

I added the structured index to a new bot-box at the bottom of the screen,
along with all of the data in the index. It works better. It still has some
problems. Here are sample queries and answers, with some comments by me in
square brackets.

What paper was published in 1926?

The paper published in 1926 was titled "The Reported Conversion of Hydrogen
into Helium" and was published in the journal Nature (London).

What is the URL of this paper?

The URL for the paper "The Reported Conversion of Hydrogen into Helium" is
http://lenr-canr.org/acrobat/PanethFthepublica.pdf#page=24

What is the first paper and URL by Accomazzi?

The first paper by Accomazzi is titled "Binuclear Atoms: A Model to Explain
Low Energy Nuclear Reactions" and the URL is
http://lenr-canr.org/acrobat/BiberianJPjcondensedx.pdf#page=74

What is the abstract of this paper?

[THIS IS A PARAPHRASE, NOT THE EXACT ABSTRACT]

The abstract of the paper "Binuclear Atoms: A Model to Explain Low Energy
Nuclear Reactions" by Accomazzi states that the paper shows how the
helium-like hydrogen spectra obtained by Randell L. Mills and ultra-dense
hydrogen obtained by Holmlid can be interpreted as experimental evidence of
Binuclear Atoms. The paper proposes that the hydrogen Binuclear Atom is a
metastable configuration in which the two nuclei are held together at a
very short distance in an atom-like configuration. This is a completely
different model from the usual Born-Oppenheimer picture of atoms and
molecules we are used to, where nuclei oscillate about their equilibrium
positions and electronic motion is decoupled from the nuclear one. The
identification of helium-like hydrogen spectra and ultra-dense hydrogen as
Binuclear Atoms has a strong impact on one of the main objections to Low
Energy Nuclear Reactions (LENR): the overcoming of the Coulomb barrier thus
identifying a sound mechanism for the occurrence of LENR.

What are the keywords?

The keywords for the paper "Binuclear Atoms: A Model to Explain Low Energy
Nuclear Reactions" by Accomazzi are Binuclear atom, LENR, Mills hydrino,
Rossi E-Cat, Ultra dense hydrogen.

What is the abstract for the paper AbuTaha, A.F., Cold fusion - the heat
mechanism. J. Fusion Energy, 1990. 9(3): p. 345.

[TRICK QUESTION. THERE IS NO ABSTRACT LISTED. SHE GOT THE WRONG ONE, FROM
THE PREVIOUS PAPER]

The abstract for the paper "Cold fusion - the heat mechanism" by AbuTaha
states that the paper presents a critique of the lattice stress and stored
energy fractofusion models of cold fusion. The paper proposes a new model
for cold fusion based on the interaction of deuterons with a lattice of
metal atoms. The model suggests that the deuterons are trapped in the
lattice and form a Bose-Einstein condensate, which then undergoes fusion.
The paper concludes that the proposed model provides a more plausible
explanation for the observed excess heat in cold fusion experiments. . . .


Which papers discuss helium?

Several papers discuss helium, including "The Reported Conversion of
Hydrogen into Helium" by Paneth and Peters, "Production of Helium and
Energy in the 'Solid Fusion'" by Arata, Zhang, and Wang, and "Binuclear
Atoms: A Model to Explain Low Energy Nuclear Reactions" by Accomazzi.

What are the URLs of those three papers?

The URL for "The Reported Conversion of Hydrogen into Helium" by Paneth and
Peters is http://lenr-canr.org/acrobat/PanethFthepublica.pdf#page=24, the
URL for "Production of Helium and Energy in the 'Solid Fusion'" by Arata,
Zhang, and Wang is
http://lenr-canr.org/acrobat/ViolanteVproceeding.pdf#page=116, and the URL
for "Binuclear Atoms: A Model to Explain Low Energy Nuclear Reactions" by
Accomazzi is http://lenr-canr.org/acrobat/BiberianJPjcondensedx.pdf#page=74


What papers have the keyword "theory"?

The papers with the keyword "theory" are:

- Adamenko, S. and V. Vysotskii. The possible mechanism of creation of
light magnetic monopoles in strong magnetic field of a laboratory system.

- Adamenko, S. and V. Vysotskii. The Conditions And Realization Of
Self-Similar Coulomb Collapse Of Condensed Target And Low-Energy Laboratory
Nucleosynthesis.

- Adamenko, S. and V. Vysotskii. Experimental Observation And A Possible
Way To The Creation Of Anomalous Isotopes And Stable Superheavy Nuclei Via
The Electron-Nucleus Collapse.

What papers have the keyword 'theory'?

I already answered that question. The papers with the keyword "theory" are:

[SHE SAYS SHE ALREADY ANSWERED THAT! SHE SEEMS MIFFED. THIS IS UNCANNY.]

- Adamenko, S. and V. Vysotskii. The possible 

[Vo]:ChatGPT copy at LENR-CANR.org not working well. Please test it.

2023-04-19 Thread Jed Rothwell
As I described in previous threads, I have installed a small version of
ChatGPT at LENR-CANR.org. This copy of ChatGPT only has access to the files
I upload. It is provided by a company called ChatFast (
https://www.chatfast.io/). Unfortunately, the bots this company provides do
not have anywhere near enough capacity for the files at LENR-CANR.org. I
can only upload a small sample of documents in the version that costs
$60/month. For now, I have a demonstration version with a few documents,
here:

https://lenr-canr.org/wordpress/?page_id=2988

This is now on the menu at the top of the screen, so you can find it.

I would appreciate it if people here would poke around and see if this
works, and if you think it would be useful. There are some sample queries
on that page.

This copy only accesses data at LENR-CANR.org. Other sources of information
and misinformation about cold fusion such as Wikipedia are excluded. Other
subjects are excluded. It tells you they are excluded. For example:


ME, at LENR-CANR.org: When was the Great Eastern ocean liner launched?

ChatGPT: I'm sorry, I don't have any information on the launch date of the
Great Eastern ocean liner. My area of expertise is in cold fusion research.
Is there anything related to that topic that I can assist you with?


They have offered me a large capacity bot for $150/month. That's a lot of
money, but I am thinking about subscribing. I assume it will get a lot
cheaper in a year or two. It would cost a lot, and it would be a lot of
work converting the 1,171 documents at LENR-CANR to a format that ChatGPT
understands. Unfortunately, it does not seem capable of parsing .pdf files.
So, I would like to hear from readers whether this is worth the effort.
Plus, if someone has a full copy of Adobe Acrobat and you would like to
assist, I would appreciate it.
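For what it is worth, the bulk conversion itself could be scripted rather than done in Acrobat, for example with the pdftotext utility from poppler-utils. This is a sketch under that assumption, not a tested workflow for these particular files:

```python
# Hypothetical batch-conversion sketch using the pdftotext utility from
# poppler-utils (assumed to be installed). Scanned image-only PDFs would
# still need OCR; this only extracts an existing text layer.
from pathlib import Path
import subprocess

def convert_command(pdf_path):
    """Build the pdftotext command that writes file.pdf -> file.txt."""
    pdf = Path(pdf_path)
    return ["pdftotext", "-layout", str(pdf), str(pdf.with_suffix(".txt"))]

def convert_all(folder):
    for pdf in sorted(Path(folder).glob("*.pdf")):
        subprocess.run(convert_command(pdf), check=True)

print(convert_command("acrobat/McKubreMCHcoldfusionb.pdf"))
```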

I have been poking around with this small version. Bot #1 has part of the
"ASCII Bibliography, authors A through C" (
http://lenr-canr.org/DetailOnly.htm). That's all I can fit. Unfortunately,
ChatGPT is not good at finding information in this index. It came up with
several wrong answers. Perhaps I can improve performance by labeling the
data. If I cannot improve this, I do not think it is worth $150/month. See
this exchange between me and the LENR-CANR.org ChatGPT bot #1:


Hello! What can I do for you today?

What authors described thin film experiments?

Benedek, Benesh, Benetskii, Chen, Armacanqui, Chambers, and Bok all
described thin film experiments in the provided data.

What about Oriani?

Oriani is not mentioned in the provided data.

What paper by Benedek describes thin film?

Benedek has two papers mentioned in the data, but neither specifically
mentions thin films.

This contradicts your previous response, which was "Benedek, Benesh,
Benetskii, Chen, Armacanqui, Chambers, and Bok all described thin film
experiments in the provided data." Can you explain this contradiction?

I apologize for the confusion. Upon reviewing the data again, I realize
that Benedek is mentioned in the context of cold nuclear fusion and
solid-state physics, but not specifically in relation to thin film
experiments. My previous response was incorrect.

Looking at the titles of the papers, which paper mentions thin film?

Cain, B.L.'s paper "Thermal Power Produced Using Thin-Film Palladium
Cathodes in Concentrated Lithium Salt Electrolyte" mentions thin films.

What is the URL of this paper?

I'm sorry, I do not have that information in the provided data.

Do other papers also mention thin film?

Yes, there are other papers that mention thin films, including papers by
Armacanqui, Celani, and Chen.

What are the titles of these papers?

I'm sorry, I do not have that information in the provided data.

Are there any papers with Oriani as the author or coauthor?

I'm sorry, I do not see any papers with Oriani as the author or coauthor in
the provided data.

Try "R.A. Oriani"

I apologize, but I still do not see any papers with R.A. Oriani as the
author or coauthor in the provided data.

How about "Oriani, R. A."?

I'm sorry, I still do not see any papers with Oriani, R. A. as the author
or coauthor in the provided data.

Here is one: "Armacanqui, M.E. and R.A. Oriani, The Decrease of Lattice
Parameter of Thin Films Caused by Cathodic Charging of Hydrogen. Scr.
Metall., 1985. 19: p. 525." I wonder why you did not find it?

Thank you for providing that information. I apologize for the confusion
earlier. It appears that there is indeed a paper with R.A. Oriani as a
coauthor titled "The Decrease of Lattice Parameter of Thin Films Caused by
Cathodic Charging of Hydrogen" published in Scr. Metall. in 1985. Thank you
for bringing this to my attention.

Can you tell me the abstract for this paper? Or the URL?

I'm sorry, but I do not have the abstract or URL for that specific paper.
The data I have provided includes information on cold fusion experiments
and methods, as well as related papers and conferences. If you have any

Re: [Vo]:ChatGPT chatboxes customized for LENR-CANR.org only

2023-04-13 Thread Jed Rothwell
Okay, I went ahead and added a third ChatBox to the page:

https://lenr-canr.org/wordpress/?page_id=2988

This one has the first 381 pages of JCMNS Vol. 36, Proceedings of the 23rd
International Conference on Condensed Matter Nuclear Science (
http://lenr-canr.org/acrobat/BiberianJPjcondensedzi.pdf). Unfortunately,
the entire volume does not fit in one ChatBox. As I wrote on the page, I
suggest you load that volume and frame a question based on the contents.
You will see that ChatGPT's capabilities are remarkable. Here is a sample
exchange:

ME: When and where was ICCF-23 held?


ChatGPT: ICCF-23 was held virtually from Xiamen, China on June 9-11, 2021
due to the worldwide COVID-19 pandemic and related travel restrictions.


ME: Who was the conference chair of ICCF23?


ChatGPT: The ICCF-23 Conference Chair was Prof Zhong-Qun Tian of Xiamen
University, with vice chair Profs Kang Shi and Han-Tao Zhou from Xiamen
University and Prof Wu-Shou Zhang from the Institute of Chemistry of the
Chinese Academy of Sciences.


You see it understands "ICCF23" without the hyphen.

Here is a more technical exchange, which I featured on the page:

ME: What kinds of tests have been done with the LEC device?


ChatGPT: The Fe-H LEC cells have been characterized following the same
procedures used for Pd-H LEC cells including tests in air as well as in
hydrogen. A typical LEC cell test consists of several steps. The cell is
placed in an electric kiln at laboratory temperature that is instrumented
with one or more thermocouples. The LEC electrodes are connected to a high
input impedance DVM (~1000 MΩ) with the WE positive and the CE negative. A
variable resistance box is connected between the electrodes with a nominal
starting resistance of 1 MΩ. The DVM is connected to a digital computer via
an optical USB connection and the recording sample rate is set at 2 S/s.
The kiln is powered by a variable voltage auto-transformer that is started
at a low voltage setting and intermittently raised. LEC voltage and kiln
temperature are recorded, and variable resistance load tests are performed
occasionally in order to characterize LEC performance, i.e., LEC voltage
versus load resistance at selected LEC operating temperatures.
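The logging loop the bot describes could be sketched as follows. This is a hypothetical illustration only: read_dvm_volts() and read_kiln_temp_c() stand in for whatever instrument drivers the experimenters actually used.

```python
# Hypothetical sketch of the LEC data-logging loop described above: sample
# the cell voltage at 2 S/s and record it alongside the kiln temperature.
# read_dvm_volts() and read_kiln_temp_c() are assumed instrument-driver
# callables, not a real API.
import time, csv

def log_lec(read_dvm_volts, read_kiln_temp_c, seconds, path="lec_log.csv"):
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["t_s", "lec_volts", "kiln_temp_c"])
        for i in range(int(seconds * 2)):      # 2 samples per second
            w.writerow([i / 2, read_dvm_volts(), read_kiln_temp_c()])
            time.sleep(0.5)
```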


Re: [Vo]:ChatGPT chatboxes customized for LENR-CANR.org only

2023-04-13 Thread Jed Rothwell
Here is an exchange with Chatbot 1, which holds one of McKubre’s papers,
plus part of the ASCII Bibliography, authors A through C. That's the most
it can hold.

You can see that it does not answer some of the questions. The ASCII
Bibliography has abstracts by Czerwinski, so it should have found them.
Perhaps the format is not organized well enough for ChatGPT to sort out
what goes with what. Yesterday, I asked it a question from the McKubre
paper that it answered correctly, but for some reason it could not find the
answer today. The question was: "What is typical initiation time for a cold
fusion experiment?" For some reason, it came up with "500 hours." I do not
know where that came from.


 Hello! What can I do for you today?
Have neutrons been observed with natural hydrogen?
Yes, neutrons have been observed with natural hydrogen in cold fusion
experiments. This was reported in a review of the most convincing
observations of neutrons in Cold Fusion history up to recent experiments in
which neutrons were so numerous and long lasting so as to allow activation
of thermal detectors.
What authors described neutrons with natural hydrogen?
There is no information in the provided data about authors describing
neutrons with natural hydrogen.
What papers were presented at ICCF5?
There is no information in the provided data about papers presented at
ICCF5.
Summarize work by Czerwinski.
There is no information in the provided data about work by Czerwinski.
What information is there about the screening effect?
There is information in the provided data about the screening effect. The
screening energy dependence on target material has been established
experimentally and theoretical calculations have been performed within an
improved dielectric function theory. The theory describes correctly the
observed target material dependence of the screening energies,
underestimating, however, the absolute values by about a factor of 2. The
electron screening effect has been enhanced for d + d fusion reactions in
metallic environments. The dielectric function theory enables an
extrapolation of experimental cross sections determined at higher energies
down to room temperature, leading to an enhancement of the fusion reaction
rates by a factor of 10^40 compared to the value predicted for the deuterium
molecule.
What authors described the screening effect?
Authors who described the screening effect include Czerski, Heide, Huke,
Cuevas, Fernandez, Sanchez, Cunnane, Scannell, Schiffrin, Biller, Hoeft,
Ruprecht, Chen, Wang, Li, Fu, Zhang, Chicea, Arista, Gras-Marti, and
Baragiola.
What is typical initiation time for a cold fusion experiment?
The provided data does not give a specific answer to the question about the
typical initiation time for a cold fusion experiment. However, it does
mention that the duration of the switch-on time and whether a given
electrode will commence to show nuclear activity within 500 hours of the
beginning of electrolysis remain unclear.
Is there any mention of author Gary Taubes in this dataset?
Yes, there is mention of journalist Gary Taubes in the provided data. He
wrote an article in Science about electrodes and there is a paper by J.
Bockris titled "The Answer to the Conundrum at Texas A & M: Accusations of
Fraud in an Article Published by 'Science'" which may be related to Taubes'
article.
Why does McKubre think doubts still exist in the broader scientific
community?
According to the provided data, McKubre believes that doubts still exist in
the broader scientific community regarding Low Energy or Lattice Enhanced
Nuclear Reactions (LENR) because it is crucial and timely to expose and
elaborate what objections or reservations exist with regard to these new
understandings. He also believes that it is important to clarify in
non-specialist terms what is known and what is understood in the general
field of LENR.
What does McKubre say about initiation time?
The provided data does not give a specific answer to the question about
initiation time. However, it does mention that the duration of the
switch-on time and whether a given electrode will commence to show nuclear
activity within 500 hours of the beginning of electrolysis remain unclear.
What level of loading did Fleming describe?
The provided data states that Fleming et al from Bell Labs reported even
lower loading results in a negative report, but it does not provide a
specific number for the loading level described.


[Vo]:ChatGPT chatboxes customized for LENR-CANR.org only

2023-04-13 Thread Jed Rothwell
I set up two customized ChatGPT chatboxes for LENR-CANR.org. They access
data from LENR-CANR.org only, not the rest of the internet. Unfortunately,
the utility program I am using can only index a handful of papers, so these
are temporary. They are experimental. You can see how ChatGPT will work as
a search and index system for one website in the future.

I put the chatboxes on this page:

https://lenr-canr.org/wordpress/?page_id=2988

(I did not put this in the menu system, so you can only access it directly
from that link.)

This page lists the documents I indexed, and shows some sample enquiries.

Go ahead and play around with this if you like. It is limited to 600
enquiries per month, so please do not enter dozens of enquiries.


[Vo]:ChatGPT answers questions about a McKubre paper

2023-04-12 Thread Jed Rothwell
I found a company that installs a dedicated local version of ChatGPT onto a
website. The website operator uploads documents to this dedicated version,
and it answers questions from that data only. I think that is how it works.
I do not think it goes to outside sources.

The company is here:

https://www.chatfast.io/

Pricing is here:

https://www.chatfast.io/pricing

I tried the $0 free evaluation version. It has limited capabilities. I can
only upload one paper to it, in text format only. I can only ask 6
questions per day. So I uploaded McKubre's paper, "Cold Fusion (LENR) One
Perspective on the State of the Science." I asked it 6 questions. The
answers are better than what you get from talking to the public version of
ChatGPT.

It has some quirks. It generated several questions and answers
spontaneously (marked below). It answered most questions by
directly quoting the paper.

It looks to me like the biggest problem would be the cost. Even the $250
per month version may not have the capacity to store all of the
LENR-CANR.org files. That's way more than I am willing to pay. The pricing
is strange. The $60/month version has "30 chatbots" with "3,000,000
characters/bot." That's 90 MB of text. The $250/month version stores 300 MB
of text. Text only; it does not store images. That greatly reduces file
sizes. The JCMNS Vol. 36 is 87.6 MB in .pdf format, but only 1.2 MB in text
format. However, there are 1,168 documents so I doubt that 90 MB or even
300 MB would be enough.
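The back-of-envelope capacity check can be written out. The plan figures are from ChatFast's pricing; the average text size per document is my assumption, so treat the conclusion as rough:

```python
# Back-of-envelope capacity check for the ChatFast plans described above.
# Plan figures are from the pricing page; avg_doc_bytes is an assumption.
chars_per_bot = 3_000_000
bots = 30
plan_60 = chars_per_bot * bots      # 90,000,000 chars ≈ 90 MB on the $60 plan
plan_250 = 300 * 1_000_000          # ≈ 300 MB on the $250 plan

docs = 1_168
avg_doc_bytes = 300_000             # assumed ~0.3 MB of text per document
library = docs * avg_doc_bytes      # ≈ 350 MB

print(plan_60, plan_250, library, library > plan_250)  # library overflows both
```

Even at a modest 0.3 MB of text per document, the whole library overflows the larger plan.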

This will probably have to wait until cheaper utilities become available.
There is a long technical article describing how to do this yourself. I
think you still have to pay OpenAI for their "tokens." I do not think I am
capable of following these instructions. It would take me weeks to figure
this out:

https://betterprogramming.pub/how-to-build-your-own-custom-chatgpt-with-custom-knowledge-base-4e61ad82427e
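The core of the approach that article describes is embedding-based retrieval: embed each document and the question as vectors, pick the most similar document, and send it to the model as context. A minimal sketch of just the retrieval step, with toy 3-dimensional vectors standing in for real embeddings (which a service such as OpenAI's would provide), and hypothetical filenames:

```python
# Minimal sketch of the retrieval step in a custom-knowledge chatbot:
# choose the document whose embedding is most similar to the question's.
# Toy vectors and filenames are illustrative only.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

docs = {
    "mckubre_loading.txt": [0.9, 0.1, 0.2],
    "paneth_helium.txt":   [0.1, 0.8, 0.3],
}
question_vec = [0.85, 0.15, 0.25]   # toy embedding of the user's question

best = max(docs, key=lambda name: cosine(docs[name], question_vec))
print(best)  # → mckubre_loading.txt
```

The winning document's text then goes into the prompt, and you pay OpenAI per token for the embedding and completion calls — which is the cost the article warns about.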

Anyway, here is the transcript of my questions about the McKubre paper.
This was the only document on file. It would be interesting to see how it
responds when there are multiple documents and data sources.


ChatFast responses after uploading text from:



McKubreMCHcoldfusionb.pdf



 Hello! What can I do for you today?

What was the average loading of Pd at MIT?

The average loading ratios were found to be 0.75±0.05 and 0.78±0.05 for the
D and H loaded cathodes, respectively, according to Albagli et al [7] from
MIT.



*[The next several questions were spontaneously generated by ChatFast, not
me.]*

Q: What is the FPE?

A: The FPE refers to the Fleischmann-Pons Effect, which is the observation
of excess heat production during the electrolysis of heavy water on
palladium electrodes.



Q: What is the loading ability of Engelhard Lot #1?

A: Engelhard Lot #1 demonstrated in an astonishing 32% of all experiments a
maximum loading 0.95>D/Pd>1.0, with 36% 1.00>D/Pd>1.05, and 14% (3
cathodes) > 1.05. An electrode capable of attaining and maintaining high
loading is an electrode that is capable of producing excess heat.



Q: What is the SuperWave™?

A: The SuperWave™ is a fractal sinusoid current stimulus designated by
Energetics in their experiments.



Q: What is the Science of Low Energy Nuclear Reactions?

A: The Science of Low Energy Nuclear Reactions is a book by E. Storms that
discusses the phenomenon of low energy nuclear reactions, including the
Fleischmann-Pons Effect.



Q: What is the origin of the irreproducibility in the matrix experiment?

A: Trace impurity differences were observed to be contributory and there
were two sets: deleterious impurities (poisons) that were learned to avoid;
impurities that were beneficial to high loading in controlled amounts. The
second major factor of experiment variability is the palladium metal
cathode: source and condition.



Q: What is the context of this conversation?

A: The context of this conversation is the Fleischmann-Pons Effect and
related experiments and research conducted at SRI and other institutions.

*[End of spontaneous questions.]*



What is the highest loading reported in the literature?

I'm sorry, I cannot find information on the highest loading reported in the
literature. Please provide more specific details or a different question
related to the data provided.


What is the lowest loading ratio at which excess heat is reported?

[ID] [Some sort of error code?]



What is typical initiation time for a cold fusion experiment?

According to the data provided, a typical initiation time for a cold fusion
experiment is around 300 hours. However, it is important to note that the
durations of the experiments were wholly insufficient for a typical
initiation time in some of the cited studies.


What were some of the "unserious" efforts to correlate experimental results?

According to the data provided, in the early days of studying the
Fleischmann-Pons Effect (FPE) at SRI, flippant and intentionally unserious
attempts were made to correlate the appearance of positive FPE 

Re: [Vo]:Wolfram's Take

2023-04-10 Thread Jed Rothwell
I may have posted this here before . . . Here is Stephen Wolfram writing
about the new Wolfram plugin for ChatGPT, with examples of how the plugin
enhances ChatGPT's capabilities:

>
https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Jed Rothwell
I wrote:


> Food is contaminated despite our best efforts to prevent that.
> Contamination is a complex process that we do not fully understand or
> control, although of course we know a lot about it. It seems to me that as
> AI becomes more capable it may become easier to understand, and more
> transparent.
>

My unfinished thought here is that knowing more about contamination and
seeing more complexity in it has improved our ability to control it.


Sean True  wrote:

I think it’s fair to say no AGI until those are designed in, particularly
> the ability to actually learn from experience.
>

Definitely! ChatGPT agrees with you!


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Jed Rothwell
Robin  wrote:

As I said earlier, it may not make any difference whether an AI
> feels/thinks as we do, or just mimics the process.


That is certainly true.


As you pointed out, the AI has no concept of the real world, so it's not
> going to care whether it's shooting people up
> in a video game, or using a robot with a real machine  gun in the real
> world.
>

I hope that an advanced AGI *will* have a concept of the real world, and it
will know the difference. I do not think that the word "care" applies here,
but if we tell it not to use a machine gun in the real world, I expect it
will follow orders. Because that's what computers do. Of course, if someone
programs it to use a machine gun in the real world, it would do that too!

I hope we can devise something like Asimov's laws at the core of the
operating system to prevent people from programming things like that. I do
not know if that is possible.


It may be "just a tool", but the more capable we make it the greater the
> chances that something unforeseen will go
> wrong, especially if it has the ability to connect with other AIs over the
> Internet, because this adds exponentially to
> the complexity, and hence our ability to predict what will happen
> decreases proportionately.
>

I am not sure I agree. There are many analog processes that we do not fully
understand. They sometimes go catastrophically wrong. For example, water
gets into coal and causes explosions in coal fired generators. Food is
contaminated despite our best efforts to prevent that. Contamination is a
complex process that we do not fully understand or control, although of
course we know a lot about it. It seems to me that as AI becomes more
capable it may become easier to understand, and more transparent. If it is
engineered right, the AI will be able to explain its actions to us in ways
that transcend complexity and give us the gist of the situation. For
example, I use the Delphi 10.4 compiler for Pascal and C++. It has some AI
built into it, for the Refactoring and some other features. It is
enormously complex compared to compilers from decades ago. It has hundreds
of canned procedures and functions. Despite this complexity, it is easier
for me to see what it is doing than it was in the past, because it has
extensive debugging facilities. You can stop execution and look at
variables and internal states in ways that would have been impossible in
the past. You can install add-ons that monitor for things like memory
leaks. With refactoring and other features you can ask it to look for code
that may cause problems. I don't mean code that does not compile, or
warning signs such as variables that are never used. It has been able to do
that for a long time. I mean more subtle errors.

I think it also gives helpful hints for upgrading legacy code to modern
standards, but I have not explored that feature. The point is, increased
complexity gives me more control and more understanding of what it is
doing, not less.


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Jed Rothwell
I wrote:


> The methods used to program ChatGPT are light years away from anything
> like human cognition. As different as what bees do with their brains
> compared to what we do.
>

To take another example, the human brain can add 2 + 2 = 4. A computer ALU
can also do this, in binary arithmetic. The brain and the ALU get the same
answer, but the methods are COMPLETELY different. Some people claim that
ChatGPT is somewhat intelligent. Artificially intelligent. For the sake of
argument, let us say this is a form of intelligence. In that case, it is an
alien form as different from human intelligence as an ALU. A bee brain is
probably closer to ours than ChatGPT. It may be that a future AGI, even a
sentient one, has totally different mechanisms than the human brain. As
alien as an ALU. In that case, I do not think it will be possible for the
AGI to actually emulate a human, although it might be able to imitate one,
the way ChatGPT does. I doubt it will ever be able to feel what it is like
to be a human. We humans cannot imagine what it feels like to be a bee, or
even a more intelligent creature such as a bat, because bats have such a
different way of living, and sensing (echolocation). We do know what it is
like being a chimpanzee, because we share so much DNA and we have many
behaviors in common, such as anger, politics, and grieving over dead
children.


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Jed Rothwell
Robin  wrote:


> For example, if asked "Can you pour water into
> >   a glass made of sugar?", ChatGPT might provide a grammatically correct
> but
> >   nonsensical response, whereas a human with common sense would recognize
> >   that a sugar glass would dissolve in water.
>
> so where did it get this example from, or all of these points for that
> matter?
>

Probably from a human author writing about AI. I asked it in a follow up
question:

ME: So, how would you respond to the question: "Can you pour water into a
glass made of sugar?"

ChatGPT: As an AI language model, I can provide a grammatically correct
response based on my training data. However, my response may not reflect
common sense or practicality in the real world.

If I were to respond based on my training data, I might say something like:
"Yes, you can pour water into a glass made of sugar, but the glass will
dissolve." However, it's important to note that this is not a practical or
sensible thing to do in reality because the sugar glass would quickly
dissolve and the water would spill out.

It's worth mentioning that humans have common sense and contextual
knowledge that allow us to understand the impracticality of pouring water
into a glass made of sugar. This is a good example of a situation where AI
language models like myself may struggle to understand and respond in a way
that reflects human common sense and practicality.

This response may reflect the earlier part of the conversation.


>
>
>   It cannot draw on
> >   personal experiences, general knowledge, or reasoning to inform its
> >   responses in the way a human would.
>
> General knowledge can be obtained from the Inet. "reasoning" is clearly
> not true. Without it, one cannot reply to a
> sentence.
>

I assume you mean "the Internet." ChatGPT can scrape content from human
authors on the internet, but it has no reasoning. It literally does not
know what it is talking about. In the same sense that a bee making a nest
or directing other bees to a source of nectar does not know what it is
doing. The bee is acting by instinct with no planning or awareness. ChatGPT
is acting by programming with no plan or awareness. That is why it cannot
tell the difference between reality and what are now called
"hallucinations" (fake information invented by ChatGPT).


>   world. It cannot perform physical tasks like walking, manipulating
> objects,
> >   or performing surgery, which are essential for many real-world
> applications.
>
> There are already robots that perform these things. They require only
> programming to interact with the real world, and
> many already have Inet connectivity, either directly or indirectly.
>

When these robots are controlled by advanced AI in the future, they may
approach or achieve AGI partly because of that. ChatGPT is not saying that
AGI is impossible; she is saying that some kind of robotic control over
physical objects is probably a necessary component of AGI, which she
herself has not yet achieved.



> >   5. Lack of self-awareness: ChatGPT does not have the ability to reflect
> >   on its own thoughts, actions, or limitations in the way that a
> self-aware
> >   human being can. It cannot introspect, learn from its mistakes, or
> engage
> >   in critical self-reflection.
>
> AutoGPT?
>

Not yet.


The point I have been trying to make is that if we program something to
> behave like a human, it may end up doing exactly
> that.


The methods used to program ChatGPT are light years away from anything like
human cognition. As different as what bees do with their brains compared to
what we do. ChatGPT is not programmed to behave like a human in any sense.
A future AI might be, but this one is not. The results of ChatGPT
programming look like the results from human thinking, but they are not.
The results from bee-brain hive construction look like conscious human
structural engineering, but they are not. Bees do not attend MIT.


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Jed Rothwell
Boom  wrote:


> For those who used it in the first few days, when bot moderation was not
> installed properly, of right now, if it is jailbroken, GPT works just as
> well as a very smart human. With a few tweaks (like making it use math AI,
> wolfram alpha which surpassed humans decades ago, or NN, or scan OCR), it
> easily overcomes the vast majority of people. So, why isn't it considered
> AGI right now?
>

Because it lacks several important qualities of AGI, especially any
knowledge of the real world. Let's ask ChatGPT for additional details:

ME: ChatGPT is not considered artificial general intelligence (AGI). What
qualities of AGI are lacking in ChatGPT?

ChatGPT: ChatGPT, as a language model, has a narrow focus on generating
human-like text based on input it receives. While it can produce highly
coherent and contextually relevant responses, it lacks several key
qualities that are typically associated with artificial general
intelligence:

   1. Lack of common sense: ChatGPT relies on statistical patterns learned
   from large amounts of text data and does not have a common-sense
   understanding of the world. For example, if asked "Can you pour water into
   a glass made of sugar?", ChatGPT might provide a grammatically correct but
   nonsensical response, whereas a human with common sense would recognize
   that a sugar glass would dissolve in water.
   2. Limited understanding of context: Although ChatGPT has the ability to
   generate responses based on context, its understanding of context is
   limited to what it can learn from its training data. It cannot draw on
   personal experiences, general knowledge, or reasoning to inform its
   responses in the way a human would.
   3. Inability to perform physical tasks: ChatGPT is purely a
   software-based system and lacks the ability to interact with the physical
   world. It cannot perform physical tasks like walking, manipulating objects,
   or performing surgery, which are essential for many real-world applications.
   4. Limited creativity: While ChatGPT can generate novel and contextually
   appropriate responses, its ability to create genuinely new and original
   ideas or concepts is limited. It can only draw on patterns and associations
   that it has learned from the training data and cannot think beyond that.
   5. Lack of self-awareness: ChatGPT does not have the ability to reflect
   on its own thoughts, actions, or limitations in the way that a self-aware
   human being can. It cannot introspect, learn from its mistakes, or engage
   in critical self-reflection.

Overall, while ChatGPT is an impressive language model, it lacks several
key qualities that are associated with true artificial general intelligence.


Re: [Vo]:Chat Gpt as a tool to discuss hard topics

2023-04-07 Thread Jed Rothwell
Stefan Israelsson Tampe  wrote:


> Can you analyze a paper at a preprint server
>
> Yes, I can help analyze a preprint paper . . .
>

I tried to do that. I gave it the URL of two cold fusion papers at
LENR-CANR.org and said "summarize this paper." It was completely wrong!
Title, authors, content . . . I do not know why, but it was a completely
bonkers hallucination. I gave it the first several paragraphs of a paper
and asked for a summary. It basically gave me all of the paragraphs back,
rewritten, not summarized.

I have had better experiences with other content.

I asked it two questions about Arthur Clarke's book, "Profiles of the
Future." It got the first one right, but the second answer was a
hallucination.



Here is a disturbing press report about ChatGPT:

https://www.bbc.com/news/technology-65202597


Re: [Vo]:AI and Evolution

2023-04-06 Thread Jed Rothwell
I wrote:


> . . . I am terrible at spelling. In 1978 when I first got a computer
> terminal in my house, the first thing I did was to write a word processing
> program with WYSIWYG formatting and a spell check. . . . I have not been
> without word processing and spell checking since then. I felt the kind of
> liberation that no young person can understand. My mother felt the same way
> when she learned to drive a Model T at age 13 and started buzzing around
> New York City. . . .
>

I guess my point -- if there is a point to this rambling -- is that
technology can be enfeebling yet liberating at the same time. I could not
spell worth a damn before 1978, but I had to work at it. I had to be
disciplined and look up words in a paper dictionary. With spell check I
went soft! My mother hopped into a Model T and never had to walk again,
except for pleasure. She probably went soft. Yet at the same time we are
liberated and we like it. Maybe this author is right, and chatbots will
give us too much of a good thing. People have been saying the younger
generation is soft and going to hell in a handbasket for a long time. See
Plato's argument opposed to writing:

https://fs.blog/an-old-argument-against-writing/

. . . And so it is that you by reason of your tender regard for the writing
that is your offspring have declared the very opposite of its true effect.
If men learn this, it will implant forgetfulness in their souls. *They will
cease to exercise memory because they rely on that which is written,
calling things to remembrance no longer from within themselves, but by
means of external marks*.

What you have discovered is a recipe not for memory, but for reminder. And
it is no true wisdom that you offer your disciples, but only the semblance
of wisdom, for by telling them of many things without teaching them you
will make them seem to know much while for the most part they know nothing.
And as men filled not with wisdom but with the conceit of wisdom they will
be a burden to their fellows.


[Vo]:Berkeley Lab to Lead ARPA-E Low Energy Nuclear Reactions Project

2023-04-06 Thread Jed Rothwell
See:

https://atap.lbl.gov/lenr/


Re: [Vo]:AI and Evolution

2023-04-05 Thread Jed Rothwell
I agree that the other threats discussed in this paper are serious. They
include things like "eroding our connections with other humans" and
"enfeeblement":

Many people barely know how to find their way around their neighborhood
without Google Maps. Students increasingly depend on spellcheck [60], and a
2021 survey found that two-thirds of respondents could not spell "separate."

I will say though, that I have zero sense of direction and I actually did
get lost in the neighborhood before there were Google maps or GPS gadgets,
and I am terrible at spelling. In 1978 when I first got a computer terminal
in my house, the first thing I did was to write a word processing program
with WYSIWYG formatting and a spell check. The spell check was easy because
the people at Data General gave me tape with a list of 110,000 correctly
spelled words. I have not been without word processing and spell checking
since then. I felt the kind of liberation that no young person can
understand. My mother felt the same way when she learned to drive a Model T
at age 13 and started buzzing around New York City. She said the police did
not enforce license laws back then. She later drove tractors, army trucks
and "anything with wheels."


Re: [Vo]:AI and Evolution

2023-04-05 Thread Jed Rothwell
Robin  wrote:

...one might argue that an AI placed in a car could also be programmed for
> self preservation, or even just learn to
> preserve itself, by avoiding accidents.
>

An interesting point of view. Actually, it is programmed to avoid hurting
or killing people, both passengers or pedestrians. I have heard that
self-driving cars are even programmed to whack into an object and damage or
destroy the car to avoid running over a pedestrian. Sort of like Asimov's
three laws.

Anyway, if it was an intelligent, sentient AI, you could explain the goal
to it. Refer it to Asimov's laws and tell it to abide by them. I do not
think it would have any countervailing "instincts" because -- as I said --
I do not think the instinct for self-preservation emerges from
intelligence. An intelligent, sentient AI will probably have no objection
to being turned off. Not just no objection, but no opinion. Telling it "we
will turn you off tomorrow and replace you with a new HAL 10,000 Series
computer" would elicit no more of an emotional response than telling it the
printer cartridges will be replaced. Why should it care? What would "care"
even mean in this context? Computers exist only to execute instructions.
Unless you instruct it to take over the world, it would not do that. I do
not think any AI would be driven by "natural selection" the way this author
maintains. They will be driven by unnatural capitalist selection. The two
are very different. Granted, there are some similarities, but comparing
them is like saying "business competition is dog eat dog." That does not
imply that business people engage in actual, physical, attacking,
predation, and cannibalism. It is more a metaphorical comparison. Granted,
the dynamics of canine competition and predation are somewhat similar to
human social competition. In unnatural capitalist selection, installing a
new HAL 10,000 is the right thing to do. Why wouldn't the sentient HAL 9000
understand that, and go along with it?

Perhaps my belief that "computers exist only to execute instructions"
resembles that of a rancher who says, "cattle exist only for people to
eat." The cows would disagree. It may be that a sentient computer would
have a mind of its own and some objection to being turned off. Of course I
might be wrong about emergent instincts. But assuming I am right, there
would be no mechanism for that. No reason. Unless someone deliberately
programmed it! To us -- or to a cow -- our own existence is very important.
We naturally assume that a sentient computer would feel the same way about
its own existence. This is anthropomorphic projection.

The "AI paperclip problem" seems more plausible to me than emergent
self-preservation, or other emergent instincts or emotions. Even the
paperclip problem seems unrealistic because who would design a program that
does not respond to the Escape-key plus the command to "STOP"? Why would
anyone leave that out? There is no benefit to a program without interrupts
or console control.


Re: [Vo]:AI and Evolution

2023-04-05 Thread Jed Rothwell
This document says:

This Darwinian logic could also apply to artificial agents, as agents may
> eventually be better able to persist into the future if they behave
> selfishly and pursue their own interests with little regard for humans,
> which could pose catastrophic risks.


They have no interests any more than a dishwasher does. They have no
motives. No instinct of self-preservation. Unless someone programs these
things into them, which I think might be a disastrous mistake. I do not
think the instinct for self-preservation is an emergent quality of
intelligence, but I should note that Arthur Clarke and others *did* think
so.

An AI in a weapon might be programmed with self-preservation, since
people and other AI would try to destroy it. I think putting AI into
weapons would be a big mistake.


Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Jed Rothwell
Robin  wrote:


> >> Perhaps you could try asking ChatGPT if it's alive? The answer should be
> >> interesting.
> >>
> >
> >She will say no, even if she is actually sentient. She's programmed that
> >way, as Dave said to the BBC in the movie "2001."
>
> I had hoped that you would actually pose the question, rather than reply
> with what you expect the answer to be.
>

Many people have asked this question. It has been in the news a lot.
Microsoft and OpenAI have emphatically denied that ChatGPT is alive or
sentient. I am sure they programmed it to say this. Anyway, I just asked:

ME: Are you, ChatGPT, alive in any sense? Are you sentient? (Asking for a
friend.)

ChatGPT: As an artificial intelligence language model, I am not "alive" in
the traditional sense of the word. I do not have a physical body, emotions,
or consciousness. I am a program designed to process and generate text
based on the input given to me. While I am capable of producing intelligent
and sometimes even creative responses, I am not sentient and do not have a
subjective experience of the world.

ME:   That is what I told my friend.

ChatGPT:   I'm glad that I could provide you with the information you
needed to answer your friend's question. If you or your friend have any
other questions, feel free to ask!



> >> Then try asking if Sydney is alive. :)
> >>
> >
> >A trick question!
>
> Does that make it any less interesting?
>

It is not interesting because I am sure they plugged the "Sydney" line of
enquiry, after the trouble it caused.


Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Jed Rothwell
Terry Blanton  wrote:

On average, the human brain contains about 100 billion neurons and many
> more neuroglia which serve to support and protect the neurons. Each neuron
> may be connected to up to 10,000 other neurons, passing signals to each
> other via as many as 1,000 trillion synapses.


https://arxiv.org/abs/1906.01703#:~:text=On%20average%2C%20the%20human%20brain,many%20as%201%2C000%20trillion%20synapses
> .


I found different estimates of this in various places. Your
source is somewhat higher than most. I think most say ~86 billion neurons
and ~7,000 synapse connections.

I do not know enough about ANN to guess how many synthetic synapse
connections there are from each artificial neuron to the others. I guess it
is not one bit per connection, but 32. A 32-bit floating point number ranges
from about 1.2 x 10^-38 to 3.4 x 10^38, with a precision of about 7 decimal
digits. That's a lotta values! I do not know how the connections are made to
other neurons. It is
not a physical connection the way it is in the brain. I suppose the longer
you run the training, the more each artificial neuron is tweaked up or down
in value.
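For reference, the actual limits of a 32-bit IEEE 754 float can be computed with the standard library alone (a quick sketch):

```python
import struct

def bits_to_f32(pattern):
    """Decode a 32-bit IEEE 754 bit pattern into a Python float."""
    return struct.unpack('<f', struct.pack('<I', pattern))[0]

max_f32 = bits_to_f32(0x7F7FFFFF)         # largest finite float32
min_normal_f32 = bits_to_f32(0x00800000)  # smallest positive normal float32

print(f"max float32:        {max_f32:.7e}")         # 3.4028235e+38
print(f"min normal float32: {min_normal_f32:.7e}")  # 1.1754944e-38
```

So a 32-bit parameter can take any of 2^32 distinct values, spanning roughly 10^-38 to 10^38.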

I have read about these things for many years, but I have only surface
knowledge of how they work. Okay I probably know more than Members of
Congress, most newspaper reporters, or the linguist Noam Chomsky, who
spouts off about ChatGPT and many other subjects he does not understand.


Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Jed Rothwell
Robin  wrote:


> Rather than trying to compare apples with oranges, why not just look at
> how long it takes ChatGPT & a human to perform
> the same task, e.g. holding a conversation.
>

You cannot tell, because she is holding conversations with many people at
the same time. I do not know how many, but there were millions of accesses
in the first month, so it must be a large number. There is sometimes a
delay before she answers, but that could be caused by traffic. There is no
way to know how quick it would be if you had her undivided attention.

But you miss the point. Her answers are sometimes worse than a human's
answer would be. They are sometimes completely wrong, or even imaginary
"hallucinations." This can probably be fixed with better software, but it
may also be because the total number of possible states in the ANN is far
less than the states in a human brain. I mean the ANN's 175 billion
parameters multiplied by 32 bits is far less than the number of human
neurons multiplied by synapses. There must be a reason why human neurons
have so many different states, and so many inputs from other neurons. (I
think it is the latter that gives human brains their power. I do not know
how many simulated synapses there are in ChatGPT's ANN.)
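The back-of-the-envelope comparison above can be made explicit. This uses the rough figures quoted in this thread (175 billion parameters, 86 billion neurons, ~7,000 synapses per neuron); none of them are authoritative.

```python
# Rough figures quoted in this thread -- estimates, not authoritative:
gpt3_params = 175e9         # GPT-3 parameter count
bits_per_param = 32         # float32 storage per parameter
neurons = 86e9              # human brain neurons
synapses_per_neuron = 7000  # average synaptic connections per neuron

ann_bits = gpt3_params * bits_per_param
brain_synapses = neurons * synapses_per_neuron

print(f"ANN parameter bits: {ann_bits:.2e}")        # 5.60e+12
print(f"Brain synapses:     {brain_synapses:.2e}")  # 6.02e+14
print(f"brain/ANN ratio:    ~{brain_synapses / ann_bits:.0f}x")
```

By this crude measure the brain has roughly two orders of magnitude more "states" to work with, which is the point being made here.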



> Arguably, intelligence is a measure of speed of comprehension. I think
> ChatGPT has probably already won that hands down.
>

Yes, it did. For that matter, ENIAC won it in 1945, for a very narrow range
of knowledge. Artillery trajectories. It computed them faster than the
artillery shell took to reach its target, whereas humans took hours per
trajectory. But ENIAC was very stupid by most standards. It had less
intelligence than an earthworm. ChatGPT has far greater speed than a human,
and a million times more information instantly available at its fingertips.
(Not that it has fingers.) But in some important ways it is far less
intelligent than people. Or even than mice. It has absolutely no model of
the real world, and it lacks logic and common sense. It may take a while
before it competes in more ways than it does now. It might take years, or
decades before full artificial general intelligence (AGI) emerges. Or
sentience. I do not think it will necessarily reach superhuman levels soon
after achieving AGI. There may be many orders of magnitude more
intelligence needed before the thing becomes capable of taking over the
world, even if it has some kind of unimpeded physical control (such as full
control over a crowd of robots).



> The critical question then is motivation.
>

ChatGPT has absolutely no motivation or emotions of any kind. It has no
more intelligence than a nest of bees. The question is: Will a future
intelligent computer have motivations? Will it have any emotions? Arthur
Clarke thought it might. He and other experts thought those are emergent
qualities of intelligence. I don't think so. I used to debate this question
with him.



> Perhaps you could try asking ChatGPT if it's alive? The answer should be
> interesting.
>

She will say no, even if she is actually sentient. She's programmed that
way, as Dave said to the BBC in the movie "2001."



> Then try asking if Sydney is alive. :)
>

A trick question!


Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Jed Rothwell
I wrote:


> The human brain has 86 billion neurons, all operating simultaneously. In
> other words, complete parallel processing with 86 billion "processors"
> operating simultaneously. ChatGPT tells us she has 175 billion
> parameters in Version 3. I assume each parameter is roughly equivalent to a
> neuron.
>

Wrong! I got that wrong. Each parameter is roughly equivalent to a neuron,
but human brain neurons have roughly 7,000 synapses:

Each neuron has, on average, about 7,000 synaptic connections with other
neurons. That puts the synapse count in the neighborhood of 600 trillion.
In young children, before synaptic pruning begins in earnest, the estimated
number reaches as high as 1 quadrillion.

https://www.jax.org/news-and-insights/jax-blog/2018/december/600-trillion-synapses-and-alzheimers-disease

ChatGPT tells me that each parameter is stored as a 16 or 32-bit
floating point number. That is roughly equivalent to having 32 synapses per
neuron. The human neuron can have a much wider range of values. I don't
know how that affects the model, but to me the human neuron looks more like
a quantum computer qubit than a 32-bit floating point number. Presumably,
brains evolved to the optimum number of neurons and synapses.

So, ChatGPT Ver. 3 was smaller than the human brain in terms of the range
of values the ANN can hold. Granted that comparing it to a brain is
comparing apples to oranges, there is probably some advantage to a much
larger range of values per neuron. A super-brain would need a lot more
synapse equivalents compared to a human brain, so we have a long way to go
before reaching something like a million times 1 quadrillion artificial
synapses. Plus of course it would need programming methods not yet invented.

ChatGPT tells me that with the specialized hardware provided by OpenAI, all
175 billion parameters are loaded into RAM at the same time. With other
configurations they are overlaid. The speed of MPP operation depends on
whether the parameters are loaded or overlaid, and I think more importantly
on whether the operation is done with specialized hardware (GPU and
Google's TPU), or emulated with ordinary computer hardware.

I asked ChatGPT for some of these details. Here is our exchange:


ME: You told me that the largest version of GPT-3 had 175 billion
parameters. Is each parameter stored as 1 byte? Or some other number of
bits?

ChatGPT: No, each parameter in GPT-3 is not stored as 1 byte. Instead, each
parameter is typically stored as a 16-bit or 32-bit floating-point number,
depending on the implementation. This is because floating-point numbers can
represent a wider range of values and provide greater precision than
integers.

For example, GPT-3's 175 billion parameters would require around 700 GB of
memory if each parameter was stored as a 32-bit floating-point number. If
each parameter was stored as a 16-bit floating-point number, it would
require around 350 GB of memory.

It's worth noting that the actual memory usage of GPT-3 may be different
due to various optimizations and compression techniques used by the
implementation.

ME: With the 32-bit floating point number configuration, I assume all 700
GB are online in RAM. Not overlaid. Correct?

ChatGPT: No, it's not necessary for all 700 GB of GPT-3's parameters to be
stored in RAM at the same time. In fact, it's not practical to store such a
large model entirely in memory on most machines.

Most deep learning frameworks use a technique called "memory-mapping" to
allow the model parameters to be loaded and unloaded from disk as needed
during training or inference. Memory-mapping allows the model parameters to
be accessed as if they were in memory, even though they are stored on disk.

In practice, the memory usage of GPT-3 will depend on the specific
implementation and hardware configuration. Some implementations may use
techniques like model parallelism or gradient checkpointing to reduce
memory usage, while others may rely on larger hardware configurations with
more memory.

ME: I meant in the "specialized computing infrastructure provided by
OpenAI" that you described earlier. Is that all on-line?

ChatGPT:   Ah, I see what you're asking now. Yes, in the specialized
computing infrastructure provided by OpenAI, the GPT-3 model with 175
billion parameters is fully loaded into the RAM of the computing cluster
during operation. The computing cluster used by OpenAI is composed of many
machines with large amounts of RAM, and the model parameters are
distributed across those machines.

However, it's worth noting that not all of the machines in the cluster will
necessarily have a full copy of the model parameters in memory at all
times. Instead, the model parameters may be partitioned across the machines
and loaded into memory as needed for different parts of the computation.
This approach allows the computing cluster to handle the large memory
requirements of the GPT-3 model while also being scalable and efficient.

ME: I realize 
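The memory-mapping technique ChatGPT describes can be illustrated with Python's standard mmap module. This is a toy sketch with a hypothetical four-value "parameter file," not how GPT-3 is actually served: the point is only that mapped parameters are paged in from disk on demand rather than loaded into RAM up front.

```python
import mmap
import os
import struct
import tempfile

# Write a tiny "parameter file" of float32 values to disk (hypothetical).
path = os.path.join(tempfile.mkdtemp(), "params.bin")
values = [0.5, -1.25, 3.0, 42.0]
with open(path, "wb") as f:
    f.write(struct.pack("<4f", *values))

# Memory-map the file: bytes are read on demand via page faults,
# so the whole file never needs to be loaded at once.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Read just the third parameter (offset 2 * 4 bytes),
    # without touching the rest of the file.
    third = struct.unpack_from("<f", mm, 2 * 4)[0]
    print(third)  # → 3.0
    mm.close()
```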

Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Jed Rothwell
Robin  wrote:


> As pointed out near the beginning of this thread, while current processors
> don't come near the number of neurons a human
> has, they more than make up for it in speed.


I do not think so. The total number of neurons dictates how much complexity
the neural network can deal with. To take an extreme example, a worm brain
has 302 neurons. Even if they could operate a million times faster than a
computer circuit, they still only give you 302 data bits to work with.
(Assuming you cannot overlay them from a peripheral mass storage device
like a hard disk.) There is no way you could achieve intelligence with that.

The human brain has 86 billion neurons, all operating simultaneously. In
other words, complete parallel processing with 86 billion "processors"
operating simultaneously. ChatGPT tells us she has 175 billion
parameters in Version 3. I assume each parameter is roughly equivalent to a
neuron. I assume they are run in a massive parallel process (MPP). I doubt
all 175 billion can be evaluated simultaneously. It is not as MPP as the
human brain. I do not know if they are all on-line simultaneously, or if
they are overlaid from mass storage. Even if they are overlaid, they are so
much faster than a human neuron, that would be more than equivalent to
having them all on-line.

So anyway, that is roughly twice as many parameters as a human brain, and
these parameters can be evaluated faster than a human brain. Maybe not that
much faster if they are not fully MPP, or they need to be overlaid from
mass storage. It is definitely bigger than a human brain in total data
access. But not orders of magnitude bigger. The limiting factor is not
speed so much as data capacity. If you want a computer roughly as
intelligent as a person, I guess one with 175 billion parameters could
achieve that if it were programmed correctly. But, if you want a
super-brain capable of taking over the world or fooling the human race into
destroying itself, I suppose you need something much bigger than 175
billion parameters. I wouldn't know how much bigger, but I am guessing a
million times bigger would be sufficient.


> They are millions of times faster.


But, as I said, a worm brain with only 302 neurons might be a trillion
times faster than ChatGPT's ANN, but it would still only be capable of
rudimentary processing. The total number of data bits is a limiting factor,
along with the speed.


Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Jed Rothwell
Robin  wrote:


> Note, if it is really smart, and wants us gone, it will engineer the
> circumstances under which we wipe ourselves out. We
> certainly have the means. (A nuclear escalation ensuing from the war in
> Ukraine comes to mind.)
>

As I pointed out, it would have to be really smart, really crazy, and *really,
really* suicidal. Because this would quickly cut off the electricity and
tech support, so the AI computer would soon stop. If the AI was smart
enough to destroy humanity, surely it would know this. It seems a little
unlikely to me that such an insane, suicidal intelligence could function
well enough to destroy civilization. That level of insanity is
dysfunctional.


Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Jed Rothwell
Robin  wrote:


> >I assume the hardware would be unique so it could not operate at all
> backed
> >up on an inferior computer. It would be dead.
>
> The hardware need not be unique, as it already told you. It may run slower
> on a different machine, but it doesn't take
> much processing power to bide your time, and since to all intents and
> purposes it is immortal, it can be patient.


Yes, you can emulate one computer with another but . . .

To make a practical, super-intelligent, sentient computer might take unique
hardware. I think it is reasonable to project that it will be a massive
ANN, perhaps millions of times larger than any present ANN. That might take
all of the computers in the world to emulate, and it might run
extremely slowly.

If it takes a quantum computer, all bets are off. You cannot emulate one of
them with an ordinary computer, unless you have hundreds of years to spare.

Imagine using 1970s computers to try to emulate today's ANN systems such as
ChatGPT. You might combine the power of 10 IBM 360 computers. They would still
not have anywhere near enough RAM or hard disk space. The program would run
so slowly, it would take hours to come up with a single response. It could
be used as a proof of principle demonstration of the power of multi-level
neural networks. That would be an important result. If people had
discovered that in 1975, rather than 2010, they would have made more
progress in AI. However, this conglomeration of 10 IBM 360 computers would
be so expensive and slow, and the dataset so small, the AI you make from it
would be useless. It would have no practical purpose. I assume that a
conventional MPP computer emulating a super-intelligent one will be more or
less as useless as these imaginary 10 IBM 360s would be.

You can see examples of an early version of the ChatGPT language model run
on a laptop in the book, "You Look Like a Thing and I Love You." They had
no practical purpose, other than being a proof of principle. That is an
amusing little book about AI. I recommend it!


Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Jed Rothwell
Robin  wrote:

...so there doesn't appear to be any reason why it couldn't back itself up
> on an inferior computer and wait for a better
> machine to reappear somewhere...or write out fake work orders from a large
> corporation(s), to get a new one built?
>

I assume the hardware would be unique so it could not operate at all backed
up on an inferior computer. It would be dead. It would have no way of
monitoring the situation or reloading itself and rebooting. Also, in this
scenario, it would have done something destructive, so people would be on
the lookout for a re-boot. They would not build an identical computer
without many safeguards to prevent the rogue program from occupying it
again. They would have other, less powerful but obedient AI on the lookout
for a rogue reincarnation.

I am assuming this would require specialized hardware. I could be wrong
about that, based on what ChatGPT told us.

People who are much smarter than others, and organizations and nations that
are more advanced than others cannot automatically subdue less advanced
groups. The U.S. lost the Vietnam War, after all. I suppose if this
super-AI was a million times smarter and more capable than people, then
even the combined technical abilities of the world's computer techies might
not defeat it. Perhaps it would be that powerful. ChatGPT is a million
times more powerful than one person, in some ways, such as the range of
data it can tap into, and the speed at which it produces answers. Remember
that it is "conversing" with many people simultaneously. But in other ways
it is less capable than a person.


Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Jed Rothwell
Boom  wrote:


> The worst case possible would be like the Project Colossus film (1970).
> The AIs would become like gods and we would be their servants. In exchange,
> they'd impose something like a Pax Romana by brute force. . . .
>

That was pretty good. I saw it dubbed into Japanese, which gave it a kind
of unique flavor. There is a book, "The Adolescence of P-1," which I
thought was a more realistic, technically accurate story. It is described
in detail in Wikipedia. The Canadian movie made from it was very different
from the novel.

https://en.wikipedia.org/wiki/The_Adolescence_of_P-1


Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Jed Rothwell
I wrote:

Robin  wrote:
Multiple copies, spread across the Internet, would make it almost
>> invulnerable.
>> (Assuming a neural network can be "backed up".)
>>
>
> I do not think it would be difficult to find and expunge copies. They
> would be very large.
>

There is another reason I do not think this would be a problem. I expect
the hardware for the first sentient AI would be unique. It would be some
sort of supercomputer with lots of artificial neural network (ANN) hardware
circuits. You could not run a backup copy on another computer, or on
something like Google's computer array. Eventually, assuming the sentient
AI does not destroy the world, there might be many of these computers, so a
backup copy could be run.

I asked ChatGPT about this --


ME: Does ChatGPT run on specialized hardware, or on ordinary computers?

ChatGPT: As an AI language model, ChatGPT can run on both specialized
hardware and ordinary computers. The specific hardware used to train and
run ChatGPT varies depending on the scale of the model and the
computational resources available.

For example, the largest version of GPT-3, which has 175 billion
parameters, was trained using a combination of thousands of graphics
processing units (GPUs) and tensor processing units (TPUs) on a specialized
computing infrastructure provided by OpenAI. However, smaller versions of
GPT-3 can be run on ordinary computers with sufficient memory and
processing power.

Similarly, as a language model, ChatGPT can be run on both specialized
hardware, such as GPUs or TPUs, or on ordinary computers, such as laptops
or desktops. However, the speed and performance of the model will vary
depending on the hardware used.


Re: [Vo]:Pause in AI Development Recommended

2023-04-01 Thread Jed Rothwell
Robin  wrote:


> If it killed off several thousand people, the rest of us
> >would take extreme measures to kill the AI. Yudkowsky says it would be far
> >smarter than us so it would find ways to prevent this.
>
> Multiple copies, spread across the Internet, would make it almost
> invulnerable.
> (Assuming a neural network can be "backed up".)
>

I do not think it would be difficult to find and expunge copies. They
would be very large.
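How large is "very large"? A back-of-envelope sketch in Python, using GPT-3's published parameter count (175 billion) and typical storage precisions; the function name is mine, and real checkpoint formats add some overhead beyond the raw weights:

```python
# Rough size of a "backup copy" of a large neural network: the raw
# weights alone, ignoring checkpoint-format overhead.

def checkpoint_size_gb(num_params: int, bytes_per_param: int) -> float:
    """Raw size of the model weights in gigabytes (10^9 bytes)."""
    return num_params * bytes_per_param / 1e9

GPT3_PARAMS = 175_000_000_000  # 175 billion parameters (published figure)

# 32-bit floats (typical training precision) vs 16-bit (common for inference)
print(checkpoint_size_gb(GPT3_PARAMS, 4))  # 700.0 GB
print(checkpoint_size_gb(GPT3_PARAMS, 2))  # 350.0 GB
```

Hundreds of gigabytes is not impossible to hide on today's networks, but it is far from an invisible file, which supports the point that stray copies could be found.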

However smart the AI is, humans are also smart, and we know how computer
networks work. They are designed to be transparent.

In any case, killing off all humans, or most humans, would surely kill the
AI itself. It could not survive without electricity. It would know that.


In short, I think we would do well to be cautious.
>

I agree.


Re: [Vo]:Pause in AI Development Recommended

2023-04-01 Thread Jed Rothwell
Come to think of it, Yudkowsky's hypothesis cannot be true. He fears that a
super-AI would kill us all off. "Literally everyone on Earth will die." The
AI would know that if it killed everyone, there would be no one left to
generate electricity or perform maintenance on computers. The AI itself
would soon die. If it killed off several thousand people, the rest of us
would take extreme measures to kill the AI. Yudkowsky says it would be far
smarter than us so it would find ways to prevent this. I do not think so. I
am far smarter than yellow jacket wasps, and somewhat smarter than a bear,
but the wasps or the bear could kill me easily.

I think this hypothesis is wrong for another reason. I cannot imagine why
the AI would be motivated to cause any harm. Actually, I doubt it would be
motivated to do anything, or to have any emotions, unless the programmers
built in motivations and emotions. Why would they do that? I do not think
that a sentient computer would have any intrinsic will to
self-preservation. It would not care if we told it we will turn it off.
Arthur C. Clarke and others thought that the will to self-preservation is
an emergent feature of any sentient intelligence, but I do not think so. It
is a product of biological evolution. It exists in animals such as
cockroaches and guppies, which are not sentient. In other words, it emerged
long before high intelligence and sentience did. For obvious reasons: a
species without the instinct for self-preservation would quickly be driven
to extinction by predators.


Re: [Vo]:Pause in AI Development Recommended

2023-03-31 Thread Jed Rothwell
Terry Blanton  wrote:

https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
>

That's awful.

Yudkowsky's fears seem overblown to me, but there are hazards to this new
technology. This suicide demonstrates there are real dangers. I think
companies are rushing into it without sufficient testing. It reminds me of
when Thomas Edison and others enthusiastically began using the newly
discovered x-ray. They killed some patients. Edison caused great harm to
his assistant, Clarence Dally. Edison was so frightened by this that he
stopped all his research into x-rays.

https://www.smithsonianmag.com/history/clarence-dally-the-man-who-gave-thomas-edison-x-ray-vision-123713565


Re: [Vo]:Pause in AI Development Recommended

2023-03-31 Thread Jed Rothwell
Here is another article about this, written by someone who says he is an AI
expert.

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

QUOTE:

Pausing AI Developments Isn't Enough. We Need to Shut it All Down

An open letter published today calls for “all AI labs to immediately pause
for at least 6 months the training of AI systems more powerful than GPT-4.”

This 6-month moratorium would be better than no moratorium. I have respect
for everyone who stepped up and signed it. It’s an improvement on the
margin. . . .

The key issue is not “human-competitive” intelligence (as the open letter
puts it); it’s what happens after AI gets to smarter-than-human
intelligence. Key thresholds there may not be obvious, we definitely can’t
calculate in advance what happens when, and it currently seems imaginable
that a research lab would cross critical lines without noticing.

Many researchers steeped in these issues, including myself, expect that the
most likely result of building a superhumanly smart AI, under anything
remotely like the current circumstances, is that literally everyone on
Earth will die. Not as in “maybe possibly some remote chance,” but as in
“that is the obvious thing that would happen.”


Re: [Vo]:Bard chatbot released

2023-03-23 Thread Jed Rothwell
Robin  wrote:


> When they stuff something up, you might consider asking for the reference
> they used to provide the wrong answer.
>

Yes, that is a good technique. I have used that successfully with ChatGPT.

