Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-06-13 Thread Jed Rothwell
I think the ChatBot is now working well. If you tried using it before and
the answer was not satisfactory, please try again.

https://lenr-canr.org/wordpress/?page_id=2988

There are two bots. The top one is good for searching, and the bottom one
is good for detailed questions.

Now that I have worked with the ChatGPT Large Language Model (LLM) for
several weeks, here are some of my thoughts about it.

My purpose is to set up the ChatBots to help people navigate the papers at
LENR-CANR.org. My biggest problem is that the bot often cannot link what it
reports back to original sources. I confirmed this in an exchange with the
bot:

ME: [After the bot cited a study from Tsinghua U.] . . . Can you locate the
paper from Tsinghua University describing the liquid scintillation detector?
Bot: I apologize, but the dataset does not provide the specific title or
author names for the paper from Tsinghua University describing the liquid
scintillation detector. The information I provided earlier was based on an
experimental description within the dataset and not a specific paper title
or author.
ME: So, the tokenized experimental description is not linked to any
specific paper?
Bot: That's correct. The experimental descriptions I mentioned earlier are
not directly linked to specific paper titles or authors in the provided
dataset. I apologize for any confusion caused. . . .

Sometimes it can link back. If you want to do this, use the top INDEX bot.
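
A note on the mechanics: whether a bot can cite its sources comes down to
whether each text chunk in the retrieval index carries its source document
along with it. Below is a minimal sketch of that idea in plain Python, with
made-up chunks and a toy embedding; it is not the vendor's actual
implementation.

# Minimal sketch: retrieval that keeps each chunk tied to its source paper,
# so an answer can always be traced back to a document at LENR-CANR.org.
# The "embedding" is a crude bag-of-words stand-in for a real embedding model.
from collections import Counter
import math

def embed(text):
    # Toy embedding: lowercase word counts. A real system would use a model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b[word] for word, count in a.items() if word in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Each chunk keeps its source URL alongside the text (hypothetical entries).
index = [
    {"text": "A liquid scintillation detector was used to measure tritium.",
     "source": "https://lenr-canr.org/acrobat/ExamplePaper1.pdf"},
    {"text": "Improved calorimetry used a copper inner jacket as the integrator.",
     "source": "https://lenr-canr.org/acrobat/MilesManomalousea.pdf"},
]
for chunk in index:
    chunk["vector"] = embed(chunk["text"])

def answer(question):
    # Return the best-matching chunk together with the paper it came from.
    best = max(index, key=lambda c: cosine(embed(question), c["vector"]))
    return best["text"] + "\n(Source: " + best["source"] + ")"

print(answer("Which paper describes the liquid scintillation detector?"))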

These LLM bots have little logic. An LLM cannot even count to 10, and it
does not realize that events in 1860 came before events in 2019. It made
that error in some of my enquiries. I asked ChatGPT about that, and it said
it has no temporal comparison abilities. LLMs have no creativity; they
cannot synthesize new knowledge. I expect these limitations will soon be
fixed. This has already begun with the Wolfram plugin for ChatGPT. Wolfram
has a lot of built-in logic, and more mathematical and engineering ability
than any one person.

Other AI models can synthesize knowledge. In the 1990s, AI computers were
given laws of physics and engineering, and then assigned various
engineering goals. They reinvented electronic circuits patented by AT&T in
the early decades of the 20th century. These were difficult and creative
patents. Sooner or later, creative models will be integrated into LLM.

Here is the big question: Is this program intelligent? In my opinion, the
LLM does exhibit many behaviors that we associate with intelligence.
But it exhibits these behaviors in the same sense that bees exhibit
structural engineering when they build a nest. Their ability is in their
brains, so this is real intelligence. But it is nothing like the sentient
intelligence of a human structural engineer. Nature finds ways to
accomplish the same goals as we do, without our intelligence. Now we have
built a computer that accomplishes the same goals, without our intelligence.

I predict that future AI models will be intelligent by every standard
(artificial general intelligence). I predict they will be sentient. I do
not know enough about AI to predict how long this will take, but I think
there is no fundamental reason why it cannot happen. I am sure that
sentient thinking machines exist because, as Arthur C. Clarke used to say,
I carry one on my shoulders. Neither Clarke nor I thought there was anything
preternatural about a brain. We did not think brains, intelligence, or
sentience would be forever unsolvable mysteries, or complicated "beyond
human understanding." We expected they would someday be understood in enough
detail to replicate them in silicon, or in quantum computers, or whatever
technology is called for.



Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-05-05 Thread Jed Rothwell
I originally uploaded all files in Acrobat .pdf format. The vendor advised
me that text (.txt) format works better. So, I converted JCMNS issues to
text, and replaced the Acrobat copies. It does seem to be working better. I
will replace all other files in the coming week. The Bot now finds the
elusive Christy L. Frazier, who resembled T. S. Eliot's Macavity, the
mystery cat who was not there.
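
If anyone wants to do a similar conversion, here is a minimal sketch using
the pypdf library. The filenames are placeholders, and the vendor may use a
different conversion pipeline.

# Minimal sketch: convert an Acrobat .pdf into plain .txt for upload.
# Requires the pypdf package (pip install pypdf). Filenames are placeholders.
from pypdf import PdfReader

def pdf_to_text(pdf_path, txt_path):
    reader = PdfReader(pdf_path)
    with open(txt_path, "w", encoding="utf-8") as out:
        for page in reader.pages:
            out.write(page.extract_text() or "")
            out.write("\n")

pdf_to_text("JCMNS-Vol1.pdf", "JCMNS-Vol1.txt")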


ME: Do you have any information on Christy L. Frazier?

BOT: Yes, Christy L. Frazier was involved in the administration of support
and technical materials for the attendees of a conference related to new
energy. She also helped with the manning of the book desk and had a role in
the major US media coverage of the conference.


Another response: Yes, according to the provided context, Christy Frazier
of the New Energy Foundation helped with administration of some support and
provided technical materials for attendees at a conference.


But the Bot is still confused . . .

ME: What was the title of the report Christy L. Frazier wrote?

BOT: There is no information provided about Christy L. Frazier writing a
report.


ME: Do you have any information on the report "ICCF23 Held Virtually from
China"?

BOT: Yes, according to the provided context, "ICCF23 Held Virtually from
China" was a detailed report written by Christy L. Frazier and published in
Issue 157 of Infinite Energy in August of 2021. The report covered the
International Conference on Condensed Matter Nuclear Science (ICCF-23)
which was held virtually due to the COVID-19 pandemic. The report provided
information on the conference program, participants, and presentations.



Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-05-03 Thread Jed Rothwell
Here is a "gift copy" of an article with a working copy of a miniature
Large Language Model AI. You can see how the AI parses sample texts from
Jane Austen, Shakespeare, the Federalist Papers, and other sources. It parses
a sample 30,000 times. You can see the outcome at various stages. You can
generate as many sample outputs as you want. It produces gibberish at
first, and then text which looks a lot like the source, but still does not
make sense.

The article describes the simplicity of the main algorithm in this
technique:

"While the inner workings of these algorithms are notoriously opaque, the
basic idea behind them is surprisingly simple. They are trained by going
through mountains of internet text, repeatedly guessing the next few
letters and then grading themselves against the real thing."
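
To make that concrete, here is a minimal sketch of the "guess the next
letter and grade yourself" loop. It uses a simple character-counting model
instead of a neural network, so it only illustrates the training objective,
not the article's actual code.

# Minimal sketch of next-character prediction: the model repeatedly guesses
# the next letter from the preceding context and grades itself against the
# real text. A counting model stands in for the neural network.
from collections import defaultdict, Counter
import random

sample = "to be or not to be that is the question " * 50  # stand-in for training text
CONTEXT = 3  # how many preceding characters the model sees

counts = defaultdict(Counter)
correct = total = 0

for i in range(CONTEXT, len(sample)):
    context, actual = sample[i - CONTEXT:i], sample[i]
    # Guess the next character: the one seen most often after this context so far.
    if counts[context]:
        guess = counts[context].most_common(1)[0][0]
    else:
        guess = random.choice("abcdefghijklmnopqrstuvwxyz ")
    correct += (guess == actual)   # "grading against the real thing"
    total += 1
    counts[context][actual] += 1   # learn from the real next character

print("accuracy while training: {:.0%}".format(correct / total))

# Generate some text from what was learned. As the article says, it starts to
# look like the source but does not necessarily make sense.
text = "to "
for _ in range(80):
    context = text[-CONTEXT:]
    text += counts[context].most_common(1)[0][0] if counts[context] else " "
print(text)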


You see there is no hint of actual human intelligence in the algorithm. It
is not imitating or simulating the mechanisms of human intelligence.

https://www.nytimes.com/interactive/2023/04/26/upshot/gpt-from-scratch.html?unlocked_article_code=Q4gvpJTb9E3YINp_ca4bgZovkWX4G1TiSclGTYsby_fUHiOUcmgMuivsdApz-JTH90er1fEaTX-9sE7IK5_EgbWbYJidtUMCOynDvzCC5l_6JhXaQWq83elkRIYLSTl5Daqd3pSb942K2hIFYeMw_xEPJkyaHobPQOjWFA5D7421wxSsEZfN4FvgO-qv-FJtrNI-E20kKdgFiH7PP9A9liu48jnKueJfVHQJNNKrmMlchcWA-0b47eDZxSVJ7eSpv1ceyir2kLp8P-CIfu_fqtPSYCGckK1AS2RHajIP0Ku6u-_p2NBL8VLvz-jzshxYZusLl4lSFUTMReXDYyv5wW_OpRISrDF4=url-share



Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-05-03 Thread Jed Rothwell
Here is a "gift copy" of an article from the New York Times describing AI
hallucinations, and what causes them. "Gift copy" means it goes around the
paywall.

When A.I. Chatbots Hallucinate

https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucinatation.html?unlocked_article_code=0Rsbr9hefoGwD7XwW3nQbJc_0Fd4_wF1y0g5Lz9xOTZa7F48LPb00laVUChLDgQhW00qzL2x04Q677g59knH0Vscp_Y8oNbeyUNg8o3J7Rft17V5H-5iXG6sTXoetr4Z4oMtjEodqr1uvUMftppZ8JlRxFJQdPflD1rSD939FTTEyDqlSOZADcz6HdKSb5-hDSyfjSxh1prYWpxpVBZFWUOOjLuhdkDCvO9RHWIwLXFWh2OitlfH6_MaCqw-lOUrfAwY0gyEr2OkkjAW4ZzsDOQocdFq0CrAKBLmZaIOUXHbJDtBWhbmF_jwohBRLFEdhyDOlmLC7OCmDJ7bB7-wuyWH7Gl6=url-share


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-05-01 Thread Jed Rothwell
Here we see the LENR-CANR.org bot going off the deep end with
hallucinations --

Hello! What can I do for you today?

In his research at China Lake, Melvin Miles used a copper jacket in one of
his calorimeters. Describe this.


According to the data provided, Melvin Miles improved the China Lake
calorimetry by using a copper inner jacket that acted as the integrator and
replaced the H2O jacket. An insulating box over the cell top and bath
greatly reduced the effect of changes in the room temperature. Furthermore,
the lead wires coming out of the cell were thermally staked to the bath
itself, thereby further reducing the effect of the room temperature. A
copper outer jacket contacted the bath and minimized bath level. These
improvements are illustrated in Figure 4 of the data provided.


What document is Figure 4 in?

Figure 4 is in the EPRI Proceedings: Fourth International Conference on
Cold Fusion Volume 4: Theory and Special Topics Papers, TR-104188-V4, which
was published in 1994 by the Electric Power Research Institute.

[There is no paper by Miles in Volume 4.]

Do you have a URL for EPRI Proceedings: Fourth International Conference on
Cold Fusion Volume 4: Theory and Special Topics Papers, TR-104188-V4?

Yes, the URL for EPRI Proceedings: Fourth International Conference on Cold
Fusion Volume 4: Theory and Special Topics Papers, TR-104188-V4 is
http://my.epri.com/portal/server.pt?Abstract_id=TR-104188-V4. [This URL was
provided by me. It is defunct.]

Do you have the title of the paper with Figure 4?

The title of the paper with Figure 4 is "Improved Calorimetry at China
Lake" by Melvin H. Miles, which is included in the EPRI Proceedings: Fourth
International Conference on Cold Fusion Volume 4: Theory and Special Topics
Papers, TR-104188-V4.

[There is no such paper anywhere in the literature, as far as I know. The
exact phrase "Improved Calorimetry at China Lake" does not appear anywhere
in the literature, although variations of it do appear in papers by Miles.
For example, this reference, on p. 13, says:


https://lenr-canr.org/acrobat/MilesManomalousea.pdf

 *...* An *improved calorimeter* recently developed at *China Lake* that
can accomplish these tasks is described in the next section. In June of
1995, ...]


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-05-01 Thread Jed Rothwell
Terry Blanton  wrote:


> BTW, do you use Dragon Speak still?
>

Not that often. It is good for writing long, formal documents. Not so much
for short messages. It is not good for editing papers, which is mostly what
I do these days.

I expect there will soon be AI versions of voice input transcription. Maybe
online? I expect they will blow Dragon Speak out of the water. They will be
far better. I say that for two reasons:

ChatGPT translation from Japanese into English is far better than Google
translate (https://translate.google.com). The Bots have generated a lot of
accurate linguistic data. I expect it is nothing like human knowledge of
grammar, but it works.

I have recently discovered that AI-based online OCR programs are far
superior to desktop OCR programs such as Adobe Acrobat. I used Adobe
Acrobat OCR to make old documents such as ICCF3 "searchable." You can
export the resulting text to Microsoft Word or a text file. Recently
ChatGPT recommended I try the Amazon AWS Textract online OCR program. I
tried it, and did a file-compare of the AWS output against the Adobe
Acrobat output. The AWS output had far fewer OCR errors. I think you could
correct many voice input errors by using AI linguistics-based methods and
pattern recognition algorithms. I think the pattern recognition algorithms
can be applied to audio data in much the same way they are applied to images
of printed letters.
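
The file-compare itself is easy to reproduce. Here is a minimal sketch using
Python's difflib to count differing lines between the two OCR outputs; the
filenames are placeholders.

# Minimal sketch: compare two OCR outputs of the same document and count
# differing lines. Filenames are placeholders for the Acrobat and Textract text.
import difflib

def compare_ocr(path_a, path_b):
    with open(path_a, encoding="utf-8") as f:
        a = f.readlines()
    with open(path_b, encoding="utf-8") as f:
        b = f.readlines()
    diff = list(difflib.unified_diff(a, b, fromfile=path_a, tofile=path_b))
    changed = sum(1 for line in diff if line.startswith(("+", "-"))
                  and not line.startswith(("+++", "---")))
    return changed

print("lines that differ:", compare_ocr("iccf3_acrobat.txt", "iccf3_textract.txt"))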


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-05-01 Thread Terry Blanton
I was under the impression ChatGPT accepted voice inquiries. Sorry.

BTW, do you use Dragon Speak still?

On Mon, May 1, 2023, 10:16 AM Jed Rothwell  wrote:

> Terry Blanton  wrote:
>
> Ask your wife to make an inquiry in both languages.
>>
>
> I did an inquiry in both languages. The answer is pretty much the same
> when the subject is technical.
>
>
>
>>   I bet the English response implies a male Bot.
>>
>
> How can you tell? There is no difference between male and female dialects
> in English. In a novel there may be some slight differences in
> conversational English, but not expository writing. In Japanese there is no
> sex difference in formal expository writing, which is what ChatGPT
> responses are, except they are in formal diction instead of neutral, which
> is kind of weird. Informal writing has clear differences between sexes.
> Something like the lyrics to the Disney song "Let it Go" are conspicuously
> in the female dialect. The meaning of the words is different, as well:
>
> https://www.youtube.com/watch?v=6-dqMG-Uycg
>
>
> https://fangirlisms.com/lyrics-and-translations/let-it-go-ari-no-mama-de-lyrics-translation/
>
> She sounds a bit like the heroine in a novel written in 1910.
>
> Japanese also has many regional accents and dialects. Some of the rural
> ones are practically incomprehensible. The news sometimes puts subtitles on
> the screen when they interview some old coot from the middle of nowhere in
> the far north. People use words and grammar from the 19th century, and even
> the 18th century. It resembles U.S. Gullah dialects, which I believe are
> the oldest living versions of English in the world. Male and female dialect
> distinctions are made in all regional dialects as far as I know, and they
> are along the same lines.
>
>


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-05-01 Thread Jed Rothwell
Terry Blanton  wrote:

Ask your wife to make an inquiry in both languages.
>

I did an inquiry in both languages. The answer is pretty much the same when
the subject is technical.



>   I bet the English response implies a male Bot.
>

How can you tell? There is no difference between male and female dialects
in English. In a novel there may be some slight differences in
conversational English, but not expository writing. In Japanese there is no
sex difference in formal expository writing, which is what ChatGPT
responses are, except they are in formal diction instead of neutral, which
is kind of weird. Informal writing has clear differences between sexes.
Something like the lyrics to the Disney song "Let it Go" are conspicuously
in the female dialect. The meaning of the words is different, as well:

https://www.youtube.com/watch?v=6-dqMG-Uycg

https://fangirlisms.com/lyrics-and-translations/let-it-go-ari-no-mama-de-lyrics-translation/

She sounds a bit like the heroine in a novel written in 1910.

Japanese also has many regional accents and dialects. Some of the rural
ones are practically incomprehensible. The news sometimes puts subtitles on
the screen when they interview some old coot from the middle of nowhere in
the far north. People use words and grammar from the 19th century, and even
the 18th century. It resembles U.S. Gullah dialects, which I believe are
the oldest living versions of English in the world. Male and female dialect
distinctions are made in all regional dialects as far as I know, and they
are along the same lines.


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-04-30 Thread Terry Blanton
Ask your wife to make an inquiry in both languages.  I bet the English
response implies a male Bot.

On Sun, Apr 30, 2023, 8:36 PM Jed Rothwell  wrote:

> H L V  wrote:
>
> Ok...I thought it was an attempt to make the chatbot more appealing as a
>> user interface.
>>
>
> Interesting . . . I do not think there is an option for that. It is polite
> and deferential, which some people might say is feminine. In English
> conversational text you cannot tell if a man or a woman is speaking. How
> would I know which it is? However, in Japanese you can tell. See:
> https://en.wikipedia.org/wiki/Gender_differences_in_Japanese I just asked
> ChatGPT a question in Japanese. The answer was in a neutral form, neither
> male nor female. Like a newspaper article or a physics paper. I asked
> ChatGPT to repeat the answer in women's speech, and it did. So, you could
> program it to sound feminine all the time if you kept asking for responses
> in that dialect. I guess you could ask it to sound like a he-man. Or a
> gangster.
>
> Okay, I tried asking for the response in gangster lingo. It did a great
> job! Scary. I almost feel intimidated.
>
>
>
>


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-04-30 Thread Jed Rothwell
H L V  wrote:

Ok...I thought it was an attempt to make the chatbot more appealing as a
> user interface.
>

Interesting . . . I do not think there is an option for that. It is polite
and deferential, which some people might say is feminine. In English
conversational text you cannot tell if a man or a woman is speaking. How
would I know which it is? However, in Japanese you can tell. See:
https://en.wikipedia.org/wiki/Gender_differences_in_Japanese I just asked
ChatGPT a question in Japanese. The answer was in a neutral form, neither
male nor female. Like a newspaper article or a physics paper. I asked
ChatGPT to repeat the answer in women's speech, and it did. So, you could
program it to sound feminine all the time if you kept asking for responses
in that dialect. I guess you could ask it to sound like a he-man. Or a
gangster.

Okay, I tried asking for the response in gangster lingo. It did a great
job! Scary. I almost feel intimidated.


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-04-30 Thread H L V
Ok...I thought it was an attempt to make the chatbot more appealing as a
user interface.

harry


On Sun, Apr 30, 2023 at 4:40 PM Jed Rothwell  wrote:

> H L V  wrote:
>
> Why do you refer to the ChatGPT as a "she"?
>>
>
> My reasons are politically incorrect, so I better not say them. They are
> here:
>
>
> https://www.lenr-forum.com/forum/thread/6953-the-lenr-canr-chatgpt-is-on-line/?postID=195136#post195136
>
>
> Does this chatbot come with a gender setting so that it responds textually
>> like a female?
>>
>
> I generally refer to computers as "she." Also GPS units, because they
> speak with a woman's voice. Our Toyota one sounds like a Midwestern school
> teacher. No nonsense, this is how we get you to Chattanooga.
>
> When you set it to speak French, the tone seems to change. She doesn't
> care whether you follow directions or not. Frankly, she is bored of the
> whole business. Go another kilomètre, or don't, as you please. She seems to
> stop and take a drag on a Gauloises. Maybe it is my imagination.
>
>


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-04-30 Thread Terry Blanton
Indeed. And when they become self-aware they have the option of
identifying differently.

On Sun, Apr 30, 2023, 3:23 PM H L V  wrote:

> Why do you refer to the ChatGPT as a "she"?
> Does this chatbot come with a gender setting so that it responds textually
> like a female?
>
> harry
>
> On Sat, Apr 29, 2023 at 8:08 PM Jed Rothwell 
> wrote:
>
>> Robin  wrote:
>>
>>
>>> The name as you yourself just quoted it has a comma at the end before
>>> the final double quote, whereas there is no comma
>>> in the actual text.
>>
>>
>> I tried a variety of search terms. And indirect methods such as asking
>> "who wrote the Infinite Energy" article about the conference. It refused to
>> name names. It seems to have it in for Christy. (I kid.)
>>
>> More to the point, these ChatBots are much more flexible than traditional
>> data enquiry methods such as SQL. They do not demand that every comma be in
>> place. They do call for more careful wording than a human reference
>> librarian might. ChatGPT does not check for alternate spelling. I was
>> looking for information on the artist Robert Blum. I accidentally spelled
>> it Robert Blume. ChatGPT said she had no idea who I was talking about. In a
>> dismissive tone. I said, "oops, I meant Robert Blum" and she got right onto
>> it.
>>
>> I have run into some other strange problems. I uploaded the 1989 NSF/EPRI
>> workshop:
>>
>> http://lenr-canr.org/acrobat/EPRInsfepriwor.pdf
>>
>> I tried to get the Bot to tell me what Edward Teller had to say about
>> giraffes during the workshop. She refused. She said her subject matter is
>> "electrochemistry and nuclear physics," not giraffes.
>>
>>
>> I got it to generate a hallucination, which is unnerving. In this
>> exchange:
>>
>> Me: Fleischmann said, "heat is the principal signature of the reaction."
>> Do you have a document referencing this?
>>
>> Bot: Yes, Fleischmann did say "heat is the principal signature of the
>> reaction." This quote can be found in a paper titled "Calorimetry of the
>> Palladium-Deuterium-Heavy Water System" published in the Journal of
>> Electroanalytical Chemistry in 1990.
>>
>>
>> Wrong! That paper is here, and there is no such quote in it:
>>
>> http://lenr-canr.org/acrobat/Fleischmancalorimetr.pdf
>>
>> The quote probably comes from me. Fleischmann did say that, often.
>>
>>


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-04-30 Thread Jed Rothwell
H L V  wrote:

Why do you refer to the ChatGPT as a "she"?
>

My reasons are politically incorrect, so I better not say them. They are
here:

https://www.lenr-forum.com/forum/thread/6953-the-lenr-canr-chatgpt-is-on-line/?postID=195136#post195136


Does this chatbot come with a gender setting so that it responds textually
> like a female?
>

I generally refer to computers as "she." Also GPS units, because they speak
with a woman's voice. Our Toyota one sounds like a Midwestern school
teacher. No nonsense, this is how we get you to Chattanooga.

When you set it to speak French, the tone seems to change. She doesn't care
whether you follow directions or not. Frankly, she is bored of the whole
business. Go another kilomètre, or don't, as you please. She seems to stop
and take a drag on a Gauloises. Maybe it is my imagination.


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-04-30 Thread H L V
Why do you refer to the ChatGPT as a "she"?
Does this chatbot come with a gender setting so that it responds textually
like a female?

harry

On Sat, Apr 29, 2023 at 8:08 PM Jed Rothwell  wrote:

> Robin  wrote:
>
>
>> The name as you yourself just quoted it has a comma at the end before the
>> final double quote, whereas there is no comma
>> in the actual text.
>
>
> I tried a variety of search terms. And indirect methods such as asking
> "who wrote the Infinite Energy" article about the conference. It refused to
> name names. It seems to have it in for Christy. (I kid.)
>
> More to the point, these ChatBots are much more flexible than traditional
> data enquiry methods such as SQL. They do not demand that every comma be in
> place. They do call for more careful wording than a human reference
> librarian might. ChatGPT does not check for alternate spelling. I was
> looking for information on the artist Robert Blum. I accidentally spelled
> it Robert Blume. ChatGPT said she had no idea who I was talking about. In a
> dismissive tone. I said, "oops, I meant Robert Blum" and she got right onto
> it.
>
> I have run into some other strange problems. I uploaded the 1989 NSF/EPRI
> workshop:
>
> http://lenr-canr.org/acrobat/EPRInsfepriwor.pdf
>
> I tried to get the Bot to tell me what Edward Teller had to say about
> giraffes during the workshop. She refused. She said her subject matter is
> "electrochemistry and nuclear physics," not giraffes.
>
>
> I got it to generate a hallucination, which is unnerving. In this exchange:
>
> Me: Fleischmann said, "heat is the principal signature of the reaction."
> Do you have a document referencing this?
>
> Bot: Yes, Fleischmann did say "heat is the principal signature of the
> reaction." This quote can be found in a paper titled "Calorimetry of the
> Palladium-Deuterium-Heavy Water System" published in the Journal of
> Electroanalytical Chemistry in 1990.
>
>
> Wrong! That paper is here, and there is no such quote in it:
>
> http://lenr-canr.org/acrobat/Fleischmancalorimetr.pdf
>
> The quote probably comes from me. Fleischmann did say that, often.
>
>


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-04-29 Thread Jed Rothwell
Robin  wrote:


> The name as you yourself just quoted it has a comma at the end before the
> final double quote, whereas there is no comma
> in the actual text.


I tried a variety of search terms. And indirect methods such as asking "who
wrote the Infinite Energy" article about the conference. It refused to name
names. It seems to have it in for Christy. (I kid.)

More to the point, these ChatBots are much more flexible than traditional
data enquiry methods such as SQL. They do not demand that every comma be in
place. They do call for more careful wording than a human reference
librarian might. ChatGPT does not check for alternate spelling. I was
looking for information on the artist Robert Blum. I accidentally spelled
it Robert Blume. ChatGPT said she had no idea who I was talking about. In a
dismissive tone. I said, "oops, I meant Robert Blum" and she got right onto
it.
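
To illustrate the difference: an exact-match lookup of the kind SQL does
fails on "Robert Blume," while a tolerant match still finds the intended
name. A minimal sketch, with a made-up name list:

# Minimal sketch: exact matching (SQL-style) fails on a misspelling,
# while a fuzzy match still finds the intended name. The name list is made up.
import difflib

artists = ["Robert Blum", "Mary Cassatt", "John Singer Sargent"]
query = "Robert Blume"  # accidental misspelling

exact = [name for name in artists if name == query]
fuzzy = difflib.get_close_matches(query, artists, n=1, cutoff=0.8)

print("exact match:", exact)   # [] -- "no idea who I was talking about"
print("fuzzy match:", fuzzy)   # ['Robert Blum']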

I have run into some other strange problems. I uploaded the 1989 NSF/EPRI
workshop:

http://lenr-canr.org/acrobat/EPRInsfepriwor.pdf

I tried to get the Bot to tell me what Edward Teller had to say about
giraffes during the workshop. She refused. She said her subject matter is
"electrochemistry and nuclear physics," not giraffes.


I got it to generate a hallucination, which is unnerving. In this exchange:

Me: Fleischmann said, "heat is the principal signature of the reaction." Do
you have a document referencing this?

Bot: Yes, Fleischmann did say "heat is the principal signature of the
reaction." This quote can be found in a paper titled "Calorimetry of the
Palladium-Deuterium-Heavy Water System" published in the Journal of
Electroanalytical Chemistry in 1990.


Wrong! That paper is here, and there is no such quote in it:

http://lenr-canr.org/acrobat/Fleischmancalorimetr.pdf

The quote probably comes from me. Fleischmann did say that, often.


Re: [Vo]:The LENR-CANR ChatGPT is ON LINE!

2023-04-28 Thread Robin
In reply to  Jed Rothwell's message of Fri, 28 Apr 2023 12:31:02 -0400:
Hi,
[snip]
>Quote the paragraph that says, "We must thank Christy L. Frazier for
>writing a detailed report . . ."
>
>I apologize, but there is no paragraph in the provided context that says
>"We must thank Christy L. Frazier for writing a detailed report." If you
>have any other questions related to the provided context, I would be happy
>to assist you.
>
>Do these documents mention Christy Frazier?
>
>
>*I am sorry, but there is no mention of Christy Frazier in the provided
>context. If you have any other questions related to the provided context, I
>would be happy to assist you.*
>
>*[It also cannot find "Christy L. Frazier," which is right there next to
>the paragraph the bot quoted above, in
>https://lenr-canr.org/acrobat/BiberianJPjcondensedzi.pdf
> It is strange
>that the Bot cannot find the name.]*

The name as you yourself just quoted it has a comma at the end before the final 
double quote, whereas there is no comma
in the actual text. Was the comma also in your query? Note that there is no 
space between Frazier and the comma, hence
the comma *may* be taken as part of the name. 
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.