Re: [Vo]:No Originality

2023-07-12 Thread Jed Rothwell
Terry Blanton  wrote:


> See Wolfram's book
> I think you might like this book – "What Is ChatGPT Doing ... and Why Does
> It Work?" by Stephen Wolfram.
>

Wolfram is a smart cookie. This is a good book. Much of it is over my head. I
will read it again from the beginning. Perhaps I will understand more. I
wish it had more examples.

One of the interesting points he makes is that LLM AI works much better
than he anticipated, or that other experts anticipated. And there is no
solid theoretical basis for knowing why it works so well. It comes as a
surprise.

When he says it "works," he explains that means it produces remarkably
good, relevant answers. And he means the grammar and syntax of the
responses is very similar to human speech. He does not mean the LLM is
actually thinking, in the human sense.

As I said, I think LLMs do have a form of intelligence. Not like human
intelligence. It somewhat resembles the collective intelligence of bees.
Bees are not capable of creativity, although they do respond to stimuli and
changes in the environment. They have a hard-wired repertoire of responses.
They build their nests to fit in a given space, and they take actions such
as ventilating a nest on a hot day.

I do not think the LLM AI model will ever approach human intelligence, or
general intelligence, but other AI models may do this. Perhaps there will
be a hybrid AI model, incorporating an LLM to generate text, with a more
logical AI model controlling the LLM. I think Wolfram thinks he can provide
something like that already. See:

https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/
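The hybrid arrangement described above, with a deterministic engine in control and the LLM handling language, can be sketched in a few lines. This is a toy illustration, not Wolfram's actual plugin architecture: all function names here are hypothetical, and the "LLM" is a stub.

```python
# Toy sketch of a hybrid AI controller: route questions the logical
# engine can answer exactly (here, simple arithmetic) to that engine,
# and fall back to a stubbed language model for everything else.
import re

def logical_engine(expression: str) -> str:
    """Exact evaluator for simple 'a op b' integer arithmetic."""
    match = re.fullmatch(r"\s*(-?\d+)\s*([+\-*/])\s*(-?\d+)\s*", expression)
    if not match:
        raise ValueError("not a simple arithmetic expression")
    a, op, b = match.groups()
    ops = {"+": lambda x, y: x + y,
           "-": lambda x, y: x - y,
           "*": lambda x, y: x * y,
           "/": lambda x, y: x / y}
    return str(ops[op](int(a), int(b)))

def stub_llm(prompt: str) -> str:
    """Stand-in for an LLM: fluent prose, no guarantee of correctness."""
    return f"(LLM-generated prose about: {prompt})"

def hybrid_answer(question: str) -> str:
    """Try the exact engine first; fall back to the language model."""
    try:
        return logical_engine(question)
    except ValueError:
        return stub_llm(question)

print(hybrid_answer("6 * 7"))          # answered exactly by the engine
print(hybrid_answer("What is LENR?"))  # handed off to the language model
```

The design point is that the controller, not the LLM, decides which component answers, so exact questions never depend on the LLM's statistical guessing.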

I expect these other AI models will also use artificial neural networks
(ANN). So the effort -- the dollars! -- pouring into ANN may contribute to
higher-level AI, and ultimately to actual, human-like intelligence, or
even superintelligence, which many people fear.


Re: [Vo]:No Originality

2023-07-12 Thread Terry Blanton
A suit has been brought against Google DeepMind, too.

https://www.cnn.com/2023/07/11/tech/google-ai-lawsuit/index.html

On Mon, Jul 10, 2023, 11:26 AM Terry Blanton  wrote:

> Being a Class Action suit, it should prove interesting.  I don't think the
> ChatGPT approach will lead to true AI as presented in Iain Banks' Culture
> series.
>
> See Wolfram's book
> I think you might like this book – "What Is ChatGPT Doing ... and Why Does
> It Work?" by Stephen Wolfram.
>
> Start reading it for free: https://a.co/iphsADj
>
> On Mon, Jul 10, 2023, 10:23 AM Jed Rothwell  wrote:
>
>> Quoting the article:
>>
>> The trio [of authors] say leaked information shows that their books were
>>> used to develop the so-called large language models that underpin AI
>>> chatbots.
>>
>>
>> The plaintiffs say that summaries of their work produced by OpenAI’s
>>> ChatGPT prove that it was trained on their content.
>>
>>
>> I doubt that information was "leaked." It is common knowledge. How else
>> could the ChatBot summarize their work? I doubt they can win this lawsuit.
>> If I, as a human, were to read their published material and then summarize
>> it, no one would accuse me of plagiarism. That would be absurd.
>>
>> If the ChatBots produced the exact same material as Silverman and then
>> claimed it is original, that would be plagiarism. I do not think a ChatBot
>> would do that. I do not even think it is capable of doing that. I wish it
>> could do that. I have been trying to make the LENR-CANR.org ChatBot
>> produce more-or-less verbatim summaries of papers, using the authors' own
>> terminology. It cannot do that because of the way the data is tokenized. It
>> does not store the exact words, and it is not capable of going back to read
>> them. That is what I determined by testing it in various ways, and that is
>> what the AI vendor and ChatBot itself told me.


Re: [Vo]:No Originality

2023-07-10 Thread Terry Blanton
Being a Class Action suit, it should prove interesting.  I don't think the
ChatGPT approach will lead to true AI as presented in Iain Banks' Culture
series.

See Wolfram's book
I think you might like this book – "What Is ChatGPT Doing ... and Why Does
It Work?" by Stephen Wolfram.

Start reading it for free: https://a.co/iphsADj

On Mon, Jul 10, 2023, 10:23 AM Jed Rothwell  wrote:

> Quoting the article:
>
> The trio [of authors] say leaked information shows that their books were
>> used to develop the so-called large language models that underpin AI
>> chatbots.
>
>
> The plaintiffs say that summaries of their work produced by OpenAI’s
>> ChatGPT prove that it was trained on their content.
>
>
> I doubt that information was "leaked." It is common knowledge. How else
> could the ChatBot summarize their work? I doubt they can win this lawsuit.
> If I, as a human, were to read their published material and then summarize
> it, no one would accuse me of plagiarism. That would be absurd.
>
> If the ChatBots produced the exact same material as Silverman and then
> claimed it is original, that would be plagiarism. I do not think a ChatBot
> would do that. I do not even think it is capable of doing that. I wish it
> could do that. I have been trying to make the LENR-CANR.org ChatBot
> produce more-or-less verbatim summaries of papers, using the authors' own
> terminology. It cannot do that because of the way the data is tokenized. It
> does not store the exact words, and it is not capable of going back to read
> them. That is what I determined by testing it in various ways, and that is
> what the AI vendor and ChatBot itself told me.


Re: [Vo]:No Originality

2023-07-10 Thread Jed Rothwell
Quoting the article:

The trio [of authors] say leaked information shows that their books were
> used to develop the so-called large language models that underpin AI
> chatbots.


The plaintiffs say that summaries of their work produced by OpenAI’s
> ChatGPT prove that it was trained on their content.


I doubt that information was "leaked." It is common knowledge. How else
could the ChatBot summarize their work? I doubt they can win this lawsuit.
If I, as a human, were to read their published material and then summarize
it, no one would accuse me of plagiarism. That would be absurd.

If the ChatBots produced the exact same material as Silverman and then
claimed it is original, that would be plagiarism. I do not think a ChatBot
would do that. I do not even think it is capable of doing that. I wish it
could do that. I have been trying to make the LENR-CANR.org ChatBot
produce more-or-less verbatim summaries of papers, using the authors' own
terminology. It cannot do that because of the way the data is tokenized. It
does not store the exact words, and it is not capable of going back to read
them. That is what I determined by testing it in various ways, and that is
what the AI vendor and ChatBot itself told me.
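The point about not storing the exact words can be illustrated with a toy example. A retrieval chatbot of this kind typically keeps each document as a numeric vector, and the original wording cannot be reconstructed from that vector. This is a minimal sketch using a made-up hash-based embedding, not how any real vendor's system works:

```python
# Toy illustration: embedding text into a fixed-size vector discards
# word order and exact wording, so the mapping is many-to-one and the
# original sentence cannot be recovered from the stored vector.
import hashlib

DIM = 16  # tiny embedding dimension, for illustration only

def embed(text: str) -> list:
    """Map text to a fixed-size vector by hashing its words into buckets."""
    vec = [0.0] * DIM
    for word in text.lower().split():
        h = int(hashlib.sha256(word.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    return vec

a = embed("the cat sat on the mat")
b = embed("on the mat the cat sat")  # same words, different order
print(a == b)  # the vector cannot tell these two sentences apart
```

Since two different sentences can map to the identical vector, no procedure can run the mapping backwards to recover an author's verbatim text, which is consistent with the behavior described above.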