A suit has been brought against Google DeepMind, too.

https://www.cnn.com/2023/07/11/tech/google-ai-lawsuit/index.html

On Mon, Jul 10, 2023, 11:26 AM Terry Blanton <hohlr...@gmail.com> wrote:

> Being a class-action suit, it should prove interesting. I don't think the
> ChatGPT approach will lead to true AI as presented in Iain M. Banks'
> Culture series.
>
> See Wolfram's book, "What Is ChatGPT Doing ... and Why Does It Work?" by
> Stephen Wolfram: https://a.co/iphsADj
>
> On Mon, Jul 10, 2023, 10:23 AM Jed Rothwell <jedrothw...@gmail.com> wrote:
>
>> Quoting the article:
>>
>>> The trio [of authors] say leaked information shows that their books were
>>> used to develop the so-called large language models that underpin AI
>>> chatbots.
>>
>>
>>> The plaintiffs say that summaries of their work produced by OpenAI’s
>>> ChatGPT prove that it was trained on their content.
>>
>>
>> I doubt that information was "leaked." It is common knowledge. How else
>> could the ChatBot summarize their work? I doubt they can win this lawsuit.
>> If I, as a human, were to read their published material and then summarize
>> it, no one would accuse me of plagiarism. That would be absurd.
>>
>> If the ChatBots produced the exact same material as Silverman and then
>> claimed it was original, that would be plagiarism. I do not think a ChatBot
>> would do that. I do not even think it is capable of doing that. I wish it
>> could do that. I have been trying to make the LENR-CANR.org ChatBot
>> produce more-or-less verbatim summaries of papers, using the authors' own
>> terminology. It cannot do that because of the way the data is tokenized. It
>> does not store the exact words, and it is not capable of going back to read
>> them. That is what I determined by testing it in various ways, and that is
>> what the AI vendor and the ChatBot itself told me.
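>>
>> As a rough illustration of what tokenization means here (a minimal sketch
>> using the open-source tiktoken library, not the actual pipeline behind the
>> LENR-CANR.org bot), text is turned into integer token IDs before the model
>> ever sees it; the trained model only holds weights encoding statistical
>> patterns over those tokens, not a retrievable copy of the source papers:
>>
>>     # pip install tiktoken
>>     import tiktoken
>>
>>     # cl100k_base is the encoding tiktoken provides for recent OpenAI chat models
>>     enc = tiktoken.get_encoding("cl100k_base")
>>
>>     text = "Excess heat was observed in the palladium-deuterium system."
>>     token_ids = enc.encode(text)
>>
>>     print(token_ids)              # a list of integers, one per token
>>     print(enc.decode(token_ids))  # the tokenizer round-trips text it is given...
>>
>> ...but the model cannot run that in reverse over its training data, which
>> is why it paraphrases in its own wording rather than quoting a paper
>> verbatim.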
