Quoting the article:

> The trio [of authors] say leaked information shows that their books were
> used to develop the so-called large language models that underpin AI
> chatbots.

> The plaintiffs say that summaries of their work produced by OpenAI’s
> ChatGPT prove that it was trained on their content.

I doubt that information was "leaked." It is common knowledge: how else
could the ChatBot summarize their work? If I, as a human, were to read
their published material and then summarize it, no one would accuse me of
plagiarism. That would be absurd. For that reason, I doubt they can win
this lawsuit.

If a ChatBot produced exactly the same material as Silverman and then
claimed it was original, that would be plagiarism. I do not think a ChatBot
would do that. I do not even think it is capable of doing that. I wish it
could. I have been trying to make the LENR-CANR.org ChatBot produce
more-or-less verbatim summaries of papers, using the authors' own
terminology. It cannot do that because of the way the data is tokenized. It
does not store the exact words, and it is not capable of going back to read
them. That is what I determined by testing it in various ways, and it is
what both the AI vendor and the ChatBot itself told me.
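To illustrate what tokenization means here: before training or answering,
the text is converted into integer token IDs. Here is a minimal sketch in
Python, using OpenAI's open-source tiktoken library (the sample text is my
own choice, purely for illustration):

    import tiktoken

    # cl100k_base is the byte-pair encoding used by the GPT-3.5/GPT-4 family.
    enc = tiktoken.get_encoding("cl100k_base")

    text = "The Bedwetter, by Sarah Silverman"
    tokens = enc.encode(text)          # a list of integer IDs, not words
    print(tokens)
    print(enc.decode(tokens) == text)  # True: the encoding step is reversible

Encoding by itself is lossless; the loss happens at the training stage,
where the model retains statistical weights over token sequences rather
than the text itself.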
