You might blow its circuits. I'm sorry, Dave, but I can't do that right now.
You can tell it to compare and contrast, I think.
On 16/08/2023 10:58 am, Bernard Robertson-Dunn wrote:
On 15/08/2023 9:04 am, Kim Holburn wrote:
> Among other findings, the authors found ChatGPT is more likely to
> make conceptual errors than factual ones. "Many answers are incorrect
> due to ChatGPT’s incapability to understand the underlying context of
> the question being asked," the paper found.
Suppose you wanted to tell an AI system how to distinguish between
right and wrong (if you try, it will tell you it has no opinion),
correct and incorrect, truth and lies. How would you do it?
Would it work with e.g. far right extremists? MAGA supporters?
Politicians?
_______________________________________________
Link mailing list
[email protected]
https://mailman.anu.edu.au/mailman/listinfo/link