I believe you can ask it to limit its 'answer' to some subset of the data, e.g. something like reports/articles/journals that include a reference/citation list. Has anyone tried that? The 'hearsay' would then be somewhat mitigated, provided it hasn't made up the references, which it has been known to do!

On 18/08/2023 8:15 am, Tom Worthington wrote:
On 16/8/23 10:58, Bernard Robertson-Dunn wrote:

Suppose you wanted to tell an AI system how to distinguish between right and wrong ...

More a philosophical than technical question. Like a good researcher, you could get the AI to find what is supported by evidence. But that doesn't necessarily make the answer factually correct, or morally right.

The problem is that the AI is mostly working on hearsay. It is harvesting text, which is what people said was the case, not what actually was. It will be interesting when the AI can look at direct evidence, from sensors, or by observing the behaviour of people online, rather than what they say they do.

If you ask AI about a controversial topic, the answer will depend on who put the most convincing propaganda online. How would you know if the answer was correct: ask the government?


_______________________________________________
Link mailing list
[email protected]
https://mailman.anu.edu.au/mailman/listinfo/link
