On Wednesday 07 March 2007 17:58, Ben Goertzel wrote:
A more interesting question to think about, rather than how to represent
a story in a formal language, is: How would you convince yourself that
your AGI actually understood a story? What kind of question-answers or
behaviors would convince you of this?
On 3/9/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
Perhaps the ultimate Turing Test would be to make the system itself act as the
interviewer for a Turing Test of another system.
So intelligence is defined as the capability of a system to
recognize the intelligence of other systems.
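A harness for that would be easy to mock up. Below is a minimal Python
sketch; the whole Agent interface (reply/ask/judge) is hypothetical,
nothing any system in this thread actually exposes. The system under
test conducts the interview and must call human-or-machine on its
interlocutor, and its accuracy over many interviews is the score.

# Minimal sketch of an "inverted" Turing test harness: the system under
# test plays the interviewer and must judge whether its interlocutor is
# human or machine. The Agent interface is hypothetical, not taken from
# any message in this thread.
import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    reply: Callable[[str], str]                     # answers an interviewer's question
    ask: Callable[[list[tuple[str, str]]], str]     # poses the next question
    judge: Callable[[list[tuple[str, str]]], bool]  # True = "human"

def inverted_turing_test(interviewer: Agent, subject_is_human: bool,
                         subject_reply: Callable[[str], str],
                         n_rounds: int = 5) -> bool:
    """Run one interview; return True if the interviewer judged correctly."""
    transcript: list[tuple[str, str]] = []
    for _ in range(n_rounds):
        question = interviewer.ask(transcript)
        answer = subject_reply(question)
        transcript.append((question, answer))
    verdict = interviewer.judge(transcript)  # interviewer's human/machine call
    return verdict == subject_is_human

# Toy stand-ins so the harness runs; a real test would wire in the AGI
# and human participants here.
questions = ["What did you have for breakfast?", "Why is that funny?"]
dummy = Agent(
    name="dummy",
    reply=lambda q: "I don't know.",
    ask=lambda t: random.choice(questions),
    judge=lambda t: random.random() < 0.5,  # chance-level judging
)

if __name__ == "__main__":
    correct = sum(inverted_turing_test(dummy, subject_is_human=False,
                                       subject_reply=lambda q: "Beep.")
                  for _ in range(100))
    print(f"Interviewer accuracy vs. chance: {correct}/100")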
On 3/9/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
Perhaps the ultimate Turing Test would be to make the system itself act as the
interviewer for a Turing Test of another system.
It's called an inverted Turing test. See:
Watt, S. (1996). Naive Psychology and the Inverted Turing Test. Psycoloquy 7(14).
On Wednesday 07 March 2007 17:58, Ben Goertzel wrote:
How would you convince yourself that
your AGI actually understood a story? What kind of question-answers or
behaviors would convince you of this?
You can just keep describing an increasingly complex scenario (with n
objects/agents changing [mentioned and/or implied] relationships) and
asking questions about the resulting state, until the system's answers
break down.
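A minimal sketch of that probing procedure, assuming a hypothetical
ask_system hook for whichever AGI is under test: narrate random
relationship changes among n objects, track the ground-truth state, and
score the system's answers about the final configuration. Ramping
n_changes up until the score falls to chance gives a rough measure of
where understanding gives out.

# Minimal sketch of the escalating-scenario probe described above: keep
# narrating relationship changes among n objects, then ask the system
# about the resulting state. ask_system is a hypothetical hook, not a
# real API.
import random

def build_scenario(n_objects: int, n_changes: int, seed: int = 0):
    """Narrate random 'X is now <rel> Y' changes; return text + ground truth."""
    rng = random.Random(seed)
    objects = [f"block{i}" for i in range(n_objects)]
    state = {}  # (x, y) -> relation; the latest change wins
    sentences = []
    for _ in range(n_changes):
        x, y = rng.sample(objects, 2)
        rel = rng.choice(["left of", "on top of", "behind"])
        state[(x, y)] = rel
        sentences.append(f"Now {x} is {rel} {y}.")
    return " ".join(sentences), state

def probe(ask_system, n_objects: int, n_changes: int) -> float:
    """Fraction of final-state questions the system answers correctly."""
    story, state = build_scenario(n_objects, n_changes)
    correct = 0
    for (x, y), rel in state.items():
        answer = ask_system(story, f"Where is {x} relative to {y}?")
        correct += rel in answer
    return correct / max(len(state), 1)

# Toy stand-in that always answers wrongly; substitute a real system and
# ramp n_changes until its score drops to find where understanding fails.
if __name__ == "__main__":
    oracle = lambda story, q: "nowhere"
    for n_changes in (5, 20, 80):
        print(n_changes, probe(oracle, n_objects=6, n_changes=n_changes))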