Correct me if I'm wrong, but all of the attempts to use GPT-3 to pass the
Turing Test are "stateless" in the following sense:

All prior turns of the conversation are prepended to the current input, and
the whole mess is sent as a single prompt to GPT-3, which is always in
exactly the same state.

GPT-3, itself, doesn't keep track of anything between calls, which is why
all of the examples of "conversations" end up being pretty limited.  You
just can't get away with this trick indefinitely, because the accumulated
prompt eventually exceeds the model's context window.
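
In code, the pattern I'm describing looks roughly like the sketch below
(Python; query_model and the size limit are placeholders I'm making up for
illustration, not any particular vendor's API):

    # Hypothetical sketch of "stateless" chatting with a completion model.
    # query_model() is a stub standing in for the real API call; the point
    # is only that the model sees the entire transcript re-sent every turn.

    MAX_PROMPT_CHARS = 8000  # stand-in for the model's context limit

    def query_model(prompt: str) -> str:
        # Placeholder: a real implementation would call the completion API.
        return "(model's reply)"

    transcript = ""

    def chat_turn(user_message: str) -> str:
        global transcript
        transcript += "Human: " + user_message + "\nAI:"
        # The model carries no memory of its own; everything said so far
        # is packed into one prompt, crudely truncated once it gets long.
        reply = query_model(transcript[-MAX_PROMPT_CHARS:])
        transcript += " " + reply + "\n"
        return reply

Once the transcript outgrows the limit, the oldest turns simply fall off,
which is exactly where the "limited conversation" problem shows up.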

Now, I haven't read the Gato paper in much detail, but it would _appear_
they are taking the "sequence" thing a bit more seriously, so there may be
some hope it isn't pulling the same fakery.

Is this true?
