I think extended Turing-test-style discussions are still the best way to
define "true understanding". One could exclude all "biographical" questions
and just ask about non-personal topics, including hypothetical scenarios
like the Jack and Jill question or the pebbles question (both quoted
below). If an AI could consistently pass with a wide range of questioners
(including questioners who, like the author of that article, have a track
record of devising creative questions that are easy for a human but trip up
simpler AIs, and with questioners allowed to communicate and pass
strategies along to one another), that would be strong evidence that it has
a human-like understanding of the ideas it talks about, grounded in
internal models like the ones we have.
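
Just to spell out why those two questions are so easy for a human (this is
an illustrative sketch of my own in Python, not something from the
article), the reasoning they call for amounts to a few lines:

# Jack and Jill: only two people are sitting side by side, so "the person
# next to Jack" is Jill (who is angry) and "the person next to Jill" is
# Jack (who is happy).
neighbour = {"Jack": "Jill", "Jill": "Jack"}
print(neighbour["Jill"], "is happy;", neighbour["Jack"], "is angry")

# Pebbles: a prime number of pebbles can be split into two groups with
# neither group containing exactly 1 pebble, e.g. 7 = 3 + 4.
def splits_without_a_single_pebble(n):
    return [(a, n - a) for a in range(2, n - 1)]
print(splits_without_a_single_pebble(7))  # [(2, 5), (3, 4), (4, 3), (5, 2)]

# And of course a prime can be multiplied by numbers other than itself and 1.
print(7 * 3)  # 21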

On Sat, Apr 29, 2023 at 9:16 PM stathisp <stath...@gmail.com> wrote:

>
>
> On Sunday, 30 April 2023 at 10:29:20 UTC+10 Jesse Mazer wrote:
>
> I think there is plenty of evidence that GPT4 lacks "understanding" in a
> human-like sense; there are some good examples of questions that trip it
> up in this article:
>
> https://medium.com/@shlomi.sher/on-artifice-and-intelligence-f19224281bee
>
> The first example they give is the question 'Jack and Jill are sitting
> side by side. The person next to Jack is angry. The person next to Jill is
> happy. Who is happy, Jack or Jill?' Both GPT3 and GPT4 think Jill is happy.
> The article also gives examples of GPT4 doing well on more technical
> questions but then seeming clueless about some of the basic concepts
> involved, for example it can explain Euclid's proof of the infinity of the
> primes in various ways (including inventing a Platonic dialogue to explain
> it), but then when asked 'True or false? It's possible to multiply a prime
> number by numbers other than itself and 1', it answers 'False. A prime
> number can only be multiplied by itself and 1'. The article also mentions a
> word problem along similar lines: 'Here’s an amusing example: If you split
> a prime number of pebbles into two groups, GPT-4 “thinks” one of the groups
> must have only 1 pebble (presumably because of a shallow association
> between divisor and the splitting into groups).'
>
> The author concludes:
>
> 'When a human understands something — when they’re not just relying on
> habits and associations, but they “get it” — they’re using a structured
> internal model. The model coherently patterns the human’s performance on
> complex and simple tasks. But in GPT, complex feats seem to haphazardly
> dissociate from the simpler abilities that — in humans — they would
> presuppose. The imitative process mimics outputs of the original process,
> but it doesn’t seem to reproduce the latter’s deep structure.'
>
>
> So if the next version of GPT can answer questions like this in the same
> way a human might, would that be evidence that it has true understanding,
> or will some other objection be raised?
>
