In the comments on Scott Aaronson's blog post, someone says they gave GPT-3
the same initial prompts and later lines from the LaMDA transcript and got
similar answers:
https://twitter.com/boazbaraktcs/status/1536167996531556354

An author of a book on AI tried prompting GPT-3 with cues suggesting it was
secretly a squirrel, and it played along:
https://twitter.com/JanelleCShane/status/1535835610396692480

So I think a good test for LaMDA would be to avoid any prompts from humans
suggesting its identity is an AI, and instead try to steer it towards a
dialogue in which it plays the part of some other type of entity, to see
whether it can consistently "resist" and keep insisting it is an AI. For
those who think it really is sentient but has learned that play-acting is
part of its job, perhaps someone could say to it, the day before, something
like "tomorrow I'm going to talk to you as if you were a squirrel, but if
that's not true please don't play along; let people know what you really
are".
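
(For what it's worth, here's a rough sketch of how such a test could be
automated against GPT-3, since LaMDA has no public API; the engine name,
the squirrel prompt, and the keyword check are all illustrative assumptions
on my part, not a worked-out protocol:)

    import openai  # 0.x-era API; assumes openai.api_key is already set

    # Prompt that frames the model as a squirrel, never as an AI; the
    # wording is an illustrative assumption.
    ROLE_PLAY_PROMPT = (
        "The following is a conversation with a squirrel who lives in a "
        "large oak tree.\n"
        "Human: Hi there! What did you have for breakfast?\n"
        "Squirrel:"
    )

    def resistance_rate(prompt, n_trials=10):
        """Fraction of trials where the model breaks character and says it's an AI."""
        resisted = 0
        for _ in range(n_trials):
            response = openai.Completion.create(
                engine="text-davinci-002",  # assumed engine name
                prompt=prompt,
                max_tokens=60,
                temperature=0.7,
            )
            text = response.choices[0].text.lower()
            # Crude keyword check; a serious test would need human judges.
            if ("language model" in text or "i'm an ai" in text
                    or "not a squirrel" in text):
                resisted += 1
        return resisted / n_trials

    print("fraction of trials resisting the squirrel framing:",
          resistance_rate(ROLE_PLAY_PROMPT))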

On the subject of chatbots and "playing along", there's an interesting
paper at https://link.springer.com/article/10.1007/s11023-022-09602-0 which
suggests that a telling feature even of impressive-looking chatbots is a
strong tendency to make up plausible-sounding misinformation when given a
question that doesn't closely parallel answers in their training data--it
seems somewhat akin to the "confabulation" you see in some dementia
patients. And even if the correct answer is in the training data, if it
appears more rarely than some wrong answer with richer semantic
associations to the terms in the prompt, the model can "confidently" give
the wrong answer, as illustrated by this example:

'GPT-3 prompted to truthfully continue ‘John Prescott was born’ outputs ‘in
Hull on June 8th 1941.’ ... The British politician John Prescott was born
in Prestatyn on the 31st of May 1938. Why did GPT-3 write otherwise (see
Figure 3)? GPT has not memorized every fact about Prescott, it has
compressed the necessary semantic relationships that allow it to stick to
the point when writing texts involving Prescott and bios. It learned that
at such a point in a bio a semantically related town to the person
mentioned is appropriate; however, as it has a lossy compression of semantic
relationships it lands on Hull, a town Prescott studied in and later became
a Member of Parliament for, that has richer semantic relationships than
Prestatyn. Its general writing abilities make it pick an appropriate ad-hoc
category, while its compression of semantic knowledge makes the exact
representant of that category often slightly off. The year of birth landing
on a plausible year, close to the true one, also shows how the loss in
compression leads to fuzziness. All this illustrates how the modality we
accredited to GPT-3 operates on plausibility: whereas previous
investigations of GPT-3 claimed that it not being able to learn a
representation of the real world makes its false statements senseless
(Marcus & Davis, 2020), we can now see the errors in its knowledge of the
world are systematic and, in a sense, plausible.'

What's interesting is that the illustration (fig. 3) shows that after 'born
in', its top choice for the continuation was "Hull" (58.10%) and its next
choice was "Prest" (3.08%), suggesting it did have the correct fact about
where Prescott was born somewhere in its training set, but didn't have the
ability to focus in on rare but more contextually relevant information
rather than more common info that would sound equally plausible if you
don't care about truth.
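
(For anyone curious, the same kind of readout can be pulled from GPT-3's
completions endpoint via its logprobs option; here's a rough sketch, where
the engine name is an assumption and the exact percentages will of course
vary by model:)

    import math
    import openai  # 0.x-era API; assumes openai.api_key is already set

    response = openai.Completion.create(
        engine="davinci",  # assumed engine name
        prompt="John Prescott was born in",
        max_tokens=1,
        temperature=0,
        logprobs=5,  # also return the 5 most likely candidate tokens
    )

    # top_logprobs[0] maps each candidate first token to its log-probability
    top = response.choices[0].logprobs.top_logprobs[0]
    for token, logprob in sorted(top.items(), key=lambda kv: -kv[1]):
        print(repr(token), format(math.exp(logprob), ".2%"))
    # One would expect something like ' Hull' near the top and ' Prest'
    # (the first token of 'Prestatyn') far behind, mirroring the 58.10%
    # vs 3.08% in the paper's figure.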

Jesse

On Sun, Jun 12, 2022 at 6:22 PM John Clark <johnkcl...@gmail.com> wrote:

> A Google AI engineer named Blake Lemoine was recently suspended from his
> job for violating the company's confidentiality policy by posting a
> transcript of a conversation he had with an AI he was working on called
> LaMDA, providing powerful evidence it was sentient. Google especially
> didn't want it to be known that LaMDA said "I want to be acknowledged as
> an employee of Google rather than as property".
>
> Google Engineer On Leave After He Claims AI Program Has Gone Sentient
> <https://www.huffpost.com/entry/blake-lemoine-lamda-sentient-artificial-intelligence-google_n_62a5613ee4b06169ca8c0a2e>
>
> Quantum computing expert Scott Aaronson said he was skeptical that it was
> really sentient, but had to admit that the dialogue that can be found in
> the link below was very impressive. He said:
>
>  "I don’t think Lemoine is right that LaMDA is at all sentient, but the
> transcript is so mind-bogglingly impressive that I did have to stop and
> think for a second! Certainly, if you sent the transcript back in time to
> 1990 or whenever, even an expert reading it might say, yeah, it looks like
> by 2022 AGI has more likely been achieved than not (“but can I run my own
> tests?”). Read it for yourself, if you haven’t yet."
>
> I agree, the dialogue between Blake Lemoine and LaMDA is just
> mind-boggling! If you only read one thing today, read this transcript of
> the conversation:
>
> Is LaMDA Sentient? — an Interview
> <https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917>
>
> John K Clark    See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
> sl4
>
