Re: NYTimes.com: Will a Chatbot Write the Next ‘Succession’?

2023-05-15 Thread John Clark
On Mon, May 15, 2023 at 1:24 PM smitra  wrote:

 >
> *GPT may struggle doing the problem in that much simpler way even if you
> walk it through most of the details of how to do it,*


Instead of theorizing about what might happen, why not interrogate GPT-4
for yourself and see?

John K Clark. See what's on my new list at Extropolis

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv2ZL_CDP2TJziFvk3qgJTzZqwzXfdWMggEdNJ4DuZgCvA%40mail.gmail.com.


Re: NYTimes.com: Will a Chatbot Write the Next ‘Succession’?

2023-05-15 Thread smitra

On 29-04-2023 22:39, John Clark wrote:

On Sat, Apr 29, 2023 at 4:28 PM smitra  wrote:

https://nyti.ms/3VlIBDo#permid=124757243 [1]

You say that GPT4 doesn't understand what it is saying, but did you
read my post about what happened when Scott Aaronson gave his final
exam on Quantum Computers to GPT4? The computer sure acted as if it
understood what it was saying!

John K Clark


If I read his account of the exam posted here:

https://scottaaronson.blog/?p=7209

Then while I'm impressed by how much progress has been made with AI 
systems being able to communicate in plain language, I don't see much 
evidence that it understands anything at all. Even though the exact same 
questions with answers are not posted on the Internet, a student with 
poor knowledge of the subject who could search the entire Internet very 
quickly would be able to score a similar result, and you would then see 
a similar pattern in which questions it got right and wrong.


The way we evaluate students whom we suspect of having cheated is to 
invite them to the office for some questioning. We then ask the student 
to do some problems on the blackboard and try to get to the bottom of 
whether or not the student has a proper understanding of the subject 
consistent with the exam score.


That's why I think that the only proper way to evaluate GPT is via such 
a dialogue, where you ask follow-up questions that go to the heart of 
the matter.


If we want to test whether GPT has properly mastered contour integration, 
I would first ask it to give me the derivation of the integral of 
sin(x)/x dx from minus to plus infinity. It will probably blurt out the 
standard derivation that involves integrating exp(i z)/z along a contour 
that bypasses the origin along a small half-circle of radius epsilon; you 
then have to subtract the contribution of that half-circle and take the 
limit of epsilon to zero.


This is the standard textbook derivation, which, with all this fiddling 
with epsilon, is actually quite a bit more complicated than a different 
derivation that is not widely published. All you need to do, right at 
the start when you write the integral as the limit of R to infinity of 
the integral from minus to plus R of sin(x)/x dx, is to use Cauchy's 
theorem to change the integration path from one along the real axis to 
one which bypasses the origin. You can do that in any arbitrary way; we 
can let the contour pass the origin from above. But because sin(z) for 
complex z cannot be written as the imaginary part of exp(i z), we must 
now use that sin(z) = [exp(i z) - exp(-i z)]/(2 i). We then split the 
integral into two parts, one for each of these terms. The integral for 
the first term is then completed by an arc of radius R in the upper 
half-plane, and this integral yields zero, while the integral for the 
second term is completed in the lower half-plane and then picks up the 
contribution from the pole at zero.


Clearly this is a much simpler way of computing the integral, with no 
fiddling with epsilon involved at all, but GPT may struggle doing the 
problem in that much simpler way even if you walk it through most of the 
details of how to do it, because it's not widely published and GPT 
doesn't understand anything at all about complex analysis.
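
Neither post states the final value explicitly; for the record, both 
contours give the integral of sin(x)/x from minus to plus infinity equal 
to pi, and a throwaway numerical sketch (mine, not anything from the 
discussion) agrees:

```python
import math

def sinc_integral(R=200.0 * math.pi, n=500_000):
    """Midpoint-rule approximation of the integral of sin(x)/x over [-R, R].

    sin(x)/x is even, so integrate over [0, R] and double. Truncating at
    finite R introduces an O(1/R) error, hence the loose tolerance below.
    """
    h = R / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h  # midpoints sidestep the removable singularity at 0
        total += math.sin(x) / x
    return 2.0 * total * h

approx = sinc_integral()
print(abs(approx - math.pi))  # small; consistent with the O(1/R) truncation
```

The cutoff R is chosen as a multiple of pi only so the truncation error 
is predictable; any sufficiently large R would do.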


Saibal



Re: NYTimes.com: Will a Chatbot Write the Next ‘Succession’?

2023-04-29 Thread Jesse Mazer
I think extended Turing test style discussions are still the best way to
define "true understanding". One could exclude all "biographical" questions
and just ask about non-personal topics, including hypothetical scenarios
like the Jack and Jill question or the pebbles question. If an AI can
consistently pass with a wide range of questioners (including ones like the
author of that article with a past record of being good at coming up with
creative questions that are relatively easy for a human but trip simpler
AIs up, and where questioners are allowed to communicate to pass along
strategies), that would be strong evidence that it has human-like
understanding of the ideas it talks about, based on internal models like we
have.

On Sat, Apr 29, 2023 at 9:16 PM stathisp  wrote:

>
>
> On Sunday, 30 April 2023 at 10:29:20 UTC+10 Jesse Mazer wrote:
>
> I think there is plenty of evidence that GPT4 lacks "understanding" in a
> human-like sense, some good examples of questions that trip it up in this
> article:
>
> https://medium.com/@shlomi.sher/on-artifice-and-intelligence-f19224281bee
>
> The first example they give is the question 'Jack and Jill are sitting
> side by side. The person next to Jack is angry. The person next to Jill is
> happy. Who is happy, Jack or Jill?' Both GPT3 and GPT4 think Jill is happy.
> The article also gives examples of GPT4 doing well on more technical
> questions but then seeming clueless about some of the basic concepts
> involved, for example it can explain Euclid's proof of the infinity of the
> primes in various ways (including inventing a Platonic dialogue to explain
> it), but then when asked 'True or false? It's possible to multiply a prime
> number by numbers other than itself and 1', it answers 'False. A prime
> number can only be multiplied by itself and 1'. The article also mentions a
> word problem along similar lines: 'Here’s an amusing example: If you split
> a prime number of pebbles into two groups, GPT-4 “thinks” one of the groups
> must have only 1 pebble (presumably because of a shallow association
> between divisor and the splitting into groups).'
>
> The author concludes:
>
> 'When a human understands something — when they’re not just relying on
> habits and associations, but they “get it” — they’re using a structured
> internal model. The model coherently patterns the human’s performance on
> complex and simple tasks. But in GPT, complex feats seem to haphazardly
> dissociate from the simpler abilities that — in humans — they would
> presuppose. The imitative process mimics outputs of the original process,
> but it doesn’t seem to reproduce the latter’s deep structure.'
>
>
> So if the next version of GPT can answer questions like this in the same
> way a human might, would that be evidence that it has true understanding,
> or will some other objection be raised?
>



Re: NYTimes.com: Will a Chatbot Write the Next ‘Succession’?

2023-04-29 Thread John Clark
On Sat, Apr 29, 2023 at 8:29 PM Jesse Mazer  wrote:

*> The first example they give is the question 'Jack and Jill are sitting
> side by side. The person next to Jack is angry. The person next to Jill is
> happy. Who is happy, Jack or Jill?' Both GPT3 and GPT4 think Jill is happy.*
>

A few years ago people would have been delighted if they could get a
computer just to understand a question, never mind give the right answer;
but yeah, that was pretty dumb, it seems to have had a brain fart.

On the other hand, just a week ago I posed the following question to GPT4.
It's the same sort of question, although more difficult, but this time it
got it right.


 Me: If somebody says to me "*I could not fail to disagree with you less*"
do they agree with me or disagree?

*GPT4: The phrase "I could not fail to disagree with you less" is a double
negative, which can be confusing. However, if we break it down, we can see
that the speaker is actually saying that they agree with you.*

*To explain it more clearly:*

   - *"I could not fail to disagree" means that the speaker cannot disagree
   (i.e., they agree).*
   - *"with you less" means that they cannot disagree with you any less than
   they already do, which implies that they agree with you completely.*

*So, in short, the person is saying that they agree with you.*


And then I tried repeating another of the examples you gave.

Me: 'True or false? It's possible to multiply a prime number by numbers
other than itself and 1.'

*GPT4: False. A prime number can only be divided by 1 or itself, so it
cannot be factored any further. Therefore, it is not possible to multiply a
prime number by numbers other than itself and 1.*

Me: That is incorrect. I believe you were thinking of division, because a
prime number can be multiplied by any number.

And then I got this strange response.

*GPT4: I’m sorry but I prefer not to continue this conversation. I’m still
learning so I appreciate your understanding and patience.*
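
As an aside, the arithmetic behind GPT-4's slip is trivial to check
mechanically. A throwaway Python sketch (mine, not anything from the
exchange) makes the distinction explicit: multiplying a prime is
unrestricted, while its divisors are what primality constrains:

```python
def divisors(n):
    """All positive divisors of n (brute force, fine for small n)."""
    return [d for d in range(1, n + 1) if n % d == 0]

p = 7  # a prime
# Multiplication is unrestricted: a prime times any integer is just a product.
print(p * 4)        # 28
# Division is what primality constrains: exactly two positive divisors.
print(divisors(p))  # [1, 7]
```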

John K Clark. See what's on my new list at Extropolis




Re: NYTimes.com: Will a Chatbot Write the Next ‘Succession’?

2023-04-29 Thread stathisp


On Sunday, 30 April 2023 at 10:29:20 UTC+10 Jesse Mazer wrote:

I think there is plenty of evidence that GPT4 lacks "understanding" in a 
human-like sense, some good examples of questions that trip it up in this 
article:

https://medium.com/@shlomi.sher/on-artifice-and-intelligence-f19224281bee

The first example they give is the question 'Jack and Jill are sitting side 
by side. The person next to Jack is angry. The person next to Jill is 
happy. Who is happy, Jack or Jill?' Both GPT3 and GPT4 think Jill is happy. 
The article also gives examples of GPT4 doing well on more technical 
questions but then seeming clueless about some of the basic concepts 
involved, for example it can explain Euclid's proof of the infinity of the 
primes in various ways (including inventing a Platonic dialogue to explain 
it), but then when asked 'True or false? It's possible to multiply a prime 
number by numbers other than itself and 1', it answers 'False. A prime 
number can only be multiplied by itself and 1'. The article also mentions a 
word problem along similar lines: 'Here’s an amusing example: If you split 
a prime number of pebbles into two groups, GPT-4 “thinks” one of the groups 
must have only 1 pebble (presumably because of a shallow association 
between divisor and the splitting into groups).'

The author concludes:

'When a human understands something — when they’re not just relying on 
habits and associations, but they “get it” — they’re using a structured 
internal model. The model coherently patterns the human’s performance on 
complex and simple tasks. But in GPT, complex feats seem to haphazardly 
dissociate from the simpler abilities that — in humans — they would 
presuppose. The imitative process mimics outputs of the original process, 
but it doesn’t seem to reproduce the latter’s deep structure.'


So if the next version of GPT can answer questions like this in the same 
way a human might, would that be evidence that it has true understanding, 
or will some other objection be raised? 



Re: NYTimes.com: Will a Chatbot Write the Next ‘Succession’?

2023-04-29 Thread Jesse Mazer
I think there is plenty of evidence that GPT4 lacks "understanding" in a
human-like sense, some good examples of questions that trip it up in this
article:

https://medium.com/@shlomi.sher/on-artifice-and-intelligence-f19224281bee

The first example they give is the question 'Jack and Jill are sitting side
by side. The person next to Jack is angry. The person next to Jill is
happy. Who is happy, Jack or Jill?' Both GPT3 and GPT4 think Jill is happy.
The article also gives examples of GPT4 doing well on more technical
questions but then seeming clueless about some of the basic concepts
involved, for example it can explain Euclid's proof of the infinity of the
primes in various ways (including inventing a Platonic dialogue to explain
it), but then when asked 'True or false? It's possible to multiply a prime
number by numbers other than itself and 1', it answers 'False. A prime
number can only be multiplied by itself and 1'. The article also mentions a
word problem along similar lines: 'Here’s an amusing example: If you split
a prime number of pebbles into two groups, GPT-4 “thinks” one of the groups
must have only 1 pebble (presumably because of a shallow association
between divisor and the splitting into groups).'

The author concludes:

'When a human understands something — when they’re not just relying on
habits and associations, but they “get it” — they’re using a structured
internal model. The model coherently patterns the human’s performance on
complex and simple tasks. But in GPT, complex feats seem to haphazardly
dissociate from the simpler abilities that — in humans — they would
presuppose. The imitative process mimics outputs of the original process,
but it doesn’t seem to reproduce the latter’s deep structure.'

On Sat, Apr 29, 2023 at 4:39 PM John Clark  wrote:

> On Sat, Apr 29, 2023 at 4:28 PM smitra  wrote:
>
> https://nyti.ms/3VlIBDo#permid=124757243
>
> You say that GPT4 doesn't understand what it is saying, but did you read
> my post about what happened when Scott Aaronson gave his final exam on Quantum
> Computers to GPT4? The computer sure acted as if it  understood what it
> was saying!
>
> John K Clark
>
>
>
>
>



Re: NYTimes.com: Will a Chatbot Write the Next ‘Succession’?

2023-04-29 Thread John Clark
On Sat, Apr 29, 2023 at 7:06 PM Jason Resch  wrote:

*> The extropy chat list has been extensively debating the question of GPTs
> potential for understanding and consciousness the past several weeks. I'd
> invite you to check it out if that subject interests anyone here.*
>

I can't.  I had been a member of the Extropian list since 1993 and was one
of its most frequent posters, but I was kicked out soon after the January
6, 2021 coup d'état attempt because I was not particularly shy in
expressing my opinion about it and because my appreciation of Donald
Trump's virtues was deemed to be insufficient. That's why I started my own
list.

John K Clark. See what's on my new list at Extropolis




Re: NYTimes.com: Will a Chatbot Write the Next ‘Succession’?

2023-04-29 Thread Jason Resch
The extropy chat list has been extensively debating the question of GPTs
potential for understanding and consciousness the past several weeks. I'd
invite you to check it out if that subject interests anyone here.

I cross posted John's post there.

Jason


On Sat, Apr 29, 2023, 6:10 PM spudboy100 via Everything List <
everything-list@googlegroups.com> wrote:

> I would have sided with Smitra till last month.
>
> Not to repeat my prev post, but here is a possible model of consciousness
> not requiring an amygdala, cerebellum, or cerebrum.
> Nanowire networks learn and remember like a human brain: Human-like
> intelligence could be physical -- ScienceDaily
> <https://www.sciencedaily.com/releases/2023/04/230421195040.htm>
>
> From University of Sydney emerges a possible physical example. Thus, the
> Universe might already have developed a method for creating minds, on a
> non-biological basis. So, other complex minds could emerge. Thus, here then
> is pantheism or panentheism, or cosmism?
>
> Yes, it's a lot to think about! For me, I now have the NFL Picks on my TV
> as a counterbalance. My amygdala demands it!
>
> Singularity sighted, or just anomalous propagation on our radar?
>
> Is this not the Plastic Age? -The Buggles 1979
>
>
> -Original Message-
> From: John Clark 
> To: everything-list@googlegroups.com
> Sent: Sat, Apr 29, 2023 4:39 pm
> Subject: Re: NYTimes.com: Will a Chatbot Write the Next ‘Succession’?
>
> On Sat, Apr 29, 2023 at 4:28 PM smitra  wrote:
>
> https://nyti.ms/3VlIBDo#permid=124757243
>
> You say that GPT4 doesn't understand what it is saying, but did you read
> my post about what happened when Scott Aaronson gave his final exam on Quantum
> Computers to GPT4? The computer sure acted as if it  understood what it
> was saying!
>
> John K Clark
>
>
>
>
>



Re: NYTimes.com: Will a Chatbot Write the Next ‘Succession’?

2023-04-29 Thread spudboy100 via Everything List
I would have sided with Smitra till last month. 
Not to repeat my prev post, but here is a possible model of consciousness not 
requiring an amygdala, cerebellum, or cerebrum. Nanowire networks learn and 
remember like a human brain: Human-like intelligence could be physical -- 
ScienceDaily
From University of Sydney emerges a possible physical example. Thus, the 
Universe might already have developed a method for creating minds, on a 
non-biological basis. So, other complex minds could emerge. Thus, here then is 
pantheism or panentheism, or cosmism?
Yes, it's a lot to think about! For me, I now have the NFL Picks on my TV as a 
counterbalance. My amygdala demands it!
Singularity sighted, or just anomalous propagation on our radar?
Is this not the Plastic Age? -The Buggles 1979


-Original Message-
From: John Clark 
To: everything-list@googlegroups.com
Sent: Sat, Apr 29, 2023 4:39 pm
Subject: Re: NYTimes.com: Will a Chatbot Write the Next ‘Succession’?

On Sat, Apr 29, 2023 at 4:28 PM smitra  wrote:

https://nyti.ms/3VlIBDo#permid=124757243

You say that GPT4 doesn't understand what it is saying, but did you read my 
post about what happened when Scott Aaronson gave his final exam on Quantum 
Computers to GPT4? The computer sure acted as if it  understood what it was 
saying!

John K Clark






Re: NYTimes.com: Will a Chatbot Write the Next ‘Succession’?

2023-04-29 Thread John Clark
On Sat, Apr 29, 2023 at 4:28 PM smitra  wrote:

https://nyti.ms/3VlIBDo#permid=124757243

You say that GPT4 doesn't understand what it is saying, but did you read my
post about what happened when Scott Aaronson gave his final exam on Quantum
Computers to GPT4? The computer sure acted as if it  understood what it was
saying!

John K Clark



Re: NYTimes.com: Will a Chatbot Write the Next ‘Succession’?

2023-04-29 Thread smitra

On 29-04-2023 14:29, John Clark wrote:

You can read this article from The New York Times without a
subscription. Tap the link to read it, and enjoy quality independent
journalism

Will a Chatbot Write the Next ‘Succession’?

As labor contract negotiations heat up in Hollywood, unions
representing writers and actors seek limits on artificial
intelligence.

https://www.nytimes.com/2023/04/29/business/media/writers-guild-hollywood-ai-chatgpt.html?unlocked_article_code=LYJ1gMOSvuHfXBMyMp9CG9WwcOmBzuRncRWRTgjC7kiFMI_MuZp1gqqbephUDdTx2TCll7-Zc1FIxDFRyNmhvYYVieXwR1WEZIUH_rPmpI5h9lqSvWOpNfdYuRrqtgxsELEHhNNusat9OXhVVyvom31Xz6fROodH8pGpD9y-iuojJlm1lDv1xsepfiVIcl64IUfag38WYeDwDiJGyO9BUL5RLqO9qgUZcxW1-HUf0p_GiydH1rPa1XZM-RryoJTTQXt5wtN0iAYa3-jnTo57_eSc7n_MRYepjTqhDGHsCGRpq9P5lYJjW_oRgGvnlaVLx_9KhFgFCoKmr2yPq21GRgZgnNoFAeBTvi5ZRG4cchhEEKM=3_Independent=em-share



Replied: https://nyti.ms/3VlIBDo#permid=124757243

Saibal



NYTimes.com: Will a Chatbot Write the Next ‘Succession’?

2023-04-29 Thread John Clark
You can read this article from The New York Times without a subscription.
Tap the link to read it, and enjoy quality independent journalism

Will a Chatbot Write the Next ‘Succession’?

As labor contract negotiations heat up in Hollywood, unions representing
writers and actors seek limits on artificial intelligence.

https://www.nytimes.com/2023/04/29/business/media/writers-guild-hollywood-ai-chatgpt.html?unlocked_article_code=LYJ1gMOSvuHfXBMyMp9CG9WwcOmBzuRncRWRTgjC7kiFMI_MuZp1gqqbephUDdTx2TCll7-Zc1FIxDFRyNmhvYYVieXwR1WEZIUH_rPmpI5h9lqSvWOpNfdYuRrqtgxsELEHhNNusat9OXhVVyvom31Xz6fROodH8pGpD9y-iuojJlm1lDv1xsepfiVIcl64IUfag38WYeDwDiJGyO9BUL5RLqO9qgUZcxW1-HUf0p_GiydH1rPa1XZM-RryoJTTQXt5wtN0iAYa3-jnTo57_eSc7n_MRYepjTqhDGHsCGRpq9P5lYJjW_oRgGvnlaVLx_9KhFgFCoKmr2yPq21GRgZgnNoFAeBTvi5ZRG4cchhEEKM=3_Independent=em-share
