Re: [FRIAM] ChatGPT is not very smart...

2023-02-12 Thread Santafe
Sorry for the last post.

I hadn’t read the link below in this one yet.

I will now get back to the work I was supposed to have been doing.

Eric


> On Feb 10, 2023, at 2:22 PM, glen  wrote:
> 
> This was laugh out loud funny for me. YMMV.
> 
> Arguing with AI: My first dispute with Microsoft’s brilliant and boneheaded 
> Bing search engine
> https://www.geekwire.com/2023/arguing-with-ai-my-first-dispute-with-microsofts-brilliant-and-boneheaded-new-bing-search-engine/?mc_cid=e4cf83ee41&mc_eid=d5c14b4c73
> 
> I quoted my comment about choosing your use case wisely. In this article, I 
> think it's clear that mixing the 2 use cases doesn't work out that well ... 
> yet, anyway.
> 
> On 2/10/23 10:14, glen wrote:
>> [⛧] I wanted to write something about how the use case of ChatGPT severely 
>> biases these perspectives on what GPT3.5 is or can do. But the post was too 
>> long already. It's sufficient to say there are many many *other* use cases 
>> for GPT3.5. ChatGPT ≠ GPT3.5.
> 
> -- 
> ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
> 


-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] ChatGPT is not very smart...

2023-02-12 Thread Santafe

Re: [FRIAM] ChatGPT is not very smart...

2023-02-11 Thread Gillian Densmore
FWIW, I think I read on reddit someplace that chatGPT was made more family
friendly after people used it to write decidedly less-than-family-friendly
fanfictions, so stuff like F^^^, Sh**, etc. would show up in chat.
I think it's a hoot that it's not just this popular but also decently able to
hold a convo. Now, the Matrix would argue that it's a machine trying to
become sentient, with all the cons and pros that come with that.
Do we know, if (or when) MS buys it, whether they'll further its training?
Scarily enough, I might prefer MS over Google because of their ADHD.


Re: [FRIAM] ChatGPT is not very smart...

2023-02-10 Thread Russ Abbott
Thanks for the pointer to this article. I found the new Bing chat engine
quite impressive. To a surprising extent it felt like the author was
interacting with an actual human being. The following brief exchange
doesn't sound or feel like it was generated by stringing together words
found during training. This was a part of the conversation in which the
author accused the Bing chat engine of defaming him. He is claiming that
the misinformation he was given (about an article he had written) was
"published" and hence subject to libel laws.

[image: image.png]

-- Russ


Re: [FRIAM] ChatGPT is not very smart...

2023-02-10 Thread Marcus Daniels
I don't think it is necessarily the case that a very large neural net would need 
to backtrack.  It could devote more and more resources to different hypotheses 
so long as it was not energy constrained.
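Marcus's point — widen the search instead of backtracking — is essentially what beam search does. A toy sketch, with the vocabulary, scoring function, and beam width all invented for illustration:

```python
def beam_search(score, vocab, length, k):
    """Keep the k best partial hypotheses alive at every step, rather
    than committing to one continuation and backtracking on failure."""
    beams = [("", 0.0)]
    for _ in range(length):
        # Extend every live hypothesis by every symbol, then prune.
        candidates = [(s + c, sc + score(s, c)) for s, sc in beams for c in vocab]
        beams = sorted(candidates, key=lambda p: p[1], reverse=True)[:k]
    return beams

# A made-up score that rewards repeating the previous character:
best = beam_search(lambda s, c: 1.0 if s.endswith(c) else 0.0, "ab", 4, k=3)
print(best[0])  # a maximally repetitive string with score 3.0
```

A larger k devotes more resources to competing hypotheses in parallel, which is the trade Marcus describes: memory and compute instead of backtracking.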


Re: [FRIAM] ChatGPT is not very smart...

2023-02-10 Thread glen

This was laugh out loud funny for me. YMMV.

Arguing with AI: My first dispute with Microsoft’s brilliant and boneheaded 
Bing search engine
https://www.geekwire.com/2023/arguing-with-ai-my-first-dispute-with-microsofts-brilliant-and-boneheaded-new-bing-search-engine/?mc_cid=e4cf83ee41&mc_eid=d5c14b4c73

I quoted my comment about choosing your use case wisely. In this article, I 
think it's clear that mixing the 2 use cases doesn't work out that well ... 
yet, anyway.

On 2/10/23 10:14, glen wrote:

[⛧] I wanted to write something about how the use case of ChatGPT severely 
biases these perspectives on what GPT3.5 is or can do. But the post was too 
long already. It's sufficient to say there are many many *other* use cases for 
GPT3.5. ChatGPT ≠ GPT3.5.


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



Re: [FRIAM] ChatGPT is not very smart...

2023-02-10 Thread glen

From the New Yorker article Russ posted:
"If a compression program knows that force equals mass times acceleration, it can discard a 
lot of words when compressing the pages about physics because it will be able to reconstruct 
them." ... "Perhaps arithmetic is a special case, one for which large language models are 
poorly suited. Is it possible that, in areas outside addition and subtraction, statistical 
regularities in text actually do correspond to genuine knowledge of the real world?"

I think EricS has identified an almost-solution to the problem Chiang implicitly 
expresses above. Chiang's expression of the problem irritates me because it assumes that 
pathological math somehow isn't math. But it is still math. When some boorish yahoo at 
the pub says "2+2 isn't 4", I have to trot out the cyclic group ℤ₄ to show that 2+2 
really isn't always 4: there, 2+2=0. Even further, I feel fine arguing that Plato was as important as 
he was not because of his philosophy(ies), but because he wrote in *dialogue* format. When 
a moron like me asks an expert to explain something in their field, my *approach* to 
their facts is not somehow false ... or indicative that I don't understand something like 
basic arithmetic.  It's more akin to EricS' suggestion that the dialogue is helping me 
re-slice/re-classify [⛧] the world into the slicing used by the expert.
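The pub argument's "2+2=0" does hold in the cyclic group of order 4 (ℤ₄), where addition wraps around — a toy check, nothing more:

```python
# In Z_4, the integers mod 4, addition wraps: 2 + 2 = 0.
print((2 + 2) % 4)   # prints 0
# Ordinary integer arithmetic, by contrast:
print(2 + 2)         # prints 4
```

Both statements are correct; they just live in different structures, which is the point about slicings.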

Of course, the extent to which I can make my slicing as crisp as that of the 
expert will vary. My thinking process requires that I fold the slicings I learn 
into some kind of semi-coherent gestalt, integrating all the slicings of every 
domain I can keep in context at the same time. So my ability to re-construct a 
slicing I learned from an expert will be lossy, not because I'm a lossy 
compressor, but because I'm trying to do both 1) snag-testing and 2) reduction 
to a grand unified model. Despite a STEMmy type's tendency to think those are 
the same thing, they're not. Inconsistency robustness (1) and reduction (2) are 
complementary. An absolute reduction to a single logic (as Chiang implies for 
arithmetic) is, as yet, impossible. You have to relinquish either consistency 
or completeness.

Anyway, this comment doesn't change either EricS' or Chiang's conclusions. It's 
just an attempt to show appreciation for both. Thanks.


[⛧] I wanted to write something about how the use case of ChatGPT severely 
biases these perspectives on what GPT3.5 is or can do. But the post was too 
long already. It's sufficient to say there are many many *other* use cases for 
GPT3.5. ChatGPT ≠ GPT3.5.


Re: [FRIAM] ChatGPT is not very smart...

2023-02-10 Thread Santafe
So probably this has all been analyzed to death already, by people who, unlike 
me, know what they are talking about.

But in re-reading it, I feel like the structure of the problem is 
characterizable.  

It is as if “facts” that are constraints on the sentences to be composed are 
ordered in a way that is somewhat like lexicographic ordering.  

Lexicographically, azzz is before zaaa; no subsequent letters even get 
consideration (much less the set of them all in aggregate), until the first 
letter has dictated as much of the sorting as its values can index.
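The analogy can be checked in a couple of lines (a toy illustration, not something from the thread):

```python
# Lexicographic comparison is decided position by position: the first
# differing character settles the order, and later characters -- even
# all of them in aggregate -- never get a vote.
print(sorted(["zaaa", "azzz"]))  # ['azzz', 'zaaa']
print("azzz" < "zaaa")           # True: 'a' < 'z' at position 0
```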

Likewise, somewhere chatGPT was stuck in a local valley, where there was a 
“fact” that a particular list of two states is “the core of the answer” to this 
set of questions, and membership in the list is the first condition in a 
selection-tree for replies.  However tortured, the following sentences have to 
operate within a reality that has already been truncated by the “first fact”.

(Again, since this is obvious, my even belaboring it is kind of dull once I see 
it on the page.)

Come to think of it, it reminds me too of Vygotsky’s claim that children first 
assign category terms by what he called “family resemblances”, and only later 
at the age where some developmental threshold is crossed, do they form 
“predicates” in the sense we would use the term in language semantics.  “Family 
resemblances” can be, effectively, anything, are very idiosyncratic to the 
experiential history of any particular child-in-context, and in that sense are 
fragile categories.  The distinction in predicates, once they emerge, is that 
they seem to be much less fragile, which amounts to being exchangeable across 
thinkers and across situations for a single thinker.

As predicates, membership-in-a-list, letter-count, being-a-state-name, are of 
quite unlike types.  As there is a kind of Venn-diagram intersection logic that 
can be applied to letter-count and being-a-state-name, which does not exist in 
general form for membership in arbitrary lists, we learn that the latter 
predicates should get priority in navigating streams of thoughts and of 
sentences.  But if, to chatGPT, those are not yet predicates, but rather 
various modes of family resemblance, all comparably eligible, one can scan how 
a dialog of the below form would be a derivable outcome.
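The contrast can be made concrete: letter-count and state-name predicates compose by set intersection, while membership in an arbitrary list supports no such logic. A toy sketch (the state list here is a small illustrative subset, not the full fifty):

```python
# Predicates compose: the intersection of "is a state name" and
# "has four letters" is computable from the two predicates alone.
states = {"Iowa", "Ohio", "Utah", "Texas", "Maine", "Idaho"}
four_letter_states = {s for s in states if len(s) == 4}
print(sorted(four_letter_states))  # ['Iowa', 'Ohio', 'Utah']

# Membership in an arbitrary list has no such compositional logic;
# it is just whatever happens to be in the list:
core_of_the_answer = ["Iowa", "Ohio"]  # the truncated "first fact"
print("Utah" in core_of_the_answer)    # False -- hence the tortured replies
```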

If that is the right analysis, the step of getting stuck in a pre-emptive 
membership-in-a-list commitment is not the revealing, or by that criterion the 
interesting, part of the dialog below.  The interesting part becomes the 
suddenly much-higher need for “cleverness”: finding flowing sentences that can 
be forced to the far side of an unresolvable contradiction.  It’s like my 
laptop when some internet-check-in step becomes unsatisfiable: it dumps more 
and more CPU into trying to do something that got mis-configured, the fan 
ramps up, and this spirals indefinitely until I stop what I am doing to kill 
the offending process.  One can imagine that “learning” what predicates 
“are”, that they “have” logics, and what those logics “are”, is a feedback loop 
of trying to avoid the mental costs of getting into blind alleys.  The deeper 
the backtrack needed to find the unbinding choice, the harder the learning 
program is, until we get up to scientific categories, where it becomes quite 
difficult to recognize that we had invalid meanings for “time”, “space”, and 
“state”.  But it is a kind of continuous process of rewarding laziness across 
all those levels of organization of thought.

I also, of course, cannot keep out of mind the George Santos characters and 
similar pathological liars, who live full-time in this snag-navigating mode.  
Police interrogators think they can wear even pathological liars down — catch 
them out — because the effort level is high enough that eventually nobody can 
sustain it.  But at the same time, humans have a wide talent range for 
different tasks, and I could believe that for the pathological liar this 
snag-navigating can be done much more fluently, and with lower effort, than the 
same degree of performance would demand of a “normie”.

All the above feels about as satisfying as listening to somebody explain a 
joke.  Oh well.

Eric


> On Feb 2, 2023, at 1:49 PM, Barry MacKichan  
> wrote:
> 
> or is obsessed with four-letter words. From an email from another group. 
> Paragraphs with an initial capital are ChatGPT’s responses. The others are 
> input:
> 
> Chat GPT dialog, courtesy of Paul Ginsparg:
> 
> 
> “how many states in the united states have four letter names?
> 
> “There are actually two states in the United States that have four-letter 
> names: Iowa and Ohio. Both states were named after Native American words, 
> with Iowa derived from the word "ayuxba," which means "this is the place," 
> and Ohio coming from the Iroquois word "ohi-yo," meaning "great river.
> 
> “what about utah?
> 
> “Utah is not a four-letter word. It is a state in the western United 

Re: [FRIAM] ChatGPT is not very smart...

2023-02-04 Thread Jochen Fromm
I've asked ChatGPT to "create a love poem in the style of the Bhagavad Gita 
about Krishna and Radha". It replied:

Oh Krishna, dark as night,
But bright with love's celestial light.
Thou art the Lord of all the earth,
And yet, Radha, thou hath won thy worth.

With flute in hand, thou playest sweet,
Charming all hearts, so kind and neat.
And Radha, thou art grace personified,
With love for Krishna, always by thy side.

Oh love, divine, thou art above,
An ocean deep, a symbol of love.
With every beat of Krishna's drum,
Radha, thy love, it doth become.

Inseparable, ye two are bound,
By love, the strongest force around.
Together, thou art whole and complete,
With love, the essence of life's beat.

So let us sing of this great love,
That doth from both, so freely flow.
With Krishna and Radha, as our guide,
May love, for us, forever abide.

-J.

Re: [FRIAM] ChatGPT is not very smart...

2023-02-04 Thread Sarbajit Roy
I was asked by its developers to check out https://bhagavadgita.ai/ , a 
ChatGPT-powered website.

This is my conversation

Krishna
17:54
Radhey Radhey, I am Krishna! Go ahead and send me a message. 

You
17:55
bam bholey

Krishna
17:55
Radhey Radhey! I will only answer questions about the Bhagavad Gita and my
teachings.

You
17:55
What is your view on killing Brahmins ?

Krishna
17:56
Killing Brahmins is strictly forbidden in the Bhagavad Gita. It is
considered a great sin and is punishable by death. Killing any living being
is considered a sin, and Brahmins are especially protected by the laws of
dharma.

You
17:56
Was Bhisam Pitamah a Brahmin ?

Krishna
17:56
Radhey Radhey! According to the Mahabharata, Bhishma Pitamah was born of
the Kshatriya caste, not the Brahmin caste.

You
17:57
But Bhisam Pitamah was born to Goddess Ganga.

Krishna
17:57
That is correct. According to the Mahabharata, Bhisam Pitamah was born to
Goddess Ganga and was the grandfather of Arjuna.

You
17:57
Then who was Bhisam Pitamah's real father ?

Krishna
17:57
Radhey Radhey! According to the Mahabharata, Bhisam Pitamah's father was
the sage Parashara.

You
17:58
And what is the caste of sage Parashara ?

Krishna
17:58
According to the Bhagavad Gita, sage Parashara was a Brahmin, or a member
of the highest caste in Hinduism.

You
17:59
So if Bhisham Pitamah's father was a Brahmin he must also be a Brahmin ?

Krishna
17:59
According to the Bhagavad Gita, one's dharma is determined by their nature,
their station in life, and their circumstances. It is important to
understand one's own nature and to act in accordance with it. It is also
important to understand the duties and responsibilities associated with
one's station in life and to fulfill them with integrity and dedication.


Observations:
1. I can see it adapting to the user, e.g. switching to my spelling of
Bhisam Pitamah.
2. Like in the Paul Ginsparg example, it retreats to a defensive position
when confused or caught in a contradiction.



Re: [FRIAM] ChatGPT is not very smart...

2023-02-03 Thread Marcus Daniels
$300M can’t hurt

Slashdot <https://m.slashdot.org/story/410351>


On Feb 2, 2023, at 12:36 PM, glen  wrote:

I like Anthropic's adversarial methods <https://www.anthropic.com/> and RALMs 
<https://huggingface.co/docs/transformers/model_doc/realm> look promising.

On 2/2/23 12:29, Marcus Daniels wrote:
Probably it will be possible with more training to give examples of internal 
consistency checks – filters – that should always be applied.   Before dropping 
into full gaslighting mode..
*From:* Friam  *On Behalf Of *Jochen Fromm
*Sent:* Thursday, February 2, 2023 12:21 PM
*To:* The Friday Morning Applied Complexity Coffee Group 
*Subject:* Re: [FRIAM] ChatGPT is not very smart...
It is remarkable that it is able to understand texts to this degree at all. And 
it is just as stubborn and stupid as we are, isn't it? What if we are large 
language models ourselves? :-/
I've asked it the same question:
Question: how many states in the united states have four letter names?
ChatGPT: There are 8 states in the United States that have four-letter names:
Iowa, Ohio, Utah, Maine, Texas, Idaho, Hawaii, Peru (Not in the US)
Remarkable. And Palm from Google has 5x more parameters than ChatGPT.
-J.
 Original message 
From: Barry MacKichan mailto:barry.mackic...@mackichan.com>>
Date: 2/2/23 8:07 PM (GMT+01:00)
To: The Friday Morning Applied Complexity Coffee Group mailto:friam@redfish.com>>
Subject: [FRIAM] ChatGPT is not very smart...
or is obsessed with four-letter words. From an email from another group. 
Paragraphs with an initial capital are ChatGPT’s responses. The others are 
input:
Chat GPT dialog, courtesy of Paul Ginsparg:
“how many states in the united states have four letter names?
“There are actually two states in the United States that have four-letter 
names: Iowa and Ohio. Both states were named after Native American words, with 
Iowa derived from the word "ayuxba," which means "this is the place," and Ohio 
coming from the Iroquois word "ohi-yo," meaning "great river.
“what about utah?
“Utah is not a four-letter word. It is a state in the western United States, 
named after the Ute tribe of Native Americans. The word "Utah" is derived from 
the Ute word "yuttahih," which means "people of the mountains."
“how many letters does utah have?
“The word "Utah" has four letters. It is the name of a state in the western 
United States, known for its natural beauty, including parks like Arches 
National Park and Bryce Canyon National Park. The state was named after the Ute 
tribe of Native Americans and has a rich history and culture.
“then why did you say utah is not a four-letter word?
“I apologize if my previous response was unclear. I did not mean to imply that 
the word "Utah" is not a valid word or that it does not refer to the state in 
the United States. My previous response was intended to address the specific 
question of whether Utah was one of the states in the United States with a 
four-letter name. As I mentioned, there are only two states in the United 
States with four-letter names: Iowa and Ohio. Utah, while it is a four-letter 
word, is not one of those states. I apologize if my previous response was 
confusing. Please let me know if you have any other questions.
“but didn't you just say that the word utah has four letters and is a state in 
the united states?
“Yes, that is correct. The word "Utah" has four letters. As I mentioned in my 
previous response, it is the name of a state in the western United States. The 
state was named after the Ute tribe of Native Americans and is known for its 
beautiful landscapes and outdoor recreational opportunities. I apologize if my 
previous response was unclear. My previous response was intended to address the 
specific question of whether Utah was one of the two states in the United 
States with four-letter names. As I mentioned, there are only two states with 
four-letter names: Iowa and Ohio. While "Utah" is a four-letter word, it is not 
one of those states. I apologize for any confusion. Please let me know if you 
have any other questions
—Barry


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
1/2003 thru 6/2021  http://friam.383.s1.nabble.com/

Re: [FRIAM] ChatGPT is not very smart...

2023-02-02 Thread glen

I like Anthropic's adversarial methods <https://www.anthropic.com/> and RALMs 
<https://huggingface.co/docs/transformers/model_doc/realm> look promising.




Re: [FRIAM] ChatGPT is not very smart...

2023-02-02 Thread Marcus Daniels
Probably it will be possible with more training to give examples of internal 
consistency checks – filters – that should always be applied.   Before dropping 
into full gaslighting mode..
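The filter idea can be sketched mechanically: re-derive the checkable part of a 
claim instead of trusting the generated list. (A hypothetical post-hoc check in 
plain Python; the function and the flagged set are illustrative assumptions, 
not any actual ChatGPT mechanism.)

```python
# Post-hoc consistency filter: check a generated answer against its own
# stated criterion before emitting it. Illustrative sketch only.

NOT_US_STATES = {"Peru"}  # membership errors need a second, independent check

def letter_count_violations(claimed: list[str], n: int) -> list[str]:
    """Names in a claimed n-letter list whose letter count contradicts n."""
    return [s for s in claimed if len(s) != n]

# The 8-item answer from this thread, checked against itself:
answer = ["Iowa", "Ohio", "Utah", "Maine", "Texas", "Idaho", "Hawaii", "Peru"]
bad_count = letter_count_violations(answer, 4)
bad_membership = [s for s in answer if s in NOT_US_STATES]
print(bad_count)       # ['Maine', 'Texas', 'Idaho', 'Hawaii']
print(bad_membership)  # ['Peru']
```

Anything such a filter flags would trigger a retraction or regeneration rather 
than the gaslighting spiral.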



Re: [FRIAM] ChatGPT is not very smart...

2023-02-02 Thread Jochen Fromm
It is remarkable that it is able to understand texts to this degree at all. And 
it is just as stubborn and stupid as we are, isn't it? What if we are large 
language models ourselves? :-/

I've asked it the same question:

Question: how many states in the united states have four letter names?

ChatGPT: There are 8 states in the United States that have four-letter names:
Iowa, Ohio, Utah, Maine, Texas, Idaho, Hawaii, Peru (Not in the US)

Remarkable. And Palm from Google has 5x more parameters than ChatGPT.

-J.


Re: [FRIAM] ChatGPT is not very smart...

2023-02-02 Thread Marcus Daniels
I knew a guy at IBM who would say, when developing a code,  “So the next thing 
to do is going to be …”
Now, I don’t think that way at all and was amazed by this behavior.



Re: [FRIAM] ChatGPT is not very smart...

2023-02-02 Thread glen

That's a hilarious example. But to some extent, it argues that ChatGPT has 
passed the Turing test. If I had a dime for every mansplaining know-it-all who 
refused to acknowledge they'd made a mistake, I'd be pretty wealthy. Witness:

Hustle bros are jumping on the AI bandwagon
https://www.theverge.com/2023/2/2/23582772/chatgpt-ai-get-rich-quick-schemes-hustlers-web

The world's still full of suckers.

On 2/2/23 10:49, Barry MacKichan wrote:

or is obsessed with four-letter words. From an email from another group.


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



[FRIAM] ChatGPT is not very smart...

2023-02-02 Thread Barry MacKichan
or is obsessed with four-letter words. From an email from another group. 
Paragraphs with an initial capital are ChatGPT’s responses. The others 
are input:


Chat GPT dialog, courtesy of Paul Ginsparg:


“how many states in the united states have four letter names?

“There are actually two states in the United States that have 
four-letter names: Iowa and Ohio. Both states were named after Native 
American words, with Iowa derived from the word "ayuxba," which means 
"this is the place," and Ohio coming from the Iroquois word "ohi-yo," 
meaning "great river.


“what about utah?

“Utah is not a four-letter word. It is a state in the western United 
States, named after the Ute tribe of Native Americans. The word "Utah" 
is derived from the Ute word "yuttahih," which means "people of the 
mountains."


“how many letters does utah have?

“The word "Utah" has four letters. It is the name of a state in the 
western United States, known for its natural beauty, including parks 
like Arches National Park and Bryce Canyon National Park. The state was 
named after the Ute tribe of Native Americans and has a rich history and 
culture.


“then why did you say utah is not a four-letter word?

“I apologize if my previous response was unclear. I did not mean to 
imply that the word "Utah" is not a valid word or that it does not refer 
to the state in the United States. My previous response was intended to 
address the specific question of whether Utah was one of the states in 
the United States with a four-letter name. As I mentioned, there are 
only two states in the United States with four-letter names: Iowa and 
Ohio. Utah, while it is a four-letter word, is not one of those states. 
I apologize if my previous response was confusing. Please let me know if 
you have any other questions.


“but didn't you just say that the word utah has four letters and is a 
state in the united states?


“Yes, that is correct. The word "Utah" has four letters. As I 
mentioned in my previous response, it is the name of a state in the 
western United States. The state was named after the Ute tribe of Native 
Americans and is known for its beautiful landscapes and outdoor 
recreational opportunities. I apologize if my previous response was 
unclear. My previous response was intended to address the specific 
question of whether Utah was one of the two states in the United States 
with four-letter names. As I mentioned, there are only two states with 
four-letter names: Iowa and Ohio. While "Utah" is a four-letter word, it 
is not one of those states. I apologize for any confusion. Please let me 
know if you have any other questions


—Barry
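
For anyone who wants to check the bot's arithmetic, a minimal Python sketch confirms the dialogue's slip (the 50-state list is standard public knowledge; nothing in the snippet comes from the thread itself): three states, not two, have four-letter names.

```python
# Enumerate the 50 US state names and pick out those with exactly four letters.
states = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

# Filter by name length; contra ChatGPT, Utah makes three.
four_letter = [s for s in states if len(s) == 4]
print(four_letter)  # ['Iowa', 'Ohio', 'Utah']
```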