Re: [PEIRCE-L] Chat GPT and Peirce

2023-07-20 Thread Jerry LR Chandler
List:

Just a brief comment on Professor Everett's wide-reaching scientific assertion, 
which appears to me to subscribe to, and pontificate about, CSP's writings with 
respect to the realism of scientific phenomenology.

> On Jul 18, 2023, at 1:44 PM, Thomas903  wrote:
> 
> Dan,
> 
> I wanted to comment briefly on a sentence from your earlier posting: 
> "ChatGPT simply and conclusively shows that there is no need for any innate 
> learning module in the brain to learn language.”

Human language is widely regarded as a vehicle of communication between 
individuals.  The possibility of human linguistic communication necessarily 
requires both a speaker and a listener.

Both are necessary; one without the other is insufficient.

My comments seek to explore three well-differentiated aspects of the possible 
interpretations of this conjecture.

First, it is obvious to most philosophers that a common mother tongue is the 
foundation of human culture and that the capability to speak and understand the 
same tongue is essential to normal human communication.  The natural genetic 
potential of a newborn does not entail instantaneous linguistic proficiency.  
This assertion must be explored from this perspective.

Secondly, CSP referred to the critical distinctions among token, type, and tone 
in the interpretation of signs and signals.  Learning these distinctions is 
necessary for any analysis of the pragmatic realism associated with human 
communication.

In her well-grounded work, Logic-Language-Ontology, Professor Urszula 
Wybraniec-Skardowska demonstrates the roots of the forms of understanding 
between speaker and listener in terms of Peircean tokens and types.  In order 
for verbal communication to occur, both participants need experience with 
tokens, types, and tones as described by CSP.  The inscription of semantic 
terms in both minds is essential for the precise reproduction of meaningful 
terms.  This assertion must be explored from this perspective.

Thirdly, CSP developed his trichotomy for the communication of the factual 
foundations of natural sciences.  Such communications are functions of the 
knowledge bases associated with the internal semes of the individual minds.  
Historical sensory experiences are necessary to ground the relationships among 
the scientific symbols used to express the tokens, types and tones of 
scientific communication. This assertion must be explored from this 
perspective. 

With respect specifically to ChatGPT, I would ask two simple questions: 

1.  Under what situational circumstances would subscriptions to the algorithm 
correspond to circumscriptions of natural descriptions, such that the 
intentions of questioners’ sentences are inscribed in the responses of the 
algorithm?

2. How do human communicators inscribe meaning into words (as logical terms) 
such that the presentation to the recipient corresponds with the 
re-presentations of the speaker? 

Cheers

Jerry


 



_ _ _ _ _ _ _ _ _ _
► PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON 
PEIRCE-L to this message. PEIRCE-L posts should go to peirce-L@list.iupui.edu . 
► To UNSUBSCRIBE, send a message NOT to PEIRCE-L but to l...@list.iupui.edu 
with UNSUBSCRIBE PEIRCE-L in the SUBJECT LINE of the message and nothing in the 
body.  More at https://list.iupui.edu/sympa/help/user-signoff.html .
► PEIRCE-L is owned by THE PEIRCE GROUP;  moderated by Gary Richmond;  and 
co-managed by him and Ben Udell.


Re: [PEIRCE-L] Chat GPT and Peirce

2023-07-20 Thread John F Sowa
Tom, Dan, and Helmut,

We must distinguish three different systems: (1) the Large Language Models 
(LLMs), which are derived from large volumes of texts; (2) ChatGPT and other 
systems that use the LLMs for various purposes; and (3) the human brain + body 
+ all the human experience of interacting with the world.

1. The LLMs are derived by tensor calculus to establish a huge collection of 
sentence patterns; they were originally designed by Google for machine 
translation of natural languages.  They are also useful for translating 
artificial languages, such as many versions of logic and other kinds of 
notations used by various computer systems.

2. Many computational systems that process the LLMs serve various purposes.  
The original versions of GPT 1, 2, 3, and 3.5 did very little processing beyond 
creating and using the collection of LLMs.  But many people around the world 
did a huge amount of work in developing an open-ended variety of applications 
with those LLMs.

3. The psychologists, neuroscientists, philosophers, and linguists have 
collaborated for centuries on trying to understand the underlying principles of 
the human use of language.  A century ago, Peirce discovered and formulated 
some fundamental principles and guidelines for analyzing, relating, and 
understanding all these issues.

Tom> ChatGPT did not evolve naturally, but was developed by humans who 
certainly do understand how language works. Those humans fed ChatGPT vast 
amounts of carefully curated (not random) examples of human language and images.

Google had a large number of people with various backgrounds, including 
linguistics.  But the development of LLMs was primarily designed for machine 
translation.  It does not depend in any way on any linguistic theory or logical 
theory.  It just computes the probability of the next symbol (word, morpheme, 
affix, or whatever may be used in any kind of notation).
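The "probability of the next symbol" idea can be illustrated with a toy sketch. The bigram counter below is purely hypothetical and vastly simpler than the tensor-derived LLMs described above, but the generation loop (pick a likely next token, append it, repeat) has the same shape:

```python
# Toy sketch of next-symbol probability.  A real LLM conditions on a long
# context with a neural network; this illustrative bigram model conditions
# on only the previous token.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each token follows each preceding token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    """Return the most probable token after `prev`, with its probability."""
    counts = follows[prev]
    token, freq = counts.most_common(1)[0]
    return token, freq / sum(counts.values())

def generate(start, length=5):
    """Greedily emit `length` tokens, one after another."""
    out = [start]
    for _ in range(length):
        token, _ = next_token(out[-1])
        out.append(token)
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the cat"
```

Like an LLM, the sketch knows nothing about grammar or meaning; it only reproduces co-occurrence patterns in its training text.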

There was very little curation, other than an attempt to get a representative 
sample of the many kinds of documents.  Copyright and other legal issues have 
no influence on the accuracy of how LLM technology works.

Tom> To the extent ChatGPT "learns" language, its success depends upon the *a 
priori element provided by humans. This a priori element is the equivalent of 
an "innate" potential or quality.

No.  There is no correspondence whatever between the way children learn and the 
way GPT develops.  At every step from the earliest days of GPT-1, every 
sentence generated was grammatical.  But children do not use any grammatical 
features that they don't yet understand.  The psycholinguists have much deeper 
insights into the nature of language than Chomsky.  The LLMs provide zero 
insight into the nature of language.

Tom> It appears that ChatGPT infers from the uses of signs in a multitude of 
settings -- many of which represent unsuccessful, failed, or irrelevant 
efforts.  It seems that Peircean inferences about language would revolve around 
pragmatic meanings.

The derivation of LLMs and their step-by-step generation of text do not depend 
on anything related to logic, meaning, or reasoning.  They just generate one 
token after another.  For machine translation, Google linguists determined what 
significant prefixes, infixes, or suffixes should be distinguished in any word 
form.  After that, the LLMs are just based on patterns of those items.
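The splitting of word forms into significant pieces can likewise be sketched in miniature. The piece inventory below is hand-picked purely for illustration; real systems learn theirs from data (e.g. by byte-pair encoding):

```python
# Hypothetical sketch of sub-word segmentation via greedy longest-match.
# The inventory of known pieces is invented for this example only.
PIECES = {"un", "break", "able", "translat", "ion", "s", "walk", "ed", "ing"}

def segment(word):
    """Split `word` into known pieces, always trying the longest piece first."""
    out, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # longest candidate first
            if word[i:j] in PIECES:
                out.append(word[i:j])
                i = j
                break
        else:                               # no known piece: emit one character
            out.append(word[i])
            i += 1
    return out

print(segment("unbreakable"))    # -> ['un', 'break', 'able']
print(segment("translations"))   # -> ['translat', 'ion', 's']
```

Once every word is reduced to such pieces, the model's "patterns" are simply statistics over piece sequences, which is one way to see why no linguistic theory is needed downstream.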

But ChatGPT does do some significant processing that does use various methods 
of reasoning.  It may use any methods that any programmer may invent.  It 
cannot be used to support or refute any theory of linguistics, psychology, or 
philosophy of any kind.

John

From: "Thomas903" 
Sent: 7/18/23 2:45 PM

Dan,

I wanted to comment briefly on a sentence from your earlier posting:
"ChatGPT simply and conclusively shows that there is no need for any innate 
learning module in the brain to learn language."

1- ChatGPT did not evolve naturally, but was developed by humans who certainly 
do understand how language works. Those humans fed ChatGPT vast amounts of 
carefully curated (not random) examples of human language and images.   
Evidence that digital computers and software can learn language on their own is 
therefore absent.  To the extent ChatGPT "learns" language, its success depends 
upon the *a priori element provided by humans. This a priori element is the 
equivalent of an "innate" potential or quality.

2- ChatGPT is a tool.  Tools do not act on their own, or learn on their own.  
They have no intentions, no interests, no responsibilities.  They are directed 
by their users/operators.  Without direction, they learn nothing.

3- It is well known that ChatGPT frequently commits gross/obvious errors, and 
those gross errors are pragmatic evidence that it has failed at learning the 
language. Pattern recognition & matching may be a better description of what it 
does.  (Does ChatGPT ever invent new words?)

4- According to press reports, ChatGPT depends upon the use (sca

Re: [PEIRCE-L] Chat GPT and Peirce

2023-07-18 Thread John F Sowa
Dan and Tom,

That article by Steven Piantadosi, which is dated March 2023, is obsolete.  The 
author used a version of OpenAI, which was supposed to be based on GPT-4 but 
was actually based on features that were added to 8 copies of GPT-3.5.  Each 
copy used the older version of GPT (LLMs by themselves) and added a front-end 
that did more complex reasoning with LLMs, such as creating stories, generating 
poetry, or drawing pictures.

The only thing that LLMs can do well is translate languages, natural or 
artificial, to and from one another.  Even then, the LLMs cannot accommodate 
semantic differences.  They do very well for Standard Average European (SAE) 
languages, as Whorf called them.  But they are much worse for languages whose 
semantics differ.

For example, Polish and Russian are two Slavic languages with similar grammar.  
But the Polish religion is Roman Catholic, and its religious language is based 
on medieval Latin, as is SAE.  The Russian religion is based on Old Church 
Slavonic, whose semantics is better preserved in Russian than in any SAE 
language.

As a result, Polish belongs to the SAE group.  Translations of Russian math 
and science texts to and from SAE languages are good.  But translations of 
Russian literature to and from SAE are considerably worse.

LLM-based translations of languages other than SAE are not so good, although 
texts about modern technology are usually quite good because the subject matter 
is defined in SAE languages.

None of this supports Chomsky.  If anything, it gives more support to Whorf, 
Jakobson, Peirce, Halliday, and others who put more emphasis on semantics than 
syntax.

Following are the slides of a talk I presented that emphasizes Peirce and his 
diagrammatic foundations: https://jfsowa.com/talks/bionlp.pdf

John


Re: [PEIRCE-L] Chat GPT and Peirce

2023-07-18 Thread Thomas903
Sorry - that doesn't do it for me.  ChatGPT's success depends upon a priori
information & structures provided to it by humans.  As I said before, this
is the equivalent of the "innate learning module" which you say is not
needed for learning language.

Since you were able to respond to my comment faster than you could have
read it, I have to wonder if ChatGPT composed your reply.

Thanks.
Tom Wyrick



On Tue, Jul 18, 2023 at 1:46 PM Daniel L Everett 
wrote:

> Your points are well known. You might want to read this:
>
> Modern language models refute Chomsky’s approach to language -
> lingbuzz/007180 (ling.auf.net)
>
>
>
> On Jul 18, 2023, at 11:44, Thomas903  wrote:
>
> 
>
> Dan,
>
> I wanted to comment briefly on a sentence from your earlier posting:
> "ChatGPT simply and conclusively shows that there is no need for any
> innate learning module in the brain to learn language."
>
> 1- ChatGPT did not evolve naturally, but was developed by humans who
> certainly do understand how language works. Those humans fed ChatGPT vast
> amounts of carefully curated (not random) examples of human language and
> images.   Evidence that digital computers and software can learn
> language on their own is therefore absent.  To the extent ChatGPT "learns"
> language, its success depends upon the *a priori element provided by
> humans. This a priori element is the equivalent of an "innate" potential or
> quality.
>
> 2- ChatGPT is a tool.  Tools do not act on their own, or learn on their
> own.  They have no intentions, no interests, no responsibilities.  They are
> directed by their users/operators.  Without direction, they learn nothing.
>
> 3- It is well known that ChatGPT frequently commits gross/obvious errors,
> and those gross errors are pragmatic evidence that it has failed at
> learning the language. Pattern recognition & matching may be a better
> description of what it does.  (Does ChatGPT ever invent new words?)
>
> 4- According to press reports, ChatGPT depends upon the use (scanning) of
> *stolen articles, books, etc.  So the developers of ChatGPT do not have a
> morality/ethics algorithm, and neither does ChatGPT.  This correspondence
> is direct evidence that the potentials/qualities of ChatGPT are the *same
> as the potentials/qualities provided by its developers/users. That
> correspondence principle applies to ChatGPT's language potentials, too (I
> believe).
>
> I agree with your closing sentence that ChatGPT is inferring from signs,
> which you refer to as Peircean, but do not perceive that it is inferring
> from the *meaning of signs, which reflect pragmatic objectives.  It appears
> that ChatGPT infers from the uses of signs in a multitude of settings --
> many of which represent unsuccessful, failed, or irrelevant efforts.  It
> seems that Peircean inferences about language would revolve around
> pragmatic meanings.
>
> Thanks
> Tom Wyrick
>
>
>
>
> .
>
>
>
> On Wed, Apr 19, 2023 at 12:37 PM Dan Everett 
> wrote:
>
>> ChatGPT simply and conclusively shows that there is no need for any
>> innate learning module in the brain to learn language. Here is the paper on
>> it that states this best. https://ling.auf.net/lingbuzz/007180
>>
>> From a Peircean perspective, it is important to realize that this works
>> by inference over signs.
>>
>> Dan
>>
>> On Apr 19, 2023, at 12:58 PM, Helmut Raulien  wrote:
>>
>> Dan, list,
>>
>> ok, so it is like I wrote "or it is so, that ChatGPT is somehow referred
>> to universal logic as well, builds its linguistic competence up from there,
>> and so can skip the human grammar-module". But that neither is witchcraft,
>> nor does it say, that there is no human-genetic grammar-module. And I too
>> hope with the Linguist, that we dont have to fear ChatGPT more than we have
>> to fear a refrigerator.
>>
>> Best
>> Helmut
>>
>>
>>
>>
>



Re: [PEIRCE-L] Chat GPT and Peirce

2023-07-18 Thread Thomas903
Dan,

I wanted to comment briefly on a sentence from your earlier posting:
"ChatGPT simply and conclusively shows that there is no need for any innate
learning module in the brain to learn language."

1- ChatGPT did not evolve naturally, but was developed by humans who
certainly do understand how language works. Those humans fed ChatGPT vast
amounts of carefully curated (not random) examples of human language and
images.   Evidence that digital computers and software can learn
language on their own is therefore absent.  To the extent ChatGPT "learns"
language, its success depends upon the *a priori element provided by
humans. This a priori element is the equivalent of an "innate" potential or
quality.

2- ChatGPT is a tool.  Tools do not act on their own, or learn on their
own.  They have no intentions, no interests, no responsibilities.  They are
directed by their users/operators.  Without direction, they learn nothing.

3- It is well known that ChatGPT frequently commits gross/obvious errors,
and those gross errors are pragmatic evidence that it has failed at
learning the language. Pattern recognition & matching may be a better
description of what it does.  (Does ChatGPT ever invent new words?)

4- According to press reports, ChatGPT depends upon the use (scanning) of
*stolen articles, books, etc.  So the developers of ChatGPT do not have a
morality/ethics algorithm, and neither does ChatGPT.  This correspondence
is direct evidence that the potentials/qualities of ChatGPT are the *same
as the potentials/qualities provided by its developers/users. That
correspondence principle applies to ChatGPT's language potentials, too (I
believe).

I agree with your closing sentence that ChatGPT is inferring from signs,
which you refer to as Peircean, but do not perceive that it is inferring
from the *meaning of signs, which reflect pragmatic objectives.  It appears
that ChatGPT infers from the uses of signs in a multitude of settings --
many of which represent unsuccessful, failed, or irrelevant efforts.  It
seems that Peircean inferences about language would revolve around
pragmatic meanings.

Thanks
Tom Wyrick




.



On Wed, Apr 19, 2023 at 12:37 PM Dan Everett 
wrote:

> ChatGPT simply and conclusively shows that there is no need for any innate
> learning module in the brain to learn language. Here is the paper on it
> that states this best. https://ling.auf.net/lingbuzz/007180
>
> From a Peircean perspective, it is important to realize that this works by
> inference over signs.
>
> Dan
>
> On Apr 19, 2023, at 12:58 PM, Helmut Raulien  wrote:
>
> Dan, list,
>
> ok, so it is like I wrote "or it is so, that ChatGPT is somehow referred
> to universal logic as well, builds its linguistic competence up from there,
> and so can skip the human grammar-module". But that neither is witchcraft,
> nor does it say, that there is no human-genetic grammar-module. And I too
> hope with the Linguist, that we dont have to fear ChatGPT more than we have
> to fear a refrigerator.
>
> Best
> Helmut
>
>
>
>


Re: [PEIRCE-L] Chat GPT and Peirce

2023-06-15 Thread Robert Junqueira
he Turing Test, as Searle points out when he also
>>> argues that a computer’s “understanding” is based on inference of indexes
>>> and icons rather than symbols (though he does not use such terms).
>>>
>>> I discuss these points at length in my forthcoming book and I will be
>>> giving a talk on this at Google’s headquarters in July.
>>>
>>> Another benefit of Peirce’s philosophy over standard linguistics comes
>>> into view when we consider what I call “Frege’s error.” As we all know
>>> Peirce and Frege were developing propositional and first-order logic nearly
>>> simultaneously. However, Frege’s axiom-based system proposes a crucial role
>>> for the Fregean concept of compositionality in language, whereas Peirce’s
>>> Existential Graphs provide an inferential, non-compositional model of
>>> meaning. In my forthcoming work (and in a few talks I have given recently
>>> in pro-Fregean linguistics departments (which is pretty much all
>>> linguistics departments) I argue that compositionality is too weak (it
>>> cannot extend beyond the sentence/proposition) and too strong (it creates
>>> faux problems such as the veritable core of most formal linguistics,
>>> “gap-filler” analyses, e.g. movement rules) whereas inferentialism provides
>>> the best coverage.
>>>
>>> Peirce’s inferentialism is similar to, but much more general, than
>>> Brandom’s inferentialism (also as developed by Peregrin). So Peirce, in my
>>> analysis, is right at the center of current debates on the nature of human
>>> language. I also make this point in my 2017 book, How Language Began (and
>>> Homo erectus scholar Larry Barham and I make this point based on much more
>>> archaeological evidence from Homo erectus sites:
>>> https://link.springer.com/article/10.1007/s10816-020-09480-9
>>>
>>> All best,
>>>
>>> Dan
>>>
>>> On Apr 20, 2023, at 4:47 PM, Helmut Raulien  wrote:
>>>
>>> Dan, if I would read all of Chomsky´s, and would not find him claiming,
>>> that his genetic grammar-module is not based on logic, then I would have to
>>> quote all he ever has written. The other way round would be easier. And:
>>> Refutation is a strong accusation, and I think the prosecutor has the
>>> burden of proof.
>>> Best, Helmut
>>>
>>>
>>> *Sent:* Wednesday, 19 April 2023, 20:28
>>> *From:* "Dan Everett" 
>>> *To:* "Helmut Raulien" 
>>> *Cc:* g...@gnusystems.ca, "Peirce-L" 
>>> *Subject:* Re: [PEIRCE-L] Chat GPT and Peirce
>>> You’ll have to read your way through the literature.
>>>
>>> D
>>>
>>>
>>> On Apr 19, 2023, at 2:27 PM, Helmut Raulien  wrote:
>>>
>>>
>>> Dan, List,
>>>
>>> First i apologize for posting unrelated in the main thread.
>>>
>>> I appreciate your argument and find it a great insight. Now, is this a
>>> refutation of Chomsky´s theory or not? A computer program perhaps does not
>>> need such a module, because it can research and develop language from
>>> universal (natural) logic with Peirce´s contribution to discovering it
>>> included. But maybe the evolution of the brain works differently: There is
>>> no direct, analytical reference to universal logic, I would say. Evolution
>>> is all about viability. But of course, viability is greater if it is in
>>> accord with universal logic. It then simply works out, while when not being
>>> in accord, it doesn´t. But, with a direct link to logic missing, I guess
>>> for evolution it is a good idea, to install viable, well tested routines
>>> for modules from time to time, which are then inherited and give
>>> instructions. So maybe humans do have a grammar module, although for a
>>> computer such a thing is not necessary. Instead of "module" you may call it
>>> "instinct", i think, like a bird knows how to build a nest without first
>>> logically pondering "What should I do to have something to lay my eggs
>>> in?". So, all i wanted to object, was, that all that is not a refutation of
>>> Chomsky´s work. That is, unless he explicitly should have claimed, that
>>> this module/instinct is the starting source/reference of language, and does
>>> itself not have a reference to logic. Which would be absurd, i think.
>>>
>>> Best Regards
>>> Helmut
>>>
>>> 19 April 2023, 19:37
>>> 

Re: [PEIRCE-L] Chat GPT and Peirce

2023-06-15 Thread Robert Junqueira
://www.cspeirce.com/menu/library/aboutcsp/shapiro/shapiro-mclc.pdf
>
>
>
>
> I read this paper several years ago when I asked Michael to explain the
> important notion of 'markedness' in linguistics for a NYC philosophy club
> we are both members of, and he pointed to this paper. But I haven't
> sufficient knowledge of linguistics nor Chat GPT to enter this discussion.
> So, this is offered as material that those who have such knowledge might
> find of interest, especially from a Peircean perspective.
> http://www.cspeirce.com/menu/library/aboutcsp/shapiro/shapiro-mclc.pdf
>
> To all: this paper and many Peirce and Peirce-related papers may be found
> at *Arisbe: The Peirce Gateway *https://arisbe.sitehost.iu.edu
>
> Best,
>
> Gary R
>
>
>
>
>
>
>
> On Fri, Apr 21, 2023 at 5:18 AM Dan Everett 
> wrote:
>
>> Helmut,
>>
>> There are only two claims here, one by Chomsky and one by Peirce. Although
>> both use the terms ‘instinct’ and ‘innate,’ these mean quite different
>> things for each of them (there is a tendency to interpret Peirce’s (Hume’s,
>> Locke’s, etc.) use of “instinct” (and many other terms) anachronistically).
>>
>> In any case, Chomsky claims that language is not learned, in fact that it
>> cannot be learned. It is “acquired” via innate structure that emerges via
>> triggering via the environment.
>>
>> Peirce claims that all knowledge, ontogenetic or phylogenetic (but that
>> is often/usually misinterpreted as well) is gained via inference over signs.
>>
>> What ChatGPT has done (and the Piantadosi article is crucial to seeing
>> this clearly, so I assume you have read it) is to show that language
>> structures AND their meanings can be learned by inference over signs.
>> ChatGPT does rely on LLM (Large Language Models) and children do not, but
>> work is already being done to produce the results based on more realistic
>> data bases.
>>
>> Now if any system can learn a language via inference over signs, Chomsky
>> is wrong. QED.
>>
>> The question that arises, however, is whether ChatGPT (or computers in
>> Searle’s Chinese Room Gedanken experiment) are inferring over indexes and
>> icons or also symbols (human language is differentiated from all other
>> communication system via the open-ended cultural production of symbols).
>> This also challenges the Turing Test, as Searle points out when he also
>> argues that a computer’s “understanding” is based on inference of indexes
>> and icons rather than symbols (though he does not use such terms).
>>
>> I discuss these points at length in my forthcoming book and I will be
>> giving a talk on this at Google’s headquarters in July.
>>
>> Another benefit of Peirce’s philosophy over standard linguistics comes
>> into view when we consider what I call “Frege’s error.” As we all know
>> Peirce and Frege were developing propositional and first-order logic nearly
>> simultaneously. However, Frege’s axiom-based system proposes a crucial role
>> for the Fregean concept of compositionality in language, whereas Peirce’s
>> Existential Graphs provide an inferential, non-compositional model of
>> meaning. In my forthcoming work (and in a few talks I have given recently
>> in pro-Fregean linguistics departments (which is pretty much all
>> linguistics departments) I argue that compositionality is too weak (it
>> cannot extend beyond the sentence/proposition) and too strong (it creates
>> faux problems such as the veritable core of most formal linguistics,
>> “gap-filler” analyses, e.g. movement rules) whereas inferentialism provides
>> the best coverage.
>>
>> Peirce’s inferentialism is similar to, but much more general, than
>> Brandom’s inferentialism (also as developed by Peregrin). So Peirce, in my
>> analysis, is right at the center of current debates on the nature of human
>> language. I also make this point in my 2017 book, How Language Began (and
>> Homo erectus scholar Larry Barham and I make this point based on much more
>> archaeological evidence from Homo erectus sites:
>> https://link.springer.com/article/10.1007/s10816-020-09480-9
>>
>> All best,
>>
>> Dan
>>
>> On Apr 20, 2023, at 4:47 PM, Helmut Raulien  wrote:
>>
>> Dan, if I would read all of Chomsky´s, and would not find him claiming,
>> that his genetic grammar-module is not based on logic, then I would have to
>> quote all he ever has written. The other way round would be easier. And:
>> Refutation is a strong accusation, and I think the prosecutor has the
>> burden of proof.
>> Best, He

Re: [PEIRCE-L] Chat GPT and Peirce

2023-04-21 Thread Daniel L Everett
s crucial to seeing this clearly, so I assume you have read it) is to show that language structures AND their meanings can be learned by inference over signs. ChatGPT does rely on LLM (Large Language Models) and children do not, but work is already being done to produce the results based on more realistic data bases.

Now if any system can learn a language via inference over signs, Chomsky is wrong. QED.

The question that arises, however, is whether ChatGPT (or computers in Searle’s Chinese Room Gedanken experiment) are inferring over indexes and icons or also symbols (human language is differentiated from all other communication systems via the open-ended cultural production of symbols). This also challenges the Turing Test, as Searle points out when he also argues that a computer’s “understanding” is based on inference of indexes and icons rather than symbols (though he does not use such terms).

I discuss these points at length in my forthcoming book and I will be giving a talk on this at Google’s headquarters in July.

Another benefit of Peirce’s philosophy over standard linguistics comes into view when we consider what I call “Frege’s error.” As we all know, Peirce and Frege were developing propositional and first-order logic nearly simultaneously. However, Frege’s axiom-based system proposes a crucial role for the Fregean concept of compositionality in language, whereas Peirce’s Existential Graphs provide an inferential, non-compositional model of meaning. In my forthcoming work (and in a few talks I have given recently in pro-Fregean linguistics departments, which is pretty much all linguistics departments) I argue that compositionality is too weak (it cannot extend beyond the sentence/proposition) and too strong (it creates faux problems such as the veritable core of most formal linguistics, “gap-filler” analyses, e.g. movement rules), whereas inferentialism provides the best coverage.

Peirce’s inferentialism is similar to, but much more general than, Brandom’s inferentialism (also as developed by Peregrin). So Peirce, in my analysis, is right at the center of current debates on the nature of human language. I also make this point in my 2017 book, How Language Began (and Homo erectus scholar Larry Barham and I make this point based on much more archaeological evidence from Homo erectus sites: https://link.springer.com/article/10.1007/s10816-020-09480-9).

All best,

Dan

On Apr 20, 2023, at 4:47 PM, Helmut Raulien <h.raul...@gmx.de> wrote:

Dan, if I would read all of Chomsky´s, and would not find him claiming, that his genetic grammar-module is not based on logic, then I would have to quote all he ever has written. The other way round would be easier. And: Refutation is a strong accusation, and I think the prosecutor has the burden of proof.

Best, Helmut

 
 

Sent: Wednesday, 19 April 2023, 20:28
From: "Dan Everett" <danleveret...@gmail.com>
To: "Helmut Raulien" <h.raul...@gmx.de>
Cc: g...@gnusystems.ca, "Peirce-L" <PEIRCE-L@list.iupui.edu>
Subject: Re: [PEIRCE-L] Chat GPT and Peirce


You’ll have to read your way through the literature.
 

D
 

On Apr 19, 2023, at 2:27 PM, Helmut Raulien <h.raul...@gmx.de> wrote:
 




 


Dan, List,

 

First i apologize for posting unrelated in the main thread.

 

I appreciate your argument and find it a great insight. Now, is this a refutation of Chomsky´s theory or not? A computer program perhaps does not need such a module, because it can research and develop language from universal (natural) logic with Peirce´s contribution to discovering it included. But maybe the evolution of the brain works differently: There is no direct, analytical reference to universal logic, I would say. Evolution is all about viability. But of course, viability is greater if it is in accord with universal logic. It then simply works out, while when not being in accord, it doesn´t. But, with a direct link to logic missing, I guess for evolution it is a good idea, to install viable, well tested routines for modules from time to time, which are then inherited and give instructions. So maybe humans do have a grammar module, although for a computer such a thing is not necessary. Instead of "module" you may call it "instinct", i think, like a bird knows how to build a nest without first logically pondering "What should I do to have something to lay my eggs in?". So, all i wanted to object, was, that all that is not a refutation of Chomsky´s work. That is, unless he explicitly should have claimed, that this module/instinct is the starting source/reference of language, and does itself not have a reference to logic. Which would be absurd, i think.

 

Best Regards

Helmut

 

19. April 2023 um 19:37 Uhr
 "Dan Everett" <danleveret...@gmail.com>
wrote:


ChatGPT simply and conclusively shows that there is no need for any innate learning module in the brain to learn language. Here is the paper on

Re: [PEIRCE-L] Chat GPT and Peirce

2023-04-21 Thread Gary Richmond

Re: [PEIRCE-L] Chat GPT and Peirce

2023-04-21 Thread Dan Everett
Helmut,

There are only two claims here, one by Chomsky and one by Peirce. Although both 
use the terms ‘instinct’ and ‘innate,’ these mean quite different things for 
each of them (there is a tendency to interpret Peirce’s (and Hume’s, Locke’s, 
etc.) use of “instinct,” among many other terms, anachronistically).

In any case, Chomsky claims that language is not learned, in fact that it 
cannot be learned. It is “acquired” via innate structure that is triggered by 
the environment.

Peirce claims that all knowledge, ontogenetic or phylogenetic (though that, 
too, is often misinterpreted), is gained via inference over signs.

What ChatGPT has done (and the Piantadosi article is crucial to seeing this 
clearly, so I assume you have read it) is to show that language structures AND 
their meanings can be learned by inference over signs. ChatGPT does rely on an 
LLM (Large Language Model) and children do not, but work is already being done 
to reproduce the results on more realistic databases.
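[Editorial illustration: the distributional idea at stake here, that regularities of linguistic structure can be induced from sign co-occurrence alone, can be sketched with a toy bigram counter. This is a minimal sketch of my own, not Piantadosi's model and nothing like what ChatGPT or any LLM actually does; the corpus and function names are invented for the example.]

```python
from collections import Counter, defaultdict

# A toy corpus: sequences of signs (word tokens), with no grammar built in.
corpus = [
    "the dog chased the cat".split(),
    "the cat chased the bird".split(),
    "a dog saw a bird".split(),
]

# Tally which sign follows which: pure co-occurrence counts over the corpus.
follows = defaultdict(Counter)
for sentence in corpus:
    for prev, nxt in zip(sentence, sentence[1:]):
        follows[prev][nxt] += 1

def predict(prev):
    """Return the most frequently observed successor of a sign, if any."""
    counts = follows[prev]
    return counts.most_common(1)[0][0] if counts else None

# The counts alone encode distributional regularities of the corpus:
# determiners ("the", "a") are followed by nouns, verbs by determiners.
print(predict("the"))      # "cat"
print(predict("chased"))   # "the"
```

Nothing in the sketch bears on the symbol/index/icon question, of course; it only shows that some structural regularities are recoverable by inference over observed signs, without an innate grammar module.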

Now if any system can learn a language via inference over signs, Chomsky is 
wrong. QED. 

The question that arises, however, is whether ChatGPT (or the computer in 
Searle’s Chinese Room Gedankenexperiment) is inferring over indexes and icons 
only, or over symbols as well (human language is differentiated from all other 
communication systems by the open-ended cultural production of symbols). This 
also challenges the Turing Test, as Searle points out when he argues that a 
computer’s “understanding” is based on inference over indexes and icons rather 
than symbols (though he does not use such terms).

I discuss these points at length in my forthcoming book and I will be giving a 
talk on this at Google’s headquarters in July.

Another benefit of Peirce’s philosophy over standard linguistics comes into 
view when we consider what I call “Frege’s error.” As we all know, Peirce and 
Frege developed propositional and first-order logic nearly simultaneously. 
However, Frege’s axiom-based system assigns a crucial role to the Fregean 
concept of compositionality in language, whereas Peirce’s Existential Graphs 
provide an inferential, non-compositional model of meaning. In my forthcoming 
work (and in a few talks I have given recently in pro-Fregean linguistics 
departments, which is pretty much all linguistics departments), I argue that 
compositionality is both too weak (it cannot extend beyond the 
sentence/proposition) and too strong (it creates faux problems such as the 
veritable core of most formal linguistics, “gap-filler” analyses, e.g. 
movement rules), whereas inferentialism provides the best coverage.

Peirce’s inferentialism is similar to, but much more general than, Brandom’s 
inferentialism (as also developed by Peregrin). So Peirce, in my analysis, is 
right at the center of current debates on the nature of human language. I also 
make this point in my 2017 book, How Language Began (and Homo erectus scholar 
Larry Barham and I make this point on the basis of much more archaeological 
evidence from Homo erectus sites: 
https://link.springer.com/article/10.1007/s10816-020-09480-9).

All best,

Dan 

> On Apr 20, 2023, at 4:47 PM, Helmut Raulien  wrote:
> 
> Dan, if I would read all of Chomsky´s, and would not find him claiming, that 
> his genetic grammar-module is not based on logic, then I would have to quote 
> all he ever has written. The other way round would be easier. And: Refutation 
> is a strong accusation, and I think the prosecutor has the burden of proof.
> Best, Helmut
>  
>  
> Sent: Wednesday, 19 April 2023 at 20:28
> From: "Dan Everett" 
> To: "Helmut Raulien" 
> Cc: g...@gnusystems.ca, "Peirce-L" 
> Subject: Re: [PEIRCE-L] Chat GPT and Peirce
> You’ll have to read your way through the literature.
>  
> D
>  
> On Apr 19, 2023, at 2:27 PM, Helmut Raulien  wrote:
>  
>  
> Dan, List,
>  
> First i apologize for posting unrelated in the main thread.
>  
> I appreciate your argument and find it a great insight. Now, is this a 
> refutation of Chomsky´s theory or not? A computer program perhaps does not 
> need such a module, because it can research and develop language from 
> universal (natural) logic with Peirce´s contribution to discovering it 
> included. But maybe the evolution of the brain works differently: There is no 
> direct, analytical reference to universal logic, I would say. Evolution is 
> all about viability. But of course, viability is greater if it is in accord 
> with universal logic. It then simply works out, while when not being in 
> accord, it doesn´t. But, with a direct link to logic missing, I guess for 
> evolution it is a good idea, to install viable, well tested routines for 
> modules from time to time, which are then inherited and give instructions. So 
> maybe humans do have a grammar module, although for a computer such

Re: [PEIRCE-L] Chat GPT and Peirce

2023-04-19 Thread Dan Everett
You’ll have to read your way through the literature.

D

> On Apr 19, 2023, at 2:27 PM, Helmut Raulien  wrote:
> 
>  
> Dan, List,
>  
> First i apologize for posting unrelated in the main thread.
>  
> I appreciate your argument and find it a great insight. Now, is this a 
> refutation of Chomsky´s theory or not? A computer program perhaps does not 
> need such a module, because it can research and develop language from 
> universal (natural) logic with Peirce´s contribution to discovering it 
> included. But maybe the evolution of the brain works differently: There is no 
> direct, analytical reference to universal logic, I would say. Evolution is 
> all about viability. But of course, viability is greater if it is in accord 
> with universal logic. It then simply works out, while when not being in 
> accord, it doesn´t. But, with a direct link to logic missing, I guess for 
> evolution it is a good idea, to install viable, well tested routines for 
> modules from time to time, which are then inherited and give instructions. So 
> maybe humans do have a grammar module, although for a computer such a thing 
> is not necessary. Instead of "module" you may call it "instinct", i think, 
> like a bird knows how to build a nest without first logically pondering "What 
> should I do to have something to lay my eggs in?". So, all i wanted to 
> object, was, that all that is not a refutation of Chomsky´s work. That is, 
> unless he explicitly should have claimed, that this module/instinct is the 
> starting source/reference of language, and does itself not have a reference 
> to logic. Which would be absurd, i think.
>  
> Best Regards
> Helmut
>  
> 19 April 2023 at 19:37
>  "Dan Everett" 
> wrote:
> ChatGPT simply and conclusively shows that there is no need for any innate 
> learning module in the brain to learn language. Here is the paper on it that 
> states this best. https://ling.auf.net/lingbuzz/007180
>  
> From a Peircean perspective, it is important to realize that this works by 
> inference over signs. 
>  
> Dan
>  
> On Apr 19, 2023, at 12:58 PM, Helmut Raulien  wrote:
>  
> Dan, list,
>  
> ok, so it is like I wrote "or it is so, that ChatGPT is somehow referred to 
> universal logic as well, builds its linguistic competence up from there, and 
> so can skip the human grammar-module". But that neither is witchcraft, nor 
> does it say, that there is no human-genetic grammar-module. And I too hope 
> with the Linguist, that we dont have to fear ChatGPT more than we have to 
> fear a refrigerator.
>  
> Best
> Helmut
>  
>  
> _ _ _ _ _ _ _ _ _ _ ► PEIRCE-L subscribers: Click on "Reply List" or "Reply 
> All" to REPLY ON PEIRCE-L to this message. PEIRCE-L posts should go to 
> peirce-L@list.iupui.edu . ► To UNSUBSCRIBE, send a message NOT to PEIRCE-L 
> but to l...@list.iupui.edu with UNSUBSCRIBE PEIRCE-L in the SUBJECT LINE of 
> the message and nothing in the body. More at 
> https://list.iupui.edu/sympa/help/user-signoff.html . ► PEIRCE-L is owned 
> by THE PEIRCE GROUP; moderated by Gary Richmond; and co-managed by him and 
> Ben Udell.
