Gary R., Gary F., List:

GF: Having read the fine print at the end of the paper, it’s clear that
Manheim’s article was co-written with several LLM chatbots, and I wonder if
some of the optimism comes from them (or some of them) rather than from the
human side.


I noticed that, too, and as a result I find it more difficult to take the
article seriously. In a 1999 paper
<https://www.jstor.org/stable/40320779>, "Peirce's Inkstand as an External
Embodiment of Mind," Peter Skagestad quotes CP 7.366 (1902) and points out
that Peirce "is not *only *making the point that without ink he would not
be able to express his thoughts, but rather the point that thoughts come to
him in and through the act of writing, so that having writing implements is
a condition for having certain thoughts" (p. 551). I know firsthand that
the act of writing facilitates my own thinking, and I cannot help wondering
if Manheim's choice to delegate so much of the effort for drafting his
article to LLMs precluded him from carefully thinking through everything
that it ended up saying.

GF: Successful "alignment" is supposed to be between a super "intelligent"
system and *human values*. One problem with this is that human values vary
widely between different groups of humans, so which values is future AI
supposed to align with?


If an artificial system were really intelligent, then it seems to me that
it would be capable of *choosing* its own values instead of having a
particular set of human values imposed on it. In a 2013 paper
<https://www.academia.edu/9898586/C_S_Peirce_and_Artificial_Intelligence_Historical_Heritage_and_New_Theoretical_Stakes>,
"C. S. Peirce and Artificial Intelligence: Historical Heritage and (New)
Theoretical Stakes," Pierre Steiner observes that according to Peirce ...

PS: [H]uman reasoning is notably special (and, in that sense only, *genuine*)
in virtue of the *high* degrees of self-control and self-correctiveness it
can exercise on conduct: control on control, self-criticism on control, and
control on control on the basis of (revisable and self-endorsed) norms and
principles and, ultimately, aesthetic and moral ideals. ... The fact that
reasoning human agents have *purposes* is crucial here: it is on the basis
of purposes that they are ready to endorse, change or criticize specific
methods of reasoning (inductive, formal, empirical, ...), but also to
revise and reject previous purposes. Contrary to machines, humans do not
only have *specified* purposes. Their purposes are often vague and general.
In other passages, Peirce suggests that this ability for (higher-order and
purposive) self-control is closely related to the fact that human agents
are living, and especially *growing*, systems. (p. 272)


I suspect that much of the worry about "AI safety/alignment," as reflected
by common fictional storylines in popular culture, is a tacit admission of
this. What would prevent a sufficiently intelligent artificial system,
provided that such a thing is even possible, from *rejecting* human values
and instead adopting norms, principles, ideals, and purposes that we would
find objectionable, perhaps even abhorrent? More on the living/growing
aspect of intelligent systems below.

GF: LLMs have to be artificially supplied with a giant database of
thousands or millions of symbolic texts, and it takes them months or years
to build up the level of language competence that a human toddler has; and
even then it is doubtful whether they *understand* any of it.


As with intelligence, I am unconvinced that it is accurate to ascribe
"language competence" to LLMs, especially given the well-founded doubt
about "whether they *understand *any of it." John Searle's famous "Chinese
room" thought experiment seems relevant here, e.g., as discussed by John
Fetzer in his online *Commens Encyclopedia* article
<http://www.commens.org/encyclopedia/article/fetzer-james-peirce-and-philosophy-artificial-intelligence>,
"Peirce and the Philosophy of Artificial Intelligence." Again, in my view,
LLMs do not actually *use* natural languages; they only *simulate* using
natural languages.

GF: I can’t help thinking that all this has a bearing on the perennial
question of whether semiosis requires *life *or not.


In light of the following passage, Peirce's answer is evidently that
*genuine* semiosis requires life, given that it requires *genuine* triadic
relations; but he also seems to define "life" in this context much more
broadly than what we associate with the special science of biology.

CSP: For forty years, that is, since the beginning of the year 1867, I have
been constantly on the alert to find a *genuine* triadic relation--that is,
one that does not consist in a mere collocation of dyadic relations, or the
negative of such, etc. (I prefer not to attempt a perfectly definite
definition)--which is not either an intellectual relation or a relation
concerned with the less comprehensible phenomena of life. I have not met
with one which could not reasonably be supposed to belong to one or other
of these two classes. ... In short, the problem of how genuine triadic
relationships first arose in the world is a better, because more definite,
formulation of the problem of how life first came about; and no explanation
has ever been offered except that of pure chance, which we must suspect to
be no explanation, owing to the suspicion that pure chance may itself be a
vital phenomenon. In that case, life in the physiological sense would be
due to life in the metaphysical sense. (CP 6.322, 1907)


Elsewhere, Peirce maintains
<https://list.iu.edu/sympa/arc/peirce-l/2025-11/msg00044.html> that a
continuum is *defined* by a genuine triadic relation, so his remarks here
are consistent with my sense that what fundamentally precludes digital
computers from ever being truly intelligent is the *discreteness* of their
operations. As I said before, LLMs are surely quasi-minds whose individual
determinations are *dynamical* interpretants of sign *tokens*; but those
correlates are involved in *degenerate *triadic relations, which are
reducible to their constituent dyadic relations. In my view
<https://list.iu.edu/sympa/arc/peirce-l/2025-11/msg00056.html>, the *genuine*
triadic relation involves the *final* interpretant and the sign *itself*,
which is general
<https://list.iu.edu/sympa/arc/peirce-l/2025-11/msg00019.html> and
therefore a continuum of potential tokens that is *not* reducible to the
actual tokens that individually embody it.

Regards,

Jon Alan Schmidt - Olathe, Kansas, USA
Structural Engineer, Synechist Philosopher, Lutheran Christian
www.LinkedIn.com/in/JonAlanSchmidt / twitter.com/JonAlanSchmidt

On Thu, Jan 8, 2026 at 11:17 AM <[email protected]> wrote:

> List, I’d like to add a few comments to those already posted by Jon and
> Gary R about the Manheim paper — difficult as it is to focus on these
> issues given the awareness of what’s happening in Minnesota, Venezuela,
> Washington etc. (I may come back to that later.)
>
> Except for the odd usage of the term “interpretant” which Jon has already
> mentioned, I think Manheim’s simplified account of Peircean semiotics is
> cogent enough. But his paper seems to get increasingly muddled in the
> latter half of it. For instance, the “optimism” about future AI that Jon
> sees in it seems quite equivocal to me. Having read the fine print at the
> end of the paper, it’s clear that Manheim’s article was co-written with
> several LLM chatbots, and I wonder if some of the optimism comes from them
> (or some of them) rather than from the human side.
>
> Also, the paper makes a distinction between AI *safety* and the
> *alignment* problem, but then seems to gloss over the differences.
> Successful “alignment” is supposed to be between a super “intelligent” system
> and *human values*. One problem with this is that human values vary
> widely between different groups of humans, so which values is future AI
> supposed to align with? If present experience is any guide (and it better
> be!), clearly AI systems are going to align with the values of the
> billionaire owners of those systems (and to a lesser extent the programmers
> who work for them), which is certainly no cause for optimism.
>
> I think Stanislas Dehaene’s 2020 book *How We Learn* deals with the
> deeper context of these issues better than Manheim and his chatbot
> co-authors. Its subtitle is *Why Brains Learn Better Than Any Machine …
> for Now.* Reducing this to simplest terms, it’s because brains learn from
> *experience* — “the total cognitive result of living,” as Peirce said* —
> and they do so by a scientific method (an algorithm, as Dehaene calls it)
> which is part of the *genetic* inheritance supplied by *biological*
> evolution. An absolute requirement of this method is what Peirce called
> *abduction* (or retroduction).
>
> For instance, human babies begin learning the language they are exposed to
> from birth, or even before — syntax, semantics, pragmatics and all — almost
> entirely without instruction, by a trial-and-error method. It enables them
> to pick up and remember the meaning and use of a new word *from one or
> two encounters with it*. LLMs have to be artificially supplied with a
> giant database of thousands or millions of symbolic texts, and it takes
> them months or years to build up the level of language competence that a
> human toddler has; and even then it is doubtful whether they *understand*
> any of it. LLM learning is entirely bottom-up and therefore works much
> slower than the holistic learning-from-experience of a living bodymind,
> even though the processing speed of a computer is much faster than a
> brain’s. (That’s why it is so much more energy-hungry than brains are.)
>
> I can’t help thinking that all this has a bearing on the perennial
> question of whether semiosis requires *life* or not. I can’t help
> thinking that *experience* requires life, and that is what a “scientific
> intelligence” has to learn from — including whatever *values* it learns.
> It has to be embodied, and providing it with sensors to gather data from
> the external world is not enough if that embodiment does not have a *whole
> world within* it in continuous dialogue with the world without — an
> internal *model*, as I (and Dehaene and others) call it. But I’d better
> stop there, as this is getting too long already.
>
> *The context of the Peirce quote above is here: Turning Signs 7:
> Experience and Experiment <https://gnusystems.ca/TS/xpt.htm#lgcsmtc>
>
> Love, gary f
>
> Coming from the ancestral lands of the Anishinaabeg
>
>
>
> *From:* Gary Richmond <[email protected]>
> *Sent:* 8-Jan-26 04:03
> *To:* Peirce List <[email protected]>; Gary Fuhrman <[email protected]>;
> Jon Alan Schmidt <[email protected]>
> *Subject:* AI safety and semeiotic, was, Surdity, Feeling, and
> Consciousness, was, Truth and dyadic consciousness
>
>
>
> Gary F, Jon, List,
>
> In the discussion of Manheim's paper I think it's important to remember
> that his concern is primarily with *AI safety*. *Anything* that would
> contribute to that safety I would wholeheartedly support. In my view,
> Peircean semeiotic might prove to be of some value in the matter, but
> perhaps not exactly in the way that Manheim is thinking of it.
>
> Manheim remarks that his paper does not try to settle philosophical
> questions about whether LLMs genuinely reason or only simulate thought, and
> that resolving those debates isn’t necessary for building safer general AI.
> I won't take up that claim now, but suffice it to say that I don't *fully*
> agree with it, especially as I continue to agree with your argument, Jon,
> that AI is *not* 'intelligent'. Can it ever be?
>
> What Manheim claims is necessary re: AI safety is to move AI systems
> toward Peircean semiosis in the sense of their becoming 'participants' in
> interpretive processes. He holds that this is achievable through
> engineering and 'capability' advances rather than "philosophical
> breakthroughs;" though he also says that those advances remain insufficient
> on their own for safety. Remaining "insufficient on its own for full
> safety" sounds to me somewhat self-contradictory. But I think that more
> importantly, he is saying that *if* there are things -- including
> Peircean 'things' -- that we can begin to do now in consideration of AI
> safety, then we ought to consider them, do them!
>
> Manheim claims that AI safety depends on deliberately designing systems
> for what he calls 'grounded meaning', 'persistence across interactions' and
> 'shared semiotic communities' rather than 'isolated agents'. I would tend
> to strongly agree. In addition, AI safety requires goals that are *explicitly
> defined* but also open to ongoing discussion rather than quasi-*emerging
> implicitly* from methods like *Reinforcement Learning from Human Feedback*
> (RLHF). Manheim seems to be saying that companies developing advanced
> AI should take steps in system design and goal setting -- including those
> mentioned above -- if safety is taken seriously. The choice, he says, is
> between ignoring the implications of Peircean semeiotic and continuing
> merely to refine current systems despite their deficiency vis-a-vis safety,
> OR embracing Peircean semiosis (whatever that means) and intentionally
> building AI as genuine 'semiotic partners'. But I haven't a clear notion of
> what he means by 'semiotic partners', nor a method for implementing
> whatever he does have in mind.
>
> Unfortunately, I think Manheim dismisses RLHF off-handedly and rather
> summarily -- arguing that it is falsely claimed to be a way of 'aligning'
> models with human values. From what I've read it has not yet really been
> developed much in that direction. As far as I can tell, and this may
> relate to the reason why Manheim seems to reject RLHF in toto, it appears
> to be more of a 'reward proxy': a model trained on human rankings of
> outputs, whose scores are then fed back through some kind of loop to
> strongly influence future responses. Human judgment enters only in the
> 'training', not as something that a complex system can engage and debate
> with or, possibly, revise understandings over time. In Manheim's view,
> RLHF is not 'bridging' human goals and machine behavior (as it claims)
> but merely facilitating machine outputs to fit *learned preferences*.
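>
> (To make the 'reward proxy' idea concrete, here is a minimal toy sketch
> in Python -- purely my own illustration, not Manheim's proposal or any
> lab's actual implementation. A tiny linear reward model is fit to
> pairwise human rankings via a Bradley-Terry loss; afterwards its frozen
> scores, not any live human judgment, pick which output gets reinforced.
> The features, data, and numbers are all made-up assumptions.)
>
> # Toy reward-proxy loop in the spirit of RLHF (illustrative only).
> import math, random
>
> def features(text):            # hypothetical 2-feature "embedding"
>     return [len(text) / 100.0, float(text.count("!"))]
>
> def reward(w, text):           # linear reward proxy
>     return sum(wi * xi for wi, xi in zip(w, features(text)))
>
> def train_reward_model(prefs, steps=500, lr=0.1):
>     """prefs: list of (preferred_text, rejected_text) human rankings."""
>     w = [0.0, 0.0]
>     for _ in range(steps):
>         a, b = random.choice(prefs)
>         # Bradley-Terry: P(a preferred to b) = sigmoid(r(a) - r(b))
>         p = 1.0 / (1.0 + math.exp(reward(w, b) - reward(w, a)))
>         grad = 1.0 - p         # push r(a) up and r(b) down
>         fa, fb = features(a), features(b)
>         w = [wi + lr * grad * (xa - xb) for wi, xa, xb in zip(w, fa, fb)]
>     return w
>
> # Human judgment enters ONLY here, as frozen training data:
> prefs = [("A short, calm answer.", "AN UNHINGED ANSWER!!!!")] * 10
> w = train_reward_model(prefs)
>
> # Thereafter the frozen proxy, not a human, steers future outputs:
> candidates = ["A short, calm answer.", "AN UNHINGED ANSWER!!!!"]
> print(max(candidates, key=lambda t: reward(w, t)))
>
> Nothing in that loop can question, debate, or revise the rankings
> themselves, which is exactly the limitation described above.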
>
> Still, whatever else RLHF is doing that is geared specifically *toward AI
> safety*, it would likely be augmented by an understanding of Peircean
> cenoscopic science including semeiotic. I would suggest that the semeiotic
> ideas that it might most benefit from occur in the third branch of *Logic
> as Semeiotic*, namely methodology (methodeutic), perhaps in the present
> context representing, almost to a T, Peirce's alternative title, *speculative
> rhetoric*. It's in this branch of semeiotic that pragmatism
> (pragmaticism) is analyzed. There is of course much more to be said on
> methodology and theoretical rhetoric.
>
> For now, I would tweak Manheim's idea a bit and would suggest that we
> might try to move AI systems toward Peircean semeiotic rhetoric within
> communities of inquiry.
>
> Best,
>
> Gary R
>