List, I’d like to add a few comments to those already posted by Jon and Gary R 
about the Manheim paper — difficult as it is to focus on these issues given the 
awareness of what’s happening in Minnesota, Venezuela, Washington etc. (I may 
come back to that later.)

Except for the odd usage of the term “interpretant” which Jon has already 
mentioned, I think Manheim’s simplified account of Peircean semiotics is cogent 
enough. But his paper seems to get increasingly muddled in its latter half. 
For instance, the “optimism” about future AI that Jon sees in it seems 
quite equivocal to me. Having read the fine print at the end of the paper, it’s 
clear that Manheim’s article was co-written with several LLM chatbots, and I 
wonder if some of the optimism comes from them (or some of them) rather than 
from the human side.

Also, the paper makes a distinction between AI safety and the alignment 
problem, but then seems to gloss over the differences. Successful “alignment” 
is supposed to be between a super”intelligent” system and human values. One 
problem with this is that human values vary widely between different groups of 
humans, so which values is future AI supposed to align with? If present 
experience is any guide (and it better be!), clearly AI systems are going to 
align with the values of the billionaire owners of those systems (and to a 
lesser extent the programmers who work for them), which is certainly no cause 
for optimism.

I think Stanislas Dehaene’s 2020 book How We Learn deals with the deeper 
context of these issues better than Manheim and his chatbot co-authors. Its 
subtitle is Why Brains Learn Better Than Any Machine … for Now. Reducing this 
to simplest terms, it’s because brains learn from experience — “the total 
cognitive result of living,” as Peirce said* — and they do so by a scientific 
method (an algorithm, as Dehaene calls it) which is part of the genetic 
inheritance supplied by biological evolution. An absolute requirement of this 
method is what Peirce called abduction (or retroduction). 

For instance, human babies begin learning the language they are exposed to from 
birth, or even before — syntax, semantics, pragmatics and all — almost entirely 
without instruction, by a trial-and-error method. It enables them to pick up 
and remember the meaning and use of a new word from one or two encounters with 
it. LLMs have to be artificially supplied with a giant database of thousands or 
millions of symbolic texts, and it takes them months or years to build up the 
level of language competence that a human toddler has; and even then it is 
doubtful whether they understand any of it. LLM learning is entirely bottom-up 
and therefore works much slower than the holistic learning-from-experience of a 
living bodymind, even though the processing speed of a computer is much faster 
than a brain’s. (That’s why it is so much more energy-hungry than brains are.)

I can’t help thinking that all this has a bearing on the perennial question of 
whether semiosis requires life or not. I can’t help thinking that experience 
requires life, and that is what a “scientific intelligence” has to learn from — 
including whatever values it learns. It has to be embodied, and providing it 
with sensors to gather data from the external world is not enough if that 
embodiment does not have a whole world within it in continuous dialogue with 
the world without — an internal model, as I (and Dehaene and others) call it. 
But I’d better stop there, as this is getting too long already.

*The context of the Peirce quote above is here: Turning Signs 7: Experience and 
Experiment <https://gnusystems.ca/TS/xpt.htm#lgcsmtc> 

Love, gary f

Coming from the ancestral lands of the Anishinaabeg

 

From: Gary Richmond <[email protected]> 
Sent: 8-Jan-26 04:03
To: Peirce List <[email protected]>; Gary Fuhrman <[email protected]>; Jon 
Alan Schmidt <[email protected]>
Subject: AI safety and semeiotic, was, Surdity, Feeling, and Consciousness, 
was, Truth and dyadic consciousness

 

Gary F, Jon, List,

 

In the discussion of Manheim's paper I think it's important to remember that 
his concern is primarily with AI safety. Anything that would contribute to that 
safety I would wholeheartedly support. In my view, Peircean semeiotic might 
prove to be of some value in the matter, but perhaps not exactly in the way 
that Manheim is thinking of it. 

Manheim remarks that his paper does not try to settle philosophical questions 
about whether LLMs genuinely reason or only simulate thought, and that 
resolving those debates isn’t necessary for building safer general AI. I won't 
take up that claim now, but suffice it to say that I don't fully agree with it, 
especially as I continue to agree with your argument, Jon, that AI is not 
'intelligent'. Can it ever be?

What Manheim claims is necessary for AI safety is to move AI systems toward 
Peircean semiosis in the sense of their becoming 'participants' in interpretive 
processes. He holds that this is achievable through engineering and 
'capability' advances rather than "philosophical breakthroughs," though he also 
says that those advances remain insufficient on their own for safety. An 
advance that remains "insufficient on its own for full safety" sounds to me 
somewhat self-contradictory. But more importantly, he is saying that if 
there are things -- including Peircean 'things' -- that we can begin to do now 
in consideration of AI safety, then we ought to consider them, do them!

Manheim claims that AI safety depends on deliberately designing systems for 
what he calls 'grounded meaning', 'persistence across interactions' and 'shared 
semiotic communities' rather than 'isolated agents'. I would tend to strongly 
agree. In addition, AI safety requires goals that are explicitly defined but 
also open to ongoing discussion, rather than quasi-emerging implicitly from 
methods like Reinforcement Learning from Human Feedback (RLHF). Manheim seems 
to be saying that companies developing advanced AI should take steps in system 
design and goal setting -- including those mentioned above -- if safety is 
taken seriously. The choice, he says, is between ignoring the implications of 
Peircean semeiotic and continuing merely to refine current systems despite 
their deficiency vis-a-vis safety, OR embracing Peircean semiosis (whatever 
that means) and intentionally building AI as genuine 'semiotic partners'. But I 
haven't a clear notion of what he means by 'semiotic partners', nor a method 
for implementing whatever he does have in mind.

I think Manheim unfortunately dismisses RLHF rather off-handedly and summarily 
-- RLHF being, he argues, falsely claimed as a way of 'aligning' models with 
human values. From what I've read it has not yet really been developed much in 
that direction. As far as I can tell, and this may relate to the reason why 
Manheim seems to reject RLHF in toto, it appears to be more a 'reward proxy' 
trained on human rankings of outputs, which are then fed back through some kind 
of loop to strongly influence future responses. Human judgment enters only in 
the 'training', not as something that a complex system can engage with and 
debate with or, possibly, use to revise understandings over time. In Manheim's 
view, RLHF is not 'bridging' human goals and machine behavior (as it claims) 
but merely facilitating machine outputs to fit learned preferences.
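To make that 'reward proxy' loop concrete: the sketch below is only a toy 
illustration of the general idea (the function names and the tiny linear model 
are my own invention, not any lab's actual pipeline). Human rankers' pairwise 
preferences train a scoring function once; thereafter that frozen score, not 
any further human dialogue, steers which outputs the system favors.

```python
# Toy sketch of the RLHF 'reward proxy' idea (illustrative names and data):
# human pairwise rankings train a scoring function, which then steers
# future outputs with no further human engagement in the loop.

def features(text):
    """Bag-of-words feature counts for a candidate output."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def score(weights, text):
    """The reward proxy: a learned linear score over word features."""
    return sum(weights.get(w, 0.0) * c for w, c in features(text).items())

def train_reward_proxy(preferences, epochs=20, lr=0.1):
    """Learn weights from human rankings of (preferred, rejected) pairs.
    This training phase is the ONLY point where human judgment enters."""
    weights = {}
    for _ in range(epochs):
        for preferred, rejected in preferences:
            if score(weights, preferred) <= score(weights, rejected):
                for w, c in features(preferred).items():
                    weights[w] = weights.get(w, 0.0) + lr * c
                for w, c in features(rejected).items():
                    weights[w] = weights.get(w, 0.0) - lr * c
    return weights

def choose_output(weights, candidates):
    """The feedback loop: future responses are steered toward whatever
    the frozen proxy happens to reward, fitting learned preferences."""
    return max(candidates, key=lambda text: score(weights, text))

# Rankers once preferred a hedged answer over an overconfident one...
prefs = [("i am not certain but here is my best guess",
          "the answer is definitely this trust me")]
w = train_reward_proxy(prefs)
# ...so the system now favors hedged phrasing, with no human in the loop.
print(choose_output(w, ["the answer is definitely this trust me",
                        "i am not certain but here is my best guess"]))
```

The point of the sketch is visible in its structure: human values appear only 
as frozen training data for the proxy, never as a party the system can debate 
with or learn from over time, which seems to be exactly Manheim's complaint.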

Still, whatever else RLHF is doing that is geared specifically toward AI 
safety, it would likely be augmented by an understanding of Peircean cenoscopic 
science including semeiotic. I would suggest that the semeiotic ideas that it 
might most benefit from occur in the third branch of Logic as Semeiotic, namely 
methodology (methodeutic), perhaps in the present context representing, almost 
to a T, Peirce's alternative title, speculative rhetoric. It's in this branch 
of semeiotic that pragmatism (pragmaticism) is analyzed. There is of course 
much more to be said on methodology and theoretical rhetoric. 

For now, I would tweak Manheim's idea a bit and would suggest that we might try 
to move AI systems toward Peircean semeiotic rhetoric within communities of 
inquiry. 

Best,

Gary R

 

On Tue, Jan 6, 2026 at 9:55 AM <[email protected]> wrote:

Gary R, list,

I’d like to pick up where your post ended, Gary:

GR: Can we say that consciousness is, in its 'primitive' form, surd, while mind 
in its fullest sense is semiosic, that consciousness offers the brute 'given' 
of existence, while thought supplies the purposive, essentially semiotic 
structure that consciousness itself cannot provide?

GF: Peirce’s remarks about surdity are mostly in the context of Secondness, 
i.e. dyadic relations. I would say it is the dyadic consciousness that “offers 
the brute ‘given’ of existence,” and that’s why “real relations” — genuine 
indexicality — are necessary to ground the ability of triadic or semiotic 
relations to convey information, or for a proposition to be true. 
Phaneroscopically, the “pure consciousness” of Firstness lacks that grounding. 
But this never happens in (what we call) reality! For the sense or feeling of 
reality to occur in perception, something must be “present to the mind” and 
other than the mind, external to it or independent of it, in order for words 
like “true” or “real” to have any meaning. 

So you can’t have genuine Thirdness without genuine Secondness. Or to put it 
physiologically, both the perceived object and the perceiver must be embodied, 
and the quality of the experience is determined by the real relation between 
the two. The percept is not just a representation of the object, nor is it just 
an artifact of internal activity within the perceiving mind or brain.

It feels rather odd even to be writing this, because it seems so obvious and 
yet feels like beating around the bush when I try to express it verbally. 

However I just came across an open access article which strikes me as an 
important application of it. It’s in Philosophy & Technology (2026) 39:9,

https://doi.org/10.1007/s13347-025-00975-5, “Language Models’ Hall of Mirrors 
Problem: Why AI Alignment Requires Peircean Semiosis,” by David Manheim. Here 
is the abstract:

[[ This paper examines some limitations of large language models (LLMs) through 
the framework of Peircean semiotics. We argue that basic LLMs exist within a 
“hall of mirrors,” reflecting only the linguistic surface of training data 
without indexical grounding in a shared external world, and manipulating 
symbols without participation in socially-mediated epistemology. We then argue 
that newer developments, including extended context windows, persistent memory, 
and mediated interactions with reality, are moving towards making newer 
Artificial Intelligence (AI) systems into genuine Peircean interpretants, and 
conclude that LLMs may be approaching this goal, and we identify no fundamental 
architectural barriers that would prevent this. This lens reframes a central 
challenge for AI alignment: without grounding in the semiotic process, a 
model’s linguistic encoding of goals may diverge from real-world values. By 
synthesizing Peirce’s pragmatic view of signs, contemporary discussions of AI 
alignment, and recent work on relational realism, we illustrate a fundamental 
epistemological and practical challenge to AI safety and point to part of a 
solution. ]]

Love, gary f

Coming from the ancestral lands of the Anishinaabeg

 

From: [email protected] <mailto:[email protected]>  
<[email protected] <mailto:[email protected]> > On Behalf 
Of Gary Richmond
Sent: 2-Jan-26 00:29
To: Peirce List <[email protected] <mailto:[email protected]> >; Gary 
Fuhrman <[email protected] <mailto:[email protected]> >
Subject: [PEIRCE-L] Surdity, Feeling, and Consciousness, was, Truth and dyadic 
consciousnessg

 

Gary F, List,

The material you linked to in Turning Signs on 'consciousness' and 'feeling' is 
thought-provoking, especially your comments on 'surdity' in relation to 
'feeling' and 'consciousness'. I have changed the subject line to show my 
emphasis on surdity. In TS you write:

GF: The truth of a proposition depends on the dyadic or real relation (as 
opposed to a relation of reason) between that sign and its dynamic object. It 
must involve ‘action of brute force, physical or psychical,’ of the dynamic 
object upon the sign, so that the relation between the two is ‘real,’ i.e. surd 
– no sign can express or describe it.


GR: I agree that the real relation between a sign and its object is surd, a 
concept I'd like to explore a bit in this post. You continued:

GF: The Secondness of experience itself is a dyadic relation or dynamic action 
between two subjects; it is ‘brute,’ ‘surd,’ indicible, ineffable. But a 
subject capable of both attention and intention can become a host, as it were, 
of Thirdness, or semiosis, so that another subject can become an ‘object of 
thought’ (Peirce, CP 1.343, 1903). Now we have a triadic relation involving the 
object, the sign or ‘thought,’ and the experiencing subject, the system of 
interpretation or ‘mind.’ This is the essential structure of mental experience. 
The 1ns of experience would be the pure feeling that there is something other 
than feeling itself – a world appearing to the subject, and thus becoming an 
object of attention (emphasis added by GR).


This reminded me that Peirce remarked -- and Joe Ransdell emphasized this point 
in several of his papers, on Peirce-L, and in private conversations -- that 
there can be no pure icon, which is to say that a qualisign must be embodied in 
a sinsign in order to be operative at all. So, there are no pure qualisigns in 
actual semiosis, only qualitative aspects of sinsigns: a qualisign is a 1ns 
that can signify only by being instantiated as a sinsign and interpreted via a 
legisign.

 At bottom consciousness is tied to what is immediate and irreducible, and this 
is linked to surdity. Consciousness is feeling:  “. . . consciousness is 
nothing but Feeling, in general” (CP 7.365). Feeling is simple, immediate, yet 
qualitative. It can't infer, intend, or mean anything as it is non-relational 
and non-purposive. In this primitive (primary?) sense, then, consciousness 
lacks any internal rational structure: it simply is what it is. This primary 
consciousness is pure 1ns, a qualitative 'given' of unmediated feeling. This is 
to say that it is not a sign at all (which, btw, contradicts the pansemiotic 
views of some theorists).


Peirce is careful not to conflate surdity with mentality, so he distinguishes 
consciousness from mind: consciousness belongs to 1ns as feeling, while mind 
belongs to 3ns in semiosis. “The mediate element of experience is the mental 
element, which is semiosic but not necessarily conscious” (CP 7.366). Mentality 
does not essentially require consciousness since semiosis can -- and often does 
-- proceed without awareness (the famous example of the growth of a crystal; 
but there are many others). What defines mind isn't consciousness itself but, 
rather, final causation, purpose, mind being a living complexus of signs, 
habits, and consequent effects which these produce. 


In an essential sense, surdity would seem to take on the form of brute reaction 
(2ns) which may or may not be conscious. For example, the pain of touching a 
hot burner is a conscious instance of 2ns, while the reflex of removing the 
hand from the source of the pain isn't. Can one say that surdity lies in the 
brute action/reaction itself, while consciousness accompanies surdity only when 
feeling is present? In contrast, 3ns interprets brute facts and, over time, can 
make experience intelligible. Yet, I firmly believe that 3ns cannot eliminate 
surdity without cutting thought off from brute actuality, the hic et nunc. So 
Peirce argues that thought requires resistance, and that signs require dynamic 
objects as a kind of check on thought. 

 

Consciousness, then, is the ground upon which both semiosis and thought 
operate. 

This would seem to resolve an apparent contradiction in Peirce’s use of the 
term 'consciousness.' In its primary sense, consciousness is feeling: surd, 
immediate, and so categorially 1ns. In a looser sense, Peirce speaks of “three 
modes of consciousness” (CP 8.256): the awareness of feeling, action, and law. 
I would say that this usage represents, at best, a secondary, mediated 
consciousness.

 

Can we say that consciousness is, in its 'primitive' form, surd, while mind in 
its fullest sense is semiosic, that consciousness offers the brute 'given' of 
existence, while thought supplies the purposive, essentially semiotic structure 
that consciousness itself cannot provide?

 

Best,

 

Gary R

 

_ _ _ _ _ _ _ _ _ _
► PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON 
PEIRCE-L to this message. PEIRCE-L posts should go to [email protected] .
► To UNSUBSCRIBE FROM PEIRCE-L, send a message to [email protected] . 
But, if your subscribed email account is not your default email account, then 
go to https://list.iu.edu/sympa/signoff/peirce-l .
► PEIRCE-L is owned by THE PEIRCE GROUP; moderated by Gary Richmond; and 
co-managed by him and Ben Udell.