Gary F, list,

>[Gary] I've been following this thread with great interest -- "following" in 
>the sense that it's always a step or two ahead of me! But i'd like to insert 
>something with reference to Ben's question about words like "not," "probably," 
>"if," etc.

>[Gary] I don't think it is helpful to consider such words as signs; rather 
>they constitute part of a sign's internal structure, the sign proper being a 
>statement, sentence, or proposition -- or minimally, what Peirce calls a 
>"term". In linguistics, words like "if" are sometimes called "structure" words 
>as opposed to "content" words, a distinction that is sharper than it may 
>appear at first glance. Structure words, such as conjunctions, appear in 
>closed classes with a relatively small membership. In English, for instance, 
>there are probably less than a hundred prepositions (even counting those no 
>longer in current use), and the addition of a new preposition to the language 
>is extremely rare, compared to the frequency with which we add new nouns, 
>verbs and adjectives (those being open classes).

The "structure" words sound like that which Jon Awbrey once quoted Peirce 
calling "pure symbols" -- "and, or, of." The paucity of structure words, 
especially those of the syntactical kind which I've been discussing, is quite 
understandable. In one or another old file marked "Don't Look! (please look)," 
some of us have sets of invented syntactical words, and if you have such a 
file, then you know how difficult it is actually to use them, even privately. 
Playing with the skeletal system of ordinary language is uncomfortable. Some 
languages, like German, don't even regularly form distinct adverbs. We're 
likelier to invent 
exactly defined syntactical written symbols (like the arrow) than words, and 
otherwise we make do with abstractions. 

Signs are built into complex signs, and it wouldn't be helpful to posit a 
level of internal structure which semiotics can reach only by dispensing with 
its usual conceptions. As the more complex signs are built, those "internal" 
structural links are expanded; they don't stay out of sight. Representational 
relations are internal, in a way; or if representational relations are an 
external character or effect of a sign, then the qualities (which they 
alternate, attribute, impute, etc.) are internal characters, internal 
"resources" of a sign, which sounds good, since now it sounds like I'm 
describing symbol and icon, respectively, in a reasonably recognizable way. One 
way or another, each is the other turned inside out, like probability and 
statistics, or like linear energy and rest mass. The more habitually we divide 
them, the more we make it take a person with crazy hair to reunite them.

Now, Peirce has already included representational (logical) relations as a 
fundamental category. And he has a class of signs -- symbols -- which represent 
by reference to representational relations embodied as an interpretant. Symbols 
are amazingly versatile and can represent abundant objects and qualities. I 
don't see why we can't regard them as sometimes directly representing 
representational relations as well, rather than treating representational 
relations as some sort of virtual particles to be barely glimpsed in the midst 
of other goings-on.  I've been discussing the "not," "if," etc., as pretty 
straightforward generalized ways of altering (not merely modifying) 
comprehension and discussing the symbol as pretty much telling the interpretant 
to negate, probabilize, logically condition, etc., a given predicate or 
proposition. The symbol does so as representing, and determined by, its object. 
And I think that Jim got it right with what I called his treating "not" as an 
elliptical "not...." Once we apply "not" to "blue," we have a comprehension and 
denotation for the new predicate "not blue." But we don't have a way to 
describe the representational contribution of the "not" itself. Now, I'm not 
against looking at classes and all that, but I'd like the description to be 
true to the experience that I have when I simply say the word "not." I'm not 
sure how to see this as some sort of 2nd-order comprehension or denotation, and 
I think of it as a kind of "transcomprehensioning," which sounds 2nd-orderish 
or 2nd-intentional, but "not" remains a 1st-order term indispensable at any 
level (or you could make do with "not both...and..." but in the end it's the 
same thing).
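(A toy illustration of that last point, my own sketch in Python and nothing 
from Peirce -- predicate names like is_blue are inventions for the example: 
"not" can be recovered from "not both...and...", and applying it to "blue" 
yields the new predicate "not blue.")

    # Toy sketch: "not" recovered from "not both ... and ..." (NAND).
    def nand(p, q):
        return not (p and q)

    def negate(p):
        # "not p" expressed purely as "not both p and p"
        return nand(p, p)

    # The two agree for both truth values.
    for p in (True, False):
        assert negate(p) == (not p)

    # Applying "not" to a predicate such as "blue" yields a new predicate,
    # "not blue," with its own comprehension and denotation.
    def is_blue(x):
        return x == "sky"          # purely illustrative predicate

    def is_not_blue(x):
        return negate(is_blue(x))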

>[Gary] Another relevant distinction from linguistics is between semantics and 
>syntax. If we want to study what (or how) closed-class words mean, then we 
>have to focus mainly on syntax, or the structure of utterances as determined 
>not by objects denoted or qualities signified but by the structure of the 
>language itself. 

That's another reason to regard signs for representational relations as 
symbols. The symbol is a sign defined by its effect on the interpretant in 
virtue only of an established rule or habit (e.g., that of an animal species 
or a human culture); a rule or habit of treating an icon as an icon or an 
index as an index doesn't count toward making it a symbol. So when the 
symbol's purpose is to contribute a representational relation, then the circle 
just gets drawn 
somewhat smaller. Now these sound like the "pure symbols" that were a source of 
much argument here a while back. I don't think that a mind, or anything which 
could be called a sub-mind (in a dialogical sense), could get by (though some 
algebraists supposedly don't do so badly) purely on symbols, let alone purely 
on "pure" symbols.

>[Gary] Having said that, though, i think the line between syntactic and 
>semantic has become fuzzier in recent decades, for instance in Leonard Talmy's 
>work in cognitive semantics. He's shown how prepositions (for instance) not 
>only lend structure to utterances but also reveal conceptual structures which 
>are very deep aspects of meaning. And as i think Jim suggested, those aspects 
>are most easily specified in terms of relations between objects.

I'm not against people casting things as abstract objects, but a question of 
philosophical interest is what the real subject matter and guiding research 
interest are. Probability theory deals with distributions of properties, 
events, etc., modifications of one kind or another, and it does this with 
extensionally defined sets. But in the end it is interested in attributions of 
predicates, valuations of propositions, etc., and not with object mappings in a 
pure-mathematical sense (despite its using extensionally defined sets). 
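(A toy sketch of what I mean, my own example only: the event is handled as an 
extensionally defined set, but what the calculation is after is how probable 
the attribution "is blue" is.)

    # Toy sketch: extensionally defined event vs. attribution of a predicate.
    outcomes = ["red", "blue", "blue", "green", "blue", "red"]  # toy, equiprobable

    # Extensionally: the event is just the subset satisfying the predicate.
    event = [x for x in outcomes if x == "blue"]

    # What we actually care about: the probability of the attribution "is blue."
    p_blue = len(event) / len(outcomes)
    print(p_blue)   # -> 0.5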

I haven't heard of Talmy's work, but it sounds interesting; I'll have to look 
it up. 

Prepositions "at," "in," "outside," "to the left of," and of course the 
endlessly useful "of," often have to do with mapping objects with regard to one 
another. 

(One of the interesting things about the idea of a mathematical functor is that 
it piggybacks on the preposition "of", so that we have a unitary form 
signifying, e.g., "triple-of." This seldom happens in ordinary language, though 
you can say "twice seven" and "thrice eight" etc.; in English we can sometimes 
combine the preposition with the prepositional object, so that "of France" 
becomes "France's" as in "France's capital" and we can say it as three elements 
"capital  of  France"; but you won't get a word-form that means "capital-of" in 
English or, I think, any European language). 
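(A toy sketch of that unitary form, my own illustration with a made-up little 
table: in a programming language the "capital-of" and "triple-of" forms are 
perfectly ordinary, which is part of what makes the functor idea feel natural 
there.)

    # Toy sketch: unitary "capital-of" and "triple-of" forms.
    capital_of = {              # illustrative table, not exhaustive
        "France": "Paris",
        "Japan": "Tokyo",
    }

    print(capital_of["France"])   # "France's capital" in one functor-like form -> Paris

    def triple_of(n):
        return 3 * n

    print(triple_of(7))           # "thrice seven" in one form -> 21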

Anyway, here we find a whole further kind of representation. Instead of 
altering the predicates one is altering the subjects -- (re-)amassing them, 
(re)sequencing or (re)arranging them, (re)sorting them out into various 
equivalence classes, and (re)ranking them under some standard of comparison. 
Peircean semiotics treats these relationships as diagrammatic, also uses 
diagrams to represent logical relations, and classes the diagrams as icons. 
Now, there's a very big difference between, on the one hand, a surface quality 
or semblance, where we don't discount hidden depths but instead taste and 
sample those samples which, in a sense, the surfaces and appearances are, in 
order to reach inductive conclusions about the material depths (and you can't 
always tell a book by its cover), and, on the other hand, a diagram: a diagram 
may look 
utterly unlike its object(s), yet in virtue of an isomorphism or equivalence of 
some kind it serves as a sign fit for experimentation, interaction, 
"decision-making," on behalf of its object, such that what's proven about the 
diagram is proven about the object. Now, between a mathematical diagram and its 
object(s), an equivalence must be established which renders the diagram 
suitable for such treatment as a kind of proxy agent for its object. And that 
equivalence means that the proxy must act not at its own discretion but instead 
according to rules, somewhat as a lawyer represents the legal interests of his 
client. Now since a purpose of pure maths is to come up with easier or more 
powerful forms in which to deal with problems, and since this may involve 
developing an equivalent diagram which looks VERY different from its object(s), 
and since the salient constraint is some sort of equivalence to its object and 
object-observational legitimacy, there is no apparent reason not to define the 
diagram or, more generally, a "proxy," by that very constraint and drop the 
iconicity characterization. This would be a sign defined by its legitimacy 
and authority, by how it would hold up under, support, and deserve recognition 
by an observer collaterally observing the object(s). Now, I regard this as an 
even 
more "logical" kind of relationship than those involving symbols, 
interpretants, etc., because it has more to do with legitimacy, soundness, 
proofs, things like that. Signs and interpretants generally contain no 
transferable familiarity with their object and have no automatic authority; 
they're merely representations and construals of representations; what 
authority they have is simply the recognition which they would merit from an 
observer collaterally observing their object.
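(One toy illustration of the proxy point, my own and not Peirce's: logarithms 
give a "diagram" that looks nothing like its object, yet whatever is 
established about sums of logs transfers back to products of positive 
numbers.)

    # Toy sketch: work in the "diagram" (sums of logs) as a proxy for the
    # object (products of positive numbers); the equivalence carries the
    # result back.
    import math

    a, b = 8.0, 32.0

    in_diagram = math.log(a) + math.log(b)        # experiment on the diagram

    assert math.isclose(math.exp(in_diagram), a * b)   # translates back to the object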

Best, Ben

