I know next to nothing about computer languages, but I'd like to give this a shot. Let 
me frame it in terms of the idea that, among the perspectives David refers to, we seem 
to have not just two or even three, but four. (This will seem Peircean, except that I 
see no point in treating collateral observation of a sign's object as if it added 
nothing distinct to the basic structure of objects, their signs, & interpretations of 
signs.)

*1. The signs' objects.* Here one is in a position analogous to that of a pure 
mathematician: it's a question of saying which is or isn't which, & which is a 
combination of which & which, etc. One builds equational & reversibly deductive paths 
between various objects, as between various aspects under which there appears what is 
really one & the same object. We have signs which are treated as aspects of the object. 
In distinguishing appearances or signs of the same object, we treat the varied 
appearances as objects themselves.
*2. The signs comprising a language.* Here one is in a position analogous to that of a 
logician or a probability theorist, insofar as one is concerned to encode only 
information & novelty, & not everything about the object(s). E.g., All A is B & All B 
is C, ergo (& here's what's brought to light) All A is C. The possibility of doubting 
our assumptions & premisses arises in the context of the possibility of faulty 
deductions, but also more generally. We need to be able to treat our signs as objects 
themselves, such that the signs form a distinct system of objects with "their own 
reality" -- schemata & forms.
*3. A metalanguage,* which relates signs to objects via translating the signs into more 
signs (or, trivially for most purposes, even into the same signs; e.g., "snow is white" 
is a sign that snow is white). Normally we think of this as even more abstract 
than the 1st-level language. But if, at this stage, one is decoding or reconstructing 
the message without knowing the parameters of the total set of messages, one is in a 
position analogous to that of an inferential statistician or whoever else has to 
proceed by careful induction. But often enough one can't thus make sense of the signs, 
& must run tests & conduct observations.
*4. Collateral observations,* collateral, that is, to the sign or sign system that one 
is immediately concerned with, & which test or support interpretations of signs as 
representatives of objects. I.e., I may need to test or be already familiar with the 
fact that "Schnee ist weiss" is a sign that snow is white, or that, for its part, 
"snow is white" means the same thing as some white-snow imagery in my head -- to know 
what all this is about, I need sooner or later to observe snow. Not being a vegetable, 
I test & learn. Here one is in a position analogous to that of an empirical scientist 
& must resort to surmise & inference to the best testable explanation -- I need to be 
able to make a leap which my premisses don't deductively imply, & by which my 
premisses may not be deductively implied in all strictness, & which may lead to at 
least a slight adjustment of the premisses from which I made the leap. This means I must 
be able to suspect & doubt. Since as a practical matter our 
assumptions & premisses may be wrong, I need to be able to doubt them, or a portion of 
them in a given case; I need to not be a dogmatist but rather to believe that I am 
fallible; I also need to believe that I am not doomed to thorough error & that there 
is something independently real for me to know. I don't think we need our deductive 
reasonings to be fallible (though we need to recognize their fallibility insofar as we 
don't always deduce so well), & I think it would most likely be helpful if we had 
stronger inborn computational abilities. Rather, we need to be able to 
recognize that our premisses are fallible. So if there is an underlying deductive 
algorithm that guides my apparently non-deductive thought process, there is at least a 
relative uncertainty introduced by my making my thinking dependent on answers that I 
don't already have, but which I find by interactively asking questions of, & seeking 
the collaboration of, things outside myself. Of course, one doesn't have to 
be pursuing an empirical question in order to fruitfully be guided in this way; if the 
answer is already implicit in one's premisses but the deductive path remains obscure, 
then one at that point has the answer only indirectly in oneself.

- Ben Udell

----- Original Message ----- 
From: "David Barrett-Lennard" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, January 19, 2004 11:43 AM
Subject: Are conscious beings always fallible?


I'm wondering whether the following demonstrates that a computer that can only 
generate "thoughts" which are sentences derivable from some underlying axioms (and 
therefore can only generate "true" thoughts) is unable to think.

This is based on the fact that a formal system can't understand sentences written down 
within that formal system (forgive me if I've worded this badly). For example, the 
expression "3+4" is both equal and not equal to "7" depending on your point of view.  
If a mathematician was wearing special glasses that mapped every expression into the 
object represented by that expression, she couldn't possibly make sense of what she 
was doing.  Eg "3+4=7" wouldn't even appear as "7=7",  but simply "true".   An 
interesting theorem would be replaced by "true" which is not very interesting!
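
To make the "special glasses" concrete, here is a rough Python sketch of what I mean 
(purely illustrative; I'm using Python's == for equality):

    expression = "3+4 == 7"       # the sentence, as a piece of syntax
    print(expression)             # -> 3+4 == 7   (still says something)
    print(eval(expression))       # -> True       (the "glasses" view: just "true")
    # Even the intermediate reading "7 == 7" never appears; evaluation jumps
    # straight from the sentence to its truth value.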

When a mathematician thinks about the expression "3+4=7",  there seems to be a duality 
of perspectives that must be juggled simultaneously. I think this duality is implicit 
in thinking.  It seems to be associated with the relationship between a language and a 
meta-language.

There are some good theorem proving programs around today.  The output (and internal 
trace) of these programs always corresponds to true sentences, making them infallible 
(but only in the sense of never making a mistake). However, every step is based on 
rules expressed in a meta-language, where the underlying mathematical truth seems to be 
isolated to the computer scientist who wrote the program.

Consider that a computer has a rule "It is a good idea to simplify 3+4 to 7".  If this 
rule is applied to itself, the semantics of the rule is changed. Clearly 3+4 is not 
equal to 7, for the purposes of the meta-language used to represent both the 
computer's line of reasoning and the rules that it uses to advance its line of 
reasoning.
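
A rough Python sketch of this object-level / meta-level split (again, purely 
illustrative):

    print(3 + 4 == 7)            # object level: True, the same number
    print("3+4" == "7")          # meta level: False, different expressions

    rule = 'It is a good idea to simplify "3+4" to "7"'
    # Applying the rule to its own text rewrites the rule itself, changing
    # what the rule says rather than what it is about:
    print(rule.replace("3+4", "7"))   # -> It is a good idea to simplify "7" to "7"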

This suggests that thoughts must be expressed in a meta-language that can't directly 
be modelled on productions from a set of "true" axioms.  Does this demonstrate that 
any conscious being must be fallible?

What if we allow an expression to have sub-expressions *quoted* within it?  E.g. we may 
write a rule like this:

It is a good idea to simplify "3+4" to "7"

Somehow we would need to support free parameters within quoted expressions.
E.g. to specify the rule:

It is a good idea to simplify "x+0" to "x"

It is not clear that language reflection can be supported in a completely general way. 
 If it can, does this eliminate the need for a meta-language? How does this relate to 
the claim above?

- David
