On 21 July 2006, at 08:05, Russell Standish wrote:

> On Sun, Jul 23, 2006 at 04:38:01PM +0200, Bruno Marchal wrote:
>> Functionalism is the same as comp, except that functionalists
>> traditionally presuppose some knowable high level of substitution (and
>> then, like materialists, presuppose a physical stuffy level).
>> So I would say comp is just the "old functionalism" corrected for
>> taking the UDA consequence into account (the level of substitution is
>> unknowable, and physical stuff is either contradictory, or devoid of
>> explanatory power and redundant).
> Hmm - you use the term functionalism quite differently to my
> understanding. My take is that functionalism implies if you replace
> the parts of my brain with things which were functionally equivalent,
> you would end up with a copy of my consciousness. The description
> given by Janet Levin on plato.stanford.edu seems to be in agreement
> with this notion (even though she uses different words).

I use functionalism in some standard sense. If you know an explicit
non-comp author defending functionalism, I would be interested in the
reference. Most functionalists I know accept Dennett's idea that a
theory explaining brain and consciousness should not rely on either
conscious or intelligent sub-entities, nor (unless explicitly so, like
Penrose did in some writings, but few would call those writings
"functionalist") on entities more complex than the overall structure
under study. But OK, this is just terminology, and I am not dogmatic
about it except for preventing misunderstanding.

> Nowhere in this discussion is an assumption of a level of
> substitution,

Yes, apparently I am the one to introduce this, and to insist that such
a level cannot be known. Note that Maudlin made an allusion to such
"unbounded" comp and argues this makes comp obviously true. My work
illustrates this is not the case, even if it forces, strictly speaking,
a non-computationalist to postulate some actual infinity in the
"generalized brain". Again, I know only Penrose as defending such a
non-comp view. I take this as a courageous attitude, and as a symptom
that some physicists can take the mind/body problem seriously. It is
just a pity that he got his interpretation of Gödel's theorem wrong,
especially after Judson Webb wrote his book (ref in my URL).

> nor of stuffy matter.

Many scientists take "stuffy matter" for granted, so it is true that it
is very rare that people mention the hypothesis of stuffy matter
explicitly.

> Suppose I had a non Turing-emulable soul, composed of identical non
> Turing-emulable particles called "soulons". Functionalism would imply
> I can copy my brain by adding in an appropriate arrangement of
> physical particles, as well as an appropriate arrangement of
> soulons. Yet, by construction, this theory is not computationalist!
> So I stand by my remarks that computationalism is a specialised
> variant of functionalism.

I am afraid your soulons will just have this application: to give a
non-standard meaning to functionalism. Even a strict Catholic can be a
functionalist in that sense: just imagine that the soulons, thanks to
their non-comp (by construction) feature, are connected to "Descartes'
pineal gland".

>> It depends what you put in the "B". It is indeed a sort of scientific
>> knowledge when starting with B = the provability predicate of some
>> fixed theory like Peano arithmetic, but such a theory can
>> (autonomously) transcend itself in the (constructive) transfinite, and
>> the "arithmetical" meaning of "B" will evolve, leaving invariant the
>> modal logics G, G*, S4Grz, ...
>> Then the justification is that it works. It gives an unnameable
>> creative subject which lives in a non-describable temporal structure,
>> etc. You can take this as a simplification. With comp the simple first
>> person already leads to a notion of arithmetical quantization. Then
>> sensible matter is also given by adding "& p", but on "Bp & Dp", ...
> I can (sort of) see this. However, it is only one model, and not even
> a terribly convincing one (to me at least).

I don't think it is a model. Once we say yes to the doctor, it is
normal to be interested in what any machine (perhaps ideally correct)
can prove about herself, and then we inherit the nuances *forced* by
incompleteness. We just cannot throw them in the trash, and then it is
just amazing news that they behave as we were expecting. I have
worked with more complex definitions (based on Kleene realizability and
on Hyland's effective topos) until I discovered that the variants of
provability were quite enough for distinguishing, in number-theoretic
terms, the notions of persons corresponding to their use in the UDA.
This does not preclude more fine-grained "models" of course, but let us
first extract all the juice from the simpler idea. No?
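As an aside, for readers following along: the logic G mentioned in this thread is the standard provability logic (often written GL), and its modal laws are exactly what incompleteness forces on any correct machine. Nothing here is new, just the textbook axiomatization:

```latex
% The provability logic G (GL), with B read as "provable":
%   K (distribution):  B(p \to q) \to (Bp \to Bq)
%   L (Löb's axiom):   B(Bp \to p) \to Bp
% Rules: modus ponens, and necessitation (from p, infer Bp).
% G* extends G with the reflection schema Bp \to p (true about the
% machine, but unprovable by the machine), and drops necessitation.
```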

> Do you have any uniqueness
> results showing that the &p is necessary for obtaining the unnameable
> creative subject or the temporality?

For Bp & p, I mentioned the "Artemov" appendix of my text
"Conscience et Mécanisme", here:

>> Except that Dp always entails ~BDp (by second incompleteness). This
>> would make your refutability notion much too large.
> Oh, well another idea bites the dust!
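(For the record, the derivation behind "Dp always entails ~BDp" is short; writing Dp for ~B~p, it is a theorem of G:)

```latex
% In G:  Dp \to \neg BDp   (a formalized second incompleteness theorem)
% 1. \neg B\neg p \to (B\neg p \to \neg p)           propositional tautology
% 2. B\neg B\neg p \to B(B\neg p \to \neg p)         necessitation + K on 1
% 3. B(B\neg p \to \neg p) \to B\neg p               Löb's axiom (on \neg p)
% 4. BDp \to B\neg p                                 from 2, 3; Dp := \neg B\neg p
% 5. Dp \wedge BDp \to \neg B\neg p \wedge B\neg p   from 4 and def. of Dp
% 6. Dp \to \neg BDp                                 from 5, by contradiction
```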
>>> Anyway, that's by the bye. If I accept the Theaetetical notion for the
>>> sake of argument (since I can see how it might work for mathematical
>>> knowledge), I still struggle to see how the "&p" part leads to self
>>> awareness.
>> To be just a little bit more specific, "Bp" is 3-self-referential (the
>> machine proves correct propositions about its own third person
>> descriptions, made at some level correctly chosen in a serendipitous
>> way).
>> But by adding "& p", by a theorem similar to Tarski's theorem, we are
>> led to a first person self-reference (Bp & p) without any nameable
>> subject. It is the "I" which has no name. That "I", somehow, could
>> correctly say about himself that he is not a program, that he is not
>> duplicable (and indeed the first person is not duplicable from its
>> first person point of view, despite Chalmers).
> You would need to be more specific in your claims, but that would
> probably be the subject of a full scientific paper, and perhaps you
> are only speculating at present anyway. I will need to be patient.

The whole thing is already in my SANE paper:


> But even so, I don't see anywhere the necessity of 1st person
> self-awareness, which is what I was driving at.

What difference do you make between self-awareness and consciousness?
At this stage this is, imo, a 1004 fallacy.
I find it remarkable that the "Bp & p" variant of B gives a theory of
consciousness isomorphic to Brouwer's theory of consciousness:
- the subject feels itself not to be a machine, nor anything formal;
- the subject has no name;
- the subject defines an intrinsic intuitionistic logic;
- the subject is related to a temporal, irreversible, branching
multiverse; etc.
Then with comp we get the shadows of the quantum.
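To fix the notation used in this thread, here, roughly, are the variants of B and their intended readings (the precise logics and motivations are in the SANE paper):

```latex
% Variants of B (with Dp abbreviating \neg B\neg p), roughly:
%   Bp                      third person (provability; logics G, G*)
%   Bp \wedge p             first person (knowledge; logic S4Grz)
%   Bp \wedge Dp            intelligible matter (the "quantization")
%   Bp \wedge Dp \wedge p   sensible matter
```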

> I wish I shared your certainty that any n-person POV can be captured
> by means of a modal logic. But I don't. All I can say is that I find
> it unconvincing, whilst admitting that perhaps you have a point.

Modal logic only simplifies the assertions of the provability logic.
The first version of my work was completed in the seventies without any
use of modal logic. The important things are the provability logics and
their variants. I could have used only formulas of arithmetic. The
modalities appear for purely number-theoretical reasons. It is not
something we have to introduce. We can hide them by working with the
pure language of arithmetic, but this makes things less easy to study.
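Concretely, "hiding" the modality means using Gödel's arithmetical provability predicate in its place:

```latex
% Bp abbreviates the arithmetical sentence
%   \mathrm{Bew}(\ulcorner p \urcorner) \;\equiv\;
%   \exists y\, \mathrm{Proof}(y, \ulcorner p \urcorner)
% where Proof(y, x) is the decidable relation "y codes a proof, in Peano
% arithmetic, of the formula coded by x", and \ulcorner p \urcorner is
% the Gödel number of p. Solovay's theorem (1976) says that G is exactly
% the modal logic validated by this predicate.
```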
... will say more in the road map ...



You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
