Re: On the computability of consciousness

2010-03-20 Thread David Nyman
On 20 March 2010 18:22, Bruno Marchal  wrote:

> Well, if by 3-p Chalmers you mean some 'body', such a body *is* a zombie.
> The 1-p Chalmers is Chalmers, the person. Its body does not think, but raises
> the probability that the 1-p thoughts refer to the most probable
> computations you are sharing with him.

Well, if its body does not think (which of course Chalmers assumes
that it does, even though he says from his epiphenomenalist-dualist
standpoint that this does not logically entail consciousness), just
how does it increase the probabilities in the way you say above?  IOW,
what is the systematic correlation supposed to be between the
"physical events" in its brain and the 1-p thoughts of 1-p Chalmers?
This, after all, is a major aspect of  the mind<-->body problem, and
it's one thing for the explanation to be counter-intuitive, but right
now I'm not sure I could claim any firm intuition about it at all.

Let me try to tease this out, giving your words, as you say, their
most favourable interpretation - for me, that is.  By "the
computations you are sharing with him" I assume you to refer to the
1-p-plural computations (as you reserve 3-p for the arithmetical
reality).  IOW the UD generates (amongst everything else) the
1-p-plural appearances that constitute all possible perceptions of our
"shared environment" in all its possible extensions.  Included in
these, of course, are our bodies, and we expect - pace white rabbits -
our bodily activities (including, naturally, our brains) to be
consistent with our thoughts and with the behaviour of the rest of the
environment.

Nonetheless, presumably it is the case that there are "white rabbit"
extensions in which my response to the pain of being burned is to do
something pathological such as thrust my hand further into the flame.
But the effect in experience of even a high measure of divergent
pathological extensions is hypothesised as being damped by the
convergence of "normal" behavioural extensions (maybe corresponding to
some version of the least action principle, a la Feynman - and perhaps
illuminating also the unreasonable a posteriori effectiveness of
Occam).  So the effect is to make it very much more likely that my
actions will be consistent with my thoughts, including the actions of
my brain.  Does this mean that there may be "white rabbit" extensions
in which the behaviour of my brain is grossly inconsistent with my
thoughts?  I suppose so.

In this view, the concept of causation, if it is valid at all, must be
reserved for the 3-p arithmetical operators - for the internal
computational relations themselves.  The higher-order relations
between computations are rather correlative, and the appearance of
causation in the correlative domain that we inhabit is that of
consistency with expectation - normal, or non-pathological behaviour,
IOW.  So it isn't a case of the brain causing thoughts, or thoughts
causing the brain, but rather a question of which thoughts emerge as
being consistent with which brains.  The remarkable thing then would
be that we seem to find ourselves only in situations where most
(perhaps all) brains are consistent with most (perhaps all) thoughts.
The dreams of the machines, finally, seem to have converged on shared
"physical" universes of staggering complexity and consistency.

Is this anything like what you were trying to convey (interpreted
favourably, of course)?

David

>
> On 20 Mar 2010, at 16:56, David Nyman wrote:
>
>> On 24 February 2010 17:57, Bruno Marchal  wrote:
>>
>>> Please, keep in mind I may miss your point, even if I prefer to say that
>>> you
>>> are missing something, for being shorter and keeping to the point. You
>>> really put your finger right on the hardest part of the mind-body
>>> problem.
>>
>> Bruno, I've been continuing to think, and meditate, about our recent
>> discussions, and have been re-reading (insofar as I can follow it)
>> your on-line paper "Computation, Consciousness and the Quantum".  I
>> feel I have more of a sense of how the aspects I've been questioning
>> you about fit together in the comp view, but if I may, I would like to
>> press you on a couple of points.
>>
>> My original post on "non-computability" was motivated by re-reading
>> Chalmers and struggling again with his assertion that a zombie (e.g.
>> including the "3-p Chalmers" that wrote The Conscious Mind!) could
>> nonetheless refer to "consciousness" and hence be behaviourally
>> indistinguishable from a conscious entity.
>
> So you are talking here about the philosophical zombie which is
> counterfactually correct.
> Well, if by 3-p Chalmers you mean some 'body', such a body *is* a zombie.
> The 1-p Chalmers is Chalmers, the person. Its body does not think, but raises
> the probability that the 1-p thoughts refer to the most probable
> computations you are sharing with him.
>
>>  I realise, by the way,
>> that when considering thought experiments, including your own, one
>> should not treat them in a naively realistic way, but rather focus on
>> their logical implications.

Re: On the computability of consciousness

2010-03-20 Thread Bruno Marchal


On 20 Mar 2010, at 16:56, David Nyman wrote:


On 24 February 2010 17:57, Bruno Marchal  wrote:

Please, keep in mind I may miss your point, even if I prefer to say  
that you
are missing something, for being shorter and keeping to the point.  
You
really put your finger right on the hardest part of the mind-body  
problem.


Bruno, I've been continuing to think, and meditate, about our recent
discussions, and have been re-reading (insofar as I can follow it)
your on-line paper "Computation, Consciousness and the Quantum".  I
feel I have more of a sense of how the aspects I've been questioning
you about fit together in the comp view, but if I may, I would like to
press you on a couple of points.

My original post on "non-computability" was motivated by re-reading
Chalmers and struggling again with his assertion that a zombie (e.g.
including the "3-p Chalmers" that wrote The Conscious Mind!) could
nonetheless refer to "consciousness" and hence be behaviourally
indistinguishable from a conscious entity.


So you are talking here about the philosophical zombie which is
counterfactually correct.
Well, if by 3-p Chalmers you mean some 'body', such a body *is* a zombie.
The 1-p Chalmers is Chalmers, the person. Its body does not think, but
raises the probability that the 1-p thoughts refer to the most probable
computations you are sharing with him.


 I realise, by the way,
that when considering thought experiments, including your own, one
should not treat them in a naively realistic way, but rather focus on
their logical implications.


Indeed! Absolutely so. I thought this was obvious (it should be for  
deductive philosophers).


The problem with Chalmers' logic seems to
me to be that he has to assume that his zombie will have formal access
to what AFAICS are non-formalisable states.


Well, assuming comp, if the zombie has the "right" computer in its
skull, it has access to the non-formalisable propositions, notions,
1-states, etc. (the 3-states are always formal).
But if the zombie's skull is empty, then its counterfactual correctness
is just magical, and it makes no sense to say it accesses some states
or not. There is no 1-person state (because it is a zombie), nor any
3-person state, because there is no digital machine in its (local) body.


 Now, in CC&Q, and in
discussion, you appear to say that Lobian machines can in fact refer
formally to what is non-formalisable.  This could at first glance seem
to support Chalmers' argument (which I assume is not your intention)
unless you also mean that the formal consequences ("extensions") of
such non-formalisable references would somehow be characteristically
different in the absence of the non-formal aspect (i.e. zombie-land
would in fact look very different).  IOW, consciousness should give
the appearance of exerting a "causal influence" on the physical, in
(naive) everyday terms.


Yes indeed. Except that "appearance" applies to the "physical". The
"causal" is the real thing here, and it is incarnated, or implemented
with infinite redundancy (like the M set), in elementary arithmetic.


In CC&Q you point out that "we must not forget that the extensions
must not only be consistent, but must also be accessible by the
universal dovetailer".  Hence, which extensions are accessible by a
conscious (non-formalisable) decision-maker would appear nonetheless
to be formalisable.


Indeed, by the UD, or by that tiny (but sigma_1 complete) fragment of  
arithmetic, like Robinson arithmetic. It does not need to be Löbian.  
The UD is NOT a Löbian entity. It is much logically poorer.
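The dovetailing idea mentioned above, interleaving the execution of all
programs so that every program eventually receives arbitrarily many steps,
can be sketched in a toy Python model. The names `program` and `dovetail`
are illustrative assumptions; a real UD would enumerate all partial
computable functions, not a hand-picked family of generators.

```python
def program(n):
    """Toy 'program' number n: a generator yielding its successive states."""
    state = 0
    while True:
        state += n          # an arbitrary computation step
        yield (n, state)

def dovetail(rounds):
    """Diagonal schedule: in round k, introduce program k and then give
    every program introduced so far one more step. No program is ever
    run to completion, yet each keeps making progress forever."""
    running = []            # programs instantiated so far
    trace = []              # interleaved record of all executed steps
    for k in range(rounds):
        running.append(program(k))   # introduce program k
        for p in running:            # one step each for all live programs
            trace.append(next(p))
    return trace

trace = dovetail(4)
# After 4 rounds, program 0 has taken 4 steps and program 3 only 1,
# but every introduced program appears in the trace.
```

The point of the diagonal schedule is that it needs no halting
information: even non-terminating programs are executed fairly, which is
why such a dovetailer can be logically much poorer than a Löbian machine.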


Again, my question is: how would the range of
accessible extensions for a zombie (purely formal) decision-maker be
characteristically different?  For example, you cite the
"self-speeding-up" effect of consciousness with respect to the
organism's relation to its "neighbourhood" as a pragmatic argument for
the selective utility of consciousness.  I assume this implies that a
conscious decision-maker would be likely to find itself in
characteristically different extensions to its "environment" as
compared with a non-conscious decision-maker, but some clarification
on this would be very helpful.


This is not entirely clear to me. For a non-conscious decision-maker,
there is just no sense at all in saying that it could find itself (in
the first-person sense) in some particular environment.
There is a sense in which it can find itself, in the third-person
sense, in some particular environment, but consciousness is a first-
person notion, and it makes sense only when you ascribe it to the
(genuine) abstract computational states occurring infinitely often in
the UD*. It makes sense for a first person to find itself in an
infinite ensemble of computations/continuations.


Empirically we share a lot of very similar computations, and this  
makes us believe that physics describes some local 3-reality, but comp  
makes it describe only a sharable infinite set

Re: Jack's partial brain paper

2010-03-20 Thread Bruno Marchal

You will find them by clicking on "publications" on my home page
http://iridia.ulb.ac.be/~marchal/

The main one is "informatique théorique et philosophie de
l'esprit" (theoretical computer science and philosophy of mind),
Toulouse 1988. Like my thesis, I was asked to write it in French (alas).


You can ask any question, in case you have difficulty understanding,
or any criticisms (I mean of the available English versions; sane04 is
rather complete, if you accompany it with some books on mathematical
logic).


Best,

Bruno


On 19 Mar 2010, at 18:36, L.W. Sterritt wrote:


Bruno,

Your response is most appreciated. Your publications will keep me
busy for a while.  You also mentioned earlier some of your
publications that are not on your URL.  That reference has gone
missing in my labyrinthine filing system.  Would you please post
those references again?


William


On Mar 19, 2010, at 2:11 AM, Bruno Marchal wrote:


William,

On 18 Mar 2010, at 18:06, L.W. Sterritt wrote:


Bruno and others,

Perhaps more progress can be made by avoiding self referential  
problems and viewing this issue mechanistically.


I don't see what self-referential problems you are alluding to,
especially when viewing the issue mechanistically.


Self-reference is where computer science and mathematical logic  
excel.


A self-duplicator is just a duplicator applied to itself. If Dx
gives xx, DD gives DD. Note the double diagonalization. That basic
idea mechanically transforms the "self-reference problem" into an
amazing feature of machines.
The most on-topic consequence, imo, is that it leads to two modal
theories, G and G*, axiomatizing (completely at the propositional
level) the provable and the true logics of self-reference,
respectively. Machines can prove their own limitation theorems, and
study the productive geometry of their ignorance and indetermination.
They can easily infer a large class of true but unprovable
propositions, and use them in different ways. Useful when an argument
(UDA) shows that matter (physical science) is a product of that
indetermination. It makes comp testable.
Actually it leads to a general arithmetical interpretation of
Plotinus's neoplatonist theory of everything (God-without, God-within,
the universal soul, intelligible matter, sensible matter (qualia),
etc.).
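The duplicator construction above ("if Dx gives xx, DD gives DD") can be
made concrete in a few lines of Python. This is only a toy illustration
of the double diagonalization, essentially a quine built by applying a
textual duplicator to its own source; `D` and `d_src` are illustrative
names, not part of any formal system mentioned here.

```python
def D(x):
    """Duplicator D: given the source text x of a one-argument function,
    return the text of x applied to its own quotation ('Dx gives xx')."""
    return f"({x})({x!r})"

# Write D itself as a lambda, then apply D to (the text of) D: "DD".
d_src = "lambda s: f'({s})({s!r})'"
quine = D(d_src)

# Double diagonalization: evaluating the resulting expression
# reproduces that very expression.
assert eval(quine) == quine
```

The same fixed-point trick underlies Gödel's diagonal lemma, which is
what lets a machine refer to (and prove theorems about) itself.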


The theory is there. It is also the theory on which the
self-referentially correct machines that look inward converge.

It is computer science. The key of comp.


 Where I start:  Haim Sompolinsky, "Statistical Mechanics of
Neural Networks," Physics Today (December 1988). He discussed
"emergent computational properties of large highly connected
networks of simple neuron-like processors."  HP has recently
succeeded in making titanium dioxide "memristors" which behave
much like the synapses in our brains, i.e. the memristor's
resistance at any time depends upon the last signal passed
through it.  Work is underway to make brain-like computers with
these devices; see Wei Lu, Nano Letters, DOI:10.1021/nl904092h.
It seems that there is a growing consensus that conscious machines
will be built, and perhaps, with the new Turing test proposed by
Koch and Tononi, their consciousness may be verified. Then we can
measure properties that are now speculative.


I think the contrary. If a scientist speculates that consciousness
can be tested, he has not understood what consciousness is. We may
evaluate it by bets, and self-identification.
Anyway, this is the strong AI thesis, which is implied by comp
(*I* am a machine). With *I* = you, really, hoping you know that
you are conscious. Tononi has interesting ideas; typically he
belongs to the comp camp. He is not aware of, or interested in, the
body problem to which comp leads (and he is wrong on Mary).
But the comp body problem is not just a problem.  Like evolution
theory, it is the beginning of an explanation of where the
appearance of a material world comes from, and why it is necessary,
once you believe in 0, 1, 2, 3, ..., and addition and multiplication.


 I guess I'm in the QM camp that believes that  what you can  
measure is what you can know.


What I say depends only on saying yes to a doctor at some level. No
problem if you choose the quantum level. In all cases physics has to
be derived, in a precise way (based on the logics of self-reference),
from arithmetic (see my URL for the papers).


Bruno


William



On Mar 18, 2010, at 1:44 AM, Bruno Marchal wrote:



On 17 Mar 2010, at 19:12, Brent Meeker wrote:


On 3/17/2010 10:01 AM, Bruno Marchal wrote:



On 17 Mar 2010, at 13:47, HZ wrote:

I'm quite confused about the state of zombieness. If the requirement
for zombiehood is that it doesn't understand anything at all but
behaves as if it does, what makes us not zombies? How do we know we
are not? But more importantly, are there known cases of zombies?
Perhaps a silly question, because it might be just a thoug

Re: On the computability of consciousness

2010-03-20 Thread David Nyman
On 24 February 2010 17:57, Bruno Marchal  wrote:

> Please, keep in mind I may miss your point, even if I prefer to say that you
> are missing something, for being shorter and keeping to the point. You
> really put your finger right on the hardest part of the mind-body problem.

Bruno, I've been continuing to think, and meditate, about our recent
discussions, and have been re-reading (insofar as I can follow it)
your on-line paper "Computation, Consciousness and the Quantum".  I
feel I have more of a sense of how the aspects I've been questioning
you about fit together in the comp view, but if I may, I would like to
press you on a couple of points.

My original post on "non-computability" was motivated by re-reading
Chalmers and struggling again with his assertion that a zombie (e.g.
including the "3-p Chalmers" that wrote The Conscious Mind!) could
nonetheless refer to "consciousness" and hence be behaviourally
indistinguishable from a conscious entity.  I realise, by the way,
that when considering thought experiments, including your own, one
should not treat them in a naively realistic way, but rather focus on
their logical implications.  The problem with Chalmers' logic seems to
me to be that he has to assume that his zombie will have formal access
to what AFAICS are non-formalisable states.  Now, in CC&Q, and in
discussion, you appear to say that Lobian machines can in fact refer
formally to what is non-formalisable.  This could at first glance seem
to support Chalmers' argument (which I assume is not your intention)
unless you also mean that the formal consequences ("extensions") of
such non-formalisable references would somehow be characteristically
different in the absence of the non-formal aspect (i.e. zombie-land
would in fact look very different).  IOW, consciousness should give
the appearance of exerting a "causal influence" on the physical, in
(naive) everyday terms.

In CC&Q you point out that "we must not forget that the extensions
must not only be consistent, but must also be accessible by the
universal dovetailer".  Hence, which extensions are accessible by a
conscious (non-formalisable) decision-maker would appear nonetheless
to be formalisable.  Again, my question is: how would the range of
accessible extensions for a zombie (purely formal) decision-maker be
characteristically different?  For example, you cite the
"self-speeding-up" effect of consciousness with respect to the
organism's relation to its "neighbourhood" as a pragmatic argument for
the selective utility of consciousness.  I assume this implies that a
conscious decision-maker would be likely to find itself in
characteristically different extensions to its "environment" as
compared with a non-conscious decision-maker, but some clarification
on this would be very helpful.

David

>
> On 23 Feb 2010, at 22:05, David Nyman wrote:
>
> Bruno, I want to thank you for such a complete commentary on my recent
> posts - I will need to spend quite a bit of time thinking carefully
> about everything you have said before I respond at length.
>
> Thanks for your attention, David.
> Please, keep in mind I may miss your point, even if I prefer to say that you
> are missing something, for being shorter and keeping to the point. You
> really put your finger right on the hardest part of the mind-body problem.
>
>
>  I'm sure
> that I'm quite capable of becoming confused between a theory and its
> subject, though I am of course alive to the distinction.  In the
> meantime, I wonder if you could respond to a supplementary question in
> "grandmother" mode, or at least translate for grandma, into a more
> every-day way of speaking, the parts of your commentary that are most
> relevant to her interest in this topic.
>
> I am panicking a bit, because you may be asking for something impossible.
> How to explain in *intuitive every-day terms* (cf grandmother) what is
> provably counter-intuitive for any ideally perfect Löbian entity?
> Bohr said that to say we understand quantum mechanics means that we don't
> understand it.
> Comp says this with a vengeance: it proves that there is necessarily an
> unbridgeable gap. You will not believe it, not understand it, nor know it to
> be true, without losing consistency and soundness. But you may understand
> completely, assuming comp, why it has to be like that.
> But I will try to help grandma.
>
> Let us suppose, to use the example I have already cited, that
> grandma puts her hand in a flame, feels the unbearable agony of
> burning, and is unable to prevent herself from withdrawing her hand
> with a shriek of pain.
>
> OK.
>
>
>  Let us further suppose (though of course this
> may well be ambiguous in the current state of neurological theory)
> that a complete and sufficient 3-p description of this (partial)
> history of events is also possible in terms of nerve firings,
> cognitive and motor processing, etc. (the details are not so important
> as the belief that such a complete history could be given).
>