RE: problem of size '10

2010-02-17 Thread Jack Mallah
--- On Mon, 2/15/10, Stephen P. King  wrote:
> On reading the first page of your paper a thought occurred to me. What 
> actually happens in the case of progressive Alzheimer’s disease is a bit 
> different from the idea that I get from the discussion.

Hi Stephen.  Certainly, Alzheimer's disease is not the same as the kind of 
partial brains that I talk about in my paper, which maintain the same inputs as 
they would have within a full normal brain.

> Are you really considering “something” that I can realistically map to my own 
> 1st person experience or could it be merely some abstract idea.

That brings in the 'hard problem' discussion, which has been brought up on this 
list recently and which I have also been thinking about recently.  I won't 
attempt to answer it right now.  I will say that ALL approaches (eliminativism, 
reductionism, epiphenomenal dualism, interactionist dualism, and idealism) seem 
to have severe problems.  'None of the above' is no better, as the list seems 
exhaustive.  In any case, if my work sheds light on only some of the approaches, 
that is still progress.

BTW, I replied to Bruno and the reply appeared on Google Groups, but I don't 
think I got a copy in my email, so I am putting a copy of what I posted here:

--- On Fri, 2/12/10, Bruno Marchal  wrote:
> Jack Mallah wrote:
> --- On Thu, 2/11/10, Bruno Marchal 
> > > MGA is more general (and older).
> > > The only way to escape the conclusion would be to attribute consciousness 
> > > to a movie of a computation
> >
> > That's not true.  For partial replacement scenarios, where part of a brain 
> > has counterfactuals and the rest doesn't, see my partial brain paper: 
> > http://cogprints.org/6321/
>
> It is not a question of true or false, but of presenting a valid or non valid 
> deduction.

What is false is your statement that "The only way to escape the conclusion 
would be to attribute consciousness to a movie of a computation".  So your 
argument is not valid.

> I don't see anything in your comment or links which prevents the conclusions 
> of being reached from the assumptions. If you think so, tell me at which 
> step, and provide a justification.

Bruno, I don't intend to be drawn into a detailed discussion of your arguments 
at this time.  The key idea, though, is that a movie could replace a computer 
brain.  The strongest argument for that is that you could gradually replace the 
components of the computer (which have the standard counterfactual (if-then) 
functioning) with components that only play out a pre-recorded script, or which 
behave correctly by luck.  You could then invoke the 'fading qualia' argument 
(qualia could plausibly not vanish suddenly, nor fade gradually, as the 
replacement proceeds) to argue that the replacement makes no difference to the 
consciousness.  My partial brain paper shows that the 'fading qualia' argument 
is invalid.

I think there was also a claim that counterfactual sensitivity amounts to 
'prescience', but that makes no sense, and I'm pretty sure that no one (even 
those who accept the rest of your arguments) agrees with you on that.  
Counterfactual behaviors are properties of the overall system, and they are 
mathematically defined.
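
For concreteness, here is a toy sketch (my own illustration, not anything from 
the paper; the names are made up) of the difference between a component with 
genuine counterfactual structure and one that merely replays a recording:

```python
# Toy illustration: a component with genuine counterfactual (if-then)
# structure versus one that merely replays a pre-recorded script.
# Both agree on the inputs that actually occurred, but only the first
# determines what would have happened otherwise.

def nand_gate(a: bool, b: bool) -> bool:
    """Genuine if-then structure: defined for all four input pairs."""
    return not (a and b)

# Suppose the actual run presented only (True, True) and then (True, False).
recorded_outputs = {0: False, 1: True}  # the outputs the gate actually gave

def replay(step: int) -> bool:
    """No dependence on inputs at all: just plays back the recording."""
    return recorded_outputs[step]

# Indistinguishable on the actual history:
assert nand_gate(True, True) == replay(0)
assert nand_gate(True, False) == replay(1)

# But only the gate grounds the counterfactual "had the inputs been
# (False, False), the output would have been True"; the replay component
# has no answer for inputs that never occurred.
print(nand_gate(False, False))  # True
```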

> Jack Mallah wrote:
> > It could be physicalist or platonist - mathematical systems can implement 
> > computations if they exist in a strong enough (Platonic) sense.  I am 
> > agnostic on Platonism.
> 
> This contradicts your definition of computationalism given in your papers.
> I quote your glossary: << consciousness arises as a result of implementation 
> of computations by physical systems. >>

It's true that I didn't mention Platonism in that glossary entry (in the MCI 
paper), which was an oversight, but not a big deal given that the paper was 
aimed at physicists.  The paper has plenty of work to do already, and 
championing the possibility of the Everything Hypothesis was not its focus.

On p. 14 of the MCI paper I wrote "A computation can be implemented by a 
physical system which shares appropriate features with it, or (in an analogous 
way) by another computation."  If a computation exists in a Platonic sense, 
then it could implement other computations.
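
As a toy sketch of that last point (illustrative only, not from the MCI paper): 
a tiny interpreter is itself a computation, and its run can implement, step for 
step, the same computation written directly:

```python
# Toy sketch of one computation implementing another: a three-instruction
# interpreter whose run mirrors the same increment-and-double computation
# written directly in Python.

def direct(x: int) -> int:
    """The computation, written directly."""
    x = x + 1
    return x * 2

# The same computation, expressed as a program for a tiny virtual machine.
PROGRAM = [("inc",), ("dbl",), ("halt",)]

def interpret(program, x: int) -> int:
    """A second computation (the interpreter) implementing the first."""
    for (op,) in program:
        if op == "inc":
            x += 1
        elif op == "dbl":
            x *= 2
        elif op == "halt":
            return x
    return x

assert direct(5) == interpret(PROGRAM, 5) == 12
```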

On p. 46 of the paper I briefly discussed the All-Universes Hypothesis.  That 
should leave no doubt as to my position.




  




Re: On the computability of consciousness

2010-02-17 Thread Bruno Marchal


On 16 Feb 2010, at 19:07, David Nyman wrote:


Is consciousness - i.e. the actual first-person experience itself - literally 
uncomputable from any third-person perspective?


There is an ambiguity in your phrasing. I will proceed as I always do, by 
interpreting your terms favorably, relative to computationalism and its 
(drastic) consequences.


The first person notion, and consciousness, are not clearly notions to which 
the label "computable" can be applied. The fact is that no machine can even 
define what the first person is, or what consciousness is.


You may already understand (by UDA) that the first person notions are related 
to an infinite sum of computations (and this is not obviously computable, not 
even partially).


But AUDA makes this utterly clear. Third person self-reference is entirely 
described by the provability predicate, the one that I write with the letter 
"B". Bp is "*I* prove p", Beweisbar('p'), for p some arithmetical proposition.
The corresponding first person notion is Bp & Tp, with Tp = True('p'). By a 
theorem of Tarski, "true" cannot be defined (not even just defined!) by the 
machine, and the logic of Bp & Tp (= Bp & p) is quite different from that of 
Bp, from the point of view of the machine. That result on "truth" has been 
extended by Kaplan & Montague to "knowledge".


Let Bp = I prove p
Let Kp = Bp & Tp = Bp & p = I know p

Then, what happens is that

G* proves Bp <-> Kp
NOT(G proves Bp <-> Kp)

G does not prove the equivalence of Bp and Kp, for a correct machine. It is 
false that G proves Bp <-> Kp, and the machine cannot have access to the truth 
of that equivalence (except indirectly, by postulating comp).
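
In symbols (a sketch; this assumes the standard identification of G with the 
Gödel-Löb provability logic GL and G* with Solovay's logic of the true 
provability sentences):

```latex
% Sketch of the Bp / Kp split, assuming G = Godel-Lob logic GL and
% G* = Solovay's GLS (the logic of the true provability sentences).
% Bp abbreviates the arithmetized provability predicate applied to p;
% Kp is the Theaetetical definition of knowledge: provable and true.
\[
  Bp \;\equiv\; \mathrm{Bew}(\ulcorner p \urcorner), \qquad
  Kp \;\equiv\; Bp \land p .
\]
% G* proves the equivalence, because it has the reflection axiom Bp -> p;
% G does not, because a consistent machine cannot prove its own soundness:
\[
  G^{*} \vdash Bp \leftrightarrow Kp, \qquad\qquad
  G \nvdash Bp \leftrightarrow Kp .
\]
% The difference G* minus G is the "corona": truths about the machine
% that the machine itself cannot prove.
```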








 The only rationale for adducing the additional
existence of any 1-p experience in a 3-p world is the raw fact that we
possess it (or "seem" to, according to some).  We can't "compute" the
existence of any 1-p experiential component of a 3-p process on purely
3-p grounds.


I guess you mean that we cannot "prove" the existence of the 1-p from the 3-p 
grounds. That's correct (both intuitively, with UDA, and as a theorem of the 
machine's theology, AUDA).




Further, if we believe that 3-p process is a closed and
sufficient explanation for all events, this of course leads to the
uncomfortable conclusion (referred to, for example, by Chalmers in
TCM) that 1-p conscious phenomena (the "raw feels" of sight, sound,
pain, fear and all the rest) are totally irrelevant to what's
happening, including our every thought and action.



That is why a materialist who wants to keep the mechanist hypothesis has no 
other choice than to abandon consciousness as an illusion, or matter as an 
illusion. On this list most people, including you (if I remember well), accept 
that it is just impossible to dismiss consciousness, so... Ah, I see you are 
OK with this in some replies today.


Note that the movie graph shows directly that the notion of primitive (3-p) 
matter makes no sense, and shows how to recover the appearance of matter from 
the logic of the first person plural point of view (somewhere in between 
Bp & Dp and Bp & Dp & p, where Dp is ~B~p).
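
As a gloss on the notation (a minimal sketch; the "intelligible/sensible 
matter" labels for these two points of view are the ones used elsewhere for 
these hypostases, assumed here):

```latex
% Gloss on the notation above.  Dp is consistency ("p is consistent
% with what I prove"), the modal dual of Bp:
\[
  Dp \;\equiv\; \lnot B \lnot p .
\]
% The two "matter" points of view mentioned above, obtained by adding
% a consistency condition to belief (labels as used elsewhere for the
% material hypostases):
\[
  \text{intelligible matter:}\; Bp \land Dp, \qquad
  \text{sensible matter:}\; Bp \land Dp \land p .
\]
```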




But doesn't this lead to paradox?  For example, how are we able to
refer to these phenomena if they are causally disconnected from our
behaviour - i.e. they are uncomputable (i.e. inaccessible) from the 3-
p perspective?


Good point. But you are led to this because you still believe that matter is a 
primitive 3-p notion.




Citing "identity" doesn't seem to help here - the
issue is how 1-p phenomena could ever emerge as features of our shared
behavioural world (including, of course, talking about them) if they
are forever inaccessible from a causally closed and sufficient 3-p
perspective.


But the physical 3-p notions are just NOT explanatorily closed. Assuming that 
they are collapses all the points of view. It explains consciousness away!





Does this in fact lead to the conclusion that the 3-p
world can't be causally closed to 1-p experience, and that I really do
withdraw my finger from the fire because it hurts, and not just
because C-fibres are firing?  But how?



Because it concerns knowledge, which, by definition, relates your beliefs to 
the truth. But that relation itself belongs to the corona G* minus G, and is 
unavailable to the machine itself.


Nice, clear, and important questions. You explain the mind-body problem well. 
You put your finger where it hurts!


I will comment on some answers here:



On 16 Feb 2010, at 19:19, Stephen P. King wrote:



Is there a problem with the idea that 3-p can be derived from some 
combinatorics of many interacting 1-p's? Is there a reason why we keep trying 
to derive 1-p from 3-p?



This is a reasonable question. But with comp it is both the 1-p and the 
physical 3-p which are derived from the arithmetical 3-p, yet this forces us 
to attribute personhood to machines (but this is comp, after all, and the 
logic of self-reference justifies such an idea).



Re: On the computability of consciousness

2010-02-17 Thread David Nyman
On 17 February 2010 02:39, Brent Meeker  wrote:

> My intuition is that once we have a really good 3-p theory, 1-p will seem
> like a kind of shorthand way of speaking about brain processes.  That
> doesn't mean your questions will be answered.  It will be like Bertrand
> Russell's neutral monism.  There are events and they can be arranged in 3-p
> relations or in 1-p relations.  Explanations will ultimately be circular -
> but not viciously so.

Yes, I've been sympathetic to this intuition myself.  It's just that
I've been troubled recently by the apparent impossibility of
reconciling the two accounts - i.e. what I'm calling (justifiably
AFAICS) the non-computability of 1-p from 3-p, leading directly to the
apparent causal irrelevance of (and mystery of our references to) 1-p
phenomena.  So I've started to wonder again if we've given up too soon
on the possibility of an "interactionist" approach - one that doesn't
fall back on "two substance" dualism with all its hopeless defects.
We certainly don't know that it's ruled out - i.e. that it is indeed
the case that all experiential phenomena map directly to neurological
phenomena in a straightforward 3-p way; this is currently merely an
assumption.  If it could be demonstrated robustly, this would dispel
my doubts, though not my puzzlement.  But the intriguing empirical
possibility exists that, for example, "consciously seeing" (1-p) and
"visually detecting" (3-p) may act on the world by partially different
paths (i.e. that there is an additional possibility - beyond mechanism
- in the deep structure of things that, moreover, has not been missed
by evolution).

David

> David Nyman wrote:
>>
>> On 17 February 2010 00:16, Brent Meeker  wrote:
>>
>>
>>>
>>> But suppose we had a really good theory and understanding of the brain so
>>> that we could watch yours in operation on some kind of scope (like an
>>> fMRI,
>>> except in great detail) and from our theory we could infer that "David's
>>> now
>>> thinking X.  And it's going to lead him to next think Y.  And then he'll
>>> remember Z and strengthen this synapse over here.  And..."   Then
>>> wouldn't
>>> you start to regard the 1-p account as just another level of description,
>>> as
>>> when you start your car on a cold day it "wants" a richer fuel mixture and
>>> the ECU "remembers" to keep the idle speed up until it's warm.
>>>
>>
>> In short, yes.  But that doesn't make the problem as I've defined it
>> go away.  At the level of reconciliation you want to invoke, you would
>> have to stop putting scare quotes round the experiential vocabulary,
>> unless your intention - like Dennett's AFAICS - is to deny the
>> existence, and causal relevance, of genuinely experiential qualities
>> (as opposed to "seemings", whatever they might be).  At bottom, 1-p is
>> not a "level of description" - i.e. something accessed *within*
>> consciousness - it *is* the very mode of access itself.
>
> I think "accessed" creates the wrong image - as though there is some "you"
> outside of this process that is "accessing" it.  But I'm not sure that
> vitiates your point.
>
>
>> The trouble
>> comes because in the version you cite the default assumption is that
>> the synapse-strengthening stuff - the 3-p narrative - is sufficient to
>> account for all the observed phenomena - including of course all the
>> 3-p references to experiential qualities and their consequences.
>>
>> But such qualities are entirely non-computable from the 3-p level,
>
> How can you know that?
>
>> so
>> how can such a narrative refer to them?  And indeed, looked at the
>> other way round, given the assumed causal closure of the 3-p level,
>> what further function would be served by such 1-p references?
>
> "Function" in the sense of purpose?  Why should it have one?
>>
>> Now, if
>> we indeed had the robust state of affairs that you describe above,
>> this would be a stunning puzzle, because 1-p and 3-p are manifestly
>> not "identical", nor are they equivalently "levels of description" in
>> any relevant sense. Consequently, we would be faced with a brute
>> reality without any adequate explanation.
>>
>> However, in practice, the theory and observations you characterise are
>> very far from the current state of the art. This leaves scope for some
>> actual future theory and observation to elucidate "interaction"
>> between 1-p and 3-p with real consequences that would be inexplicable
>> in terms of facile "identity" assertions.  For example, that I
>> withdraw my hand from the fire *because* I feel the pain, and this
>> turns out, both in theory and observation, to be inexplicable in
>> terms of any purely 3-p level of description.  Prima facie, this might
>> seem to lead to an even more problematic interactive dualism, but my
>> suspicion is that there is scope for some genuinely revelatory
>> reconciliation at a more fundamental level - i.e. a truly explanatory
>> identity theory.  But we won't get to that by ignoring the problem.
>>
>

Re: On the computability of consciousness

2010-02-17 Thread David Nyman
On 17 February 2010 07:28, Diego Caleiro  wrote:

> You guys should read Chalmers, Philosophy of Mind: Classical and
> Contemporary Readings, and
>
> Philosophy and the Mirror of Nature, by Richard Rorty.
>
> In particular, "The Concepts of Consciousness" by Ned Block and "Mental
> Causation" by Stephen Yablo will get you nearer to where you are trying to
> get.

Thanks.  I've already read quite a bit of Chalmers, Rorty, Block, etc,
and before committing to a comprehensive re-perusal I would appreciate
your view on the specific nature of the corrective to be gained.  What
I guess I'm trying to suggest here is that I think we may have
retreated too hastily from an "interactionist" relation between 1-p
and 3-p because of its association with an apparently outmoded
dualism.  The problem is that current "identity" assumptions leave us
stuck with a causally-closed 3-p world in which the very nature of our
apparent access to 1-p phenomena is opaque, let alone its (lack of)
causal relevance.  I'm hesitant to commit too strongly here to what a
deeper and genuinely illuminating resolution to the "identity" issue
might look like, partly because I have only a vague intuition, and
because it would probably jump-start one of the endless circular
debates on the topic.  Perhaps I'm trying to tempt others away from
current standard positions to re-consider what would have to be the
case for it to really make a difference in the world that we
*experience* (say) pain, rather than merely observing that its 3-p
correlates mediate our behaviour.

David




Re: On the computability of consciousness

2010-02-17 Thread David Nyman
On 17 February 2010 02:08, Brent Meeker  wrote:

> I'm not sure in what sense you mean "gratuitous".  In a sense it is
> gratuitous to describe anything - hence the new catch-phrase, "It is what it
> is."  If one is just a different description of the other then they have the
> same consequences - in different terms.

What I mean is that it is superfluous to what we presume is already a
complete account (i.e. the 3-p one) of all the relevant events and
their consequences.  We would have no reason to suspect (nor could we
characterise) the existence of 1-p experience if we only had access to
the 3-p account.  Furthermore, if we believe the 3-p account to be
complete and causally closed, we are committed to accepting that all
thoughts, beliefs, statements or behaviour apparently relating to 1-p
experience are in fact entirely motivated by the 3-p account.  This
leads to the paradox of the existence of 3-p references to 1-p
experiences which simply cannot be extrapolated from the 3-p account
(i.e. they are non-computable).

>> More problematic still,
>> neither the existence nor the experiential characteristics of 1-p
>> experience is computable from the confines of the 3-p narrative.
>
> How do you know that?   In my computation of what's happening in your brain
> I might well say, "And *there's* where David is feeling confused."

Yes, of course.  But you can only analogise with some "feeling of
confusion" to which you have (or "seem" to have) personal privileged
access (this is the really hard bit to keep in mind).  Had you no
access to such 1-p experience (e.g. if you were one of Chalmers'
affect-less zombies) you would have no basis from which to extrapolate
from the 3-p account to 1-p experience, or even to suspect such a
possibility or what its nature could be (hence non-computable).
Nonetheless, belief in the causal completeness and closure of the 3-p
account simultaneously commits us to believing that all your beliefs,
statements and behaviour with respect to "1-p" would be unaltered!
This is the paradox.

The standard move, which is implicit in your proposal, is to try to
wave all this away by asserting the "identity" of 3-p and 1-p.  I'm
trying to say two things about this: first, it's meaningless to say
that two different things are identical without showing how their
apparent differences are to be reconciled; second, if we accept this
it leaves us in the position of continuing to exhibit every one of our
thoughts, beliefs, statements and behaviours with respect to 1-p
experience, even though the existence and nature of such phenomena
can't be computed from the basis of 3-p, and even in the case that the
phenomena didn't exist at all!  This doesn't strike me as a
satisfactory resolution.

David


> David Nyman wrote:
>>
>> On 17 February 2010 00:06, Brent Meeker  wrote:
>>
>>
>>>
>>> I don't see that my 1-p experience is at all "causally closed".  In fact,
>>> thoughts pop into my head all the time with no provenance and no hint of
>>> what caused them.
>>>
>>
>> The problem is that if one believes that the 3-p narrative is causally
>> sufficient, then the "thoughts" that pop into your head - and their
>> consequences -  are entirely explicable in terms of some specific 3-p
>> rendition.  If you also "seem" to have the 1-p experience of the
>> "sound" of a voice in your head, this is entirely gratuitous to the
>> 3-p "thought-process" and its consequences.
>
> I'm not sure in what sense you mean "gratuitous".  In a sense it is
> gratuitous to describe anything - hence the new catch-phrase, "It is what it
> is."  If one is just a different description of the other then they have the
> same consequences - in different terms.
>
>
>> More problematic still,
>> neither the existence nor the experiential characteristics of 1-p
>> experience is computable from the confines of the 3-p narrative.
>
> How do you know that?   In my computation of what's happening in your brain
> I might well say, "And *there's* where David is feeling confused."
>
> Brent
>
>>  So
>> how can it be possible for any such narrative to *refer* to the
>> experiential quality of a thought?
>>
>> David
>>
>>
>>
>>>
>>> David Nyman wrote:
>>>
>
>  Is there a problem with the idea that 3-p can be derived from some
> combinatorics of many interacting 1-p's? Is there a reason why we keep
> trying to derive 1-p from 3-p?
>
>

 I suspect there's a problem either way.  AFAICS the issue is that, in
 3-p and 1-p, there exist two irreducibly different renditions of a
 given state of affairs (hence not "identical" in any
 non-question-begging sense of the term). It then follows that, in
 order to fully account for a given set of events involving both
 renditions, you have to choose between some sort of non-interacting
 parallelism, or the conundrum of how one "causally closed" account
 becomes informed about the other, or the frank denial of one or the
 other rendition.  None of these