Re: MGA 3

2008-12-11 Thread Russell Standish

On Wed, Dec 10, 2008 at 10:39:34AM +, Michael Rosefield wrote:
 This distinction between physicalism and materialism, with materialism
 allowing for features to emerge, it sounds to me like a join-the-dots puzzle
 - the physical substrate provides the dots, but the supervening system also
 contains lines - abstract structures implied by but not contained within the
 system implementing it. But does that not mean that this also implies
 further possible layers to the underlying reality? That no matter how many
 turtles you go down, there's always more turtles to come?
 

I don't think it implies it, but it is certainly possible. Emergence
is possible with just two incommensurate levels.

Cheers

-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australiahttp://www.hpcoders.com.au


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: MGA 3

2008-12-11 Thread Russell Standish

On Mon, Dec 08, 2008 at 09:43:47AM +0100, Bruno Marchal wrote:
 
  Michael Lockwood distinguishes between materialism (consciousness
  supervenes on the physical world) and physicalism (the physical world
  suffices to explain everything). The difference between the two is
  that in physicalism, consciousness (indeed any emergent phenomenon) is
  mere epiphenomena, a computational convenience, but not necessary for
  explanation, whereas in non-physicalist materialism, there are  
  emergent
  phenomena that are not explainable in terms of the underlying physics,
  even though supervenience holds.
 
 In what sense are they emergent? They emerge from what?

They emerge from the underlying physics (or chemistry, or whatever the
syntactic layer is). Supervenience is, AFAICT, nothing other than the
concept of emergence applied to consciousness; in many respects the two
could be considered synonymous.

 
 
  This has been argued in the famous
  paper by Philip Anderson. One very obvious distinction between
  the two positions is that strong emergence is possible in materialism,
  but strictly forbidden by physicalism. An example I give of strong
  emergence in my book is the strong anthropic principle.
 
  So - I'm convinced your argument works to show the contradiction
  between COMP and physicalism, but not so the more general
  materialism.
 
 I don't see why. When I state the supervenience thesis, I explain that  
 the type of supervenience does not play any role, be it a causal  
 relation or an epiphenomenon.
 

In your Lille thesis (sorry I still haven't read your Brussels thesis)
you say at the end of section 4.4.1 that SUP-PHYS supposes at minimum
a concrete physical world. I don't see how this follows at all from
the concept of supervenience, but I accept that it is necessary for
(naive) physicalism.

 
  I think you have confirmed this in some of your previous
  responses to me in this thread.
 
  Which is just as well. AFAICT, supervenience is the only thing
  preventing the Occam catastrophe. We don't live in a magical world,
  because such a world (assuming COMP) would have so many contradictory
  statements that we'd disappear in a puff of destructive logic!
  (reference to my previous posting about destructive phenomena).
 
 
 I don' really understand. If such argument is correct, how could  
 classical logic not be quantum like. The problem of the white rabbits  
 is that they are consistent. 

Sorry, to be clear - the white rabbits themselves are consistent, and
also quite rare (ie improbable). However, they also tend to come
in equal and opposite (ie contradictory) forms, so when combined they
contribute to the measure of a non-magical world. That is the
information-destructive phenomenon.

As for logic, each individual observer sees a world according to
classical logic. Only by quantifying over multiple observers does
quantum logic come into play. This is a key point I make on page 219
of my book. I'm sorry I haven't found the best way to express the
argument yet - it really is quite subtle. I know Youness had
difficulties with this aspect as well.

I apologise - I have been speaking in coded sentences which require a
good deal of unpacking if you are unfamiliar with the concepts. But I'm
in good company here...

-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 hpco...@hpcoders.com.au
Australiahttp://www.hpcoders.com.au





Re: MGA 3

2008-12-10 Thread Michael Rosefield
This distinction between physicalism and materialism, with materialism
allowing for features to emerge, sounds to me like a join-the-dots puzzle
- the physical substrate provides the dots, but the supervening system also
contains lines: abstract structures implied by, but not contained within,
the system implementing them. But does that not mean that this also implies
further possible layers to the underlying reality? That no matter how many
turtles you go down, there are always more turtles to come?

--
- Did you ever hear of The Seattle Seven?
- Mmm.
- That was me... and six other guys.


2008/12/7 Russell Standish [EMAIL PROTECTED]


 On Sat, Dec 06, 2008 at 03:32:53PM +0100, Bruno Marchal wrote:
 
  I would be pleased if you can give me a version of MAT or MEC to which
  the argument does not apply. For example, the argument applies to most
  transfinite variant of MEC. It does not apply when some magic is
  introduced in MAT, and MAT is hard to define in a way to exclude that
  magic. If you can help, I thank you in advance.
 
  Bruno
 

 Michael Lockwood distinguishes between materialism (consciousness
 supervenes on the physical world) and physicalism (the physical world
 suffices to explain everything). The difference between the two is
 that in physicalism, consciousness (indeed any emergent phenomenon) is
 mere epiphenomena, a computational convenience, but not necessary for
 explanation, whereas in non-physicalist materialism, there are emergent
 phenomena that are not explainable in terms of the underlying physics,
 even though supervenience holds. This has been argued in the famous
 paper by Philip Anderson. One very obvious distinction between
 the two positions is that strong emergence is possible in materialism,
 but strictly forbidden by physicalism. An example I give of strong
 emergence in my book is the strong anthropic principle.

 So - I'm convinced your argument works to show the contradiction
 between COMP and physicalism, but not so the more general
 materialism. I think you have confirmed this in some of your previous
 responses to me in this thread.

 Which is just as well. AFAICT, supervenience is the only thing
 preventing the Occam catastrophe. We don't live in a magical world,
 because such a world (assuming COMP) would have so many contradictory
 statements that we'd disappear in a puff of destructive logic!
 (reference to my previous posting about destructive phenomena).

 --


 
 A/Prof Russell Standish  Phone 0425 253119 (mobile)
 Mathematics
 UNSW SYDNEY 2052 [EMAIL PROTECTED]
 Australiahttp://www.hpcoders.com.au

 

 





Re: MGA 3

2008-12-10 Thread Bruno Marchal

Abram Demski wrote:

 Bruno,

 Thanks for the references.

You are welcome.


 ps- it is final exam crunch time, so I haven't been checking email so
 much as usual... I may get around to more detailed replies et cetera
 this weekend or next week.

With pleasure.

Best,

Bruno





 On Sun, Dec 7, 2008 at 1:12 PM, Bruno Marchal [EMAIL PROTECTED]  
 wrote:

 On 07 Dec 2008, at 06:19, Abram Demski wrote:

 Bruno,

 Yes, I think there is a big difference between making an argument  
 more
 detailed and making it more understandable. They can go together or  
 be
 opposed. So a version of the argument targeted at my complaint might
 not be good at all pedagogically...

 I would be pleased if you can give me a version of MAT or MEC to  
 which

 the argument does not apply. For example, the argument applies to  
 most

 transfinite variant of MEC. It does not apply when some magic is

 introduced in MAT, and MAT is hard to define in a way to exclude that

 magic. If you can help, I thank you in advance.

 My particular brand of magic appears to be a requirement of
 counterfactual/causal structure that reflects the
 counterfactual/causal structure of (abstract) computation.

 Sometimes I think I should first explain what a computation is. I  
 take it
 in the sense of theoretical computer science, a computation is  
 always define
 relatively to a universal computation from outside, and an infinity  
 of
 universal computations from inside. This asks for a bit of computer  
 science.
 But there is not really abstract computation, there are always  
 relative
 computation (both with comp and Everett QM). They are always concrete
 relatively to the universal machine which execute them. The  
 starting point
 in no important (for our fundamental concerns), you can take number  
 with
 addition and multiplication, or lambda terms with abstraction and
 application.



 Stathis has
 pointed out some possible ways to show such ideas incoherent (which I
 am not completely skeptical of, despite my arguments).

 I appreciate.


 Since this type
 of theory is the type that matches my personal intuition, MGA will
 feel empty to me until such alternatives are explicitly dealt a
 killing blow (after which the rest is obvious, since I intuitively
 feel the contradiction in versions of COMP+MAT that don't require
 counterfactuals).

 Understanding UD(1...7) could perhaps help you to figure out what  
 happens
 when we abandon the physical supervenience thesis, and embrace what  
 remains,
 if keeping comp, that is the comp supervenience. It will explain  
 how the
 physical laws have to emerge and why we believe (quasi-correctly)  
 in brains.





 Of course, as you say, you'd be in a hard spot if you were required  
 to
 deal with every various intuition that anybody had... but, for what
 it's worth, that is mine.


 I respect your intuition and appreciate the kind attitude. My  
 feeling is
 that if front of very hard problems we have to be open to the fact  
 that we
 could be surprised and that truth could be counterintuitive. The
 incompleteness phenomena, from Godel and Lob, are surprising and
 counterintuitive, and in the empirical world the SWE, whatever
 interpretation we find more plausible, is always rather  
 counterintuitive
 too.
 I interpret the self-referentially correct scientist M by the  
 logic of
 Godel's provability predicates beweisbar_M. But the intuitive  
 knower, the
 first person, is modelled (or defined) by the Theatetus trick: the  
 machine M
 knows p in case beweisbar_M('p') and p. Although extensionally  
 equivalent,
 their are intensionally different. They prove the same arithmetical
 propositions, but they obey different logics. This is enough for  
 showing
 that the first person associated with the self-referentially correct
 scientist will already disbelieve the comp hypothesis or find it very
 doubtful. We are near a paradox: the correct machine cannot know or  
 believe
 their are machine. No doubt comp will appear counterintuitive for  
 them. I
 know it is a sort of trap/ the solution consists in admitting that  
 comp
 needs a strong act of faith, and I try to put light on the  
 consequences for
 a machine, when she makes the bet.

 The best reference on the self-reference logics are
 Boolos, G. (1979). The unprovability of consistency. Cambridge  
 University
 Press, London.Boolos, G. (1993). The Logic of Provability. Cambridge
 University Press, Cambridge.Smoryński, P. (1985). Self-Reference  
 and Modal
 Logic. Springer Verlag, New York.Smullyan, R. (1987). Forever  
 Undecided.
 Knopf, New York.

 The last one is a recreative book, not so simple, and rather quick  
 in the
 heart of the matter chapter. Smullyan wrote many lovely  books,  
 recreative
 and technical on that theme.
 The bible, imo, is Martin Davis book The undecidable which  
 contains some
 of the original papers by Gödel, Church, Kleene, Post and indeed  
 the most
 key starting points of the parts of 

Re: MGA 3

2008-12-09 Thread Abram Demski
Bruno,

Thanks for the references.

--Abram

ps- it is final exam crunch time, so I haven't been checking email so
much as usual... I may get around to more detailed replies et cetera
this weekend or next week.

On Sun, Dec 7, 2008 at 1:12 PM, Bruno Marchal [EMAIL PROTECTED] wrote:

 On 07 Dec 2008, at 06:19, Abram Demski wrote:

 Bruno,

 Yes, I think there is a big difference between making an argument more
 detailed and making it more understandable. They can go together or be
 opposed. So a version of the argument targeted at my complaint might
 not be good at all pedagogically...

 I would be pleased if you can give me a version of MAT or MEC to which

 the argument does not apply. For example, the argument applies to most

 transfinite variant of MEC. It does not apply when some magic is

 introduced in MAT, and MAT is hard to define in a way to exclude that

 magic. If you can help, I thank you in advance.

 My particular brand of magic appears to be a requirement of
 counterfactual/causal structure that reflects the
 counterfactual/causal structure of (abstract) computation.

 Sometimes I think I should first explain what a computation is. I take it
 in the sense of theoretical computer science, a computation is always define
 relatively to a universal computation from outside, and an infinity of
 universal computations from inside. This asks for a bit of computer science.
 But there is not really abstract computation, there are always relative
 computation (both with comp and Everett QM). They are always concrete
 relatively to the universal machine which execute them. The starting point
 in no important (for our fundamental concerns), you can take number with
 addition and multiplication, or lambda terms with abstraction and
 application.



 Stathis has
 pointed out some possible ways to show such ideas incoherent (which I
 am not completely skeptical of, despite my arguments).

 I appreciate.


 Since this type
 of theory is the type that matches my personal intuition, MGA will
 feel empty to me until such alternatives are explicitly dealt a
 killing blow (after which the rest is obvious, since I intuitively
 feel the contradiction in versions of COMP+MAT that don't require
 counterfactuals).

 Understanding UD(1...7) could perhaps help you to figure out what happens
 when we abandon the physical supervenience thesis, and embrace what remains,
 if keeping comp, that is the comp supervenience. It will explain how the
 physical laws have to emerge and why we believe (quasi-correctly) in brains.





 Of course, as you say, you'd be in a hard spot if you were required to
 deal with every various intuition that anybody had... but, for what
 it's worth, that is mine.


 I respect your intuition and appreciate the kind attitude. My feeling is
 that if front of very hard problems we have to be open to the fact that we
 could be surprised and that truth could be counterintuitive. The
 incompleteness phenomena, from Godel and Lob, are surprising and
 counterintuitive, and in the empirical world the SWE, whatever
 interpretation we find more plausible, is always rather counterintuitive
 too.
 I interpret the self-referentially correct scientist M by the logic of
 Godel's provability predicates beweisbar_M. But the intuitive knower, the
 first person, is modelled (or defined) by the Theatetus trick: the machine M
 knows p in case beweisbar_M('p') and p. Although extensionally equivalent,
 their are intensionally different. They prove the same arithmetical
 propositions, but they obey different logics. This is enough for showing
 that the first person associated with the self-referentially correct
 scientist will already disbelieve the comp hypothesis or find it very
 doubtful. We are near a paradox: the correct machine cannot know or believe
 their are machine. No doubt comp will appear counterintuitive for them. I
 know it is a sort of trap/ the solution consists in admitting that comp
 needs a strong act of faith, and I try to put light on the consequences for
 a machine, when she makes the bet.

 The best reference on the self-reference logics are
 Boolos, G. (1979). The unprovability of consistency. Cambridge University
 Press, London.Boolos, G. (1993). The Logic of Provability. Cambridge
 University Press, Cambridge.Smoryński, P. (1985). Self-Reference and Modal
 Logic. Springer Verlag, New York.Smullyan, R. (1987). Forever Undecided.
 Knopf, New York.

 The last one is a recreative book, not so simple, and rather quick in the
 heart of the matter chapter. Smullyan wrote many lovely  books, recreative
 and technical on that theme.
 The bible, imo, is Martin Davis book The undecidable which contains some
 of the original papers by Gödel, Church, Kleene, Post and indeed the most
 key starting points of the parts of theoretical computer science we are
 confonted to. It has been reedited by Dover.
 Bruno
 Other references here:
 

Re: MGA 3

2008-12-08 Thread Bruno Marchal


On 08 Dec 2008, at 00:59, Russell Standish wrote:


 On Sat, Dec 06, 2008 at 03:32:53PM +0100, Bruno Marchal wrote:

 I would be pleased if you can give me a version of MAT or MEC to  
 which
 the argument does not apply. For example, the argument applies to  
 most
 transfinite variant of MEC. It does not apply when some magic is
 introduced in MAT, and MAT is hard to define in a way to exclude that
 magic. If you can help, I thank you in advance.

 Bruno


 Michael Lockwood distinguishes between materialism (consciousness
 supervenes on the physical world) and physicalism (the physical world
 suffices to explain everything). The difference between the two is
 that in physicalism, consciousness (indeed any emergent phenomenon) is
 mere epiphenomena, a computational convenience, but not necessary for
 explanation, whereas in non-physicalist materialism, there are  
 emergent
 phenomena that are not explainable in terms of the underlying physics,
 even though supervenience holds.

In what sense are they emergent? They emerge from what?


 This has been argued in the famous
 paper by Philip Anderson. One very obvious distinction between
 the two positions is that strong emergence is possible in materialism,
 but strictly forbidden by physicalism. An example I give of strong
 emergence in my book is the strong anthropic principle.

 So - I'm convinced your argument works to show the contradiction
 between COMP and physicalism, but not so the more general
 materialism.

I don't see why. When I state the supervenience thesis, I explain that  
the type of supervenience does not play any role, be it a causal  
relation or an epiphenomenon.


 I think you have confirmed this in some of your previous
 responses to me in this thread.

 Which is just as well. AFAICT, supervenience is the only thing
 preventing the Occam catastrophe. We don't live in a magical world,
 because such a world (assuming COMP) would have so many contradictory
 statements that we'd disappear in a puff of destructive logic!
 (reference to my previous posting about destructive phenomena).


I don't really understand. If such an argument were correct, how could  
classical logic not be quantum-like? The problem with the white rabbits  
is that they are consistent. Your explanation would make the world  
quantum or not independently of the degree of independence of the  
computational histories, and observation would not make the logic  
classical, as is the case in QM.

Bruno



http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-12-07 Thread Bruno Marchal

On 07 Dec 2008, at 06:19, Abram Demski wrote:


 Bruno,

 Yes, I think there is a big difference between making an argument more
 detailed and making it more understandable. They can go together or be
 opposed. So a version of the argument targeted at my complaint might
 not be good at all pedagogically...

 I would be pleased if you can give me a version of MAT or MEC to  
 which
 the argument does not apply. For example, the argument applies to  
 most
 transfinite variant of MEC. It does not apply when some magic is
 introduced in MAT, and MAT is hard to define in a way to exclude that
 magic. If you can help, I thank you in advance.

 My particular brand of magic appears to be a requirement of
 counterfactual/causal structure that reflects the
 counterfactual/causal structure of (abstract) computation.


Sometimes I think I should first explain what a computation is. I  
take it in the sense of theoretical computer science: a computation is  
always defined relative to a universal computation from outside, and to  
an infinity of universal computations from inside. This asks for a bit  
of computer science. But there is no really abstract computation;  
there are only relative computations (both with comp and with Everett QM).  
They are always concrete relative to the universal machine which  
executes them. The starting point is not important (for our fundamental  
concerns): you can take numbers with addition and multiplication, or  
lambda terms with abstraction and application.




 Stathis has
 pointed out some possible ways to show such ideas incoherent (which I
 am not completely skeptical of, despite my arguments).


I appreciate.



 Since this type
 of theory is the type that matches my personal intuition, MGA will
 feel empty to me until such alternatives are explicitly dealt a
 killing blow (after which the rest is obvious, since I intuitively
 feel the contradiction in versions of COMP+MAT that don't require
 counterfactuals).


Understanding UD(1...7) could perhaps help you to figure out what  
happens when we abandon the physical supervenience thesis and embrace  
what remains, if we keep comp: that is, the comp supervenience. It will  
explain how the physical laws have to emerge, and why we believe (quasi- 
correctly) in brains.






 Of course, as you say, you'd be in a hard spot if you were required to
 deal with every various intuition that anybody had... but, for what
 it's worth, that is mine.



I respect your intuition and appreciate the kind attitude. My feeling  
is that in front of very hard problems we have to be open to the fact  
that we could be surprised, and that the truth could be counterintuitive.  
The incompleteness phenomena, from Gödel and Löb, are surprising and  
counterintuitive, and in the empirical world the SWE, whatever  
interpretation we find more plausible, is always rather  
counterintuitive too.

I interpret the self-referentially correct scientist M by the logic  
of Gödel's provability predicate beweisbar_M. But the intuitive  
knower, the first person, is modelled (or defined) by the Theaetetus  
trick: the machine M knows p in case beweisbar_M('p') and p.  
Although extensionally equivalent, they are intensionally different:  
they prove the same arithmetical propositions, but they obey different  
logics. This is enough to show that the first person associated  
with the self-referentially correct scientist will already disbelieve  
the comp hypothesis, or find it very doubtful. We are near a paradox:  
the correct machine cannot know or believe that she is a machine. No doubt  
comp will appear counterintuitive to her. I know it is a sort of  
trap: the solution consists in admitting that comp needs a strong act  
of faith, and I try to put light on the consequences for a machine,  
when she makes the bet.
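[Editor's illustrative sketch, not part of the original email or of Bruno's formal AUDA machinery.] The intensional difference between provability B(p) and the Theaetetus knower K(p) = B(p) and p can be seen in a toy Kripke model: over a GL-style transitive, irreflexive frame, B(p) -> p can fail at a world, while K(p) -> p holds everywhere by construction, even though the two operators can agree extensionally.

```python
# Toy Kripke-frame sketch (hypothetical illustration): "B" is the box modality
# over a transitive, irreflexive (GL-style) frame; the Theaetetus knower is
# K(p) = B(p) and p.

worlds = {0, 1}
R = {(1, 0)}                      # accessibility: world 1 "sees" world 0
val = {0: True, 1: False}         # truth of the atom p at each world

def box_p(w):
    """B(p) at w: p holds at every world accessible from w."""
    return all(val[v] for (u, v) in R if u == w)

def know_p(w):
    """K(p) at w: the Theaetetus definition, B(p) and p."""
    return box_p(w) and val[w]

# B(p) -> p fails at world 1 (p is "provable" there without being true there),
# whereas K(p) -> p never fails, by construction.
fails_B = [w for w in worlds if box_p(w) and not val[w]]
fails_K = [w for w in worlds if know_p(w) and not val[w]]
print(fails_B, fails_K)   # prints: [1] []
```

The point mirrored here is only the intensional one: the two operators obey different modal logics even where they pick out the same propositions.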


The best references on the self-reference logics are:

Boolos, G. (1979). The Unprovability of Consistency. Cambridge  
University Press, London.
Boolos, G. (1993). The Logic of Provability. Cambridge University  
Press, Cambridge.
Smoryński, P. (1985). Self-Reference and Modal Logic. Springer Verlag,  
New York.
Smullyan, R. (1987). Forever Undecided. Knopf, New York.


The last one is a recreational book, not so simple, and rather quick in  
the heart-of-the-matter chapter. Smullyan wrote many lovely books,  
recreational and technical, on that theme.

The bible, imo, is Martin Davis's book The Undecidable, which contains  
some of the original papers by Gödel, Church, Kleene, and Post, and indeed  
the key starting points of the parts of theoretical computer  
science we are confronted with. It has been re-edited by Dover.

Bruno

Other references here:
http://iridia.ulb.ac.be/~marchal/lillethesis/these/node79.html#SECTION00130


 --Abram

 On Sat, Dec 6, 2008 at 9:32 AM, Bruno Marchal [EMAIL PROTECTED]  
 wrote:


 Le 05-déc.-08, à 22:11, Abram Demski a écrit :


 Bruno,

 Perhaps all I am saying is that you need to state more explicitly  
 

Re: MGA 3

2008-12-07 Thread Russell Standish

On Fri, Dec 05, 2008 at 10:06:30AM +0100, Bruno Marchal wrote:
 
 
 Perhaps, but the whole point is that remains to be justify. It is  
 *the* problem. If we assume comp, then we have to justify this. No  
 doubt little programs play a key role, but the bigger one too, unless  
 some destructive probability phenomenon occur. Now, interviewing the  
 universal machine gives indeed a shadow of explanation of why such  
 destructive phenomenon do occur indeed from the first person (plural)  
 points of view of self-observing machine.
 I mainly agree with what you want, but we have to explain it.
 
 Bruno
 

Destructive phenomena do occur. To see this, realise that an infinite
set of histories will correspond to a given logical statement. Two
inconsistent statements can be combined disjunctively (A or B),
and their conjunction is false. Such a disjunction corresponds to the
union of the two sets of histories consistent with each statement. The
intersection of these sets of histories is, of course, empty.

So the measure of the histories consistent with A or B is now just
given by the sum of the measures of the two individual
statements. Since the information is given by the negative logarithm of these
measures, we see that the information of A or B is less than that of
either A or B taken separately. Information has been destroyed by
taking the inconsistent statements together.

It is this triangle inequality nature of information that gives rise
to the vector space structure in quantum mechanics.
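[Editor's illustrative sketch with made-up measure values.] The arithmetic of the argument above can be checked numerically: if the two disjoint history sets have measures mu_A and mu_B, the measure of their union is the sum, and since information is the negative logarithm of the measure, the information of "A or B" is strictly less than that of either statement alone.

```python
# Toy numerical check of the information-destruction argument.
# The measures mu_A and mu_B are illustrative assumptions, not values
# from the original post.
from math import log2

mu_A, mu_B = 2**-10, 2**-12      # measures of the disjoint history sets
info = lambda mu: -log2(mu)      # information = negative log of measure

mu_AorB = mu_A + mu_B            # disjoint union: measures add
assert info(mu_AorB) < min(info(mu_A), info(mu_B))
print(round(info(mu_A), 2), round(info(mu_B), 2), round(info(mu_AorB), 2))
# prints: 10.0 12.0 9.68
```

Taking the inconsistent statements together has "destroyed" information, exactly as the negative-logarithm relation requires.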


-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australiahttp://www.hpcoders.com.au





Re: MGA 3

2008-12-07 Thread Russell Standish

On Sat, Dec 06, 2008 at 03:32:53PM +0100, Bruno Marchal wrote:
 
 I would be pleased if you can give me a version of MAT or MEC to which 
 the argument does not apply. For example, the argument applies to most 
 transfinite variant of MEC. It does not apply when some magic is 
 introduced in MAT, and MAT is hard to define in a way to exclude that 
 magic. If you can help, I thank you in advance.
 
 Bruno
 

Michael Lockwood distinguishes between materialism (consciousness
supervenes on the physical world) and physicalism (the physical world
suffices to explain everything). The difference between the two is
that in physicalism, consciousness (indeed any emergent phenomenon) is
a mere epiphenomenon, a computational convenience, but not necessary for
explanation, whereas in non-physicalist materialism there are emergent
phenomena that are not explainable in terms of the underlying physics,
even though supervenience holds. This has been argued in Philip
Anderson's famous paper More is Different. One very obvious distinction
between the two positions is that strong emergence is possible in
materialism, but strictly forbidden by physicalism. An example I give of
strong emergence in my book is the strong anthropic principle.

So - I'm convinced your argument works to show the contradiction
between COMP and physicalism, but not so the more general
materialism. I think you have confirmed this in some of your previous
responses to me in this thread.

Which is just as well. AFAICT, supervenience is the only thing
preventing the Occam catastrophe. We don't live in a magical world,
because such a world (assuming COMP) would have so many contradictory
statements that we'd disappear in a puff of destructive logic!
(reference to my previous posting about destructive phenomena).

-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australiahttp://www.hpcoders.com.au





Re: MGA 3

2008-12-06 Thread Bruno Marchal


Le 05-déc.-08, à 20:51, Abram Demski a écrit :


 Bruno,

 Are you asserting this based on published findings concerning
 provability logic? If so, I would be very interested in references. If
 not, then your results obviously seem publishable :).

I published this in French a long time ago, but then I discovered 
that it had been published before by Montague and Kaplan (see 
also Thomason). It is related to the fact that knowledge, like truth 
(cf Tarski), is not definable through an arithmetical predicate. In 
Conscience et Mécanisme I illustrate a similar fact by using 
(informally) the Löwenheim-Skolem theorems.
Then I think the provability logic puts an immense light on this, in a 
transparently clear (arithmetical) frame, and that is a big part of my 
thesis (the AUDA part).



 That is, if you
 can show that huge amounts of set theory beyond ZFC emerge from
 provability logic in some way...

I guess I have been unclear, because I am not saying that. I am saying 
the more obvious fact (once we are familiar with incompleteness, 
indefinability, uncomputability, etc.) that a machine can infer 
true but unprovable (by her) things about herself. It is just that a 
provability machine which furthermore has inductive inference abilities 
will generate more truths about itself than those which are provable by 
the machine.



 Anyway, I'd definitely be interested in hearing those ideas.

Those ideas constitute the AUDA part. It is an abstract translation of 
UDA into the language of the universal machine. It is needed to extract 
physics constructively from computer science. I only get the 
propositional physics (which is a billionth of real physics), yet I 
get both the communicable physical logic and the uncommunicable 
physical logic, that is, both the quanta and the qualia. In that sense 
it is already more than usual physics, which (methodologically or 
not) puts the qualia and their subject under the rug.

Bruno



 --Abram

 On Fri, Dec 5, 2008 at 4:20 AM, Bruno Marchal [EMAIL PROTECTED] 
 wrote:


 On 05 Dec 2008, at 03:56, Russell Standish wrote:


 On Wed, Dec 03, 2008 at 04:53:11PM +0100, Bruno Marchal wrote:

 I really don't know. I expect that the mathematical structure, as seen
 from inside, is so big that Platonia can contain it neither as an element
 nor as a subpart. (Ah, well, I am aware that this is counter-intuitive,
 but here mathematical logic can help to see the consistency, and the
 quasi-necessity with the formal version of comp).


 This point rather depends on what Platonia contains. If it contains
 all sets of cardinality 2^{\aleph_0}, then the inside view of the
 deployment will be contained in it.

 I am not sure. In my opinion, to have a Platonia capable of describing
 the first person views emerging from the UD's entire work, even the
 whole of Cantor's Paradise would be too little. Even big cardinals (far
 bigger than 2^(aleph_0)) would be like too-tight shoes. Actually
 I believe that the first person views raised through the deployment
 simply escape the whole of humanly conceivable mathematics. It is big. But
 it is also structured. It could even be structured as a person. I
 don't know.




 I do understand that your concept of Platonia (Arithmetic Realism I
 believe you call it) is a Kronecker-like "God made the integers, all
 the rest was made by man", and so what you say would be true of that.


 Yes, the 3-Platonia can be very small, once we assume comp. But the
 first person view inside could be so big that eventually every notion of 1-
 Platonia will turn out to be inconsistent. It is for sure unnameable (in
 the best case). I discussed this a long time ago with George Levy: the
 first person plenitude is big, very big, incredibly big. Nothing can
 express or give an idea of that bigness.

 At some point I will explain that the divine intellect of a Löbian
 machine as simple as Peano Arithmetic is really far bigger than the
 God of Peano Arithmetic. I know it is bizarre (and a bit too
 technical to be addressed right now, I guess).

 Have a good day,

 Bruno


 http://iridia.ulb.ac.be/~marchal/







 

http://iridia.ulb.ac.be/~marchal/





Re: MGA 3

2008-12-06 Thread Bruno Marchal


On 05 Dec 2008, at 22:11, Abram Demski wrote:


 Bruno,

 Perhaps all I am saying is that you need to state more explicitly the
 assumptions about the connection between 1st and 3rd person, in both
 MEC and MAT. Simply taking them to be the general ideas that you take
 them to be does not obviously justify the argument.


I don't see why nor how. The first person notions are defined in the 
three first steps of the UDA. Wait, I will come back to this in the 
discussion with Kim perhaps. In AUDA I define the first person by the 
knower, and I use the classical definition proposed by Theaetetus in 
the Theaetetus of Plato. Keep in mind that you arrived when I was 
explaining the real last step of an already long argument.
Of course you may be right, and I would really appreciate any 
improvements. But making things more precise could also be a red 
herring sometimes, or be very confusing pedagogically, like with the 
easy 1004 fallacy which can obviously crop up here.
When I defended the thesis in France, it was already a work resulting 
from 30 years of discussions with open-minded physicists, engineers, 
philosophers and mathematicians, and I have learned that what seems 
obvious to one of them is not for the others.
I don't think there is anything controversial in my work. I got 
academic problems in Brussels for not having found an original result 
(but then I think they did not read the work). Pedagogical difficulties 
stem from the intrinsic difficulty of the mind-body problem, and from 
the technical abyss between logicians and physicists, to cite only them. 
It is easier to collide two protons at the speed of light (minus 
epsilon) than to arrange an appointment between mathematical logicians 
and mathematical physicists (except perhaps nowadays on quantum 
computing issues, thankfully).



 Furthermore, stating the assumptions more clearly will make it more
 clear where the contradiction is coming from, and thus which versions
 of MEC and MAT the argument applies to.

I would be pleased if you can give me a version of MAT or MEC to which 
the argument does not apply. For example, the argument applies to most 
transfinite variants of MEC. It does not apply when some magic is 
introduced in MAT, and MAT is hard to define in a way that excludes that 
magic. If you can help, I thank you in advance.

Bruno



 --Abram

 On Fri, Dec 5, 2008 at 4:36 AM, Bruno Marchal [EMAIL PROTECTED] 
 wrote:


 On 04 Dec 2008, at 15:58, Abram Demski wrote:


 PS Abram. I think I will have to meditate a bit longer on your
 (difficult) post. You may have a point (hopefully only pedagogical 
 :)

 A little bit more commentary may be in order then... I think my point
 may be halfway between pedagogical and serious...

 What I am saying is that people will come to the argument with some
 vague idea of which computations (or which physical entities) they
 pick out as conscious. They will compare this to the various
 hypotheses that come along during the argument-- MAT, MEC, MAT + MEC,
 Lucky Alice is conscious, Lucky Alice is not conscious, et
 cetera... These notions are necessarily 3rd-person in nature. It 
 seems
 like there is a problem there. Your argument is designed to talk 
 about
 1st-person phenomena.

 The whole problem consists, assuming the hypotheses, in relating 1-views
 with 3-views.
 In UDA, the 1-views are approximated by 1-discourses (personal diary
 notes, memories in the brain, ...). But I do rely on the minimal
 intuition needed to give sense to the willingness to say yes to a
 digitalist surgeon, and the belief in a comp survival, or a belief in
 the unchanged feeling of my consciousness in such annihilation-
 (re)creation experiences.




 If a 1st-person-perspective is a sort of structure (computational
 and/or physical), what type of structure is it?

 The surprise will be: there are none. The 1-views of a machine will
 appear to be already not expressible by the machine. The first and
 third God have no name. Think about Tarski's theorem in the comp
 context. A sound machine cannot define the whole notion of truth
 about me.


 If we define it in
 terms of behavior only, then a recording is fine.

 We certainly avoid the trap of behaviorism. You can see this as a
 weakness, or as the full strong originality of comp, as I define it.
 We give some sense, albeit undefined, to the word consciousness
 apart from any behavior. But to reason we have to assume some relation
 between consciousness and possible discourses (by machines).


 If we define it in
 terms of inner workings, then a recording is probably not fine, but 
 we
 introduce magical dependence on things that shouldn't matter to
 us... ie, we should not care if we are interacting with a perfectly
 orchestrated recording, so long as to us the result is the same.

 It seems like this is independent of the differences between
 pure-comp / comp+mat.



 This is not yet quite clear for me. Perhaps, if you are patient
 enough, you will be able to 

Re: MGA 3

2008-12-06 Thread Brent Meeker

Stathis Papaioannou wrote:
 2008/12/6 Abram Demski [EMAIL PROTECTED]:

   
 The causal structure of a recording still looks far different from the
 causal structure of a person that happens to follow a recording and
 also happens to be wired to a machine that will kill them if they
 deviate. Or, even, correct them if they deviate. (Let's go with that
 so that I can't point out the simplistic difference a recording will
 not die if some external force causes it to deviate.)

 1. Realistic malfunctions of a machine playing a recording are far
 different from realistic malfunctions of the person-machine-combo. The
 person inherits the possible malfunctions of the machine, *plus*
 malfunctions in which the machine fails to modify the person's
 behavior to match the recording. (A malfunction can be defined in
 terms of cause-effect counterfactuals in two ways: first, if we think
 that cause/effect is somewhat probabilistic, we will think that any
 machine will occasionally malfunction; second, varying external
 factors can cause malfunctions.)

 2. Even during normal functioning, the cause/effect structure is very
 different; the person-combo will have a lot of extra structure, since
 it has a functioning brain and a corrective mechanism, neither needed
 for the recording.

 Also-- the level of the correction matters quite a bit I think. If
 only muscle actions are being corrected, the person seems obviously
 conscious-- lots of computations ( corresponding causal structure) is
 still going on.. If each neuron is corrected, this is not so
 intuitively obvious. (I suppose my intuition says that the person
 would lose consciousness when the first correction occurred, though
 that is silly upon reflection.)
 

 Yes, there are these differences, but why should the differences be
 relevant to the question of whether consciousness occurs or not? And
 what about the case where the extra machinery that would allow the
 right sort of causal structure but isn't actually used in a particular
 situation is temporarily disengaged?

 It seems to me that everyone contributing to these threads has an
 intuition about consciousness, then works backwards from this:
 obviously, recordings aren't conscious; now what are the qualities
 that recordings have which distinguish them from entities that are
 conscious?. There's nothing intrinsically wrong with this method, but
 it is possible to reach an impasse when the different parties have
 different intuitions.
   

Exactly so.  Consciousness is probably not the unified thing that we 
intuitively assume anyway.  There was an article in the newspaper today 
reporting that Henry Molaison died. He had lived some 50 years with profound 
amnesia after an operation on his brain to cure severe seizures.  He apparently 
could not form new memories.  But that only applied to verbal, i.e. 
conscious, memories.  He could learn new tasks in the sense that he 
improved with practice, even though if asked he would say he'd never done 
the task before. 

Brent Meeker




Re: MGA 3

2008-12-06 Thread Abram Demski

Stathis,

Yes, you are right. My main point is to show that such a point of view
is possible, not to actually argue for it... but I am largely just
asserting my intuitions nonetheless.

--Abram

On Sat, Dec 6, 2008 at 4:05 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote:

 2008/12/6 Abram Demski [EMAIL PROTECTED]:

 The causal structure of a recording still looks far different from the
 causal structure of a person that happens to follow a recording and
 also happens to be wired to a machine that will kill them if they
 deviate. Or, even, correct them if they deviate. (Let's go with that
 so that I can't point out the simplistic difference a recording will
 not die if some external force causes it to deviate.)

 1. Realistic malfunctions of a machine playing a recording are far
 different from realistic malfunctions of the person-machine-combo. The
 person inherits the possible malfunctions of the machine, *plus*
 malfunctions in which the machine fails to modify the person's
 behavior to match the recording. (A malfunction can be defined in
 terms of cause-effect counterfactuals in two ways: first, if we think
 that cause/effect is somewhat probabilistic, we will think that any
 machine will occasionally malfunction; second, varying external
 factors can cause malfunctions.)

 2. Even during normal functioning, the cause/effect structure is very
 different; the person-combo will have a lot of extra structure, since
 it has a functioning brain and a corrective mechanism, neither needed
 for the recording.

 Also-- the level of the correction matters quite a bit I think. If
 only muscle actions are being corrected, the person seems obviously
 conscious-- lots of computations ( corresponding causal structure) is
 still going on.. If each neuron is corrected, this is not so
 intuitively obvious. (I suppose my intuition says that the person
 would lose consciousness when the first correction occurred, though
 that is silly upon reflection.)

 Yes, there are these differences, but why should the differences be
 relevant to the question of whether consciousness occurs or not? And
 what about the case where the extra machinery that would allow the
 right sort of causal structure but isn't actually used in a particular
 situation is temporarily disengaged?

 It seems to me that everyone contributing to these threads has an
 intuition about consciousness, then works backwards from this:
 obviously, recordings aren't conscious; now what are the qualities
 that recordings have which distinguish them from entities that are
 conscious?. There's nothing intrinsically wrong with this method, but
 it is possible to reach an impasse when the different parties have
 different intuitions.



 --
 Stathis Papaioannou

 





Re: MGA 3

2008-12-06 Thread Abram Demski

Bruno,

Yes, I think there is a big difference between making an argument more
detailed and making it more understandable. They can go together or be
opposed. So a version of the argument targeted at my complaint might
not be good at all pedagogically...

 I would be pleased if you can give me a version of MAT or MEC to which
 the argument does not apply. For example, the argument applies to most
 transfinite variants of MEC. It does not apply when some magic is
 introduced in MAT, and MAT is hard to define in a way that excludes that
 magic. If you can help, I thank you in advance.

My particular brand of magic appears to be a requirement of
counterfactual/causal structure that reflects the
counterfactual/causal structure of (abstract) computation. Stathis has
pointed out some possible ways to show such ideas incoherent (which I
am not completely skeptical of, despite my arguments). Since this type
of theory is the type that matches my personal intuition, MGA will
feel empty to me until such alternatives are explicitly dealt a
killing blow (after which the rest is obvious, since I intuitively
feel the contradiction in versions of COMP+MAT that don't require
counterfactuals).

Of course, as you say, you'd be in a hard spot if you were required to
deal with every various intuition that anybody had... but, for what
it's worth, that is mine.

--Abram

On Sat, Dec 6, 2008 at 9:32 AM, Bruno Marchal [EMAIL PROTECTED] wrote:


 On 05 Dec 2008, at 22:11, Abram Demski wrote:


 Bruno,

 Perhaps all I am saying is that you need to state more explicitly the
 assumptions about the connection between 1st and 3rd person, in both
 MEC and MAT. Simply taking them to be the general ideas that you take
 them to be does not obviously justify the argument.


 I don't see why nor how. The first person notions are defined in the
 three first steps of the UDA. Wait, I will come back to this in the
 discussion with Kim perhaps. In AUDA I define the first person by the
 knower, and I use the classical definition proposed by Theaetetus in
 the Theaetetus of Plato. Keep in mind that you arrived when I was
 explaining the real last step of an already long argument.
 Of course you may be right, and I would really appreciate any
 improvements. But making things more precise could also be a red
 herring sometimes, or be very confusing pedagogically, like with the
 easy 1004 fallacy which can obviously crop up here.
 When I defended the thesis in France, it was already a work resulting
 from 30 years of discussions with open-minded physicists, engineers,
 philosophers and mathematicians, and I have learned that what seems
 obvious to one of them is not for the others.
 I don't think there is anything controversial in my work. I got
 academic problems in Brussels for not having found an original result
 (but then I think they did not read the work). Pedagogical difficulties
 stem from the intrinsic difficulty of the mind-body problem, and from
 the technical abyss between logicians and physicists, to cite only them.
 It is easier to collide two protons at the speed of light (minus
 epsilon) than to arrange an appointment between mathematical logicians
 and mathematical physicists (except perhaps nowadays on quantum
 computing issues, thankfully).



 Furthermore, stating the assumptions more clearly will make it more
 clear where the contradiction is coming from, and thus which versions
 of MEC and MAT the argument applies to.

 I would be pleased if you can give me a version of MAT or MEC to which
 the argument does not apply. For example, the argument applies to most
 transfinite variants of MEC. It does not apply when some magic is
 introduced in MAT, and MAT is hard to define in a way that excludes that
 magic. If you can help, I thank you in advance.

 Bruno



 --Abram

 On Fri, Dec 5, 2008 at 4:36 AM, Bruno Marchal [EMAIL PROTECTED]
 wrote:


 On 04 Dec 2008, at 15:58, Abram Demski wrote:


 PS Abram. I think I will have to meditate a bit longer on your
 (difficult) post. You may have a point (hopefully only pedagogical
 :)

 A little bit more commentary may be in order then... I think my point
 may be halfway between pedagogical and serious...

 What I am saying is that people will come to the argument with some
 vague idea of which computations (or which physical entities) they
 pick out as conscious. They will compare this to the various
 hypotheses that come along during the argument-- MAT, MEC, MAT + MEC,
 Lucky Alice is conscious, Lucky Alice is not conscious, et
 cetera... These notions are necessarily 3rd-person in nature. It
 seems
 like there is a problem there. Your argument is designed to talk
 about
 1st-person phenomena.

 The whole problem consists, assuming hypotheses, in relating 1-views
 with 3-views.
 In UDA, the 1-views are approximated by 1-discourses (personal diary
 notes, memories in the brain, ...). But I do rely on the minimal
 intuition needed to give sense to the willingness of saying yes to a
 digitalist 

Re: MGA 3

2008-12-06 Thread Abram Demski

Bruno,

Thanks, I will look up those names. If you have the time to reference
specific papers, I would be grateful.

--Abram

On Sat, Dec 6, 2008 at 9:07 AM, Bruno Marchal [EMAIL PROTECTED] wrote:


 On 05 Dec 2008, at 20:51, Abram Demski wrote:


 Bruno,

 Are you asserting this based on published findings concerning
 provability logic? If so, I would be very interested in references. If
 not, then your results obviously seem publishable :).

 I published this in French a long time ago, but then discovered that
 it had been published before by Montague and Kaplan (see also
 Thomason). It is related to the fact that knowledge, like truth
 (cf. Tarski), is not definable through an arithmetical predicate. In
 Conscience et Mécanisme I illustrate a similar fact by using
 (informally) the Löwenheim-Skolem theorems.
 Then I think provability logic puts an immense light on this, in a
 transparently clear (arithmetical) frame, and that is a big part of my
 thesis (the AUDA part).



 That is, if you
 can show that huge amounts of set theory beyond ZFC emerge from
 provability logic in some way...

 I guess I have been unclear, because I am not saying that. I am saying
 the more obvious fact (once we are familiar with incompleteness,
 indefinability, uncomputability, etc.) that a machine can infer
 true but unprovable (by her) things about herself. It is just that a
 provability machine which furthermore has inductive inference abilities
 will generate more truths about itself than those which are provable by
 the machine.



 Anyway, I'd definitely be interested in hearing those ideas.

 Those ideas constitute the AUDA part. It is an abstract translation of
 UDA into the language of the universal machine. It is needed to extract
 physics constructively from computer science. I only get the
 propositional physics (which is a billionth of real physics), yet I
 got both the communicable physical logic and the uncommunicable
 physical logic, that is, both the quanta and the qualia. In that sense
 it is already more than usual physics, which (methodologically or
 not) puts the qualia and their subject under the rug.

 Bruno



 --Abram

 On Fri, Dec 5, 2008 at 4:20 AM, Bruno Marchal [EMAIL PROTECTED]
 wrote:


 On 05 Dec 2008, at 03:56, Russell Standish wrote:


 On Wed, Dec 03, 2008 at 04:53:11PM +0100, Bruno Marchal wrote:

 I really don't know. I expect that the mathematical structure, as seen
 from inside, is so big that Platonia can contain it neither as an element
 nor as a subpart. (Ah, well, I am aware that this is counter-intuitive,
 but here mathematical logic can help to see the consistency, and the
 quasi-necessity with the formal version of comp).


 This point rather depends on what Platonia contains. If it contains
 all sets of cardinality 2^{\aleph_0}, then the inside view of the
 deployment will be contained in it.

 I am not sure. In my opinion, to have a Platonia capable of describing
 the first person views emerging from the UD's entire work, even the
 whole of Cantor's Paradise would be too little. Even big cardinals (far
 bigger than 2^(aleph_0)) would be like too-tight shoes. Actually
 I believe that the first person views raised through the deployment
 simply escape the whole of humanly conceivable mathematics. It is big. But
 it is also structured. It could even be structured as a person. I
 don't know.




 I do understand that your concept of Platonia (Arithmetic Realism I
 believe you call it) is a Kronecker-like "God made the integers, all
 the rest was made by man", and so what you say would be true of that.


 Yes, the 3-Platonia can be very small, once we assume comp. But the
 first person view inside could be so big that eventually every notion of 1-
 Platonia will turn out to be inconsistent. It is for sure unnameable (in
 the best case). I discussed this a long time ago with George Levy: the
 first person plenitude is big, very big, incredibly big. Nothing can
 express or give an idea of that bigness.

 At some point I will explain that the divine intellect of a Löbian
 machine as simple as Peano Arithmetic is really far bigger than the
 God of Peano Arithmetic. I know it is bizarre (and a bit too
 technical to be addressed right now, I guess).

 Have a good day,

 Bruno


 http://iridia.ulb.ac.be/~marchal/







 

 http://iridia.ulb.ac.be/~marchal/


 





Re: MGA 3

2008-12-06 Thread Brent Meeker

Abram Demski wrote:
 Bruno,
 
 Yes, I think there is a big difference between making an argument more
 detailed and making it more understandable. They can go together or be
 opposed. So a version of the argument targeted at my complaint might
 not be good at all pedagogically...
 
 I would be pleased if you can give me a version of MAT or MEC to which
 the argument does not apply. For example, the argument applies to most
 transfinite variants of MEC. It does not apply when some magic is
 introduced in MAT, and MAT is hard to define in a way that excludes that
 magic. If you can help, I thank you in advance.
 
 My particular brand of magic appears to be a requirement of
 counterfactual/causal structure that reflects the
 counterfactual/causal structure of (abstract) computation. Stathis has
 pointed out some possible ways to show such ideas incoherent (which I
 am not completely skeptical of, despite my arguments). Since this type
 of theory is the type that matches my personal intuition, MGA will
 feel empty to me until such alternatives are explicitly dealt a
 killing blow (after which the rest is obvious, since I intuitively
 feel the contradiction in versions of COMP+MAT that don't require
 counterfactuals).

My intuition is similar, except I think it is causality that is necessary, 
rather than counterfactuals.  I find persuasive the argument that the brain's 
potential for dealing with a counterfactual that never occurs cannot have any 
bearing on consciousness.  After all, there are infinitely many counterfactuals 
that never occur (that's why they're counterfactual), and my brain is no doubt 
unprepared to deal with most of them.

Causality, ISTM, is a physical relation, and it is not captured by mathematical 
or logical relations.  That's probably why it has almost disappeared from 
physics theories, which are highly mathematical.  Bruno may think this is 
invoking magic, like Peter's insistence that existence is contingent.  So 
be it.

Brent

 
 Of course, as you say, you'd be in a hard spot if you were required to
 deal with every various intuition that anybody had... but, for what
 it's worth, that is mine.
 
 --Abram
 
 On Sat, Dec 6, 2008 at 9:32 AM, Bruno Marchal [EMAIL PROTECTED] wrote:

 On 05 Dec 2008, at 22:11, Abram Demski wrote:

 Bruno,

 Perhaps all I am saying is that you need to state more explicitly the
 assumptions about the connection between 1st and 3rd person, in both
 MEC and MAT. Simply taking them to be the general ideas that you take
 them to be does not obviously justify the argument.

 I don't see why nor how. The first person notions are defined in the
 three first steps of the UDA. Wait, I will come back to this in the
 discussion with Kim perhaps. In AUDA I define the first person by the
 knower, and I use the classical definition proposed by Theaetetus in
 the Theaetetus of Plato. Keep in mind that you arrived when I was
 explaining the real last step of an already long argument.
 Of course you may be right, and I would really appreciate any
 improvements. But making things more precise could also be a red
 herring sometimes, or be very confusing pedagogically, like with the
 easy 1004 fallacy which can obviously crop up here.
 When I defended the thesis in France, it was already a work resulting
 from 30 years of discussions with open-minded physicists, engineers,
 philosophers and mathematicians, and I have learned that what seems
 obvious to one of them is not for the others.
 I don't think there is anything controversial in my work. I got
 academic problems in Brussels for not having found an original result
 (but then I think they did not read the work). Pedagogical difficulties
 stem from the intrinsic difficulty of the mind-body problem, and from
 the technical abyss between logicians and physicists, to cite only them.
 It is easier to collide two protons at the speed of light (minus
 epsilon) than to arrange an appointment between mathematical logicians
 and mathematical physicists (except perhaps nowadays on quantum
 computing issues, thankfully).


 Furthermore, stating the assumptions more clearly will make it more
 clear where the contradiction is coming from, and thus which versions
 of MEC and MAT the argument applies to.
 I would be pleased if you can give me a version of MAT or MEC to which
 the argument does not apply. For example, the argument applies to most
 transfinite variants of MEC. It does not apply when some magic is
 introduced in MAT, and MAT is hard to define in a way that excludes that
 magic. If you can help, I thank you in advance.

 Bruno


 --Abram

 On Fri, Dec 5, 2008 at 4:36 AM, Bruno Marchal [EMAIL PROTECTED]
 wrote:

 On 04 Dec 2008, at 15:58, Abram Demski wrote:

 PS Abram. I think I will have to meditate a bit longer on your
 (difficult) post. You may have a point (hopefully only pedagogical
 :)
 A little bit more commentary may be in order then... I think my point
 may be halfway between pedagogical and serious...

 What I am saying is 

Re: MGA 3

2008-12-05 Thread Stathis Papaioannou

2008/12/1 Abram Demski [EMAIL PROTECTED]:

 Yes, consciousness supervenes on computation, but that computation
 needs to actually take place (meaning, physically). Otherwise, how
 could consciousness supervene on it? Now, in order for a computation
 to be physically instantiated, the physical instantiation needs to
 satisfy a few properties. One of these properties is clearly some sort
 of isomorphism between the computation and the physical instantiation:
 the actual steps of the computation are represented in physical form.
 A less obvious requirement is that the physical computation needs to
 have the proper counterfactuals: if some external force were to modify
 some step in the computation, the computation must progress according
 to the new computational state (as translated by the isomorphism).

So if you destroy the counterfactual behaviour by removing components
that are not utilised, you end up with a recording-equivalent, which
isn't conscious. But what if you destroy the counterfactual behaviour
by another means? For example, if I wear a device that will instantly
kill me if I deviate from a particular behaviour, randomly determined
by the device from moment to moment, but survive, will my
consciousness be diminished as a result? You might say, no, because if
the device were not there I would have been able to handle the
counterfactuals. But then it might also be argued for the first
example that if the unused components had not been removed, the
recording-equivalent would also have been able to handle the
counterfactuals; and you can make this more concrete by having the
extra machinery waiting to be dropped into place in a counterfactual
universe.
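
The contrast at issue, a computation that handles counterfactual inputs versus a recording that merely replays one history, can be sketched in a few lines of code (a toy illustration only; the function and the particular dynamics are invented for the example):

```python
def compute(state):
    """A genuine computation: maps *any* input state to a successor."""
    return state * 3 + 1 if state % 2 else state // 2

# Make a "recording" of one particular run, starting from state 6.
trace = []
s = 6
for _ in range(5):
    trace.append(s)
    s = compute(s)

def replay(step):
    """A recording: returns the logged state; counterfactuals are ignored."""
    return trace[step]

# On the factual history, computation and recording are indistinguishable.
assert [replay(i) for i in range(5)] == trace

# Only the computation supports counterfactuals: a state that never
# occurred in the trace still gets a well-defined successor.
assert compute(7) == 22  # 7 is odd, so 7*3 + 1
```

On this picture, the device imagined above (which kills or corrects any deviation) leaves compute() physically present but never exercises its counterfactual branches, which is exactly where the intuitions diverge.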


-- 
Stathis Papaioannou




Re: MGA 3

2008-12-05 Thread Bruno Marchal

On 05 Dec 2008, at 03:50, Jason Resch wrote:



 On Thu, Dec 4, 2008 at 5:19 AM, Bruno Marchal [EMAIL PROTECTED]  
 wrote:


 Hmmm... It means you still have a little problem with step seven. I
 wish we shared a computable environment, but we cannot decide this at
 will.  I agree we have empirical evidence that there is such a (partially)
 computable environment, and I am willing to say I trust nature on
 this. Yet, the fact is that to predict my next first person experience
 I have to take into account ALL the computations which exist in the
 arithmetical Platonia or in the universal dovetailing.


 Bruno, I am with you that none of us can decide which of the  
 infinite number of histories contain/compute us; when I talk about a  
 universe I refer to just a single such history.


This is ambiguous. Even the unique history-computation of, let us
say, the Everett Universal Wave, contains many (perhaps an infinity)
of cosmic histories. There are as many Jasons as there are possible
positions of your electrons, even for equal energy levels, and thus
the same molecular behavior, so that you cannot discern them except in
terms of relative probabilities of (self) measurement outcomes. But
then you have many others that we cannot eliminate, because even if
you are right in assigning a bigger importance to little programs and
their computations, the big programs occurring in the deployment have
a role too, mainly due to the impossibility of being aware of delays
introduced by the UD. Without the comp equivalent of random phases
annihilating the aberrantly long paths, we have to take them into
account, a priori.



 Perhaps you use history to refer only to the computational history  
 that implements the observer's mind where I use it to mean an object  
 which computes the mind of one or more observers in a consistent and  
 fully definable way.


It seems to me that we have to take them all into account, or justify
why we can throw away the purely aberrant histories. If not, it looks
like putting infinities and white rabbits under the rug by decision.
But then we are cheating with respect to taking the digital
hypothesis completely seriously. We could miss a refutation of comp,
or important consequences. It seems to me.




 What I am not clear on with regards to your position is whether or  
 not you believe most observers (if we could locate them in platonia  
 from a 3rd person view) exist in environments larger than their  
 brains, and likely containing numerous other observers or if you  
 believe the mind is the only thing reified by computation and it is  
 meaningless to discuss the environments they perceive because they  
 don't exist.


Empirically I am rather sure environments play a key role, yet this
remains to be proved. Strictly speaking I would say it is an open comp
problem.




 The way I see it, using the example of this physical universe only,  
 it is far more probable for a mind to come about from the self- 
 ordering properties of a universe such as this than for there to be  
 a computation where the mind is an initial condition.  The program  
 that implements the physics of this universe is likely to be far  
 smaller than the program that implements our minds, or so my  
 intuition leads me to believe.


Perhaps, but the whole point is that this remains to be justified. It
is *the* problem. If we assume comp, then we have to justify this. No
doubt little programs play a key role, but the bigger ones do too,
unless some destructive probability phenomenon occurs. Now,
interviewing the universal machine does give a shadow of an
explanation of why such destructive phenomena occur from the first
person (plural) points of view of self-observing machines.

Bruno



http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-12-05 Thread Bruno Marchal


On 05 Dec 2008, at 03:56, Russell Standish wrote:


 On Wed, Dec 03, 2008 at 04:53:11PM +0100, Bruno Marchal wrote:

 I really don't know. I expect that the mathematical structure, as  
 seen
 from inside, is so big that Platonia cannot have it neither as  
 element
 nor as subpart. (Ah, well, I am aware that this is counter-intuitive,
 but here mathematical logic can help to see the consistency, and the
 quasi necessity with formal version of comp).


 This point rather depends on what Platonia contains. If it contains
 all sets of cardinality 2^{\aleph_0}, then the inside view of the
 deployment will be contained in it.

I am not sure. In my opinion, to have a platonia capable of describing
the first person views emerging from the UD's entire work, even the
whole of Cantor's Paradise will be too little. Even big cardinals (far
bigger than 2^(aleph_0)) will be like too-tight shoes. Actually
I believe that the first person views raised through the deployment
just escape the whole of humanly conceivable mathematics. It is big.
But it is also structured. It could even be structured as a person. I
don't know.




 I do understand that your concept of Platonia (Arithmetic Realism I
 believe you call it) is a Kronecker-like God made the integers, all
 the rest was made by man, and so what you say would be true of that.


Yes, the 3-Platonia can be very little, once we assume comp. But the
first view inside could be so big that eventually all notions of
1-Platonia will happen to be inconsistent. It is for sure unnameable
(in the best case). I discussed this a long time ago with George Levy:
the first person plenitude is big, very big, incredibly big. Nothing
can express or give an idea of that bigness.

At some point I will explain that the divine intellect of a lobian  
machine as simple as Peano-Arithmetic is really far bigger than the  
God of Peano-Arithmetic. I know it is bizarre (and a bit too  
technical for being addressed right now I guess).

Have a good day,

Bruno


http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-12-05 Thread Bruno Marchal


On 04 Dec 2008, at 15:58, Abram Demski wrote:


 PS Abram. I think I will have to meditate a bit longer on your
 (difficult) post. You may have a point (hopefully only pedagogical :)

 A little bit more commentary may be in order then... I think my point
 may be halfway between pedagogical and serious...

 What I am saying is that people will come to the argument with some
 vague idea of which computations (or which physical entities) they
 pick out as conscious. They will compare this to the various
 hypotheses that come along during the argument-- MAT, MEC, MAT + MEC,
 Lucky Alice is conscious, Lucky Alice is not conscious, et
 cetera... These notions are necessarily 3rd-person in nature. It seems
 like there is a problem there. Your argument is designed to talk about
 1st-person phenomena.

The whole problem consists, assuming the hypotheses, in relating
1-views with 3-views.
In UDA, the 1-views are approximated by 1-discourses (personal diary
notes, memories in the brain, ...). But I do rely on the minimal
intuition needed to give sense to the willingness to say yes to a
digitalist surgeon, the belief in a comp survival, and the belief in
the unchanged feeling of my consciousness in such
annihilation-(re)creation experiences.




 If a 1st-person-perspective is a sort of structure (computational
 and/or physical), what type of structure is it?

The surprise will be: there is none. The 1-views of a machine will
appear to be not expressible by the machine itself. The first and
third God have no name. Think about Tarski's theorem in the comp
context. A sound machine cannot define the whole notion of truth
about me.


 If we define it in
 terms of behavior only, then a recording is fine.

We certainly avoid the trap of behaviorism. You can see this as a  
weakness, or as the full strong originality of comp, as I define it.  
We give some sense, albeit undefined, to the word consciousness  
apart from any behavior. But to reason we have to assume some relation  
between consciousness and possible discourses (by machines).


 If we define it in
 terms of inner workings, then a recording is probably not fine, but we
 introduce magical dependence on things that shouldn't matter to
 us... ie, we should not care if we are interacting with a perfectly
 orchestrated recording, so long as to us the result is the same.

 It seems like this is independent of the differences between
 pure-comp / comp+mat.



This is not yet quite clear for me. Perhaps, if you are patient
enough, you will be able to clarify this along the UDA reasoning,
which I will do slowly with Kim. The key point will be the
understanding of the ultimate conclusion: exactly as Everett can be
said to justify correctly the phenomenal collapse of the wave, if comp
is assumed we have to justify in a similar way the wave itself.
Assuming comp, we put ourselves in a position where we have to explain
why numbers develop stable and coherent beliefs in both mind and
matter. We can presuppose neither matter nor mind, eventually, except
our own consciousness, although even consciousness will eventually be
reduced to our belief in numbers.

Bruno


http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-12-05 Thread Abram Demski

Stathis,

I think I can get around your objection by pointing out that the
structure of counterfactuals is quite different for a recording vs. a
full human who is wired to be killed if they deviate from a recording.
Someone could fairly easily disarm the killing device, whereas it
would be quite difficult to reconstruct the person from the recording
(in fact there is not enough information to do so).

A related way out would be to point out that all the computational
machinery is present in one case (merely disabled), whereas it is
totally absent in the other case.

--Abram

On Fri, Dec 5, 2008 at 3:05 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote:

 2008/12/1 Abram Demski [EMAIL PROTECTED]:

 Yes, consciousness supervenes on computation, but that computation
 needs to actually take place (meaning, physically). Otherwise, how
 could consciousness supervene on it? Now, in order for a computation
 to be physically instantiated, the physical instantiation needs to
 satisfy a few properties. One of these properties is clearly some sort
 of isomorphism between the computation and the physical instantiation:
 the actual steps of the computation are represented in physical form.
 A less obvious requirement is that the physical computation needs to
 have the proper counterfactuals: if some external force were to modify
 some step in the computation, the computation must progress according
 to the new computational state (as translated by the isomorphism).

 So if you destroy the counterfactual behaviour by removing components
 that are not utilised, you end up with a recording-equivalent, which
 isn't conscious. But what if you destroy the counterfactual behaviour
 by another means? For example, if I wear a device that will instantly
 kill me if I deviate from a particular behaviour, randomly determined
 by the device from moment to moment, but survive, will my
 consciousness be diminished as a result? You might say, no, because if
 the device were not there I would have been able to handle the
 counterfactuals. But then it might also be argued for the first
 example that if the unused components had not been removed, the
 recording-equivalent would also have been able to handle the
 counterfactuals; and you can make this more concrete by having the
 extra machinery waiting to be dropped into place in a counterfactual
 universe.


 --
 Stathis Papaioannou

 





Re: MGA 3

2008-12-05 Thread Abram Demski

Bruno,

Are you asserting this based on published findings concerning
provability logic? If so, I would be very interested in references. If
not, then your results obviously seem publishable :). That is, if you
can show that huge amounts of set theory beyond ZFC emerge from
provability logic in some way...

Anyway, I'd definitely be interested in hearing those ideas.

--Abram

On Fri, Dec 5, 2008 at 4:20 AM, Bruno Marchal [EMAIL PROTECTED] wrote:


 On 05 Dec 2008, at 03:56, Russell Standish wrote:


 On Wed, Dec 03, 2008 at 04:53:11PM +0100, Bruno Marchal wrote:

 I really don't know. I expect that the mathematical structure, as
 seen
 from inside, is so big that Platonia cannot have it neither as
 element
 nor as subpart. (Ah, well, I am aware that this is counter-intuitive,
 but here mathematical logic can help to see the consistency, and the
 quasi necessity with formal version of comp).


 This point rather depends on what Platonia contains. If it contains
 all sets of cardinality 2^{\aleph_0}, then the inside view of the
 deployment will be contained in it.

 I am not sure. In my opinion, to have a platonia capable of describing
 the first person views emerging from the UD entire work, even the
 whole of Cantor Paradise will be too little. Even big cardinals (far
 bigger than 2^(aleph_0)) will be like too constrained shoes. Actually
 I believe that the first person views raised through the deployment
 just escape the whole of human conceivable mathematics. It is big. But
 it is also structured. It could even be structured as a person. I
 don't know.




 I do understand that your concept of Platonia (Arithmetic Realism I
 believe you call it) is a Kronecker-like God made the integers, all
 the rest was made by man, and so what you say would be true of that.


 Yes, the 3-Platonia can be very little, once we assume comp. But the
 first view inside could be so big that eventually all notions of
 1-Platonia will happen to be inconsistent. It is for sure unnameable
 (in the best case). I discussed this a long time ago with George
 Levy: the first person plenitude is big, very big, incredibly big.
 Nothing can express or give an idea of that bigness.

 At some point I will explain that the divine intellect of a lobian
 machine as simple as Peano-Arithmetic is really far bigger than the
 God of Peano-Arithmetic. I know it is bizarre (and a bit too
 technical for being addressed right now I guess).

 Have a good day,

 Bruno


 http://iridia.ulb.ac.be/~marchal/




 





Re: MGA 3

2008-12-05 Thread Stathis Papaioannou

2008/12/6 Abram Demski [EMAIL PROTECTED]:

 Stathis,

 I think I can get around your objection by pointing out that the
 structure of counterfactuals is quite different for a recording vs. a
 full human who is wired to be killed if they deviate from a recording.
 Someone could fairly easily disarm the killing device, whereas it
 would be quite difficult to reconstruct the person from the recording
 (in fact there is not enough information to do so).

This seems to be getting away from the simple requirement that the
computer be able to handle counterfactuals. What if the device were
not easy to disarm, but almost impossible to disarm? What if it had
tentacles in every neurone, ready to destroy it if it fired at the
wrong time?

 A related way out would be to point out that all the computational
 machinery is present in one case (merely disabled), whereas it is
 totally absent in the other case.

So you agree that in the case where the extra machinery is waiting to
be dropped into place, consciousness results?


-- 
Stathis Papaioannou




Re: MGA 3

2008-12-05 Thread Abram Demski

Hi Stathis,

 This seems to be getting away from the simple requirement that the
 computer be able to handle counterfactuals. What if the device were
 not easy to disarm, but almost impossible to disarm? What if it had
 tentacles in every neurone, ready to destroy it if it fired at the
 wrong time?

I do think you have a point there. I began by equating counterfactual
structure with cause/effect structure, but now am drifting away from
that... So, can I make the point purely talking about causality?

I still think the answer may be yes...

The causal structure of a recording still looks far different from the
causal structure of a person that happens to follow a recording and
also happens to be wired to a machine that will kill them if they
deviate. Or, even, correct them if they deviate. (Let's go with that,
so that I can't point out the simplistic difference that a recording
will not die if some external force causes it to deviate.)

1. Realistic malfunctions of a machine playing a recording are far
different from realistic malfunctions of the person-machine-combo. The
person inherits the possible malfunctions of the machine, *plus*
malfunctions in which the machine fails to modify the person's
behavior to match the recording. (A malfunction can be defined in
terms of cause-effect counterfactuals in two ways: first, if we think
that cause/effect is somewhat probabilistic, we will think that any
machine will occasionally malfunction; second, varying external
factors can cause malfunctions.)

2. Even during normal functioning, the cause/effect structure is very
different; the person-combo will have a lot of extra structure, since
it has a functioning brain and a corrective mechanism, neither needed
for the recording.

Also-- the level of the correction matters quite a bit, I think. If
only muscle actions are being corrected, the person seems obviously
conscious-- lots of computation (and corresponding causal structure)
is still going on. If each neuron is corrected, this is not so
intuitively obvious. (I suppose my intuition says that the person
would lose consciousness when the first correction occurred, though
that is silly upon reflection.)

How does that sound?

--Abram

On Fri, Dec 5, 2008 at 7:58 PM, Stathis Papaioannou [EMAIL PROTECTED] wrote:

 2008/12/6 Abram Demski [EMAIL PROTECTED]:

 Stathis,

 I think I can get around your objection by pointing out that the
 structure of counterfactuals is quite different for a recording vs. a
 full human who is wired to be killed if they deviate from a recording.
 Someone could fairly easily disarm the killing device, whereas it
 would be quite difficult to reconstruct the person from the recording
 (in fact there is not enough information to do so).

 This seems to be getting away from the simple requirement that the
 computer be able to handle counterfactuals. What if the device were
 not easy to disarm, but almost impossible to disarm? What if it had
 tentacles in every neurone, ready to destroy it if it fired at the
 wrong time?

 A related way out would be to point out that all the computational
 machinery is present in one case (merely disabled), whereas it is
 totally absent in the other case.

 So you agree that in the case where the extra machinery is waiting to
 be dropped into place, consciousness results?


 --
 Stathis Papaioannou

 





Re: MGA 3

2008-12-05 Thread Abram Demski

Bruno,

Could you possibly link to the conversation with George Levy you refer
to? I did not find it looking on my own.

--Abram

On Fri, Dec 5, 2008 at 4:20 AM, Bruno Marchal [EMAIL PROTECTED] wrote:


 On 05 Dec 2008, at 03:56, Russell Standish wrote:


 On Wed, Dec 03, 2008 at 04:53:11PM +0100, Bruno Marchal wrote:

 I really don't know. I expect that the mathematical structure, as
 seen
 from inside, is so big that Platonia cannot have it neither as
 element
 nor as subpart. (Ah, well, I am aware that this is counter-intuitive,
 but here mathematical logic can help to see the consistency, and the
 quasi necessity with formal version of comp).


 This point rather depends on what Platonia contains. If it contains
 all sets of cardinality 2^{\aleph_0}, then the inside view of the
 deployment will be contained in it.

 I am not sure. In my opinion, to have a platonia capable of describing
 the first person views emerging from the UD entire work, even the
 whole of Cantor Paradise will be too little. Even big cardinals (far
 bigger than 2^(aleph_0)) will be like too constrained shoes. Actually
 I believe that the first person views raised through the deployment
 just escape the whole of human conceivable mathematics. It is big. But
 it is also structured. It could even be structured as a person. I
 don't know.




 I do understand that your concept of Platonia (Arithmetic Realism I
 believe you call it) is a Kronecker-like God made the integers, all
 the rest was made by man, and so what you say would be true of that.


 Yes, the 3-Platonia can be very little, once we assume comp. But the
 first view inside could be so big that eventually all notions of
 1-Platonia will happen to be inconsistent. It is for sure unnameable
 (in the best case). I discussed this a long time ago with George
 Levy: the first person plenitude is big, very big, incredibly big.
 Nothing can express or give an idea of that bigness.

 At some point I will explain that the divine intellect of a lobian
 machine as simple as Peano-Arithmetic is really far bigger than the
 God of Peano-Arithmetic. I know it is bizarre (and a bit too
 technical for being addressed right now I guess).

 Have a good day,

 Bruno


 http://iridia.ulb.ac.be/~marchal/




 





Re: MGA 3

2008-12-04 Thread Bruno Marchal

Hi Jason,

On 03 Dec 2008, at 17:20, Jason Resch wrote:

 On Wed, Dec 3, 2008 at 9:53 AM, Bruno Marchal [EMAIL PROTECTED] 
 wrote:

 and that by virtue of this imposed order, defines relations between 
 particles.  Computation depends on relations, be it electrons in 
 silicon, Chinese with radios or a system of beer cans and ping-pong 
 balls;


 Here you are talking about instantiations of computations relatively 
 to our most probable computations, which have a physical look. But 
 strictly speaking computations are only relation between numbers.


 Bruno,

 Thanks for your reply, I am curious what exactly you mean by the most 
 probable computations going through our state if these computations 
 cannot be part of a larger (shared universe) computation.  


Hmmm... It means you still have a little problem with step seven. I
wish we shared a computable environment, but we cannot decide this at
will.  I agree we have empirical evidence that there is such a
(partially) computable environment, and I am willing to say I trust
nature for this. Yet, the fact is that to predict my next first person
experience I have to take into account ALL computations which exist in
the arithmetical platonia or in the universal dovetailing.




 Where does the data provided to the senses come from if not from a 
 computation which also includes that of the environment as well?  

You don't know that. The data and their statistics come from all
computational histories going through my state. The game is to take
the comp hyp completely seriously, and if it contradicts facts, we
will abandon it. But that day has not yet come. Until then we have to
derive the partial computability of our observable environment from a
statistic on all computations made by the UD.


 Also, why does the computation have to be between numbers specifically,

They don't. Sometimes I use the combinators. They have to be finite 
objects, and this comes from the *digital* aspect of the comp. hyp.



 could a program in the deployment that calculates the evolution of a 
 universe

This is something you have to define. If you do it, I bet you will
find a program equivalent to a universal dovetailer, a bit like
Everett's universal quantum wave.



 perform the necessary computations to generate an observer?  

Sure. The problem is that there will be an infinity of programs
generating the same observer, in the same state, and the observer
cannot know in which computations it belongs, ever. Measurement
particularizes, but never gets singular.



 If they can, then it stands other mathematical objects besides pure 
 turing machines and besides the UD could implement computations 
 capable of generating observers.

Not really. Those objects are internal constructions made by programs
relative to their most probable history.



  I noticed in a previous post of yours you mentioned 'Kleene 
 predicates' as a way of deriving computations from true statements, do 
 you know of any good sources where I could learn more about Kleene 
 predicates?

A very good introduction is the book by N.J. Cutland. See the
reference in my thesis. There are other books; I will think to make a
list with some comments. Actually I really love Kleene's original
Introduction to Metamathematics, but the notations used are a bit
old-fashioned.
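(A note for readers chasing this reference: the Kleene predicate in
question is presumably Kleene's T predicate. In the standard notation,
as found in Cutland's book, the normal form theorem reads

    \varphi_e(x) \simeq U(\mu y \, T(e, x, y))

where T(e, x, y) holds exactly when y encodes a halting computation of
program e on input x, U is a primitive recursive function extracting
the output from that code, and \mu denotes unbounded search. Since T
is primitive recursive, a true arithmetical statement of the form
\exists y \, T(e, x, y) pins down a computation, which is the sense in
which computations can be derived from true statements.)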

Hope I am not too short. I am a bit busy today,

Best,

Bruno

http://iridia.ulb.ac.be/~marchal/





Re: MGA 3

2008-12-04 Thread Bruno Marchal

Brent,

I try to single out where you depart from the comp hyp, to focus on the 
essential. I could add comments later on other paragraphs of your 
posts.

On 03 Dec 2008, at 19:22, Brent Meeker wrote:

 But there is causality.  The sequence of events in the movie are 
 directly caused
 by the projector, but they have a causal linkage back to Alice and the 
 part of
 her environment that is captured in the movie.  I see no principled 
 reason to
 consider only the immediate cause and not refer back further in the 
 chain of
 causation.

If this were true, I don't see why I could say yes to a doctor for an
artificial brain. I would have to take account of the traceability of
all parts of the artificial brain. You have a problem with the qua
computatio part of the MEC+MAT hypotheses, I think.
This is coherent with the fact that you still have some reservations
about step six, if I remember well. There will be opportunities to
come back to this.

I have to go now.

Bruno

PS Abram. I think I will have to meditate a bit longer on your 
(difficult) post. You may have a point (hopefully only pedagogical :)


http://iridia.ulb.ac.be/~marchal/





Re: MGA 3

2008-12-04 Thread Abram Demski

 PS Abram. I think I will have to meditate a bit longer on your
 (difficult) post. You may have a point (hopefully only pedagogical :)

A little bit more commentary may be in order then... I think my point
may be halfway between pedagogical and serious...

What I am saying is that people will come to the argument with some
vague idea of which computations (or which physical entities) they
pick out as conscious. They will compare this to the various
hypotheses that come along during the argument-- MAT, MEC, MAT + MEC,
Lucky Alice is conscious, Lucky Alice is not conscious, et
cetera... These notions are necessarily 3rd-person in nature. It seems
like there is a problem there. Your argument is designed to talk about
1st-person phenomena.

If a 1st-person-perspective is a sort of structure (computational
and/or physical), what type of structure is it? If we define it in
terms of behavior only, then a recording is fine. If we define it in
terms of inner workings, then a recording is probably not fine, but we
introduce magical dependence on things that shouldn't matter to
us... ie, we should not care if we are interacting with a perfectly
orchestrated recording, so long as to us the result is the same.

It seems like this is independent of the differences between
pure-comp / comp+mat.

--Abram




Re: MGA 3

2008-12-04 Thread Jason Resch
On Thu, Dec 4, 2008 at 5:19 AM, Bruno Marchal [EMAIL PROTECTED] wrote:



 Hmmm... It means you have still a little problem with step seven. I
 wish we share a computable environment, but we cannot decide this at
 will.  I agree we have empirical evidence that here is such (partially)
 computable environment, and I am willing to say I trust nature for
 this. Yet, the fact is that to predict my next first person experience
 I have to take into account ALL computations which exist in the
 arithmetical platonia or in the universal dovetailing.


Bruno, I am with you that none of us can decide which of the infinite number
of histories contain/compute us; when I talk about a universe I refer to
just a single such history.  Perhaps you use history to refer only to the
computational history that implements the observer's mind, whereas I use it
to mean an object which computes the mind of one or more observers in a
consistent and fully definable way.

What I am not clear on with regards to your position is whether or not you
believe most observers (if we could locate them in platonia from a 3rd
person view) exist in environments larger than their brains, and likely
containing numerous other observers or if you believe the mind is the only
thing reified by computation and it is meaningless to discuss the
environments they perceive because they don't exist.

The way I see it, using the example of this physical universe only, it is
far more probable for a mind to come about from the self-ordering properties
of a universe such as this than for there to be a computation where the mind
is an initial condition.  The program that implements the physics of this
universe is likely to be far smaller than the program that implements our
minds, or so my intuition leads me to believe.


   I noticed in a previous post of yours you mentioned 'Kleene
  predicates' as a way of deriving computations from true statements, do
  you know of any good sources where I could learn more about Kleene
  predicates?

 A very good introduction is the book by N.J. Cutland. See the reference
 in my thesis. There are other books. I will think to make a list with
 some comments. Actually I really love Kleene's original Introduction
 to Metamathematics, but the notations used  are a bit old fashioned.


Thanks Bruno, I will look into those.

Jason
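For readers new to the idea mentioned above: Kleene's T predicate T(e, x, s) holds when s codes a halting computation of program e on input x; it is decidable (checking a finite trace is mechanical), and with a result-extraction function U every partial computable function can be written f(x) = U at the least s with T(e, x, s). A minimal sketch, using a toy register machine of my own invention as a stand-in (the instruction set, and treating a step budget as the "computation code" s, are simplifying assumptions, not Kleene's actual arithmetization):

```python
# Toy register machine: programs are lists of instructions over registers.
# Instructions: ("inc", r), ("dec", r), ("jz", r, target), ("halt",)

def run_for(prog, x, steps):
    """Run prog with register 0 = x for at most `steps` steps.
    Returns (halted, output)."""
    regs = {0: x}
    pc = 0
    for _ in range(steps):
        op = prog[pc]
        if op[0] == "halt":
            return True, regs.get(0, 0)
        if op[0] == "inc":
            regs[op[1]] = regs.get(op[1], 0) + 1
            pc += 1
        elif op[0] == "dec":
            regs[op[1]] = max(0, regs.get(op[1], 0) - 1)
            pc += 1
        elif op[0] == "jz":
            pc = op[2] if regs.get(op[1], 0) == 0 else pc + 1
    return False, None

def T(e, x, s):
    """Toy Kleene predicate: program e halts on x within s steps (decidable)."""
    return run_for(e, x, s)[0]

def U(e, x, s):
    """Toy result-extraction function: the output of the halting run."""
    return run_for(e, x, s)[1]

def evaluate(e, x, bound=10_000):
    """f(x) = U at the least s with T(e, x, s): mu-minimization over traces."""
    for s in range(bound):
        if T(e, x, s):
            return U(e, x, s)
    return None  # no halting trace found within the bound

# Example program: add 2 to the input, then halt.
add_two = [("inc", 0), ("inc", 0), ("halt",)]
print(evaluate(add_two, 5))  # -> 7
```

The design point is that T only ever inspects a finite object, so it is decidable even though halting in general is not; all the undecidability is pushed into the unbounded search for s.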




Re: MGA 3

2008-12-04 Thread Russell Standish

On Wed, Dec 03, 2008 at 04:53:11PM +0100, Bruno Marchal wrote:
 
 I really don't know. I expect that the mathematical structure, as seen  
 from inside, is so big that Platonia cannot have it neither as element  
 nor as subpart. (Ah, well, I am aware that this is counter-intuitive,  
 but here mathematical logic can help to see the consistency, and the  
 quasi necessity with formal version of comp).
 

This point rather depends on what Platonia contains. If it contains
all sets of cardinality 2^{\aleph_0}, then the inside view of the
deployment will be contained in it.

I do understand that your concept of Platonia (Arithmetic Realism, I
believe you call it) is Kronecker-like ("God made the integers, all
the rest was made by man"), and so what you say would be true of that.

Cheers






Re: MGA 3

2008-12-03 Thread Bruno Marchal

Hi Abram,

On 02 Dec 2008, at 20:33, Abram Demski wrote:


 Bruno,

 I am a bit confused. To me, you said

 Or, you are weakening the physical supervenience
 thesis by appeal to a notion of causality which seems to me a bit
 magical, and contrary to the local functionalism of the
 computationalist.

 This seems to say that the version of MAT that MGA is targeted at does
 not include causal requirements.


MAT is the usual idea that there is a physical world described through  
physical laws. Those capture physical causality, generally under the  
form of differential equations. If there were no causality in physics,  
the very notion of physical supervenience would not make sense, nor
would MEC+MAT, at the start. Sorry if I have been unclear, but I was
criticizing only the *magical* causality which is necessary for
holding both the physical supervenience thesis and the mechanist
hypothesis, like the attribution of prescience to the neurons (in MGA 1),
or the attribution of a computational role to inert material.





 To Günther, you said:

 Do you have different definition for MAT? Do you require causal
 dynamics
 for MAT?


 MAT is very general, but indeed it requires the minimum amount of
 causality so that we can implement a computation in the physical
 world, if not I don't see how we could talk on physical  
 supervenience.

 Does the MAT you are talking about include causal requirements or not?


Of course.




 About your other questions--

 OK, so now you have to disagree with MGA 1. No problem. But would you
 still say yes to the mechanist doctor?  I don't see how, because  
 now
 you appeal to something rather magic like influence in real time of
 inactive material.

 So long as that inert material preserves the correct
 counterfactuals, everything is fine. The only reason things seem
 strange with olympized Alice is because *normally* we do not know in
 advance which path cause and effect will take for something as
 intricate as a conscious entity. The air bags in a car are inert in
 the same way-- many cars never get in a crash, so the air bags remain
 unused. But since we don't know that ahead of time, we want the air
 bags. Similarly, when talking to the mechanist doctor, I will not be
 convinced that a recording will suffice...


Me too. But that remark is outside the context of the argument. If I
want an artificial brain (MEC) I expect it to handle the
counterfactuals, because indeed we don't know things in advance. But
in the context of the proof we were in a situation where we did know
the things in advance. Suppose that my doctor discovers in my brain
some hardware built for managing my behavior only in front of
dinosaurs, like an old unused subroutine that is only a relic of the past;
then it seems to me that the doctor, in the spirit of mechanist
functionalism, can decide to drop those subroutines and build me
a cheaper artificial brain. And that is all we need for the argument
to go through. Consciousness relies on the computations which
always keep the right counterfactuals, and never on their relative
implementations, which will only change their relative measures.
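The counterfactual point can be made concrete. A hedged sketch (my example, not from the thread), taking a single XOR gate as the computation: a genuine computation is defined on every input, while a recording only stores the inputs that actually occurred in one history:

```python
def xor_circuit(a, b):
    """A genuine computation: defined on every input pair."""
    return (a or b) and not (a and b)

# A "movie" of one run: only the inputs that actually occurred are stored.
recording = {(0, 1): 1, (1, 1): 0}

def replay(a, b):
    """Lookup into the recording; undefined off the recorded history."""
    return recording[(a, b)]

# On the recorded history the two are indistinguishable...
assert all(replay(a, b) == xor_circuit(a, b) for (a, b) in recording)

# ...but the recording has no answer for a counterfactual input.
try:
    replay(1, 0)
except KeyError:
    print("recording breaks on counterfactual input (1, 0)")
```

So long as only the recorded history is fed in, no external test distinguishes `replay` from `xor_circuit`; the difference lives entirely in the counterfactuals.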





 The real question I have to ask to you, Günther and others is this
 one:  does your new supervenience thesis forced the UD to be
 physically executed in a real universe to get the UDA conclusion?

 Yes.

Then it seems to me you are relying on some magical causality attached
to a magical notion of matter. I don't understand how you can still
say yes to a doctor with such a notion of mechanism. See above. I
would not even trust a Darwinian brain anymore.




 Does MGA, even just as a refutation of naïve mat eliminate the use
 of the concrete UD in UDA?

 No.

 (By the way, I have read UDA now, but have refrained from posting a
 commentary since there has been a great deal of discussion about it on
 this list and I could just be repeating the comments of others...)


Then you can read the answers I have given to the others. It seems to
me UDA(1..7) no longer poses any problem, except for those who have
decided not to understand, or who believe religiously in matter and
comp. In a public forum you always end up discussing with those
who like splitting hairs.




 Also: Günther mentioned SMAT, which actually sounds like the CMAT
 I proposed... so I'll refer to it as SMAT from now on.


I am sorry if I have been unclear, but MAT is taken in a very large  
sense. MAT is the belief in a physical universe obeying physical laws,  
be it quantum, classical, or whatever. Actually, for a  
computationalist (especially after UDA+MGA), MAT seems to be just a  
way to single out some special computations above the others.

Kim Jones has convinced me to explain UDA, and the general idea, one
more time. It could be an opportunity to let us know your commentaries. To
be sure, some mathematicians get the point more easily when I
introduce the arithmetical translation of the UDA. You can study it in

Re: MGA 3

2008-12-03 Thread Bruno Marchal

On 02 Dec 2008, at 22:24, Brent Meeker wrote:



 
 Alice's brain and body are just local stable artifacts belonging to
 our (most probable) computational history, and making possible for
 Alice consciousness to differentiate through interactions with us,
 relatively to us.

 Bruno

 OK, that clarifies things and it corresponds with my intuition that
 consciousness is relative to an environment.  I can't seem to answer  
 the
 question is MG-Alice conscious yes or no, but I can say she is  
 conscious
 within the movie environment, but not within our environment.  This  
 is similar
 to Stathis asking about consciousness within a rock.  We could say  
 the thermal
 motions of atoms within the rock may compute consciousness, but it  
 is a
 consciousness within the rock environment, not in ours.

Your consciousness is related to all computations going through your
(current) brain states. I have not found any reason to think that a
rock implements some consciousness, but if this is the case you have to
take it into account for the general measure, given that in this case
the UD will generate the rock computations too.
Now, I don't think there is any consciousness in the movie, even if it is
generated in the UD. There is just no computation or relevant
physical causality linkable to a computation in a movie.
So consciousness can never be ascribed to anything physical, and thus
we have reduced the mind-body problem to the comp body problem: how
does the appearance of matter emerge from the (immaterial) execution
of the platonic deployment?

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-12-03 Thread Bruno Marchal

On 03 Dec 2008, at 05:58, Jason Resch wrote:



 On Sun, Nov 30, 2008 at 11:33 AM, Bruno Marchal [EMAIL PROTECTED]  
 wrote:


 All this is a bit complex because we have to take well into account
 the distinction between

 A computation in the real world,
 A description of a computation in the real world,

 And then most importantly:

 A computation in Platonia
 A description of a computation in Platonia.

 I argue that consciousness supervenes on computation in Platonia. Even
 in Platonia consciousness does not supervene on description of the
 computation, even if those description are 100% precise and correct

 Bruno, this is interesting and I have had similar thoughts of late  
 regarding along this vein.  The trouble is, I don't see how the  
 real world can be differentiated from Platonia.


It is hard to answer this. I think that after UDA+MGA the
real (physical) world is the sum over all computations going through
my state, or our sharable comp state. With comp, Platonia, 3-Platonia
(to be sure), can be represented by a tiny part of arithmetic, or just
by the deployment of the UD. The physical world will be an inside
construction made by inside machines/numbers. It will appear that such
an inside view is much bigger than 3-Platonia. As in the Skolem
paradox, Platonia can be rather small from outside, and *very* big
from inside.




 Just as the UD contains instances of itself, and hence computations  
 within computations,


I guess you mean the deployment. The UD is the finite program which
generates the deployment. At some point we will have to be careful
not to identify those two things. But it is OK. And indeed, the
deployment contains an infinity of deployments which themselves contain
an infinity of deployments ...
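The UD/deployment distinction can be sketched in a few lines. In this toy model (my construction; Python generators stand in for an effective enumeration of all machines, which is an assumption of the sketch), the dovetailer itself is just the short loop, while the deployment is the unending trace the loop produces:

```python
def counter(base):
    """Toy 'machine' number `base`: yields base, 2*base, 3*base, ..."""
    n = base
    while True:
        yield n
        n += base

def dovetail(machines, rounds):
    """Fair interleaving: in round k, start machine k (if any) and run
    every started machine one more step. Returns a finite prefix of the
    deployment as (machine index, value) pairs."""
    trace = []
    started = []
    for k in range(1, rounds + 1):
        if k <= len(machines):
            started.append((k, machines[k - 1]))
        for idx, m in started:
            trace.append((idx, next(m)))
    return trace

prefix = dovetail([counter(1), counter(2), counter(3)], rounds=3)
print(prefix)  # -> [(1, 1), (1, 2), (2, 2), (1, 3), (2, 4), (3, 3)]
```

The interleaving is what lets a single finite program make progress on every machine, including non-halting ones, without ever getting stuck; the infinite trace, not the loop, is where every computation eventually appears.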



 can't mathematical objects contain mathematical objects?


Some can. Examples: well, the deployment :)
But also many fractals, universal or not, etc. Actually they are more
included in themselves than elements of themselves. But this is a bit
technical mathematics.



 If so then aren't our actions in this universe just as mathematcally  
 or computationally fundamental as any other instantiation in platonia?


Not any more after UDA+MGA, or UDA(1...8). Our consciousness is attached
to all its (relative) instantiations in Platonia. If you make the usual
static picture of the deployment, a big 3-dimensional or 2-dimensional
cone, each of our states appears infinitely often, in a quasi-dense way,
on its border. The notion of this universe does not even make
clear sense; we can talk only about our most probable histories. And,
by doing a measurement, we never select one history among an infinity:
we always select an infinity of histories among an infinity of
histories.




 Platonia might be highly interconnected even fractal and so  
 performing a computation in this universe in a sense hasn't created  
 anything new, but created a link to other identical things which  
 have always been there, and in the timelessness of platonia one  
 can't say which came before, or which is the original or most real.


Yes. Moreover, we are never singular. I think this is the startling  
part which is nevertheless confirmed by QM (Everett).




 After wrestling with block time, the MGA, and computationalism I'm  
 starting to wonder how computations are implemented in a 4  
 dimensional and static mathematical object.


Why do you want to do that? We have to do the contrary: extract the
physics, and the mathematics of physics, from the much simpler (yet
non-trivial) notions of computation and of computation as seen from inside. To be
sure, it is not even obvious that a notion of block-physical-universe  
will remain possible (I have no idea on this). Our sharable dreams  
glue well locally, but it is an open question to know if the gluing  
can be made global and define an objective general physical reality.


 The best I can come up with is that the mathematical structure is  
 defined by some equation or equations,


I really don't know. I expect that the mathematical structure, as seen
from inside, is so big that Platonia can have it neither as an element
nor as a subpart. (Ah, well, I am aware that this is counter-intuitive,
but here mathematical logic can help to see the consistency, and the
quasi-necessity, with a formal version of comp).



 and that by virtue of this imposed order, defines relations between  
 particles.  Computation depends on relations, be it electrons in  
 silicon, Chinese with radios or a system of beer cans and ping-pong  
 balls;


Here you are talking about instantiations of computations relative
to our most probable computations, which have a physical look. But
strictly speaking computations are only relations between numbers.




 from the outside there is little or no indication what is going on  
 is forming consciousness, it is only relative from the inside, and  
 since these relations carry state and information across one of the  
 4 dimensions 

Re: MGA 3

2008-12-03 Thread Jason Resch
On Wed, Dec 3, 2008 at 9:53 AM, Bruno Marchal [EMAIL PROTECTED] wrote:


 and that by virtue of this imposed order, defines relations between
 particles.  Computation depends on relations, be it electrons in silicon,
 Chinese with radios or a system of beer cans and ping-pong balls;



 Here you are talking about instantiations of computations relatively to our
 most probable computations, which have a physical look. But strictly
 speaking computations are only relation between numbers.


Bruno,

Thanks for your reply, I am curious what exactly you mean by the most
probable computations going through our state if these computations cannot
be part of a larger (shared universe) computation.  Where does the data
provided to the senses come from if not from a computation which also
includes that of the environment as well?  Also, why does the computation
have to be between numbers specifically, could a program in the deployment
that calculates the evolution of a universe perform the necessary
computations to generate an observer?  If it can, then it stands to reason
that other mathematical objects besides pure Turing machines and the UD
could implement computations capable of generating observers.  I noticed in a
previous post of yours you mentioned 'Kleene predicates' as a way of
deriving computations from true statements, do you know of any good sources
where I could learn more about Kleene predicates?

Thanks,

Jason




Re: MGA 3

2008-12-03 Thread Brent Meeker

Bruno Marchal wrote:
 Hi Abram,
 
 On 02 Dec 2008, at 20:33, Abram Demski wrote:
 
 Bruno,

 I am a bit confused. To me, you said

 Or, you are weakening the physical supervenience
 thesis by appeal to a notion of causality which seems to me a bit
 magical, and contrary to the local functionalism of the
 computationalist.
 This seems to say that the version of MAT that MGA is targeted at does
 not include causal requirements.
 
 
 MAT is the usual idea that there is a physical world described through  
 physical laws. Those capture physical causality, generally under the  
 form of differential equations. If there were no causality in physics,  
 the very notion of physical supervenience would not make sense. Nor MEC 
 +MAT, at the start. Sorry if I have been unclear, but I was  
 criticizing only the *magical* causality which is necessary for  
 holding both the physical supervenience thesis and the mechanist  
 hypothesis, like attribution of prescience to the neurons (in MGA 1),  
 or attributing a computational role in inert Material.

This seems to assume there is causality apart from physical causality, but
there is no causality in logic or mathematics (except in a metaphorical, I might
say magical, sense).  So I don't see that Günther is relying on anything magical.

Brent




Re: MGA 3

2008-12-03 Thread Brent Meeker

Bruno Marchal wrote:
 
 On 02 Dec 2008, at 22:24, Brent Meeker wrote:
 
 

 
 Alice's brain and body are just local stable artifacts belonging to  
 our (most probable) computational history, and making possible for  
 Alice consciousness to differentiate through interactions with us,  
 relatively to us.

 Bruno

 OK, that clarifies things and it corresponds with my intuition that
 consciousness is relative to an environment.  I can't seem to answer the
 question is MG-Alice conscious yes or no, but I can say she is 
 conscious
 within the movie environment, but not within our environment.  This is 
 similar
 to Stathis asking about consciousness within a rock.  We could say the 
 thermal
 motions of atoms within the rock may compute consciousness, but it is a
 consciousness within the rock environment, not in ours.
 
 Your consciousness is related to all computations going through your 
 (current) brain states. I have not find any reason to think that a rock 
 implement some consciousness, but if this is the case you have to take 
 it into account for the general measure, given that in this case the UD 
 will generate the rock computations too.
 Now, I don't think there is any consciousness in the movie, even it is 
 generated in the UD. There is just no computation or relevant physical 
 causality linkable to a computation in a movie.

But there is causality.  The sequence of events in the movie is directly caused
by the projector, but it has a causal linkage back to Alice and the part of
her environment that is captured in the movie.  I see no principled reason to
consider only the immediate cause and not refer back further in the chain of
causation.


 So consciousness can never be ascribed to anything physical, 

Doesn't your argument imply the opposite?  Consciousness can only be ascribed
to physical things because consciousness is computation, and computation
requires causal links, and causality is a physical relation.

Brent

and thus we 
 have reduce the mind body problem into the comp body problem; how does 
 the appearance of matter emerge from the (immaterial) execution of the 
 platonic deployment.
 
 Bruno
 
 http://iridia.ulb.ac.be/~marchal/
 
 
 
 
  





Re: MGA 3

2008-12-02 Thread Bruno Marchal


On 02 Dec 2008, at 01:05, Abram Demski wrote:


 Bruno,

 It sounds like what you are saying in this reply is that my version of
 COMP+MAT is consistent, but counter to your intuition (because you
 cannot see how consciousness could be attached to physical stuff).

I have no problem a priori in attaching consciousness to physical
stuff. I do have a problem when MEC + MAT forces me to attach
consciousness to an empty machine (with no physical activity) together
with inert material.




 If
 this is the case, then it sounds like MGA only works for specific
 versions of MAT-- say, versions of MAT that claim consciousness hinges
 only on the matter, not on the causal relationships.

On the contrary. I want consciousness related to the causal
relationships. But with MEC the causal relationships are in the
computations. The thought experiment shows that the physical
implementation plays the role of making them able to manifest
relative to us, but is not responsible for their existence.


 In other words,
 what Günther called NMAT. So you need a different argument against--
 let's call it CMAT, for causal MAT. The olympization argument only
 works if COMP+CMAT can be shown to imply the removability of inert
 matter... which I don't think it can, because that inert matter here
 has a causal role to play in the counterfactuals, and is therefore
 essential to the physical computation.

OK, so now you have to disagree with MGA 1. No problem. But would you
still say yes to the mechanist doctor?  I don't see how, because now
you appeal to something rather magical, like the real-time influence of
inactive material. Or, you are weakening the physical supervenience
thesis by appeal to a notion of causality which seems to me a bit
magical, and contrary to the local functionalism of the
computationalist.

The real question I have to ask you, Günther, and others is this
one:  does your new supervenience thesis force the UD to be
physically executed in a real universe to get the UDA conclusion?
Does MGA, even just as a refutation of naïve MAT, eliminate the use
of the concrete UD in UDA?

It is true that by weakening MEC or MAT, the reasoning doesn't go
through, but it seems to me the conclusion holds for any primitive
stuff view of MAT, or of matter activity to which we could attach
consciousness through causal links. Once you begin to define matter
through causal links, while keeping comp and linking the
experience to those causal relations, perhaps made at another time on
another occasion, you are not a long way from the comp supervenience.
But if you don't see this, I guess the conversation will continue.

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-12-02 Thread Bruno Marchal


On 02 Dec 2008, at 03:33, Brent Meeker wrote:


 Bruno Marchal wrote:

 On 01 Dec 2008, at 03:25, Russell Standish wrote:

 On Sun, Nov 30, 2008 at 07:10:43PM +0100, Bruno Marchal wrote:
 I am speaking as someone unconvinced that MGA2 implies an
 absurdity. MGA2 implies that the consciousness is supervening on  
 the
 stationary film.

 ?  I could agree, but is this not absurd enough, given MEC and the
 definition of the physical supervenience thesis;
 It is, prima facie, no more absurd than consciousness supervening  
 on a
 block universe.

 A block universe is nondynamic by definition. But looked at  
 another
 way, (ie from the inside) it is dynamic. It neatly illustrates why
 consciousness can supervene on a stationary film (because it is
 stationary when viewed from the inside).
 OK, but then you clearly change the physical supervenience thesis.

 How so? The stationary film is a physical object, I would have
 thought.


 I don't understand this. The physical supervenience thesis associate
 consciousness AT (x,t) to a computational state AT (x,t).

 Stated this way seems to assume that the causal relations between  
 the states are
 irrelevant, only the states matter.


Ah, please, add the delta again (see my previous post). I did write
(dx,dt), but Anna thought it was infinitesimal. It could be fuzzy
deltas or whatever you want. Unless you attach your consciousness,
from here and now, to the whole block multiverse, the reasoning will
go through, assuming of course that the part of the multiverse on
which you attach your mind is Turing emulable (MEC).





 The idea is
 that consciousness can be created in real time by the physical
 running of a computation (viewed or not in a block universe).

 Well we're pretty sure that brains do this.

Well, my point is that to believe this, you have to abandon the MEC
hypothesis, perhaps in a manner like Searle or Penrose. Consciousness
would be the product of some non-Turing-emulable chemical reactions.
But if everything in the brain (or the generalized brain) is Turing
emulable, then the reasoning (UDA+MGA) is supposed to explain why
consciousness (an immaterial thing) is related only to the computation
made by the brain, and not to the brain itself nor to its physical
activity during the physical implementation. Your locally physical
brain just raises the probability that your consciousness
remains entangled with mine (and others').





 With the stationary film, this does not make sense. Alice experience
 of a dream is finite and short, the film lasts as long as you want. I
 think I see what you are doing: you take the stationary film as an
 incarnation of a computation in Platonia. In that sense you can
 associate the platonic experience of Alice to it, but this is a
 different physical supervenience thesis. And I argue that even this
 cannot work, because the movie does not capture a computation.

 I was thinking along the same lines.  But then the question is what  
 does capture
 a computation.  Where in the thought experiments, starting with  
 natural Alice
 and ending with a pictures of Alice's brain states, did we lose  
 computation?  Is
 it important that the sequence be time rather than space or some  
 other order?
 Is it the loss of causal relations or counterfactuality?


We lose a computation relative to us when the computation is not
executed by a stable (relative to us) universal machine nearby, be
it a cell, a brain, or a natural or artificial universal computer.

In the case of the movie, it is not so bad. Consciousness does not
supervene on the movie or its projection, but the movie can be used as
a backup of Alice's state. We can re-project a frame of that movie
on a functionally well-working Boolean optical graph, and Alice will
be back ... with us.
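A hedged analogy for the movie-as-backup point (my illustration, not Bruno's formalism): a stored frame computes nothing by itself, but loading it into a working machine lets the computation resume from that state:

```python
def step(state):
    """One step of a toy dynamics on a 3-bit state (arbitrary Boolean rule)."""
    a, b, c = state
    return (b, c, a ^ b)

# Run the "live" machine and film it: store every frame.
state = (1, 0, 0)
film = [state]
for _ in range(5):
    state = step(state)
    film.append(state)

# The film alone is just a list of frames; it computes nothing.
# But loading a frame into a working machine resumes the computation:
resumed = step(film[2])
assert resumed == film[3]  # the restored machine rejoins the history
print("resumed from frame 2:", resumed)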

Of course the computations themselves, and their many possible  
differentiations, are already in Platonia (= in the solution of the  
universal Diophantine equation, in the processing of the UD, or  
perhaps in the Mandelbrot set).

Alice's brain and body are just local stable artifacts belonging to  
our (most probable) computational history, and making possible for  
Alice consciousness to differentiate through interactions with us,  
relatively to us.

Bruno


http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-12-02 Thread Bruno Marchal

Hi Günther,


On 01 Dec 2008, at 22:53, Günther Greindl wrote:


 Hi Bruno,

 but no! Then we wouldn't have a substrate anymore.
 Oh! That is not true! We still have the projector and the film. We
 can
 project the movie in the air or directly in your eyes.

 Ok I see now where our intuitions differ (always the problem with
 thought experiment) - but maybe we can clear this up and see where it
 leads...


OK.





 it is really something people have to meditate on. I could have concluded
 in the absurdity of MAT (with MEC) at MGA 2. It is hard for me to  
 take
 people seriously when they argue that the consciousness of Alice
 supervenes on a movie of its brain activity. There is no causality,
 nor computations, during the *projection* of the movie.

 If that is how you see MAT (you require causality) - then I would also
 agree - MGA 2 shows absurdity.

Well, I require at least a minimum of physical causality to implement
physically the computational causality (which incarnates platonic
relations existing among the numbers).

MAT presupposes something primitively material and causal, of course.
Remember that I am using materialism and physicalism (and naturalism)
as synonymous, because the argument is very general. The (naïve) idea
is that the brain *does* compute something when you dream, for
example, and that it is the physical causality which is responsible
for the implementation of the computation.



 Alice's
 experience is related to ALL computations going through those states,
 not to descriptions of those states which can be made and collected
 in other histories. Locally it makes sense to ascribe *that*
 consciousness when you have the means to interpret (through some
 universal machine) her computational states.

 That is already part of your theory (UDA and all) (as I understand  
 it),
 but not included already in COMP or in MAT.


Not at all. This could be confusing for those who don't know UDA. Once
MEC+MAT is shown to be incompatible, we then choose MEC and thus
abandon MAT.
(Why? Just so as not to go out of the range of my working hypothesis, OK.)

With MEC, there is no longer a physical supervenience thesis of a kind
compatible with MEC. But we keep MEC, so we have to continue to
relate consciousness to the computation, right? We no longer have
a notion of physical computation, so we attach consciousness to the
computation itself, LIKE it has already been done in the UDA, except
that we no longer need to run the UD.





 [Consciousness of (x,t)] is never [physical states] at (x,t)

 For me, the above expresses the essence of (naive) MAT - let's call  
 it
 NMAT.

 So, clearly:
 NMAT: [Consciousness of (x,t)] supervenes on [physical states] at  
 (x,t)

 And on physical states only! Not on the causal relations of these  
 states
 (block universe view).


You are perhaps taking me too literally here. It is just
difficult, lengthy and confusing to make a precise definition of the
physical supervenience which would work for the different views of the
universe.
The physical supervenience thesis just says that 1) there is a
physical universe, 2) it can compute, and 3) consciousness requires some
special local computations made *in* that universe.





 Your argument goes like this:
 it is:
 [Consciousness of (x,t)] is always all computational states (in the  
 UD
 °) corresponding to that experience. (It is an indexical view of
 reality).

 And I share it IF we can show that MAT+MEC is inconsistent. But I am  
 not
 convinced yet.

 For me, the essence of MEC (COMP) is this:

 COMP: there is a level at which a person can be substituted at a  
 digital
 level (we don't have to go down to infinity), and where this digital
 description is enough to reconstitute this person elsewhere and
 elsewhen,
 independent of substrate.


 NMAT additionally requires that the substrate for COMP be some
 mysterious substance, and not only a platonic relation.


Not so mysterious. It just seems to require some particular
computations. The physical ones. People are used to thinking about it in
terms of waves or particles, or fields, or geometrical dynamical objects.
They believe those are particulars (which become mysterious only with
comp, but a priori with Mat they are rather natural).





 My intuition tells me this can't be - we have to drop either MEC or  
 NMAT.

 But MGA 3, when dropping the boolean gates, violates NMAT, because:
 NMAT: [Consciousness of (x,t)] supervenes on [physical states] at  
 (x,t)

 And the physical states relevant were the _states of the boolean
 graph_
 (the movie projector was just the lucky cosmic ray).

 Do you have different definition for MAT? Do you require causal  
 dynamics
 for MAT?


MAT is very general, but indeed it requires the minimum amount of
causality so that we can implement a computation in the physical
world; otherwise I don't see how we could talk about physical supervenience.





 The problem with NMAT as I define

Re: MGA 3

2008-12-02 Thread Abram Demski

Bruno,

I am a bit confused. To me, you said

Or, you are weakening the physical supervenience
 thesis by appeal to a notion of causality which seems to me a bit
 magical, and contrary to the local functionalism of the
 computationalist.

This seems to say that the version of MAT that MGA is targeted at does
not include causal requirements.

To Günther, you said:

 Do you have different definition for MAT? Do you require causal
 dynamics
 for MAT?


  MAT is very general, but indeed it requires the minimum amount of
  causality so that we can implement a computation in the physical
  world; otherwise I don't see how we could talk about physical supervenience.

Does the MAT you are talking about include causal requirements or not?

About your other questions--

 OK, so now you have to disagree with MGA 1. No problem. But would you
 still say yes to the mechanist doctor?  I don't see how, because now
 you appeal to something rather magical, like an influence in real time of
 inactive material.

So long as that inert material preserves the correct
counterfactuals, everything is fine. The only reason things seem
strange with olympized Alice is because *normally* we do not know in
advance which path cause and effect will take for something as
intricate as a conscious entity. The air bags in a car are inert in
the same way-- many cars never get in a crash, so the air bags remain
unused. But since we don't know that ahead of time, we want the air
bags. Similarly, when talking to the mechanist doctor, I will not be
convinced that a recording will suffice...
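The airbag point above turns on the difference between a recording and a genuine computation: a recording only stores the outputs of the one run that actually happened, while a computation also fixes what would have happened on other inputs. A toy sketch (my own illustration, not anything from the thread) of that contrast:

```python
# Toy contrast between a computation and a recording of it (illustrative only).
# A "computation" defines an output for EVERY input (counterfactuals included);
# a "recording" only replays the outputs of the one input history it stored.

def gate(a: bool, b: bool) -> bool:
    """A tiny stand-in for physical computation: a NOR gate."""
    return not (a or b)

# Record the gate's behaviour on ONE particular input history.
history = [(False, False), (True, False), (False, True)]
recording = [gate(a, b) for a, b in history]

# Replaying the recording reproduces the original run exactly...
replay = list(recording)
assert replay == [gate(a, b) for a, b in history]

# ...but for a counterfactual input such as (True, True), the gate still
# computes an answer, while the recording simply has nothing stored.
counterfactual = (True, True)
print(gate(*counterfactual))        # the gate answers: False
print(counterfactual in history)    # the recording never covered it: False
```

On this picture, "preserving the correct counterfactuals" means keeping something gate-like around, not merely the replay list.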

 The real question I have to ask to you, Günther and others is this
 one:  does your new supervenience thesis force the UD to be
 physically executed in a real universe to get the UDA conclusion?

Yes.

 Does MGA, even just as a refutation of naïve mat eliminate the use
 of the concrete UD in UDA?

No.

(By the way, I have read UDA now, but have refrained from posting a
commentary since there has been a great deal of discussion about it on
this list and I could just be repeating the comments of others...)

Also: Günther mentioned SMAT, which actually sounds like the CMAT
I proposed... so I'll refer to it as SMAT from now on.

--Abram

On Tue, Dec 2, 2008 at 12:18 PM, Bruno Marchal [EMAIL PROTECTED] wrote:


 On 02 Dec 2008, at 01:05, Abram Demski wrote:


 Bruno,

 It sounds like what you are saying in this reply is that my version of
 COMP+MAT is consistent, but counter to your intuition (because you
 cannot see how consciousness could be attached to physical stuff).

 I have no problem a priori in attaching consciousness to physical
 stuff. I do have a problem when MEC + MAT forces me to attach
 consciousness to an empty machine (with no physical activity) together
 with inert material.




 If
 this is the case, then it sounds like MGA only works for specific
 versions of MAT-- say, versions of MAT that claim consciousness hinges
 only on the matter, not on the causal relationships.

 On the contrary. I want consciousness related to the causal
 relationship. But with MEC the causal relationships are in the
 computations. The thought experiment shows that the physical
 implementation plays the role of making them able to manifest
 relatively to us, but is not responsible for their existence.


 In other words,
 what Günther called NMAT. So you need a different argument against--
 let's call it CMAT, for causal MAT. The olympization argument only
 works if COMP+CMAT can be shown to imply the removability of inert
 matter... which I don't think it can, because that inert matter here
 has a causal role to play in the counterfactuals, and is therefore
 essential to the physical computation.

 OK, so now you have to disagree with MGA 1. No problem. But would you
 still say yes to the mechanist doctor?  I don't see how, because now
 you appeal to something rather magical, like an influence in real time of
 inactive material. Or, you are weakening the physical supervenience
 thesis by appeal to a notion of causality which seems to me a bit
 magical, and contrary to the local functionalism of the
 computationalist.

 The real question I have to ask to you, Günther and others is this
 one:  does your new supervenience thesis force the UD to be
 physically executed in a real universe to get the UDA conclusion?
 Does MGA, even just as a refutation of naïve mat eliminate the use
 of the concrete UD in UDA?

 It is true that by weakening MEC or MAT, the reasoning doesn't go
 through, but it seems to me the conclusion goes with any primitive
 stuff view of MAT, or any matter activity to which we could attach
 consciousness through causal links. Once you begin to define matter
 through causal links, while keeping comp, and linking the
 experience to those causal relations, perhaps made at other times on
 other occasions, you are not far from the comp supervenience.
 But if you don't see this, I guess the conversation will continue.

 Bruno

 http://iridia.ulb.ac.be/~marchal/




 



Re: MGA 3

2008-12-02 Thread Abram Demski

Günther,

Why does MGA 2 show that SMAT + MEC is inconsistent?

The way I see it, SMAT + MEC should say that a recording of Alice does
not count as conscious, because it lacks the proper causal structure
(or equivalently, the proper counterfactual behavior).

--Abram

On Mon, Dec 1, 2008 at 4:53 PM, Günther Greindl
[EMAIL PROTECTED] wrote:

 Hi Bruno,

 but no! Then we wouldn't have a substrate anymore.
 Oh! That is not true! We still have the projector and the film. We can
 project the movie in the air or directly in your eyes.

 Ok I see now where our intuitions differ (always the problem with
 thought experiments) - but maybe we can clear this up and see where it
 leads...

 it is really something people have to meditate on. I could have concluded
 in the absurdity of MAT (with MEC) at MGA 2. It is hard for me to take
 people seriously when they argue that the consciousness of Alice
 supervenes on a movie of her brain activity. There is no causality,
 nor computation, during the *projection* of the movie.

 If that is how you see MAT (you require causality) - then I would also
 agree - MGA 2 shows absurdity.

Alice's
 experience is related to ALL computations going through those states,
 not to descriptions of those states which can be made and collected
 in other histories. Locally it makes sense to ascribe *that*
 consciousness when you have the means to interpret (through some
 universal machine) her computational states.

 That is already part of your theory (UDA and all) (as I understand it),
 but not included already in COMP or in MAT.

 [Consciousness of (x,t)] is never [physical states] at (x,t)

 For me, the above expresses the essence of (naive) MAT - let's call it
 NMAT.

 So, clearly:
 NMAT: [Consciousness of (x,t)] supervenes on [physical states] at (x,t)

 And on physical states only! Not on the causal relations of these states
 (block universe view).

 Your argument goes like this:
 it is:
 [Consciousness of (x,t)] is always all computational states (in the UD
 °) corresponding to that experience. (It is an indexical view of
 reality).

 And I share it IF we can show that MAT+MEC is inconsistent. But I am not
 convinced yet.

 For me, the essence of MEC (COMP) is this:

 COMP: there is a level at which a person can be substituted at a digital
 level (we don't have to go down to infinity), and where this digital
 description is enough to reconstitute this person elsewhere and elsewhen,
 independent of substrate.


 NMAT additionally requires that the substrate for COMP be some
 mysterious substance, and not only a platonic relation.

 My intuition tells me this can't be - we have to drop either MEC or NMAT.

 But MGA 3, when dropping the boolean gates, violates NMAT, because:
 NMAT: [Consciousness of (x,t)] supervenes on [physical states] at (x,t)

 And the physical states relevant were the _states of the boolean graph_
 (the movie projector was just the lucky cosmic ray).

 Do you have different definition for MAT? Do you require causal dynamics
 for MAT?

 The problem with NMAT as I define it raises the issue as in the Putnam
 paper - does every rock implement every finite state-automaton?

 Chalmers makes the move to implementation, so introduces causal dynamics.

 So, sophisticated MAT would probably be:
 SMAT: [Consciousness of (x,t)] supervenes on [physical states] at (x,t)
 over a timespan delta(t) _if_ sufficiently complex causal dynamics are
 at work during this timespan relating the physical states.


 Then I would say: MGA 2 (already) shows that SMAT+MEC are not
 compatible. No need for MGA 3.

 For NMAT+MEC (which is problematic for other reasons) MGA 3 is not
 convincing.

 Would you agree with this?

 Cheers,
 Günther

 





Re: MGA 3

2008-12-02 Thread Brent Meeker

Bruno Marchal wrote:
 
 On 02 Dec 2008, at 03:33, Brent Meeker wrote:
 
 Bruno Marchal wrote:
 On 01 Dec 2008, at 03:25, Russell Standish wrote:

 On Sun, Nov 30, 2008 at 07:10:43PM +0100, Bruno Marchal wrote:
 I am speaking as someone unconvinced that MGA2 implies an
 absurdity. MGA2 implies that the consciousness is supervening on  
 the
 stationary film.
 ?  I could agree, but is this not absurd enough, given MEC and the
 definition of the physical superveneience thesis;
 It is, prima facie, no more absurd than consciousness supervening  
 on a
 block universe.

 A block universe is nondynamic by definition. But looked at  
 another
 way, (ie from the inside) it is dynamic. It neatly illustrates why
 consciousness can supervene on a stationary film (because it is
 stationary when viewed from the inside).
 OK, but then you clearly change the physical supervenience thesis.

 How so? The stationary film is a physical object, I would have
 thought.

 I don't understand this. The physical supervenience thesis associates
 consciousness AT (x,t) to a computational state AT (x,t).
 Stated this way seems to assume that the causal relations between  
 the states are
 irrelevant, only the states matter.
 
 
 Ah, please, add the delta again (see my previous post). I did write
 (dx,dt), but Anna thought it was infinitesimal. It could be fuzzy
 deltas or whatever you want. Unless you attach your consciousness,  
 from here and now,  to the whole block multiverse, the reasoning will  
 go through, assuming of course that the part of the multiverse, on  
 which you attach your mind, is Turing emulable (MEC).
 
 
 

 The idea is
 that consciousness can be created in real time by the physical
 running of a computation (viewed or not in a block universe).
 Well we're pretty sure that brains do this.
 
 Well, my point is that for believing this, you have to abandon the MEC
 hypothesis, perhaps in a manner like Searle or Penrose. Consciousness
 would be the product of some non-Turing-emulable chemical reactions.
 But if everything in the brain (or the generalized brain) is Turing
 emulable, then the reasoning (uda+mga) is supposed to explain why
 consciousness (an immaterial thing) is related only to the computation
 made by the brain, and not to the brain itself nor to its physical
 activity during the physical implementation. Your locally physical
 brain just makes it more probable that your consciousness
 remains entangled with mine (and others).
 
 

 With the stationary film, this does not make sense. Alice's experience
 of a dream is finite and short; the film lasts as long as you want. I
 think I see what you are doing: you take the stationary film as an
 incarnation of a computation in Platonia. In that sense you can
 associate the platonic experience of Alice to it, but this is a
 different physical supervenience thesis. And I argue that even this
 cannot work, because the movie does not capture a computation.
 I was thinking along the same lines.  But then the question is what  
 does capture
 a computation.  Where in the thought experiments, starting with  
 natural Alice
 and ending with a pictures of Alice's brain states, did we lose  
 computation?  Is
 it important that the sequence be time rather than space or some  
 other order?
 Is it the loss of causal relations or counterfactuality?
 
 
 We lose a computation relatively to us when the computation is not
 executed by a stable (relatively to us) universal machine nearby, be  
 it a cell, a brain, a natural or artificial universal computer.
 
 In the case of the movie, it is not so bad. Consciousness does not
 supervene on the movie or its projection, but the movie can be used as  
 a backup of Alice's state. We can re-project a frame, of that movie,  
 on a functionally well working Boolean optical graph, and Alice will  
 be back ... with us.
 
 Of course the computations themselves, and their many possible  
 differentiations, are already in Platonia (= in the solution of the  
 universal Diophantine equation, in the processing of the UD, or  
 perhaps in the Mandelbrot set).
 
 Alice's brain and body are just local stable artifacts belonging to  
 our (most probable) computational history, and making possible for  
 Alice consciousness to differentiate through interactions with us,  
 relatively to us.
 
 Bruno

OK, that clarifies things and it corresponds with my intuition that 
consciousness is relative to an environment.  I can't seem to answer the
question "is MG-Alice conscious, yes or no?", but I can say she is conscious
within the movie environment, but not within our environment.  This is similar 
to Stathis asking about consciousness within a rock.  We could say the thermal 
motions of atoms within the rock may compute consciousness, but it is a 
consciousness within the rock environment, not in ours.

Brent


Re: MGA 3

2008-12-02 Thread Jason Resch
On Sun, Nov 30, 2008 at 11:33 AM, Bruno Marchal [EMAIL PROTECTED] wrote:



 All this is a bit complex because we have to take well into account
 the distinction between

 A computation in the real world,
 A description of a computation in the real world,

 And then most importantly:

 A computation in Platonia
 A description of a computation in Platonia.

 I argue that consciousness supervenes on computation in Platonia. Even
 in Platonia consciousness does not supervene on description of the
 computation, even if those description are 100% precise and correct


Bruno, this is interesting and I have had similar thoughts of late
along this vein.  The trouble is, I don't see how the real world can be
differentiated from Platonia.  Just as the UD contains instances of itself,
and hence computations within computations, can't mathematical objects
contain mathematical objects?  If so then aren't our actions in this
universe just as mathematically or computationally fundamental as any other
instantiation in platonia?  Platonia might be highly interconnected even
fractal and so performing a computation in this universe in a sense hasn't
created anything new, but created a link to other identical things which
have always been there, and in the timelessness of platonia one can't say
which came before, or which is the original or most real.

After wrestling with block time, the MGA, and computationalism I'm starting
to wonder how computations are implemented in a 4 dimensional and static
mathematical object.  The best I can come up with is that the mathematical
structure is defined by some equation or equations, and that by virtue of
this imposed order, defines relations between particles.  Computation
depends on relations, be it electrons in silicon, Chinese with radios or a
system of beer cans and ping-pong balls; from the outside there is little or
no indication what is going on is forming consciousness, it is only relative
from the inside, and since these relations carry state and information
across one of the 4 dimensions of the universe we end up with DNA and brains
which record and process information in sequence, or so it appears to us,
trapped within this equation defined in Platonia.

In the case of a movie that is in this physical world, no mathematical
equation defines the progression between frames and there is no conveyance of
information; the alteration of one frame does not affect any other, which
would not be the case, nor even possible, with a timeless mathematical object.

Jason




Re: MGA 3

2008-12-01 Thread Bruno Marchal


On 30 Nov 2008, at 19:14, Günther Greindl wrote:


 Hello Bruno,

 I must admit you have completely lost me with MGA 3.

 With MGA 1 and 2, I would say that, with MEC+MAT, also the
 projection of the movie (and Lucky Alice in 1) are conscious - because
 it supervenes on the physical activity.

 MEC says: it's the computation that counts, not the substrate.

 MAT says: we need some substrate to perform a computation. In MGA 1  
 and
 2 we have substrates (neurons or an optical boolean graph that performs
 the
 computation).

 Now in MGA 3 you say:

 Now, consider the projection of the movie of the activity of Alice's
 brain, the movie graph.
 Is it necessary that someone look at that movie? Certainly not.

 Agreed.

 Is it necessary to have a screen? Well, the range of activity here is
 just one dynamical description of one computation. Suppose we make a
 hole in the screen. What goes in and out of that hole is exactly the
 same, with the hole and without the hole. For that unique activity,  
 the
 hole in the screen is functionally equivalent to the subgraph which  
 the
 hole removed.

 We can remove those optical boolean nodes which are not relevant for  
 the
  caterpillar dream

 Clearly we can make a hole as large as the screen, so no
 need for a screen.

 but no! Then we wouldn't have a substrate anymore.


Oh! That is not true! We still have the projector and the film. We can
project the movie in the air or directly in your eyes.
I agree with this when the film itself is made empty, but then
I can recover a counterfactually correct computation by adding inert
material!




 You are dropping MAT
 at this step,

No. Only when I got that Alice's consciousness supervenes on the empty
film (with or without inert material).



 not leading MEC+MAT to a contradiction.

 But this reasoning goes through if we make the hole in the film  
 itself.
 Reconsider the image on the screen: with a hole in the film itself,  
 you
 get a hole in the movie, but everything which enters and goes out
 of the
 hole remains the same, for that (unique) range of activity.  The  
 hole
 has trivially the same functionality as the subgraph functionality
 whose special behavior was described by the film. And this is true  
 for
 any subparts, so we can remove the entire film itself.

 We can talk about this part after I understand why you can drop our
 optical boolean network *grin*


it is really something people have to meditate on. I could have concluded
in the absurdity of MAT (with MEC) at MGA 2. It is hard for me to take
people seriously when they argue that the consciousness of Alice
supervenes on a movie of her brain activity. There is no causality,
nor computation, during the *projection* of the movie. Alice's
experience is related to ALL computations going through those states,  
not to descriptions of those states which can be made and collected
in other histories. Locally it makes sense to ascribe *that*  
consciousness when you have the means to interpret (through some
universal machine) her computational states.

[Consciousness of (x,t)] is never [physical states] at (x,t)

it is:

[Consciousness of (x,t)] is always all computational states (in the UD 
°) corresponding to that experience. (It is an indexical view of  
reality).

And computational states can be defined by true platonic relation  
between numbers. (The usual way is done with Kleene predicate).

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-12-01 Thread Bruno Marchal

Hi Abram,


On 30 Nov 2008, at 19:17, Abram Demski wrote:


 Bruno,

 No, she cannot be conscious that she is partially conscious in this
 case, because the scenario is set up such that she does everything as
 if she were fully conscious-- only the counterfactuals change. But, if
 someone tested those counterfactuals by doing something that the
 recording didn't account for, then she may or may not become conscious
 of the fact of her partial consciousness-- in that case it would be
 very much like brain damage.


A very serious brain damage!




 Anyway, yes, I am admitting that the film of the graph lacks
 counterfactuals and is therefore not conscious.


OK.



 My earlier splitting
 of the argument into an argument about (1) and a separate argument
 against (2) was perhaps a bit silly, because the objection to (2) went
 far enough back that it was also an objection to (1). I split the
 argument like that just because I saw an independent flaw in the
 reasoning of (1)... anyway...

 Basically, I am claiming that there is a version of COMP+MAT that MGA
 is not able to derive a contradiction from. The version goes something
 like this:

 Yes, consciousness supervenes on computation, but that computation
 needs to actually take place (meaning, physically). Otherwise, how
 could consciousness supervene on it?


Yes, but with UDA the contrary happens. Even in a material world, the
question becomes: how could consciousness remain attached to this
matter?
(It is simpler to understand this issue by supposing some concrete
universal deployment in the real universe, and this provides the
motivation for MGA. The concreteness of the UD is a red herring.)

You seem to forget that the MAT mind-body problem is not solved. I
mean this is what all experts in the field agree on. To invoke matter
to have something on which consciousness can supervene seems to me
a gap explanation. It introduces more mystery than needed.


 Now, in order for a computation
 to be physically instantiated, the physical instantiation needs to
 satisfy a few properties. One of these properties is clearly some sort
 of isomorphism between the computation and the physical instantiation:
 the actual steps of the computation are represented in physical form.
 A less obvious requirement is that the physical computation needs to
 have the proper counterfactuals: if some external force were to modify
 some step in the computation, the computation must progress according
 to the new computational state (as translated by the isomorphism).

You will be led to difficulties, like giving a computational role to  
inert material. It is ok, because it saves the counterfactuals (and
thus MEC), but at the price of attributing a flow of conscious
experience (in real time) to inert material. I can't swallow that,  
especially if the motivation is going back to the unsolved problems of  
mind, matter and their relations.

By dropping MAT, we have an explanation of consciousness, or of the
reason why numbers, due to their true relations with many other
numbers, can develop from inside stable (from their point of view) beliefs
about reality and about realities including, as evidence can be found,
physical realities. Numbers, or combinators, etc.

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-12-01 Thread Bruno Marchal


On 01 Dec 2008, at 03:25, Russell Standish wrote:


 On Sun, Nov 30, 2008 at 07:10:43PM +0100, Bruno Marchal wrote:

 I am speaking as someone unconvinced that MGA2 implies an
 absurdity. MGA2 implies that the consciousness is supervening on the
 stationary film.


 ?  I could agree, but is this not absurd enough, given MEC and the
 definition of the physical superveneience thesis;

 It is, prima facie, no more absurd than consciousness supervening on a
 block universe.


 A block universe is nondynamic by definition. But looked at another
 way, (ie from the inside) it is dynamic. It neatly illustrates why
 consciousness can supervene on a stationary film (because it is
 stationary when viewed from the inside).

 OK, but then you clearly change the physical supervenience thesis.


 How so? The stationary film is a physical object, I would have  
 thought.


I don't understand this. The physical supervenience thesis associates
consciousness AT (x,t) to a computational state AT (x,t). The idea is  
that consciousness can be created in real time by the physical  
running of a computation (viewed or not in a block universe).

With the stationary film, this does not make sense. Alice's experience
of a dream is finite and short; the film lasts as long as you want. I
think I see what you are doing: you take the stationary film as an  
incarnation of a computation in Platonia. In that sense you can  
associate the platonic experience of Alice to it, but this is a  
different physical supervenience thesis. And I argue that even this  
cannot work, because the movie does not capture a computation. The  
universal interpreter is lacking. It could even correspond to another  
experience, if the graph were a movie of another sort of computer, for
example with NAND substituted for the NOR.








 The film, however does need
 to be sufficiently rich, and also needs to handle counterfactuals
 (unlike the usual sort of movie we see which has only one plot).


 OK. Such a film could be said to be a computation. Of course you are
 not talking about a stationary thing, which, be it physical or
 immaterial, cannot handle counterfactuals.


 If true, then a block universe could not represent the
 Multiverse. Maybe so, but I think a lot of people might be surprised
 at this one.


I am not sure I can give sense to an expression like "the multiverse
or the block universe can or cannot handle counterfactuals". They
have no inputs and no outputs.


Bruno
http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-12-01 Thread Günther Greindl

Hi Bruno,

 but no! Then we wouldn't have a substrate anymore.
 Oh! That is not true! We still have the projector and the film. We can
 project the movie in the air or directly in your eyes.

Ok I see now where our intuitions differ (always the problem with 
thought experiments) - but maybe we can clear this up and see where it
leads...

 it is really something people have to meditate on. I could have concluded
 in the absurdity of MAT (with MEC) at MGA 2. It is hard for me to take
 people seriously when they argue that the consciousness of Alice
 supervenes on a movie of her brain activity. There is no causality,
 nor computation, during the *projection* of the movie.

If that is how you see MAT (you require causality) - then I would also 
agree - MGA 2 shows absurdity.

Alice's  
 experience is related to ALL computations going through those states,  
 not to descriptions of those states which can be made and collected
 in other histories. Locally it makes sense to ascribe *that*  
 consciousness when you have the means to interpret (through some
 universal machine) her computational states.

That is already part of your theory (UDA and all) (as I understand it), 
but not included already in COMP or in MAT.

 [Consciousness of (x,t)] is never [physical states] at (x,t)

For me, the above expresses the essence of (naive) MAT - let's call it 
NMAT.

So, clearly:
NMAT: [Consciousness of (x,t)] supervenes on [physical states] at (x,t)

And on physical states only! Not on the causal relations of these states 
(block universe view).

Your argument goes like this:
 it is:
 [Consciousness of (x,t)] is always all computational states (in the UD 
 °) corresponding to that experience. (It is an indexical view of  
 reality).

And I share it IF we can show that MAT+MEC is inconsistent. But I am not 
convinced yet.

For me, the essence of MEC (COMP) is this:

COMP: there is a level at which a person can be substituted at a digital 
level (we don't have to go down to infinity), and where this digital 
description is enough to reconstitute this person elsewhere and elsewhen,
independent of substrate.


NMAT additionally requires that the substrate for COMP be some 
mysterious substance, and not only a platonic relation.

My intuition tells me this can't be - we have to drop either MEC or NMAT.

But MGA 3, when dropping the boolean gates, violates NMAT, because:
NMAT: [Consciousness of (x,t)] supervenes on [physical states] at (x,t)

And the physical states relevant were the _states of the boolean graph_
(the movie projector was just the lucky cosmic ray).

Do you have a different definition of MAT? Do you require causal dynamics 
for MAT?

The problem with NMAT as I define it is the issue raised in the Putnam 
paper: does every rock implement every finite-state automaton?

Chalmers makes the move to implementation, so introduces causal dynamics.
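Putnam's worry can be made concrete in a few lines (a hypothetical sketch, not from the paper; all names are ours): for any sequence of distinct physical states, a post-hoc mapping can be defined under which that sequence "implements" an arbitrary FSA run, which is exactly why Chalmers adds causal, counterfactual-supporting constraints on implementation.

```python
# Sketch of Putnam's triviality argument: any sequence of distinct
# "physical" states can be mapped onto any FSA run of the same length,
# so mere state-sequence correspondence is too weak a notion of
# implementation. (Illustrative only; names are ours, not Putnam's.)

def fsa_run(transition, start, steps):
    """Run an inputless finite-state automaton for `steps` steps."""
    state, run = start, [start]
    for _ in range(steps):
        state = transition[state]
        run.append(state)
    return run

def putnam_mapping(physical_states, fsa_states):
    """Map distinct physical states onto the FSA run, post hoc."""
    assert len(physical_states) == len(fsa_states)
    return dict(zip(physical_states, fsa_states))

# A toy 2-state FSA: A -> B -> A -> ...
run = fsa_run({"A": "B", "B": "A"}, "A", 3)

# "The rock": four arbitrary, distinct physical states.
rock = ["r0", "r1", "r2", "r3"]
mapping = putnam_mapping(rock, run)

# Under this mapping the rock's history "implements" the FSA run...
assert [mapping[s] for s in rock] == run
# ...but the mapping supports no counterfactuals: it says nothing about
# what the rock would have done in any unvisited physical state.
```

The mapping exists for *any* rock history of the right length, which is the triviality; the causal-dynamics requirement (SMAT, or Chalmers' implementation conditions) is what rules it out.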

So, sophisticated MAT would probably be:
SMAT: [Consciousness of (x,t)] supervenes on [physical states] at (x,t) 
over a timespan delta(t) _if_ sufficiently complex causal dynamics are 
at work during this timespan relating the physical states.


Then I would say: MGA 2 (already) shows that SMAT+MEC are not 
compatible. No need for MGA 3.

For NMAT+MEC (which is problematic for other reasons) MGA 3 is not 
convincing.

Would you agree with this?

Cheers,
Günther




Re: MGA 3

2008-12-01 Thread Abram Demski

Bruno,

It sounds like what you are saying in this reply is that my version of
COMP+MAT is consistent, but counter to your intuition (because you
cannot see how consciousness could be attached to physical stuff). If
this is the case, then it sounds like MGA only works for specific
versions of MAT-- say, versions of MAT that claim consciousness hinges
only on the matter, not on the causal relationships. In other words,
what Günther called NMAT. So you need a different argument against--
let's call it CMAT, for causal MAT. The olympization argument only
works if COMP+CMAT can be shown to imply the removability of inert
matter... which I don't think it can, because that inert matter here
has a causal role to play in the counterfactuals, and is therefore
essential to the physical computation.

--Abram

On Mon, Dec 1, 2008 at 11:20 AM, Bruno Marchal [EMAIL PROTECTED] wrote:

 Hi Abram,


 On 30 Nov 2008, at 19:17, Abram Demski wrote:


 Bruno,

 No, she cannot be conscious that she is partially conscious in this
 case, because the scenario is set up such that she does everything as
 if she were fully conscious-- only the counterfactuals change. But, if
 someone tested those counterfactuals by doing something that the
 recording didn't account for, then she may or may not become conscious
 of the fact of her partial consciousness-- in that case it would be
 very much like brain damage.


 A very serious brain damage!




 Anyway, yes, I am admitting that the film of the graph lacks
 counterfactuals and is therefore not conscious.


 OK.



 My earlier splitting
 of the argument into an argument about (1) and a separate argument
 against (2) was perhaps a bit silly, because the objection to (2) went
 far enough back that it was also an objection to (1). I split the
 argument like that just because I saw an independent flaw in the
 reasoning of (1)... anyway...

 Basically, I am claiming that there is a version of COMP+MAT that MGA
 is not able to derive a contradiction from. The version goes something
 like this:

 Yes, consciousness supervenes on computation, but that computation
 needs to actually take place (meaning, physically). Otherwise, how
 could consciousness supervene on it?


 Yes but with UDA the contrary happens. Even in a material world, the
 question becomes: how could consciousness remain attached to this
 matter?
 (It is simpler to understand this issue by supposing some concrete
 universal deployment in the real universe, and this provides the
 motivation for MGA; the concreteness of the UD is a red herring.)

 You seem to forget that the MAT mind-body problem is not solved. I
 mean this is what all experts in the field agree on. To invoke matter
 just to have something for consciousness to supervene on seems to me
 a gap explanation. It introduces more mystery than needed.


 Now, in order for a computation
 to be physically instantiated, the physical instantiation needs to
 satisfy a few properties. One of these properties is clearly some sort
 of isomorphism between the computation and the physical instantiation:
 the actual steps of the computation are represented in physical form.
 A less obvious requirement is that the physical computation needs to
 have the proper counterfactuals: if some external force were to modify
 some step in the computation, the computation must progress according
 to the new computational state (as translated by the isomorphism).

 You will be led to difficulties, like giving a computational role to
 inert material. It is ok, because it saves the counterfactuals (and
 thus MEC), but at the price of attributing a flow of conscious
 experience (in real time) to inert material. I can't swallow that,
 especially if the motivation is going back to the unsolved problems of
 mind, matter and their relations.

 By dropping MAT, we have an explanation of consciousness, or of the
 reason why numbers, due to their true relations with many other
 numbers, can develop from the inside stable (from their point of view)
 beliefs about reality, and about realities which include, as evidence
 can be found, physical realities. Numbers, or combinators, etc.

 Bruno

 http://iridia.ulb.ac.be/~marchal/




 





Re: MGA 3

2008-12-01 Thread Brent Meeker

Bruno Marchal wrote:
 
 On 01 Dec 2008, at 03:25, Russell Standish wrote:
 
 On Sun, Nov 30, 2008 at 07:10:43PM +0100, Bruno Marchal wrote:
 I am speaking as someone unconvinced that MGA2 implies an
 absurdity. MGA2 implies that the consciousness is supervening on the
 stationary film.

 ?  I could agree, but is this not absurd enough, given MEC and the
 definition of the physical supervenience thesis?
 It is, prima facie, no more absurd than consciousness supervening on a
 block universe.

 A block universe is nondynamic by definition. But looked at another
 way, (ie from the inside) it is dynamic. It neatly illustrates why
 consciousness can supervene on a stationary film (because it is
 stationary when viewed from the inside).
 OK, but then you clearly change the physical supervenience thesis.

 How so? The stationary film is a physical object, I would have  
 thought.
 
 
 I don't understand this. The physical supervenience thesis associates
 consciousness AT (x,t) to a computational state AT (x,t).

Stated this way, it seems to assume that the causal relations between the 
states are irrelevant; only the states matter.

 The idea is
 that consciousness can be created in real time by the physical
 running of a computation (viewed or not in a block universe).

Well we're pretty sure that brains do this.

 
 With the stationary film, this does not make sense. Alice experience  
 of a dream is finite and short, the film lasts as long as you want. I  
 think I see what you are doing: you take the stationary film as an  
 incarnation of a computation in Platonia. In that sense you can  
 associate the platonic experience of Alice to it, but this is a  
 different physical supervenience thesis. And I argue that even this  
 cannot work, because the movie does not capture a computation. 

I was thinking along the same lines. But then the question is: what does 
capture a computation? Where in the thought experiments, starting with 
natural Alice and ending with pictures of Alice's brain states, did we 
lose the computation? Is it important that the sequence be in time rather 
than in space or some other order? Is it the loss of causal relations, or 
of counterfactuality?

Brent

The
 universal interpreter is lacking. It could even correspond to another
 experience, if the graph were a movie of another sort of computer, for
 example with NAND substituted for NOR.
 
 
 
 
 

 The film, however does need
 to be sufficiently rich, and also needs to handle counterfactuals
 (unlike the usual sort of movie we see which has only one plot).

 OK. Such a film could be said to be a computation. Of course you are
 not talking about a stationary thing, which, be it physical or
 immaterial, cannot handle counterfactuals.

 If true, then a block universe could not represent the
 Multiverse. Maybe so, but I think a lot of people might be surprised
 at this one.
 
 
 I am not sure I can give sense to an expression like the multiverse  
 or the block universe can or cannot handle counterfactuals. They  
 have no inputs, nor outputs.
 
 
 Bruno
 http://iridia.ulb.ac.be/~marchal/
 
 
 
 
  
 





Re: MGA 3

2008-11-30 Thread Russell Standish

On Sat, Nov 29, 2008 at 10:11:30AM +0100, Bruno Marchal wrote:
 
 
 On 28 Nov 2008, at 10:46, Russell Standish wrote:
 
 
  On Wed, Nov 26, 2008 at 10:09:01AM +0100, Bruno Marchal wrote:
  MGA 3
 
  ...
 
  But this reasoning goes through if we make the hole in the film
  itself. Reconsider the image on the screen: with a hole in the film
  itself, you get a hole in the movie, but everything which enters and
  goes out of the hole remains the same, for that (unique) range of
  activity. The hole trivially has the same functionality as the
  subgraph whose special behavior was described by the
  film. And this is true for any subparts, so we can remove the entire
  film itself.
 
 
  I don't think this step follows at all. Consciousness may supervene on
  the stationary unprojected film,
 
 This, I don't understand. And, btw, if that is true, then the physical  
 supervenience thesis is already wrong. The
 physical supervenience thesis asks that consciousness is associated in  
 real time and space with the activity of some machine (with MEC).

I am speaking as someone unconvinced that MGA2 implies an
absurdity. MGA2 implies that the consciousness is supervening on the
stationary film.

BTW - I don't think the film is conscious by virtue of the
counterfactuals issue, but that's a whole different story. And
Olympization doesn't work, unless we rule out the multiverse.

 
  Why does the physical supervenience require that all instantiations of
  a consciousness be dynamic? Surely, it suffices that some are?
 
 
 What do you mean by an instantiation of a dynamical process which is
 not dynamic? Even a block universe describes a dynamical process, or a
 variety of dynamical processes.
 

A block universe is nondynamic by definition. But looked at another
way, (ie from the inside) it is dynamic. It neatly illustrates why
consciousness can supervene on a stationary film (because it is
stationary when viewed from the inside). The film, however does need
to be sufficiently rich, and also needs to handle counterfactuals
(unlike the usual sort of movie we see which has only one plot).

 
 
 
 
  c) Eliminate the hypothesis that there is a concrete deployment in the
  seventh step of the UDA. Use UDA(1...7) to define properly the
  computationalist supervenience thesis. Hint: reread the remarks
  above.
 
  I have no problems with this conclusion. However, we cannot eliminate
  supervenience on phenomenal physics, n'est-ce pas?
 
 We cannot eliminate supervenience of consciousness on what we take as
 other persons, indeed. Of course phenomenal physics is a first person
 subjective creation, and it helps to entangle our (abstract)
 computational histories. That is the role of a brain. It does not
 create consciousness; it only makes higher the probability for
 that consciousness to be able to manifest itself relative to other
 consciousnesses. But consciousness can rely, with MEC, only on the
 abstract computation.
 

The problem is that eliminating the brain from phenomenal experience
makes that experience even more highly probable than without. This is
the Occam catastrophe I mention in my book. Obviously this contradicts
experience. 

Therefore I conclude that supervenience on a phenomenal physical brain
is necessary for consciousness. I speculate a bit that this may be due
to self-awareness, but don't have a good argument for it. It is the
elephant in the room with respect to pure MEC theories.

 Sorry for being a bit short, I have to go,
 
 Bruno
 
 
 
 
 http://iridia.ulb.ac.be/~marchal/
 
 
 
 
 
-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australiahttp://www.hpcoders.com.au





Re: MGA 3

2008-11-30 Thread Bruno Marchal


On 30 Nov 2008, at 04:23, Brent Meeker wrote:


 Bruno Marchal wrote:

 On 29 Nov 2008, at 15:56, Abram Demski wrote:

 Bruno,

 The argument was more of the type: removal of unnecessary and
 unconscious or unintelligent parts. Those parts have just no
 perspective. If they had some perspective playing a role in Alice's
 consciousness, it would mean we have not chosen the
 substitution level well. You are reintroducing some consciousness
 into the elementary parts here, I think.

 The problem would not be with removing individual elementary parts  
 and
 replacing them with functionally equivalent pieces; this obviously
 preserves the whole. Rather with removing whole subgraphs and
 replacing them with equivalent pieces. As Alice-in-the-cave is
 supposed to show, this can remove consciousness, at least in the  
 limit
 when the entire movie is replaced...


 The limit is not relevant. I agree that if you remove Alice, you
 remove any possibility for Alice to manifest herself in your most
 probable histories. The problem is that in the activity range of the
 projected movie, removing a part of the graph changes nothing. It
 changes only the probability of recovering Alice from her history in,
 again, your most probable history.

 Isn't this reliance on probable histories assuming some physical
 theory that is not in evidence?



Not at all. I have defined history by a computation as seen from a 
first person (plural or not).
Of course, I guess I should insist on this: by 
computation I always mean the mathematical object. It makes sense only 
with respect to some universal machine, and I have chosen 
elementary arithmetic as the primitive one.

Although strictly speaking the notion of computable is an epistemic 
notion, it happens that the Church thesis makes it equivalent to a purely 
mathematical notion, and this is used to make the notion of 
probable history a purely mathematical notion (once we have a 
mathematical notion of first person; this is simple in the thought 
experiment (memory, diary, ...), and a bit more subtle in the interview 
(AUDA)).

A difficulty, in these correspondences, is that I am currently reasoning 
with MEC and MAT, just to get the contradiction, but in many 
(most) posts I reason only with MEC (having abandoned MAT).
After UDA, you can already understand that physical has to be 
equivalent with probable history for those who followed the whole 
UDA+MGA: physical has to refer to the most probable (and hopefully 
sharable) relative computational history.
This is already the case with just UDA, if you assume both the 
existence of a physical universe and of a concrete UD running in 
that concrete universe. MGA is designed to eliminate the assumption of 
a physical universe and of the concrete UD.






 There is no physical causal link
 between the experience attributed to the physical computation and the
 causal history of projecting a movie.

 But there is a causal history for the creation of the movie - it's a  
 recording
 of Alice's brain functions which were causally related to her  
 physical world.



Assuming MEC+MAT you are right indeed. But the causal history of the 
creation of the movie is not the same computation or causal chain 
as the execution of Alice's mind and Alice's brain during her 
original dream. If you abstract away that difference, it means 
you already don't accept the physical supervenience thesis, or, again, 
you are introducing magical knowledge into the elementary parts running 
the computation.
You can only forget the difference between those two computations by 
abstracting from the physical part of the story. This means you are 
using exclusively the computational supervenience. MGA should make 
clear (but OK, I warned MGA is subtle) that consciousness has to 
be related to the genuine causality or history. But it is that very 
genuineness that physics can accidentally reproduce in a non-genuine 
way, like the brain-movie projection, making the physical 
supervenience absurd.

It seems to me quasi-obvious that it is ridiculous to attribute 
consciousness to the physical events of projecting the movie of a 
brain. That movie gives a pretty detailed description of the 
computations, but there is just no computation, nor even a genuine 
causal relation between the states. Even one frame is not a genuine 
physical computational state, only a relative description of one. In a 
cartoon, if you see someone throwing a ball at a window, the 
description of the broken glass is not caused by the description of 
someone throwing a ball. And nothing changes, for the moment of the 
projection of the movie, if the cartoon has been made from a real 
similar filmed situation.
To attribute consciousness to the stationary (non-projected) film 
immediately contradicts the supervenience thesis, of course.
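Bruno's point that a frame-by-frame description fixes no computation can be made concrete with a toy sketch (our example, not from the thread): the same recorded trace of a node's inputs and outputs is consistent with the node being a NOR gate or a NAND gate, so the recording alone underdetermines which computation, if any, occurred.

```python
# A recorded "movie" of a single two-input node: each frame logs
# (input_a, input_b, output). In this particular run the two inputs
# happen always to agree, so the trace is consistent with the node
# being a NOR gate *or* a NAND gate -- the recording underdetermines
# the computation. (Toy illustration, not from the original post.)

NOR = lambda a, b: int(not (a or b))
NAND = lambda a, b: int(not (a and b))

# The film: just frames, no gates, no causation.
film = [(0, 0, 1), (1, 1, 0), (0, 0, 1), (1, 1, 0)]

consistent_with_nor = all(NOR(a, b) == out for a, b, out in film)
consistent_with_nand = all(NAND(a, b) == out for a, b, out in film)

assert consistent_with_nor and consistent_with_nand
# The counterfactual frame (0, 1, ...) never occurs, and it is exactly
# there that NOR (giving 0) and NAND (giving 1) would come apart.
```

This is the NAND-for-NOR substitution Bruno mentions: without the unvisited (counterfactual) input combinations, the trace could "be" either computation.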

All this is a bit complex because we have to take well into account  
the distinction between


Re: MGA 3

2008-11-30 Thread Bruno Marchal
Abram,


 My answer would have to be, no, she lacks the necessary counterfactual
 behaviors during that time.

? The film of the graph also lacks the counterfactuals.



 And, moreover, if only part of the brain
 were being run by a recording


... which lacks the counterfactual, ...


 then she would lack only some
 counterfactuals,


I don't understand. The recording lacks all the counterfactuals. You  
can recover them from inert material, true, but this is true for the  
empty graph too (both in dream and awake situations).



 and so she would count as partially conscious.


Hmmm  Can she be conscious that she is partially conscious? I mean  
is it like after we drink alcohol or something?


Bruno


http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-11-30 Thread Bruno Marchal

On 30 Nov 2008, at 11:57, Russell Standish wrote:


 On Sat, Nov 29, 2008 at 10:11:30AM +0100, Bruno Marchal wrote:


 On 28 Nov 2008, at 10:46, Russell Standish wrote:


 On Wed, Nov 26, 2008 at 10:09:01AM +0100, Bruno Marchal wrote:
 MGA 3

 ...

 But this reasoning goes through if we make the hole in the film
 itself. Reconsider the image on the screen: with a hole in the film
 itself, you get a hole in the movie, but everything which enters
 and
 go out of the hole remains the same, for that (unique) range of
 activity.  The hole has trivially the same functionality than the
 subgraph functionality whose special behavior was described by the
 film. And this is true for any subparts, so we can remove the  
 entire
 film itself.


 I don't think this step follows at all. Consciousness may  
 supervene on
 the stationary unprojected film,

 This, I don't understand. And, btw, if that is true, then the  
 physical
 supervenience thesis is already wrong. The
 physical supervenience thesis asks that consciousness is associated  
 in
 real time and space with the activity of some machine (with MEC).

 I am speaking as someone unconvinced that MGA2 implies an
 absurdity. MGA2 implies that the consciousness is supervening on the
 stationary film.


?  I could agree, but is this not absurd enough, given MEC and the 
definition of the physical supervenience thesis?




 BTW - I don't think the film is conscious by virtue of the
 counterfactuals issue, but that's a whole different story. And
 Olympization doesn't work, unless we rule out the multiverse.


 Why does the physical supervenience require that all  
 instantiations of
 a consciousness be dynamic? Surely, it suffices that some are?


 What do you mean by an instantiation of a dynamical process which is
 not dynamic? Even a block universe describes a dynamical process, or a
 variety of dynamical processes.


 A block universe is nondynamic by definition. But looked at another
 way, (ie from the inside) it is dynamic. It neatly illustrates why
 consciousness can supervene on a stationary film (because it is
 stationary when viewed from the inside).

OK, but then you clearly change the physical supervenience thesis.


 The film, however does need
 to be sufficiently rich, and also needs to handle counterfactuals
 (unlike the usual sort of movie we see which has only one plot).


OK. Such a film could be said to be a computation. Of course you are  
not talking about a stationary thing, which, be it physical or  
immaterial, cannot handle counterfactuals.


 The problem is that eliminating the brain from phenomenal experience
 makes that experience even more highly probable than without. This is
 the Occam catastrophe I mention in my book. Obviously this contradicts
 experience.

 Therefore I conclude that supervenience on a phenomenal physical brain
 is necessary for consciousness.


It is vague enough so that I can interpret it favorably through MEC.


Bruno




 I speculate a bit that this may be due
 to self-awareness, but don't have a good argument for it. It is the
 elephant in the room with respect to pure MEC theories.

 Sorry for being a bit short, I have to go,

 Bruno




 http://iridia.ulb.ac.be/~marchal/





 -- 

 
 A/Prof Russell Standish  Phone 0425 253119 (mobile)
 Mathematics   
 UNSW SYDNEY 2052   [EMAIL PROTECTED]
 Australiahttp://www.hpcoders.com.au
 

 

http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-11-30 Thread Günther Greindl

Hello Bruno,

I must admit you have completely lost me with MGA 3.

With MGA 1 and 2, I would say that, with MEC+MAT, the projection of 
the movie (and Lucky Alice in 1) is also conscious - because 
it supervenes on the physical activity.

MEC says: it's the computation that counts, not the substrate.

MAT says: we need some substrate to perform a computation. In MGA 1 and 
2 we have substrates (neurons, or an optical boolean graph that performs 
the computation).

Now in MGA 3 you say:

 Now, consider the projection of the movie of the activity of Alice's 
 brain, the movie graph.
 Is it necessary that someone look at that movie? Certainly not. 

Agreed.

 Is it necessary to have a screen? Well, the range of activity here is 
 just one dynamical description of one computation. Suppose we make a 
 hole in the screen. What goes in and out of that hole is exactly the 
 same, with the hole and without the hole. For that unique activity, the 
 hole in the screen is functionally equivalent to the subgraph which the 
 hole removed. 

We can remove those optical boolean nodes which are not relevant for the 
caterpillar dream.

 Clearly we can make a hole as large as the screen, so no
 need for a screen.

but no! Then we wouldn't have a substrate anymore. You are dropping MAT 
at this step, not leading MEC+MAT to a contradiction.

 But this reasoning goes through if we make the hole in the film itself. 
 Reconsider the image on the screen: with a hole in the film itself, you 
 get a hole in the movie, but everything which enters and goes out of the 
 hole remains the same, for that (unique) range of activity. The hole 
 trivially has the same functionality as the subgraph 
 whose special behavior was described by the film. And this is true for 
 any subparts, so we can remove the entire film itself.

We can talk about this part after I understand why you can drop our 
optical boolean network *grin*


Cheers,
Günther




Re: MGA 3

2008-11-30 Thread Abram Demski

Bruno,

No, she cannot be conscious that she is partially conscious in this
case, because the scenario is set up such that she does everything as
if she were fully conscious-- only the counterfactuals change. But, if
someone tested those counterfactuals by doing something that the
recording didn't account for, then she may or may not become conscious
of the fact of her partial consciousness-- in that case it would be
very much like brain damage.

Anyway, yes, I am admitting that the film of the graph lacks
counterfactuals and is therefore not conscious. My earlier splitting
of the argument into an argument about (1) and a separate argument
against (2) was perhaps a bit silly, because the objection to (2) went
far enough back that it was also an objection to (1). I split the
argument like that just because I saw an independent flaw in the
reasoning of (1)... anyway...

Basically, I am claiming that there is a version of COMP+MAT that MGA
is not able to derive a contradiction from. The version goes something
like this:

Yes, consciousness supervenes on computation, but that computation
needs to actually take place (meaning, physically). Otherwise, how
could consciousness supervene on it? Now, in order for a computation
to be physically instantiated, the physical instantiation needs to
satisfy a few properties. One of these properties is clearly some sort
of isomorphism between the computation and the physical instantiation:
the actual steps of the computation are represented in physical form.
A less obvious requirement is that the physical computation needs to
have the proper counterfactuals: if some external force were to modify
some step in the computation, the computation must progress according
to the new computational state (as translated by the isomorphism).
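Abram's two conditions (an isomorphism to the computation's steps, plus correct counterfactual behavior) can be sketched in a toy model (our illustration; the state encoding and perturbation mechanism are assumptions, not his): a replayed recording matches the live computation frame for frame, but only the live system responds correctly when an external force modifies a step.

```python
# Toy contrast between a live computation and a replayed recording.
# Both produce the same trace; only the live one has the right
# counterfactuals when an external force flips a state mid-run.
# (Our illustration of Abram's two conditions, not his code.)

def live_step(state):
    """One step of a tiny computation: x' = NOT x, y' = x AND y."""
    x, y = state
    return (1 - x, x & y)

def run_live(state, steps, perturb=None):
    """Run the computation; `perturb=(t, s)` forces state s before step t."""
    trace = [state]
    for t in range(steps):
        if perturb and perturb[0] == t:
            state = perturb[1]  # an external force modifies this step
        state = live_step(state)
        trace.append(state)
    return trace

# The recording: the unperturbed trace, replayed frame by frame.
recording = run_live((1, 1), 3)

# Isomorphism holds: replaying the recording matches the live run.
assert recording == run_live((1, 1), 3)

# Counterfactuals: perturb step 1. The live system progresses from the
# new state, diverging from the recording...
perturbed = run_live((1, 1), 3, perturb=(1, (1, 1)))
assert perturbed != recording
# ...while the recording, replayed, can only show the old frames.
```

On this version of MAT, the movie graph fails the second condition even though it satisfies the first, which is why Abram thinks the contradiction does not go through for CMAT.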

--Abram

On Sun, Nov 30, 2008 at 12:51 PM, Bruno Marchal [EMAIL PROTECTED] wrote:
 Abram,

 My answer would have to be, no, she lacks the necessary counterfactual
 behaviors during that time.

 ? The film of the graph lacks also the counterfactuals.


 And, moreover, if only part of the brain
 were being run by a recording

 ... which lacks the counterfactual, ...

 then she would lack only some
 counterfactuals,

 I don't understand. The recording lacks all the counterfactuals. You can
 recover them from inert material, true, but this is true for the empty graph
 too (both in dream and awake situations).


 and so she would count as partially conscious.

 Hmmm  Can she be conscious that she is partially conscious? I mean is it
 like after we drink alcohol or something?

 Bruno

 http://iridia.ulb.ac.be/~marchal/



 





Re: MGA 3

2008-11-30 Thread Günther Greindl

Bruno,

I have reread MGA 2 and would like to add the following:

We have the

optical boolean graph: OBG - this computes Alice's dream.
We make a movie of this computation.


Now we run it again, but in the OBG some nodes do not perform the 
computation correctly, BUT the movie _triggers_ the nodes, so in the end 
the computation is performed.

So, with MEC+MAT and ALL NODES broken, I say this:

a) If the OBG nodes MALFUNCTION, but their function is substituted with 
the movie (on/off), it is conscious.

b) If the OBG is broken in such a way that the nodes are no longer 
active (no on/off, no signal passing), then no consciousness.



I think we can split the intuitions along these lines: if you assume 
that consciousness depends on activity along the vertices, then Alice is 
conscious in neither a) nor b), and then indeed I see why MGA 2 already 
leads to a problem with MEC+MAT.

But if I think that consciousness supervenes only on the correct 
lighting up of the nodes (not the vertices!! - I don't need causality 
then, only the correct order), then a) would be conscious, b) not, and 
MGA 3 does not work if you take away my OBG (with the node intuition)!

Cheers,
Günther




Re: MGA 3

2008-11-30 Thread Kory Heath


On Nov 30, 2008, at 10:14 AM, Günther Greindl wrote:
 I must admit you have completely lost me with MGA 3.

I still find the whole thing easier to grasp when presented in terms  
of cellular automata.

Let's say we have a computer program that starts with a large but  
finite 2D grid of bits, and then iterates the rules to some CA  
(Conway's Life, Critters, whatever) on that grid a large but finite  
number of times, and stores all of the resulting computations in  
memory, so that we have a 3D block universe in memory. And let's say 
that the resulting block universe contains patterns that MECH-MAT 
would say are conscious.
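This construction can be sketched directly (a minimal version assuming Conway's Life on a small toroidal grid; the grid size, seed, and number of generations are arbitrary choices of ours):

```python
# Minimal sketch of the setup above: iterate a CA (Conway's Life on a
# small toroidal grid) and store every generation, giving a 3D "block
# universe" in memory. Grid size and seed are arbitrary choices.

def life_step(grid):
    """One generation of Conway's Life with wraparound edges."""
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            neighbors = sum(grid[(i + di) % n][(j + dj) % n]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1)
                            if (di, dj) != (0, 0))
            alive = grid[i][j]
            # Birth on exactly 3 neighbors; survival on 2 or 3.
            new[i][j] = int(neighbors == 3 or (alive and neighbors == 2))
    return new

def block_universe(seed, generations):
    """The whole history stored at once: a list of 2D grids."""
    block = [seed]
    for _ in range(generations):
        block.append(life_step(block[-1]))
    return block

# A "blinker" oscillator (period 2) on a 5x5 grid.
seed = [[0] * 5 for _ in range(5)]
for j in (1, 2, 3):
    seed[2][j] = 1

block = block_universe(seed, 4)
assert block[0] == block[2] == block[4]  # the blinker repeats every 2 steps
```

The fading-qualia move in the next paragraph then amounts to zeroing out slices of `block` before "playing it back": the playback is unchanged wherever it is never inspected.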

If we believe that consciousness supervenes on the physical act of  
playing back the data in our block universe like a movie, then we  
have a problem. Because before we play back the movie, we can fill any  
portions of the block universe we want with zeros. So then our played  
back movie can contain conscious creatures who are walking around  
with (say) zeros where their visual cortexes should be, or their 
high-level brain functions should be, etc. In other words, we have a fading
qualia problem (which we have also called a partial zombie problem  
in these threads).

I find the argument compelling as far as it goes. But I'm not  
convinced that all or most actual, real-world mechanist-materialists  
believe that consciousness supervenes on the physical act of playing  
back the stored computations. Bruno indicates that it must, by the  
logical definitions of MECH and MAT. This just makes me feel like I  
don't really understand the logical definitions of MECH and MAT.

-- Kory





Re: MGA 3

2008-11-30 Thread Russell Standish

On Sun, Nov 30, 2008 at 07:10:43PM +0100, Bruno Marchal wrote:
 
  I am speaking as someone unconvinced that MGA2 implies an
  absurdity. MGA2 implies that the consciousness is supervening on the
  stationary film.
 
 
 ?  I could agree, but is this not absurd enough, given MEC and the  
 definition of the physical supervenience thesis?

It is, prima facie, no more absurd than consciousness supervening on a
block universe.

 
  A block universe is nondynamic by definition. But looked at another
  way, (ie from the inside) it is dynamic. It neatly illustrates why
  consciousness can supervene on a stationary film (because it is
  stationary when viewed from the inside).
 
 OK, but then you clearly change the physical supervenience thesis.
 

How so? The stationary film is a physical object, I would have thought.

 
  The film, however does need
  to be sufficiently rich, and also needs to handle counterfactuals
  (unlike the usual sort of movie we see which has only one plot).
 
 
 OK. Such a film could be said to be a computation. Of course you are  
 not talking about a stationary thing, which, be it physical or  
 immaterial, cannot handle counterfactuals.
 

If true, then a block universe could not represent the
Multiverse. Maybe so, but I think a lot of people might be surprised
at this one.

 
  The problem is that eliminating the brain from phenomenal experience
  makes that experience even more highly probable than without. This is
  the Occam catastrophe I mention in my book. Obviously this contradicts
  experience.
 
  Therefore I conclude that supervenience on a phenomenal physical brain
  is necessary for consciousness.
 
 
 It is vague enough so that I can interpret it favorably through MEC.
 

That is my point - physical supervenience (aka materialism) is not
only not contradicted by MEC (aka COMP), but in fact is necessary for
it to even work. Only what I call naive physicalism
(aka the need for a concrete instantiation of a computer running the
UD) is contradicted by MEC.

What _is_ interesting is that not all philosophers distinguish between
physicalism and materialism. David Chalmers does not, but Michael
Lockwood does, for instance. Much of this revolves around the
ontological status of emergence.

-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australiahttp://www.hpcoders.com.au





Re: MGA 3

2008-11-29 Thread Bruno Marchal

Abram,

On 29 Nov 2008, at 04:49, Abram Demski wrote:


 Bruno,

 I have done some thinking, and decided that I don't think this last
 step of the argument works for me. You provided two arguments, and so
 I provide two refutations.

 1. (argument by removal of unnecessary parts): Suppose Alice lives in
 a cave all her life, with bread and water tossed down keeping her
 alive, but nobody ever checking to see that she eats it; to the
 outside world, she is functionally unnecessary. But from Alice's point
 of view, she is not functionally removable, nor are the other things
 in the cave that the outside world knows nothing about. The point is,
 we need to be careful about labeling things functionally removable; we
 need to ask from whose perspective?. A believer in MAT who accepted
 the consciousness of the movie could claim that such an error is being
 made.


The argument was more of the type: removal of unnecessary and  
unconscious or unintelligent parts. Those parts have just no  
perspective. If they have some perspective playing a role in Alice's  
consciousness, it would mean we have not well chosen the substitution  
level. You are reintroducing some consciousness on the elementary  
parts, here, I think.





 2. (argument by spreading movie in space instead of time): Here I need
 to go back further in the argument... I still think the objection
 about hypotheticals (ie counterfactuals) works just fine. :)


Then you think that someone who is conscious with some brain which,  
for some reason, never uses some neurons, could lose consciousness  
when those never-used neurons are removed?
If that were true, how could we still be confident in an artificial  
digital brain? You may be right, but the MEC hypothesis would be put  
in doubt.

Bruno



http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-11-29 Thread Bruno Marchal


On 28 Nov 2008, at 23:20, Abram Demski wrote:


 Hi Bruno,

 So, basically, you are saying that I'm offering an alternative
 argument against materialism, correct?

 It seems to me you were going in that direction, yes.


 Well, *I* was suggesting that we run up against the problem of time in
 *either* direction (physical reality / mathematical reality); so the
 real problem would be a naive view of time, rather than COMP + MAT.
 But, you are probably right: the problem really only applies to MAT.
 On the other hand, I might try to take up the argument again after
 reading UDA. :)

 With the MEC hypothesis, a believer in comp goes to hell.  (Where a
 believer in p is someone who takes p for granted.)
 Comp is, like self-consistency, something a self-observing machine can
 guess, hope (or fear), but can never take for granted. It *is*
 theological. No machine can prove its theology, but a Löbian machine can
 study the complete theology of more simple Löbian machines, find the
 invariant for the consistent extensions, and lift it to themselves,
 keeping consistency by consciously being aware that this has to
 be taken as an interrogation; it is not for granted, so that saying
 yes to the doctor needs an act of faith, and never can be imposed.
 (Of course we can argue biology has already bet on it.)

 Yes, this is fundamentally interesting :).

 Maudlin shows that for a special computation, which supports in time
 some consciousness (by using the (physical) supervenience thesis), you
 can build a device doing the same computation with much less physical
 activity, actually with almost no physical activity at all. The
 natural reply is that such a machine no longer has the right
 counterfactual behavior. Then Maudlin shows that you can restore the
 counterfactual correctness of such a machine by adding what will be,
 for the special computation, just inert material.
 But this gives to the inert material something which plays no role, or
 would give prescience to elementary material in computations; from
 which you can conclude that MEC and MAT do not work well together.

 I am not sure this convinces me. If the inert material is useful to
 the computation in the counterfactual situations, then it is useful,
 and cannot be removed.


Yes, but with MAT the inert material has no use in the particular  
instantiation we have chosen. If it plays a role, it cannot be in  
virtue of the MEC hypothesis *together* with the MAT hypothesis. If  
not, it means you already make consciousness supervene on the  
abstract computation that the pieces of material instantiate  
accidentally here and now, not really on the physical process  
implementing that computation.
Feel free to criticize.








 Abram, are you aware that Gödel's incompleteness follows easily (in
 a few lines) from the Church thesis? Not the second theorem, but the
 first, even a stronger form of the first.

 No, I do not know that one.


I will have the occasion to explain if I decide to make the UDA  
beginning with step seven.


Bruno
http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-11-29 Thread Bruno Marchal


On 28 Nov 2008, at 10:46, Russell Standish wrote:


 On Wed, Nov 26, 2008 at 10:09:01AM +0100, Bruno Marchal wrote:
 MGA 3

 ...

 But this reasoning goes through if we make the hole in the film
 itself. Reconsider the image on the screen: with a hole in the film
 itself, you get a hole in the movie, but everything which enters and
 goes out of the hole remains the same, for that (unique) range of
 activity.  The hole has trivially the same functionality as the
 subgraph whose special behavior was described by the
 film. And this is true for any subparts, so we can remove the entire
 film itself.


 I don't think this step follows at all. Consciousness may supervene on
 the stationary unprojected film,

This, I don't understand. And, btw, if that is true, then the physical  
supervenience thesis is already wrong. The
physical supervenience thesis asks that consciousness is associated in  
real time and space with the activity of some machine (with MEC).

 but if you start making holes in it,
 you will eventually get a film on a nonconscious entity. At some
 point, the consciousness is no longer supervening on the film (but may
 well be supervening on other films that haven't been so adulterated,
 or on running machines or whatever...

 Does Alice's dream supervene (in real time and space) on the
 projection of the empty movie?


 No.


 2)

 I now give what is perhaps a simpler argument.

 A projection of a movie is a relative phenomenon. On planet 247a,
 nearby in the galaxy, they don't have screens. The film strip is as
 big as a screen, and they make the film pass behind a stroboscope
 at the right frequency in front of the public. But on planet 247b,
 movies are only for travellers! They lay out their films, as big as
 those on planet 247a, across their countries all along their train
 rails, with a lamp beside each frame, which is nice because from the
 train, thanks to its speed, you get the usual 24 frames per second.
 But we already accepted that such a movie does not need to be
 observed; the train can be empty of people. Well, the train does not
 play any role, and what remains is the static film with a lamp behind
 each frame. Are the lamps really necessary? Of course not, all right?
 So now we are obliged to accept that the consciousness of Alice
 during the projection of the movie supervenes on something completely
 inert in time and space. This contradicts the *physical*
 supervenience thesis.


 But the physics that Alice experiences will be fully dynamic. She will
 experience time, and non-inert processes that she is supervening on.

 Why does the physical supervenience require that all instantiations of
 a consciousness be dynamic? Surely, it suffices that some are?


What do you mean by an instantiation of a dynamical process which is  
not dynamic? Even a block universe describes a dynamical process, or a  
variety of dynamical processes.





 c) Eliminate the hypothesis there is a concrete deployment in the
 seventh step of the UDA. Use UDA(1...7) to define properly the
 computationalist supervenience thesis. Hint: reread the remarks  
 above.

 I have no problems with this conclusion. However, we cannot eliminate
 supervenience on phenomenal physics, n'est-ce pas?

We cannot eliminate supervenience of consciousness on what we take as  
other persons, indeed. Of course phenomenal physics is a first person  
subjective creation, and it helps to entangle our (abstract)  
computational histories. That is the role of a brain. It does not  
create consciousness; it only makes higher the probability for  
that consciousness to be able to manifest itself relatively to other  
consciousnesses. But consciousness can rely, with MEC, only on the  
abstract computation.

Sorry for being a bit short, I have to go,

Bruno




http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-11-29 Thread Abram Demski

Bruno,

 The argument was more of the type: removal of unnecessary and
 unconscious or unintelligent parts. Those parts have just no
 perspective. If they have some perspective playing a role in Alice's
 consciousness, it would mean we have not well chosen the substitution
 level. You are reintroducing some consciousness on the elementary
 parts, here, I think.


The problem would not be with removing individual elementary parts and
replacing them with functionally equivalent pieces; this obviously
preserves the whole. Rather with removing whole subgraphs and
replacing them with equivalent pieces. As Alice-in-the-cave is
supposed to show, this can remove consciousness, at least in the limit
when the entire movie is replaced...



 Then you think that if someone is conscious with some brain, which for
 some reason, does never use some neurons, could loose consciousness
 when that never used neuron is removed?
 If that were true, how could still be confident with an artificial
 digital brain. You may be right, but the MEC hypothesis would be put
 in doubt.


I am thinking of it as being the same as someone having knowledge
which they never actually use. Suppose that the situation is so
extreme that if we removed the neurons involved in that knowledge, we
will not alter the person's behavior; yet, we will have removed the
knowledge. Similarly, if the behavior of Alice in practice comes from
a recording, yet a dormant conscious portion is continually ready to
intervene if needed, then removing that dormant portion removes her
consciousness.

--Abram




Re: MGA 3

2008-11-29 Thread Brent Meeker

Bruno Marchal wrote:
 
 On 28 Nov 2008, at 10:46, Russell Standish wrote:
 
 On Wed, Nov 26, 2008 at 10:09:01AM +0100, Bruno Marchal wrote:
 MGA 3
 ...

 But this reasoning goes through if we make the hole in the film
 itself. Reconsider the image on the screen: with a hole in the film
 itself, you get a hole in the movie, but everything which enters and
 goes out of the hole remains the same, for that (unique) range of
 activity.  The hole has trivially the same functionality as the
 subgraph whose special behavior was described by the
 film. And this is true for any subparts, so we can remove the entire
 film itself.

 I don't think this step follows at all. Consciousness may supervene on
 the stationary unprojected film,
 
 This, I don't understand. And, btw, if that is true, then the physical  
 supervenience thesis is already wrong. The
 physical supervenience thesis asks that consciousness is associated in  
 real time and space with the activity of some machine (with MEC).

Then assuming MEC requires some definition of "activity", and consciousness may 
cease when there is no activity of the required kind.

Brent





Re: MGA 3

2008-11-29 Thread Bruno Marchal


On 29 Nov 2008, at 15:56, Abram Demski wrote:


 Bruno,

 The argument was more of the type: removal of unnecessary and
 unconscious or unintelligent parts. Those parts have just no
 perspective. If they have some perspective playing a role in Alice's
 consciousness, it would mean we have not well chosen the substitution
 level. You are reintroducing some consciousness on the elementary
 parts, here, I think.


 The problem would not be with removing individual elementary parts and
 replacing them with functionally equivalent pieces; this obviously
 preserves the whole. Rather with removing whole subgraphs and
 replacing them with equivalent pieces. As Alice-in-the-cave is
 supposed to show, this can remove consciousness, at least in the limit
 when the entire movie is replaced...


The limit is not relevant. I agree that if you remove Alice, you  
remove any possibility for Alice to manifest herself in your most  
probable histories. The problem is that in the range of activity of  
the projected movie, removing a part of the graph changes nothing. It  
changes only the probability of recovering Alice from her history in,  
again, your most probable history. There is no physical causal link  
between the experience attributed to the physical computation and the  
causal history of projecting a movie. The incremental removal of  
the graph highlighted the lack of causality in the movie. Perhaps not  
in the clearest way, apparently. Perhaps I should have done the case  
of a non-dream. I will come back on this.






 Then you think that someone who is conscious with some brain which,
 for some reason, never uses some neurons, could lose consciousness
 when those never-used neurons are removed?
 If that were true, how could we still be confident in an artificial
 digital brain? You may be right, but the MEC hypothesis would be put
 in doubt.


 I am thinking of it as being the same as someone having knowledge
 which they never actually use. Suppose that the situation is so
 extreme that if we removed the neurons involved in that knowledge, we
 will not alter the person's behavior; yet, we will have removed the
 knowledge. Similarly, if the behavior of Alice in practice comes from
 a recording, yet a dormant conscious portion is continually ready to
 intervene if needed, then removing that dormant portion removes her
 consciousness.


You should definitely do the removal of the graph in the non-dream  
situation. Let us do it.
Let us take a situation without complex inputs. Let us imagine Alice  
is giving a conference in a big room, so, as input, she is just blinded  
by some projector, plus some noise, and she makes a talk on Astronomy  
(to fix the things). Now from 8h30 to 8h45 pm, she has just no brain;  
she gets the motor info from a projected recording of a previous  
*perfect dream* of that conference, a dream done the night before, or  
sent from Platonia (possible in principle). Then, by magic, to  
simplify, at 8h45 she gets back the original brain, which by optical  
means inherits the state at the end of the conference in that perfect  
dream. I ask you, would you say Alice was a zombie during the  
conference?

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-11-29 Thread Bruno Marchal


On 29 Nov 2008, at 18:49, Brent Meeker wrote:

 This, I don't understand. And, btw, if that is true, then the  
 physical
 supervenience thesis is already wrong. The
 physical supervenience thesis asks that consciousness is associated  
 in
 real time and space with the activity of some machine (with MEC).

 Then assuming MEC requires some definition of activity and  
 consciousness may
 cease when there is no activity of the required kind.


We require a notion of physical activity related to a computation for  
having MEC *and* the supervenience thesis.
With MEC alone, we abandon MAT, the computational supervenience thesis  
will have to make any notion of physical causality a statistically   
emerging pattern from (hopefully sharable) first person (plural)  
points of view.

Bruno
http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-11-29 Thread Abram Demski

Bruno,

My answer would have to be, no, she lacks the necessary counterfactual
behaviors during that time. And, moreover, if only part of the brain
were being run by a recording then she would lack only some
counterfactuals, and so she would count as partially conscious.
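The distinction being drawn here, between a recording and a genuine computation, can be put in toy computational terms: a recording fixes only the actual input history, while a computation also determines what would have happened under other inputs. A minimal sketch (all names here are invented for illustration, not from the thread):

```python
# Toy contrast between a recording and a computation. The function
# "brain" stands in for a genuine computation: it is defined on every
# possible input, so it also answers counterfactual questions.

def brain(stimulus):
    # Stand-in dynamics; the choice of rule is arbitrary.
    return stimulus * 2

actual_history = [1, 3, 5]                     # inputs that really occurred
recording = {t: brain(s) for t, s in enumerate(actual_history)}

# On the actual history the recording and the computation agree exactly,
# so behaviour on that history alone cannot distinguish them...
assert all(recording[t] == brain(s) for t, s in enumerate(actual_history))

# ...but only the computation determines what *would* have happened.
assert brain(7) == 14                          # a counterfactual input
assert 3 not in recording                      # the recording is silent on it
```

The recording is "counterfactually silent": asked about an input that never occurred, it has nothing to say, which is what the appeal to counterfactual behaviors turns on.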

--Abram

On Sat, Nov 29, 2008 at 3:12 PM, Bruno Marchal [EMAIL PROTECTED] wrote:


 On 29 Nov 2008, at 15:56, Abram Demski wrote:


 Bruno,

 The argument was more of the type: removal of unnecessary and
 unconscious or unintelligent parts. Those parts have just no
 perspective. If they have some perspective playing a role in Alice's
 consciousness, it would mean we have not well chosen the substitution
 level. You are reintroducing some consciousness on the elementary
 parts, here, I think.


 The problem would not be with removing individual elementary parts and
 replacing them with functionally equivalent pieces; this obviously
 preserves the whole. Rather with removing whole subgraphs and
 replacing them with equivalent pieces. As Alice-in-the-cave is
 supposed to show, this can remove consciousness, at least in the limit
 when the entire movie is replaced...


 The limit is not relevant. I agree that if you remove Alice, you
 remove any possibility for Alice to manifest herself in your most
 probable histories. The problem is that in the range of activity of
 the projected movie, removing a part of the graph changes nothing. It
 changes only the probability of recovering Alice from her history in,
 again, your most probable history. There is no physical causal link
 between the experience attributed to the physical computation and the
 causal history of projecting a movie. The incremental removal of
 the graph highlighted the lack of causality in the movie. Perhaps not
 in the clearest way, apparently. Perhaps I should have done the case
 of a non-dream. I will come back on this.






 Then you think that someone who is conscious with some brain which,
 for some reason, never uses some neurons, could lose consciousness
 when those never-used neurons are removed?
 If that were true, how could we still be confident in an artificial
 digital brain? You may be right, but the MEC hypothesis would be put
 in doubt.


 I am thinking of it as being the same as someone having knowledge
 which they never actually use. Suppose that the situation is so
 extreme that if we removed the neurons involved in that knowledge, we
 will not alter the person's behavior; yet, we will have removed the
 knowledge. Similarly, if the behavior of Alice in practice comes from
 a recording, yet a dormant conscious portion is continually ready to
 intervene if needed, then removing that dormant portion removes her
 consciousness.


 You should definitely do the removal of the graph in the non-dream
 situation. Let us do it.
 Let us take a situation without complex inputs. Let us imagine Alice
 is giving a conference in a big room, so, as input, she is just blinded
 by some projector, plus some noise, and she makes a talk on Astronomy
 (to fix the things). Now from 8h30 to 8h45 pm, she has just no brain;
 she gets the motor info from a projected recording of a previous
 *perfect dream* of that conference, a dream done the night before, or
 sent from Platonia (possible in principle). Then, by magic, to
 simplify, at 8h45 she gets back the original brain, which by optical
 means inherits the state at the end of the conference in that perfect
 dream. I ask you, would you say Alice was a zombie during the
 conference?

 Bruno

 http://iridia.ulb.ac.be/~marchal/




 





Re: MGA 3

2008-11-29 Thread Brent Meeker

Bruno Marchal wrote:
 
 On 29 Nov 2008, at 15:56, Abram Demski wrote:
 
 Bruno,

 The argument was more of the type: removal of unnecessary and
 unconscious or unintelligent parts. Those parts have just no
 perspective. If they have some perspective playing a role in Alice's
 consciousness, it would mean we have not well chosen the substitution
 level. You are reintroducing some consciousness on the elementary
 parts, here, I think.

 The problem would not be with removing individual elementary parts and
 replacing them with functionally equivalent pieces; this obviously
 preserves the whole. Rather with removing whole subgraphs and
 replacing them with equivalent pieces. As Alice-in-the-cave is
 supposed to show, this can remove consciousness, at least in the limit
 when the entire movie is replaced...
 
 
 The limit is not relevant. I agree that if you remove Alice, you
 remove any possibility for Alice to manifest herself in your most
 probable histories. The problem is that in the range of activity of
 the projected movie, removing a part of the graph changes nothing. It
 changes only the probability of recovering Alice from her history in,
 again, your most probable history.

Isn't this reliance on probable histories assuming some physical theory that is 
not in evidence?

 There is no physical causal link
 between the experience attributed to the physical computation and the
 causal history of projecting a movie.

But there is a causal history for the creation of the movie - it's a recording 
of Alice's brain functions which were causally related to her physical world.

The incremental removal of
 the graph highlighted the lack of causality in the movie.

It seems to me there is still a causal chain - it is indirect, via creating the 
movie.

Brent

Perhaps not in
 the clearest way, apparently. Perhaps I should have done the case
 of a non-dream. I will come back on this.
 
 


 Then you think that someone who is conscious with some brain which,
 for some reason, never uses some neurons, could lose consciousness
 when those never-used neurons are removed?
 If that were true, how could we still be confident in an artificial
 digital brain? You may be right, but the MEC hypothesis would be put
 in doubt.

 I am thinking of it as being the same as someone having knowledge
 which they never actually use. Suppose that the situation is so
 extreme that if we removed the neurons involved in that knowledge, we
 will not alter the person's behavior; yet, we will have removed the
 knowledge. Similarly, if the behavior of Alice in practice comes from
 a recording, yet a dormant conscious portion is continually ready to
 intervene if needed, then removing that dormant portion removes her
 consciousness.
 
 
 You should definitely do the removal of the graph in the non-dream
 situation. Let us do it.
 Let us take a situation without complex inputs. Let us imagine Alice
 is giving a conference in a big room, so, as input, she is just blinded
 by some projector, plus some noise, and she makes a talk on Astronomy
 (to fix the things). Now from 8h30 to 8h45 pm, she has just no brain;
 she gets the motor info from a projected recording of a previous
 *perfect dream* of that conference, a dream done the night before, or
 sent from Platonia (possible in principle). Then, by magic, to
 simplify, at 8h45 she gets back the original brain, which by optical
 means inherits the state at the end of the conference in that perfect
 dream. I ask you, would you say Alice was a zombie during the
 conference?
 
 Bruno
 
 http://iridia.ulb.ac.be/~marchal/
 
 
 
 
  
 





Re: MGA 3

2008-11-28 Thread Abram Demski

Hi Bruno,

 So, basically, you are saying that I'm offering an alternative
 argument against materialism, correct?

 It seems to me you were going in that direction, yes.


Well, *I* was suggesting that we run up against the problem of time in
*either* direction (physical reality / mathematical reality); so the
real problem would be a naive view of time, rather than COMP + MAT.
But, you are probably right: the problem really only applies to MAT.
On the other hand, I might try to take up the argument again after
reading UDA. :)

 With the MEC hypothesis, a believer in comp goes to hell.  (Where a
 believer in p is someone who takes p for granted.)
 Comp is, like self-consistency, something a self-observing machine can
 guess, hope (or fear), but can never take for granted. It *is*
 theological. No machine can prove its theology, but a Löbian machine can
 study the complete theology of more simple Löbian machines, find the
 invariant for the consistent extensions, and lift it to themselves,
 keeping consistency by consciously being aware that this has to
 be taken as an interrogation; it is not for granted, so that saying
 yes to the doctor needs an act of faith, and never can be imposed.
 (Of course we can argue biology has already bet on it.)

Yes, this is fundamentally interesting :).

 Maudlin shows that for a special computation, which supports in time
 some consciousness (by using the (physical) supervenience thesis), you
 can build a device doing the same computation with much less physical
 activity, actually with almost no physical activity at all. The
 natural reply is that such a machine no longer has the right
 counterfactual behavior. Then Maudlin shows that you can restore the
 counterfactual correctness of such a machine by adding what will be,
 for the special computation, just inert material.
 But this gives to the inert material something which plays no role, or
 would give prescience to elementary material in computations; from
 which you can conclude that MEC and MAT do not work well together.

I am not sure this convinces me. If the inert material is useful to
the computation in the counterfactual situations, then it is useful,
and cannot be removed.


 Abram, are you aware that Gödel's incompleteness follows easily (in
 a few lines) from the Church thesis? Not the second theorem, but the
 first, even a stronger form of the first.

No, I do not know that one.

 --Abram




Re: MGA 3

2008-11-28 Thread Abram Demski

Bruno,

I have done some thinking, and I don't think this last step of the
argument works for me. You provided two arguments, so I provide two
refutations.

1. (argument by removal of unnecessary parts): Suppose Alice lives in
a cave all her life, with bread and water tossed down keeping her
alive, but nobody ever checking to see that she eats it; to the
outside world, she is functionally unnecessary. But from Alice's point
of view, she is not functionally removable, nor are the other things
in the cave that the outside world knows nothing about. The point is,
we need to be careful about labeling things functionally removable; we
need to ask: from whose perspective? A believer in MAT who accepted
the consciousness of the movie could claim that such an error is being
made.

2. (argument by spreading movie in space instead of time): Here I need
to go back further in the argument... I still think the objection
about hypotheticals (ie counterfactuals) works just fine. :)

--Abram


Re: MGA 3

2008-11-27 Thread Abram Demski

Bruno,

It seems to me that this runs head-on into the problem of the
definition of time...

Here is my argument; I am sure there will be disagreement with it.

Supposing that Alice's consciousness is spread out over the movie
billboards next to the train track, there is no longer a normal
temporal relationship between mental moments. There must merely be a
time-like relationship, which Alice experiences as time. But, then,
we are saying that wherever a logical relationship exists that is
time-like, there is subjective time for those inside the time-like
relationship.

Now, what might constitute a time-like relationship? I see several
alternatives, but none seem satisfactory.

At any given moment, all we can be directly aware of is that one
moment. If we remember the past, that is because at the present moment
our brain has those memories; we don't know if they really came from
the past. What would it mean to put moments in a series? It changes
nothing essential about the moment itself; we can remove the past,
because it adds nothing.

The connection between moments doesn't seem like a physical
connection; the notion is non-explanatory, since if there were such a
physical connection we could remove it without altering the individual
moments, therefore not altering our memories, and our subjective
experience of time. Similarly, can it be a logical relationship? Is it
the structure of a single moment that connects it to the next? How
would this be? Perhaps we require that there is some function (a
physics) from one moment to the next? But, this does not exactly
allow for things like relativity in which there is no single universal
clock. Of course, relativity could be simulated, creating a universe
that was run by a universal clock but whose internal facts did not
depend on which universal clock, exactly, the simulation was run from.
My problem is, I suppose, that any particular definition of timelike
relationship seems too arbitrary. As another example, should any
probabilistic elements be allowed into physics? In this case, we don't
have a function any more, but a relation-- perhaps a relation of
weighted transitions. But how would this relation make any difference
from inside the universe?
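
The two alternatives above can be put in a toy sketch (an illustration with made-up names only, not part of the argument): a deterministic "physics" is a function from each moment to the next, a probabilistic one is a relation of weighted transitions, and any single realized history is just a sequence of moments either way.

```python
import random

# Deterministic physics: the next moment is a function of this one.
def det_step(state):
    return (state + 1) % 4

# Probabilistic physics: a relation of weighted transitions between moments.
WEIGHTED = {
    0: [(1, 0.9), (2, 0.1)],
    1: [(2, 1.0)],
    2: [(3, 1.0)],
    3: [(0, 1.0)],
}

def prob_step(state, rng):
    """Sample the next moment from the weighted transition relation."""
    nexts, weights = zip(*WEIGHTED[state])
    return rng.choices(nexts, weights=weights)[0]

def history(step, state, n):
    """Realize n steps of a law into a plain sequence of moments."""
    out = [state]
    for _ in range(n):
        state = step(state)
        out.append(state)
    return out

det_history = history(det_step, 0, 6)   # [0, 1, 2, 3, 0, 1, 2]
rng = random.Random(0)
prob_history = history(lambda s: prob_step(s, rng), 0, 6)

# Once realized, both histories are just sequences of moments; nothing
# *inside* a single history records whether the law that produced it was
# the function or the weighted relation.
```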

--Abram


Re: MGA 3

2008-11-27 Thread Brent Meeker

Abram Demski wrote:
 Bruno,
 
 It seems to me that this runs head-on into the problem of the
 definition of time...
 
 Here is my argument; I am sure there will be disagreement with it.
 
 Supposing that Alice's consciousness is spread out over the movie
 billboards next to the train track, there is no longer a normal
 temporal relationship between mental moments. There must merely be a
 time-like relationship, which Alice experiences as time. But, then,
 we are saying that wherever a logical relationship exists that is
 time-like, there is subjective time for those inside the time-like
 relationship.
 
 Now, what might constitute a time-like relationship? I see several
 alternatives, but none seem satisfactory.
 
 At any given moment, all we can be directly aware of is that one
 moment. If we remember the past, that is because at the present moment
 our brain has those memories; we don't know if they really came from
 the past. What would it mean to put moments in a series? It changes
 nothing essential about the moment itself; we can remove the past,
 because it adds nothing.

You raise some good points.  I think the crux of the problem comes from
chopping a process up into moments and assuming that these infinitesimal,
frozen slices preserve all that is necessary for time. It is essentially
the same as assuming there is a substitution level below which we can
ignore causality and just talk about states. It seems like an obvious
idea, but it is contrary to quantum mechanics and unitary evolution under
the Schrödinger equation, which was the basis for the whole idea of a
multiverse and everything happens.


 
 The connection between moments doesn't seem like a physical
 connection; the notion is non-explanatory, since if there were such a
 physical connection we could remove it without altering the individual
 moments, therefore not altering our memories, and our subjective
 experience of time. 

How do we know that?  Memories and brain processes are distributed and 
parallel, 
which means there are spacelike separated parts of the process - and neural 
signals are orders of magnitude slower than light.

Brent


Re: MGA 3

2008-11-27 Thread Bruno Marchal
 transitions. But how would this relation make any difference
 from inside the universe?



We are supported by an infinity(*) of computations. We can only bet on
our most probable histories, above our level of constitution. Those
histories which can multiply themselves from below, and thus in front
of pure probabilistic events (noise), can win the measure game (on the
computations or the OMs). The question is: can we explain from MEC, as
we have to, why, as we can see empirically, the probabilities can also
be subtracted? Do we get here too classical mechanics in the limit?
Open problem of course.

Now the crazy thing is that we can already (thanks to Gödel, Löb,
Solovay ...) interview a (Löbian) universal machine on that subject,
and she gives a shadow of a reason (and a guess!) why indeed
subtraction occurs. And thanks to the Solovay split between G (the
provable part of self-reference) and G* (the true but unprovable part
of self-reference), some intensional variants of G and G* split
themselves into the sharable physics (indeterminate quanta) and the
unsharable physics (the qualia? the perceptible field, what you can
only be the one to confirm: a bit like being the one in Moscow after a
self-duplication experiment).


Bruno


(*) Even infinitIES, from the third person point of view on the
first person points of view. Hmmm, do you know the first person comp
indeterminacy? (step 3 of UDA).
(**) The second BIG discovery being the quantum computer! (Don't
hesitate to use grains of salt if it helps to swallow what I say.)
Of course nature made those discoveries before us. Well, with MEC we
have to consider that elementary arithmetic did those discoveries
even out of time and space.







Re: MGA 3

2008-11-27 Thread Abram Demski
 particular definition of timelike
 relationship seems too arbitrary.

 There is a big difference between first person non-sharable time and
 sharable local (clock-measurable) time. The first you experience, the
 second you guess, and you guess it only from an implicit bet on your
 own consistency. It makes a big modal difference.




MGA 3

2008-11-26 Thread Bruno Marchal
MGA 3

It is the last MGA !

I realize MGA is complete, as I thought it was, but I was doubting
this recently. We don't need to refer to Maudlin, and MGA 4 is not
necessary.
Maudlin 1989 is an argument independent of the 1988 Toulouse argument
(which I present here).
Note that Maudlin's very interesting Olympization technique can be
used to defeat a wrong form of MGA 3, that is, a wrong argument for
the assertion that the movie cannot be conscious (the argument that
the movie lacks the counterfactuals). Below is a hopefully correct (if
not very simple) argument. (I use Maudlin sometimes when people give
this incorrect form of MGA 3, and this is probably what made me think
Maudlin had to be used at some point.)



MGA 1 shows that Lucky Alice is conscious, and MGA 2 shows that the
luckiness feature of the MGA 1 experiment was a red herring. We can
construct, from MEC+COMP, a home-made lucky rays generator, and use
it at will. If we accept digital mechanism, in particular Dennett's
principle that neurons have no intelligence, still less prescience,
*together with* the supervenience principle, then we have to accept
that Alice's conscious dream experience supervenes on the projection
of the movie of her brain activity.

Let us show now that Alice's consciousness *cannot* supervene on that
*physical* movie projection.



I propose two (deductive) arguments.

1)

Mechanism implies the following tautological functionalist principle:
if, for some range of activity, a system does what it is supposed to
do, both before and after a change is made in its constitution, then
the change does not alter what the system is supposed to do, for that
range of activity.
Examples:
- A car is supposed to break down, but only if the driver goes faster
than 90 miles/h. Pepe Pepito NEVER drives faster than 80 miles/h. Then
the car does what it is supposed to do, with respect to the range of
activity defined by Pepe Pepito.
- Claude bought a computer with 1000 processors. One day he realized
that he used only 990 processors for his type of activity, so he
decided to get rid of those 10 useless processors. And indeed the
machine will satisfy Claude forever.

- Alice has (again) a math exam. Theoreticians have correctly
predicted that in this special circumstance she will never use neurons
X, Y and Z. Now Alice goes (again, again) to this exam in the same
conditions, but with neurons X, Y and Z removed. Again, not only will
she behave as if she succeeded at her exam, but her consciousness,
with both MEC *and* MAT, still continues.
The idea is that if something is not useful for an active process to
go on, for some range of activity, then you can remove it, for that
range of activity.

OK?

Now, consider the projection of the movie of the activity of Alice's
brain: the movie graph.
Is it necessary that someone look at that movie? Certainly not. No
more than it is needed that someone look at your reconstitution in
Moscow for you to be conscious in Moscow after a teleportation. All
right? (With MEC assumed, of course.)
Is it necessary to have a screen? Well, the range of activity here is
just one dynamical description of one computation. Suppose we make a
hole in the screen. What goes in and out of that hole is exactly the
same, with the hole and without the hole. For that unique activity,
the hole in the screen is functionally equivalent to the subgraph
which the hole removed. Clearly we can make a hole as large as the
screen, so there is no need for a screen.
But this reasoning goes through if we make the hole in the film
itself. Reconsider the image on the screen: with a hole in the film
itself, you get a hole in the movie, but everything which enters and
goes out of the hole remains the same, for that (unique) range of
activity. The hole trivially has the same functionality as the
subgraph whose special behavior was described by the film. And this is
true for any subpart, so we can remove the entire film itself.

Does Alice's dream supervene (in real time and space) on the  
projection of the empty movie?

Remark.
1° Of course, this argument can be summed up by saying that the movie
lacks causality between its parts, so that it cannot really be said to
compute anything, at least physically. The movie is just an ordered
record of computational states. This is neither a physical
computation, nor an (immaterial) computation where the steps follow
relative to some universal machine. It is just a description of a
computation, already existing in the Universal Deployment.
2° Note this: if we take into consideration the relative destiny of
Alice, and supposing one day her brain breaks down completely, she has
more chance to survive through holes in the screen than through holes
in the film. The film contains the relevant information to
reconstitute Alice from her brain description, contained

Re: MGA 3

2008-11-26 Thread Michael Rosefield
There's a quote you might like, by Korzybski: "That which makes no
difference _is_ no difference."

--
- Did you ever hear of The Seattle Seven?
- Mmm.
- That was me... and six other guys.


2008/11/26 Bruno Marchal [EMAIL PROTECTED]

 MGA 3

 It is the last MGA !

 I realize MGA is complete, as I thought it was, but I was doubting this
 recently. We don't need to refer to Maudlin, and MGA 4 is not necessary.
 Maudlin 1989 is an independent argument of the 1988 Toulouse argument
 (which I present here). Note that Maudlin's very interesting Olympization
 technic  can be used to defeat a wrong form of MGA 3, that is, a wrong
 argument for the assertion that  the movie cannot be conscious. (the
 argument that the movie lacks the counterfactual). Below are hopefully
 correct (if not very simple) argument. ( I use Maudlin sometimes when people
 gives this non correct form of MGA 3, and this is probably what makes me
 think Maudlin has to be used, at some point).



 MGA 1 shows that Lucky Alice is conscious, and MGA 2 shows that the
 luckiness feature of the MGA 1 experiment was a red herring. We can
 construct, from MEC+COMP, a home-made lucky-rays generator, and use it at
 will. If we accept digital mechanism, in particular Dennett's principle
 that neurons have no intelligence, still less prescience, *together with*
 the supervenience principle, then we have to accept that Alice's
 conscious dream experience supervenes on the projection of her brain
 activity movie.

 Let us show now that Alice consciousness *cannot* supervene on that
 *physical* movie projection.



 I propose two (deductive) arguments.

 1)

 Mechanism implies the following tautological functionalist principle: if,
 for some range of activity, a system does what it is supposed to do, both
 before and after a change is made in its constitution, then the change
 does not alter what the system is supposed to do, for that range of
 activity.
 Example:
 - A car is supposed to break down, but only if the driver goes faster than 90
 miles/h. Pepe Pepito NEVER drives faster than 80 miles/h. Then the car
 does what it is supposed to do, with respect to the range of
 activity defined by Pepe Pepito.
 - Claude bought a computer with 1000 processors. One day he realized
 that he used only 990 processors for his type of activity, so he decided to
 get rid of the 10 useless processors. And indeed the machine will satisfy
 Claude forever.

 - Alice has (again) a math exam. Theoreticians have correctly predicted that
 in this special circumstance, she will never use neurons X, Y and Z.  Now
 Alice goes (again, again) to this exam in the same condition, but with the
 neurons X, Y, Z removed. Again, not only will she behave as if she
 succeeded at her exam, but her consciousness, with both MEC *and* MAT, still
 continues.
 The idea is that if something is not useful, for an active process to go
 on, for some range of activity, then you can remove it, for that range of
 activity.

 OK?
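
 [Editor's sketch, not part of the original thread.] The functionalist
 principle above can be illustrated with a toy Python pipeline (the names
 `run`, `full_brain`, `pruned_brain` are hypothetical): a component that is
 never exercised for a given range of inputs can be removed without
 changing behavior on that range, just like Alice's neurons X, Y, Z.

```python
# A "brain" as an ordered list of (trigger, transform) units; units whose
# trigger never fires for a given input range can be pruned without
# changing the system's behavior on that range.

def run(units, x):
    """Apply each unit whose trigger fires, in order."""
    for fires, f in units:
        if fires(x):
            x = f(x)
    return x

full_brain = [
    (lambda x: True,   lambda x: x + 1),  # always used
    (lambda x: x > 90, lambda x: x * 2),  # used only when x > 90
]

# Remove the unit that never fires on the restricted range (x <= 80,
# Pepe Pepito's driving habits).
pruned_brain = [full_brain[0]]

for x in range(0, 81):
    assert run(full_brain, x) == run(pruned_brain, x)
print("identical on the restricted range")
```

 Outside the restricted range the two systems diverge, which is exactly why
 the principle is stated "for that range of activity" only.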

 Now, consider the projection of the movie of the activity of Alice's brain,
 the movie graph.
 Is it necessary that someone look at that movie? Certainly not. No more
 than it is needed that someone looks at your reconstitution in Moscow for
 you to be conscious in Moscow after a teleportation. All right? (with MEC
 assumed of course).
 Is it necessary to have a screen? Well, the range of activity here is just
 one dynamical description of one computation. Suppose we make a hole in the
 screen. What goes in and out of that hole is exactly the same, with the hole
 and without the hole. For that unique activity, the hole in the screen is
 functionally equivalent to the subgraph which the hole removed. Clearly we
 can make a hole as large as the screen, so no need for a screen.
 But this reasoning goes through if we make the hole in the film itself.
 Reconsider the image on the screen: with a hole in the film itself, you get
 a hole in the movie, but everything which enters and goes out of the hole
 remains the same, for that (unique) range of activity.  The hole has
 trivially the same functionality as the subgraph whose
 special behavior was described by the film. And this is true for any
 subpart, so we can remove the entire film itself.
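
 [Editor's sketch, not part of the original thread.] The sense in which the
 film is only a record, not a computation, can be put in a few lines of
 Python (a toy illustration with hypothetical names, not anyone's formal
 model): a genuine computation responds to counterfactual inputs, while a
 replay of its recorded states does not.

```python
# A real computation: each state depends causally on the input.
def compute(x):
    states = [x]
    for _ in range(3):
        states.append(states[-1] * 2)
    return states

trace = compute(3)  # the "film": an ordered record of one run's states

# The movie projection: the same state sequence, whatever arrives at
# its "input" -- it carries no counterfactual structure.
def replay(_ignored_input):
    return list(trace)

assert replay(3) == compute(3)  # matches for the filmed run...
assert replay(5) != compute(5)  # ...but for no other input
print("record lacks counterfactual structure")
```

 The replay reproduces every state of the filmed run, yet removing its
 "input" changes nothing, which is the analogue of removing the film.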

 Does Alice's dream supervene (in real time and space) on the projection of
 the empty movie?

 Remark.
 1° Of course, this argument can be sum up by saying that the movie lacks
 causality between its parts so that it cannot really be said that it
 computes any thing, at least physically. The movie is just an ordered record
 of computational states. This is neither a physical computation, nor an
 (immaterial) computation where the steps follow relative to some
 universal machine. It is just a description of a computation, already
 existing in the Universal Deployment.
 2° Note this: If we take into consideration the relative destiny of Alice,
 and supposing one day her brain broke