Re: Consciousness is information?

2010-01-14 Thread Bruno Marchal
Just to be fair to Torgny, it seems he has changed his mind on
ultrafinitism:



On 10 May 2009, at 19:05, Torgny Tholerus wrote:


2009/5/8 Torgny Tholerus tor...@dsv.su.se:

I was an ultrafinitist before, but I have changed my mind. Now I accept
that you can say that the natural numbers are unlimited. I only deny
actual infinities. The set of all natural numbers is always finite,
but you can always increase the set of all natural numbers by adding more
natural numbers to it.



Bruno
http://iridia.ulb.ac.be/~marchal/






Re: Consciousness is information?

2009-06-14 Thread Bruno Marchal


Many people believe something like

objectivity = serious, truth, rationality etc.
subjectivity = not serious, childish, unscientific, irrational

when the truth is (if I may put it briefly):

subjectivity = what you cannot doubt, what you know, truth
objectivity = hypothetical, theoretical, but sharable, learnable and  
refutable. You have to doubt the theories, and you have to take them  
seriously so as to clarify them, and doubt them even more,  up to  
their replacement.

Now, that confusion is made greater when you begin to make objective  
(doubtable)  theories on subjectivity (undoubtable).

It is good to always keep in mind that

subjectivity = undoubtable (not improvable)
objectivity = doubtable (improvable)

This can be made more precise in the language and theorems/non-theorems
of the universal machine which introspects herself. This is really the
AUDA thing. Plato, Descartes and Popper grasped similar things, imo.

Bruno


On 14 Jun 2009, at 03:40, David Nyman wrote:


 On Apr 24, 4:39 pm, Bruno Marchal marc...@ulb.ac.be wrote:

 Any content of consciousness can be an illusion. Consciousness itself
 cannot, because without consciousness there is no more illusion at  
 all.

 - just catching up with the thread, but I feel compelled to comment
 that this is beautifully and clearly put.  Why does this insight
 escape so many whose grasp of logic in other respects seems quite
 adequate?  The word 'illusion' is often brandished in a scarily
 'eliminative' way, but those who do so seem quite
 'unconscious' (ironically) that the subtle knife they wield for this
 excision is precisely that which they seek to excise!

 David

 On 24 Apr 2009, at 06:14, Kelly wrote:



 On Apr 22, 12:24 pm, Bruno Marchal marc...@ulb.ac.be wrote:
 So for that to be a plausible scenario we have to
 say that a person at a particular instant in time can be fully
 described by some set of data.

 Not fully. I agree with Brent that you need an interpreter to make
 that person manifest herself in front of you. A bit like a CD, you
 will need a player to get the music.

 It seems to me that consciousness is the self-interpretation of
 information.  David Chalmers has a good line: Experience is
 information from the inside; physics is information from the
 outside.

 First person experience and third person experiment. Glad to hear
 Chalmers accepts this at last.
 In UDA, inside/outside are perfectly well defined in a pure third
 person way: inside (first person) = memories annihilated and
 reconstructed in classical teleportation, outside = the view outside
 the teleporter. In AUDA I use the old classical definition by Plato  
 in
 the Theaetetus.



 I still don't see what an interpreter adds, except to satisfy the
 intuition that something is happening that produces
 consciousness.  Which I think is an attempt to reintroduce time.

 I don't think so. The only time needed is the discrete order on the
 natural numbers. An interpreter is needed to play the role of the
 person who gives some content to the information handled through his
 local brain.   (For this I need also addition and multiplication).



 But I don't see any advantage of this view over the idea that
 conscious states just exist as a type of platonic form (as Brent
 mentioned earlier).

 The advantage is that we have the tools to derive physics in a way
 which is precise enough for testing the comp hypothesis. Physics has
 become a branch of the computer's psychology or theology.

 At any given instant that I'm awake, I'm
 conscious of SOMETHING.

 To predict something, the difficulty is to relate that consciousness
 to its computational histories. Physics is given by a measure of
 probability on those comp histories.

 And I'm conscious of it by virtue of my
 mental state at that instant.  In the materialist view, my mental
 state is just the state of the particles of my brain at that
 instant.

 Which cannot be maintained with the comp hyp. Your consciousness is  
 an
 abstract type related to all computations going through your current
 state.



 But I say that what this really means is that my mental state is  
 just
 the information represented by the particles of my brain at that
 instant.  And that if you transfer that information to a computer  
 and
 run a simulation that updates that information appropriately, my
 consciousness will continue in that computer simulation,  
 regardless of
 the hardware (digital computer, mechanical computer, massively
 parallel or single processor, etc) or algorithmic details of that
 computer simulation.

 OK. But it is a very special form of information. Consciousness is
 really the qualia associated with your belief in some reality. It is a
 bet on self-consistency: it speeds up your reaction time relative to
 your most probable histories.



 But, what is information?  I think it has nothing to do with physical
 storage or instantiation.  I think it has an existence separate from
 that.  A platonic 

Re: Consciousness is information?

2009-06-13 Thread David Nyman

On Apr 24, 4:39 pm, Bruno Marchal marc...@ulb.ac.be wrote:

 Any content of consciousness can be an illusion. Consciousness itself
 cannot, because without consciousness there is no more illusion at all.

- just catching up with the thread, but I feel compelled to comment
that this is beautifully and clearly put.  Why does this insight
escape so many whose grasp of logic in other respects seems quite
adequate?  The word 'illusion' is often brandished in a scarily
'eliminative' way, but those who do so seem quite
'unconscious' (ironically) that the subtle knife they wield for this
excision is precisely that which they seek to excise!

David

 On 24 Apr 2009, at 06:14, Kelly wrote:



  On Apr 22, 12:24 pm, Bruno Marchal marc...@ulb.ac.be wrote:
  So for that to be a plausible scenario we have to
  say that a person at a particular instant in time can be fully
  described by some set of data.

  Not fully. I agree with Brent that you need an interpreter to make
  that person manifest herself in front of you. A bit like a CD, you
  will need a player to get the music.

  It seems to me that consciousness is the self-interpretation of
  information.  David Chalmers has a good line: Experience is
  information from the inside; physics is information from the outside.

 First person experience and third person experiment. Glad to hear
 Chalmers accepts this at last.
 In UDA, inside/outside are perfectly well defined in a pure third  
 person way: inside (first person) = memories annihilated and  
 reconstructed in classical teleportation, outside = the view outside  
 the teleporter. In AUDA I use the old classical definition by Plato in  
 the Theaetetus.



  I still don't see what an interpreter adds, except to satisfy the
  intuition that something is happening that produces
  consciousness.  Which I think is an attempt to reintroduce time.

 I don't think so. The only time needed is the discrete order on the  
 natural numbers. An interpreter is needed to play the role of the  
 person who gives some content to the information handled through his  
 local brain.   (For this I need also addition and multiplication).



  But I don't see any advantage of this view over the idea that
  conscious states just exist as a type of platonic form (as Brent
  mentioned earlier).

 The advantage is that we have the tools to derive physics in a way
 which is precise enough for testing the comp hypothesis. Physics has
 become a branch of the computer's psychology or theology.

  At any given instant that I'm awake, I'm
  conscious of SOMETHING.

 To predict something, the difficulty is to relate that consciousness  
 to its computational histories. Physics is given by a measure of  
 probability on those comp histories.

  And I'm conscious of it by virtue of my
  mental state at that instant.  In the materialist view, my mental
  state is just the state of the particles of my brain at that
  instant.

 Which cannot be maintained with the comp hyp. Your consciousness is an  
 abstract type related to all computations going through your current  
 state.



  But I say that what this really means is that my mental state is just
  the information represented by the particles of my brain at that
  instant.  And that if you transfer that information to a computer and
  run a simulation that updates that information appropriately, my
  consciousness will continue in that computer simulation, regardless of
  the hardware (digital computer, mechanical computer, massively
  parallel or single processor, etc) or algorithmic details of that
  computer simulation.

 OK. But it is a very special form of information. Consciousness is
 really the qualia associated with your belief in some reality. It is a
 bet on self-consistency: it speeds up your reaction time relative to
 your most probable histories.



  But, what is information?  I think it has nothing to do with physical
  storage or instantiation.  I think it has an existence separate from
  that.  A platonic existence.  And since the information that
  represents my brain exists platonically, then the information for
  every possible brain (including variations of my brain) should also
  exist platonically.

  You make the same error as those who confuse a universal dovetailer
  with a counting algorithm or the Babel library. The sequence:

  0, 1, 2, 3, 4, ... , or 0 1 10 11 100 101 110 111, goes through all
  descriptions of all information, but it lacks the infinitely subtle
  redundancy contained in the space of all computations (the universal
  dovetailing). You work in (N, succ); you lack addition and
  multiplication, which are needed for having a notion of interpreter or
  universal machine, the key entity capable of giving content to its
  information structure. This is needed to have a coherent internal
  interpretation of computerland.
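[A minimal sketch, in Python, of the counting algorithm Bruno contrasts with the universal dovetailer, under the assumption that a "description" just means a finite binary string. The enumeration below lists every such string exactly once; nothing is ever interpreted or executed, so no state is reached through many distinct computational histories, which is the redundancy he says the mere listing lacks.

from itertools import count, product

# Enumerate every finite binary string in length-then-lexicographic order,
# in the spirit of "0 1 10 11 100 101 110 111 ...".  Every description
# appears exactly once, but none of them is ever run as a program.
def all_binary_strings():
    yield ""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

gen = all_binary_strings()
print([next(gen) for _ in range(9)])
# ['', '0', '1', '00', '01', '10', '11', '000', '001']
]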



  Conscious experience is more the content, or the interpretation of
  that information, made by a person or by a universal 

Re: Consciousness is information?

2009-06-04 Thread Bruno Marchal
Hi Jesse,

On 01 May 2009, at 19:36, Jesse Mazer wrote:


 I found a paper on the Mandelbrot set and computability, I  
 understand very little but maybe Bruno would be able to follow it:

 http://arxiv.org/abs/cs.CC/0604003

 The same author has a shorter outline or slides for a presentation  
 on this subject at 
 http://www.cs.swan.ac.uk/cie06/files/d37/PHP_MandelbrotCiE2006Swansea_Jul2006.pdf
  
  and at the end he asks the question "If M (Mandelbrot set) not Q-
 computable, can the Halting Problem be reduced to determining
 membership of (intersection of M and Q^2), i.e. how powerful a
 'hypercomputer' is the Mandelbrot set?" I believe Q^2 here just
 refers to the set of all possible pairs of rational numbers. Maybe
 by "reducing the Halting Problem" he means that for any Turing
 machine + input, there might be some rule that would translate it
 into a pair of rational numbers such that the computation will halt
 iff the pair is included in the Mandelbrot set? Whatever he means,
 it sounds like he's saying it's an open question...

 Jesse

 
 
  On Thu, Apr 30, 2009 at 10:35 AM, Bruno Marchal  
 marc...@ulb.ac.be wrote:
 
 
  The mathematical Universal Dovetailer, the splashed universal  
 Turing
  Machine, the rational Mandelbrot set, or any creative sets in the
  sense of Emil Post, does all computations. Really all, with Church
  thesis. This is a theorem in math. The rock? Show me just the 30  
 first
  steps of a computation of square-root(2). ...
 
  Bruno,
 
  I am interested about your statement regarding the Mandelbrot set
  implementing all computations, could you elaborate on this?


So, indeed the conjecture I made on the Mandelbrot set concerns the
decidability-on-the-rationals of the set M intersected with QxQ. And
it is indeed still an open problem. Actually my question is the
creativity (in the sense of Post) of M, and this would mean that you
can reduce the halting problem of any Turing machine to a problem of
membership of a rational complex number a+bi (a, b in Q) in M. There
would be one fixed algorithm transforming any computable problem on N
into such a membership problem. If the solution is positive, then the
Mandelbrot set would be a compact representation of a universal
dovetailing. Also, this would entail the existence of interesting
relationships between classical computability theory and the theory of
chaos on the reals. The universality-in-chaos phenomenon (Feigenbaum)
would be related to Turing universality. Also, each of us would
be, in a sense, distributed densely on the boundary of M, and each
little Mandelbrot would represent the third person projection view of
each of our first person plenitude. That would be cute, mainly for
the pedagogy of the UD, but also, it would make it possible to borrow
mathematical tools from chaos theory for the pursuit of deriving
physics from numbers.
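[To make the membership question concrete, here is a minimal Python sketch, not anything from Potgieter's paper: iterating z -> z^2 + c with exact rational arithmetic semi-decides only the complement, since escape from the disk of radius 2 certifies that a rational point c is outside M, while no finite number of non-escaping steps certifies that it is inside. Whether membership itself is decidable (or creative in Post's sense) is exactly the open question discussed above. The sample points and the iteration bound are arbitrary choices of mine.

from fractions import Fraction

def escapes(a: Fraction, b: Fraction, max_iter: int = 20):
    # Iterate z -> z^2 + c for c = a + b*i with exact rationals.
    # Return the first step at which |z| > 2 (so c is certainly outside M),
    # or None if no escape was seen within max_iter steps (c may be in M).
    # Denominators grow doubly exponentially, so only small max_iter is practical.
    zr, zi = Fraction(0), Fraction(0)
    for n in range(max_iter):
        zr, zi = zr * zr - zi * zi + a, 2 * zr * zi + b
        if zr * zr + zi * zi > 4:
            return n + 1
    return None

print(escapes(Fraction(1, 4), Fraction(0)))      # None: c = 1/4 (cusp of the main cardioid) is in M
print(escapes(Fraction(1, 2), Fraction(1, 2)))   # 5: the orbit escapes, so c = 1/2 + i/2 is outside M
]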
Not everything is clear for me in Potgieter's paper, probably a result
of my incompetence, but it is very interesting. Thanks for the link.

Did I give you the link to the last, impressive M-zoom by phaumann?
Look at it with the high quality option + full screen, if you are
patient enough. Love it!
http://www.youtube.com/watch?v=x6DD1k4BAUg&feature=channel_page

Bruno



http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-06-04 Thread Bruno Marchal


On 03 Jun 2009, at 20:11, Jason Resch wrote:


 On Fri, May 22, 2009 at 4:37 PM, Bruno Marchal marc...@ulb.ac.be  
 wrote:

 Do you believe if we create a computer in this physical
 universe that it could be made conscious,

 But a computer is never conscious, nor is a brain. Only a person is
 conscious, and a computer or a brain can only make it possible for a
 person to be conscious relatively to another computer. So your
 question is ambiguous.
 It is not my brain which is conscious, it is me who is conscious. My
 brain appears to make it possible for my consciousness to manifest
 itself relatively to you. Remember that we are supposed to no longer
 count on the physical supervenience thesis.
 It remains locally correct to attribute a consciousness, through a
 brain or a body, to a person we judge successfully implemented locally
 in some piece of matter (like when we say yes to a doctor).  But the
 piece of matter is not the subject of the consciousness. It is only
 the abstract person or program who is the subject of consciousness.
 To say a brain is conscious consists in making Searle's mistake, when he
 confused levels of computation in the Chinese Room, as was well seen
 already by Hofstadter and Dennett in The Mind's I.



 Thanks for your response. If I understand you correctly, you are
 saying that if we run a simulation of a mind, we are not creating
 consciousness, only adding an additional instantiation to a mind which
 already has an infinity of indeterminable instantiations.  Is that
 right?


Yes, you are right. When you implement an emulation of a mind, you are
just adding such an instantiation relatively to you. Of course you are
not adding anything in Platonia.





 Does this imply that it is impossible to create a simulation of a mind
 that finds it lives in an environment without uncertainty?


That is correct.



  If so, is
 it because, even if the physical laws in one instantiation may be
 certain, some of the infinite number of computations that all
 instantiate that mind may diverge, and in particular which one that
 mind will find itself in is not knowable?

Yes. I will come back on this in the seven step thread.




 The consequence being that all observers everywhere live in QM-like
 environments?

Absolutely. We can consider that we live in an infinity of
computations, but we cannot distinguish them ... until they
differentiate sufficiently so that they are in principle
distinguishable (like being in Washington or being in Moscow). This
entails that below our substitution level
what can be observed depends directly on some average over an infinity
of computations. The quantum-like aspect of nature is, in that
sense, a consequence of digitalism in cognitive science. The
classical, and computational, aspect of physics remains the hard
thing to derive.

Bruno


http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-06-04 Thread Jason Resch

On Thu, Jun 4, 2009 at 9:29 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 03 Jun 2009, at 20:11, Jason Resch wrote:


 On Fri, May 22, 2009 at 4:37 PM, Bruno Marchal marc...@ulb.ac.be
 wrote:

 Do you believe if we create a computer in this physical
 universe that it could be made conscious,

 But a computer is never conscious, nor is a brain. Only a person is
 conscious, and a computer or a brain can only make it possible for a
 person to be conscious relatively to another computer. So your
 question is ambiguous.
 It is not my brain which is conscious, it is me who is conscious. My
 brain appears to make it possible for my consciousness to manifest
  itself relatively to you. Remember that we are supposed to no longer
  count on the physical supervenience thesis.
  It remains locally correct to attribute a consciousness, through a
  brain or a body, to a person we judge successfully implemented locally
  in some piece of matter (like when we say yes to a doctor).  But the
  piece of matter is not the subject of the consciousness. It is only
  the abstract person or program who is the subject of consciousness.
  To say a brain is conscious consists in making Searle's mistake, when he
  confused levels of computation in the Chinese Room, as was well seen
  already by Hofstadter and Dennett in The Mind's I.



  Thanks for your response. If I understand you correctly, you are
  saying that if we run a simulation of a mind, we are not creating
  consciousness, only adding an additional instantiation to a mind which
  already has an infinity of indeterminable instantiations.  Is that
  right?


  Yes, you are right. When you implement an emulation of a mind, you are
  just adding such an instantiation relatively to you. Of course you are
  not adding anything in Platonia.



But is the computer emulating the mind not also a platonic object?  If
the computer simulation does not count toward anything, then what is
the point of saying yes to the doctor, or of pursuing mind-uploading
technology as a method to obtain immortality and escape the eternal aging
that QM-immortality would predict?




 Does this imply that it is impossible to create a simulation of a mind
 that finds it lives in an environment without uncertainty?


 That is correct.



   If so, is
  it because, even if the physical laws in one instantiation may be
  certain, some of the infinite number of computations that all
  instantiate that mind may diverge, and in particular which one that
  mind will find itself in is not knowable?

 Yes. I will come back on this in the seven step thread.




 The consequence being that all observers everywhere live in QM-like
 environments?

  Absolutely. We can consider that we live in an infinity of
  computations, but we cannot distinguish them ... until they
  differentiate sufficiently so that they are in principle
  distinguishable (like being in Washington or being in Moscow). This
  entails that below our substitution level
  what can be observed depends directly on some average over an infinity
  of computations. The quantum-like aspect of nature is, in that
  sense, a consequence of digitalism in cognitive science. The
  classical, and computational, aspect of physics remains the hard
  thing to derive.


Interesting. I am curious: is there some relationship between one's
substitution level and where one will find the QM uncertainty?  If all
observers live in uncertain environments, and it took us this long to
discover QM behavior, I imagine for some observers it could be much
harder or much easier to find this uncertainty level.

What do you think controls how deep one must look to see the QM
behavior first hand?  I suppose it might also be related to the
complexity of one's observer moment; the more information one takes in
from the environment and has in memory, the lower the level at which the
uncertainty should appear.  A God-like mind that knew the position of
every particle in the universe in which it lived might not have any
uncertainty, but of course the mind couldn't encode everything about
itself...

Jason




Re: Consciousness is information?

2009-06-03 Thread Jason Resch

On Fri, May 22, 2009 at 4:37 PM, Bruno Marchal marc...@ulb.ac.be wrote:

 Do you believe if we create a computer in this physical
 universe that it could be made conscious,

 But a computer is never conscious, nor is a brain. Only a person is
 conscious, and a computer or a brain can only make it possible for a
 person to be conscious relatively to another computer. So your
 question is ambiguous.
 It is not my brain which is conscious, it is me who is conscious. My
 brain appears to make it possible for my consciousness to manifest
 itself relatively to you. Remember that we are supposed to no longer
 count on the physical supervenience thesis.
 It remains locally correct to attribute a consciousness, through a
 brain or a body, to a person we judge successfully implemented locally
 in some piece of matter (like when we say yes to a doctor).  But the
 piece of matter is not the subject of the consciousness. It is only
 the abstract person or program who is the subject of consciousness.
 To say a brain is conscious consists in making Searle's mistake, when he
 confused levels of computation in the Chinese Room, as was well seen
 already by Hofstadter and Dennett in The Mind's I.



Thanks for your response. If I understand you correctly, you are
saying that if we run a simulation of a mind, we are not creating
consciousness, only adding an additional instantiation to a mind which
already has an infinity of indeterminable instantiations.  Is that
right?

Does this imply that it is impossible to create a simulation of a mind
that finds it lives in an environment without uncertainty?  If so, is
it because, even if the physical laws in one instantiation may be
certain, some of the infinite number of computations that all
instantiate that mind may diverge, and in particular which one that
mind will find itself in is not knowable?

The consequence being that all observers everywhere live in QM-like
environments?

Thanks, I look forward to your reply.

Jason


 or do you count all
 appearance of matter to be only a description of a computation and not
 capable of true computation?

  The appearance of matter is a quale. It does not describe anything but
  is a subjective experience, which may correspond to something stable,
  reflecting the existence of a computation (in Platonia) capable of
  manifesting itself relatively to you.


 Do you believe that the only real
 computation exists platonically and this is the only source of
 conscious experience?

  Computations and their relative implementations exist only in
  Platonia, yes. But even in Platonia, they exist in multiple relative
  versions, all defined eventually through many relations
  between numbers.


  If so I find this confusing, as could there not
 be multiple levels?

  But there are multiple levels of computations in Platonia, or
  Arithmetic. Even a huge number of them. That is why we have to take
  into account the first person indeterminacies.




  For example, would a platonic Turing machine
  simulating another Turing machine, simulating a mind, be conscious?


 A 3-machine is never conscious. A 3-entity is never conscious. Only a
 person is. First person can only be associated with the infinities of
 computations computing them in Platonia.




  If
  so, how does that differ from a platonic Turing machine simulating a
 physical reality with matter, simulating a mind?


  You will have to introduce a magical (assuming comp) selection
  principle for attaching, in a persistent way, a mind to that physical
  reality simulation. The mind can only be attached to an infinity of
  such relative simulations, and this is why, if that mind looks at itself
  below its substitution level, it will find a trace of those
  computations. Comp says you have to take the statistics over all the
  computations. So the physical has to be a sum over all those computations.
  That such computations statistically interfere is not so difficult to
  show. That the comp interference gives the apparent quantum one is not
  yet discarded.

  I think you are not taking sufficiently into account the first person
  (hopefully plural) indeterminacy in front of the universal dovetailer
  (or arithmetic), which defines the space of all computations.

 Does this help a bit?


 Bruno


 http://iridia.ulb.ac.be/~marchal/




 





Re: Consciousness is information?

2009-06-02 Thread Bruno Marchal


On 02 Jun 2009, at 18:46, Kelly Harmon wrote:





  First, in the multiplication experiment, the question of your choice
  is not addressed, nor needed.
  The question is really: what will happen to you? You give the right
  answer above.


 You're saying that there are no low probability worlds?  Or only that
 they're outnumbered by the high probability worlds?

The latter. Low-probability worlds exist, but not only is it rare to
access them, it is super-rare to remain in them; well, if comp
succeeds!




 I guess I'm not clear on what you're getting at with this pixel
 thought-experiment.

The UD is the many-worlds, or many-histories. The 2^big movies
multiplication is a tiny, trivial part of the UD, and being an
immaterialist you should understand that we are doing this thought
experiment all the time. If we don't succeed in justifying why
things look normal, comp has to be abandoned. We have to explain why
the computational histories win when the UD plays the trick of
generating a continuum of non-computational histories. The
computational histories which will win are those which are entangled with
the non-computational histories so as to make normality inherited by
the computational ones. Somehow.






 Have you understood UDA1-6? Because I think most get those steps. I
 will soon explain UDA-7 in all details, which is not entirely
 obvious.
 If you take your own philosophy seriously, you don't need UDA-8. But it
 can be useful to convince others of the necessity of that
 philosophy, once we bet on the comp hyp.


 I think I have a good grasp of 1 through 6.

Cool, I am just explaining UDA-7, in all details, from scratch.

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-31 Thread Bruno Marchal


On 29 May 2009, at 18:53, Kelly Harmon wrote:


 On Thu, May 28, 2009 at 3:49 PM, Bruno Marchal marc...@ulb.ac.be  
 wrote:

 What do you think are the more probable events that you will live?
 Which one is the more probable? What is your most rational choice among

 So if nothing is riding on the outcome of my choice, then it seems
 rational to choose the option that will make me right in the most
 futures, which is option 6, white noise.


What a relief ...





 If there's one world for
 each unique pattern of pixels,

It is more or less explicit in the ideal protocol of the experiment.



 then most of those worlds will be
 white noise worlds, and making the choice that makes me right in the
 most worlds seems as rational as anything else.

Perfect.





 Though, if there is something significant riding on whether I choose
 correctly or not, then I have to decide what is most important to me:
 minimizing my suffering in the worlds where I'm wrong, or maximizing
 my gains in the worlds where I'm right.

 If there isn't significant suffering likely in the losing worlds, then
 I will be much more likely to base my decision on the observed or
 calculated probabilities, as Papineau suggests.

OK. It is not incompatible.




 BUT, if there is significant suffering likely in the worlds where I
 lose, I might very well focus on making a choice that will minimize that
 suffering.  In which case I will generally not base much of my
 decision on the probabilities, since it is my view that all outcomes
 occur.

?



 However, going a little further, this assumes that I only make one
 bet.  As I mentioned before, I think that I will make all possible
 bets.

Before the multiplication? I don't see how you could, here and
now, decide to do 2^(16180*1*60*90*24)  bets.
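[Just to make the size of that number concrete, a back-of-the-envelope computation in Python. The reading of the factors is my own guess (binary pixels per frame, times 60 seconds, 90 minutes and 24 frames per second), not something stated in the thread; only the arithmetic below is meant literally.

import math

bits_per_frame = 16180            # assumed: pixels per frame, one bit each
frames = 1 * 60 * 90 * 24         # 129,600 frames in a 90-minute movie at 24 fps
exponent = bits_per_frame * frames

digits = int(exponent * math.log10(2)) + 1
print(exponent)   # 2,096,928,000 independent binary choices
print(digits)     # roughly 631 million decimal digits in 2**exponent

So 2^(16180*1*60*90*24) has about 631 million decimal digits, which is why no one could enumerate, let alone decide, that many bets here and now.]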

I am not asking your quantum or comp counterparts. The question is asked
to *the* Kelly to whom I send this post.






 So, even if I make the safe suffering-minimizing bet in this
 branch, I know that in a closely related branch I will make the risky
 gain-maximizing bet and say to hell with the Kellys in the losing
 worlds.

You are hard on yourself, I mean, on your selves ...


 So I know that even if I make the safe bet, there's another Kelly two
 worlds over making the risky bet, which will result in a Kelly
 suffering the consequences of losing over there anyway.  So maybe I'll
 say, screw it, and make the risky bet myself.


You might as well put your hand in the fire directly.





 Ultimately, it doesn't matter.  Every Kelly in every situation with
 every history is actualized.  So my subjective feeling that I am
 making choices is irrelevant.  Every choice is going to get made, so
 my choice is really just me taking my place in the continuum of
 Kellys.

First, in the multiplication experiment, the question of your choice
is not addressed, nor needed.
The question is really: what will happen to you? You give the right
answer above.







 And I am asking you, here and now, what do you expect the most
 probable experience you will feel tomorrow, when I will do that
 experiment.

 So to speak of expectations is to appeal to my single world
 intuitions.  But we know that intuition isn't a reliable guide, since
 there are many aspects of reality that are unintuitive.  So I think
 the fact that I have an intuitive expectation that things will happen
 a certain way, and only that way, is neither here nor there.


We can get counter-intuitive results only by starting with our
intuition, and we have to succeed in making those basic intuitions very
solid, if we want to be able to make clear the counter-intuitive
consequences. If not, we can't progress at all, and we lose the
opportunity to abandon our wrong theories.

Common sense is the ONLY tool to go beyond common sense.

Have you understood UDA1-6? Because I think most get those steps. I
will soon explain UDA-7 in all details, which is not entirely obvious.
If you take your own philosophy seriously, you don't need UDA-8. But it
can be useful to convince others of the necessity of that
philosophy, once we bet on the comp hyp.

Bruno



http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-29 Thread Bruno Marchal
Hi Marty,


On 29 May 2009, at 02:32, m.a. wrote:

 Bruno,
 Thank you for this detailed reply. May I pose one follow- 
 up question? Is the universal dovetailer some sort of God/Machine  
 that is mathematical like the rest of creation but separate from it  
 and of a higher order of purpose?


The universal dovetailer (UD) is a program. A finite piece of code,  
which, when executed, generates all programs, in all possible  
programming languages, and which also executes all those programs, by  
dovetailing on those executions. In that sense the UD is just a  
program among all programs. When it runs (platonistically or not) it  
generates itself, and executes itself, an infinity of times.
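[As a minimal sketch of the dovetailing idea only, not the actual UD: the Python loop below interleaves infinitely many executions so that every program eventually receives arbitrarily many steps. The function run_one_step is a placeholder standing, by assumption, for "advance program number i by one step of some fixed universal interpreter".

def run_one_step(i, state):
    # Placeholder for a real universal interpreter: advance program i by one step.
    return state + 1

def universal_dovetailer(stages):
    states = {}                       # partial executions started so far
    for stage in range(1, stages + 1):
        states[stage] = 0             # generate (start) program number `stage`
        for i in range(1, stage + 1):
            states[i] = run_one_step(i, states[i])   # one more step for each started program
        yield dict(states)

# After stage n, program i has been run n - i + 1 steps, so no single
# non-halting program ever blocks the others.
for snapshot in universal_dovetailer(5):
    print(snapshot)

The same scheme works with no bound on the number of stages; the finite bound is only there so the example terminates.]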

I will explain this in all details to Kim. It is not a trivial  
subject, and the more you know about the diagonalization technique, the
more you are amazed that the UD can exist. But its existence is a  
consequence of simple axioms defining addition and multiplication of  
the natural numbers. Its universal character is a consequence of  
Church's thesis, which is needed for accepting the generality of  
incompleteness and limitation theorems.
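[For reference, the simple axioms usually meant here are those of Robinson arithmetic, the universal system Bruno names elsewhere in the thread. With s the successor function, its seven axioms are:

  s(x) ≠ 0
  s(x) = s(y) -> x = y
  x ≠ 0 -> there is a y with x = s(y)
  x + 0 = x
  x + s(y) = s(x + y)
  x * 0 = 0
  x * s(y) = (x * y) + x

That so weak a theory is already Turing universal, in the sense that every computable function is representable in it, is what makes the existence of the UD a consequence of addition and multiplication alone.]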




 If so, is there an explanation for its existence that doesn't  
 exclude a deity?


You can explain the existence of the UD without invoking any deity.  
But this does not exclude any (non naïve or literal) deity.

Then, if you are willing to define deities by non-Turing-emulable
(mathematical) subjects or objects, like actual infinities,
then even machines (like us, with comp) cannot NOT invoke deities
when trying to learn some truth about just the numbers and the
machines. We even need a transfinite ladder of deities to grasp more
and more of the machine's abilities.

The opposition between science and religion is a red herring. Science  
is opposed only to authoritative arguments. The confusion comes from  
the fact that many religions, including some form of atheism, are  
based on authoritative arguments, apparently as a consequence of their  
temporal institutionalization.

But real, perhaps ideal, science leads only to modesty and respect,
especially in regard to fundamental questions.

Science cannot have definite answers to fundamental questions; it can
only enlarge the awe, the astonishment.
Science cannot kill the mystery, but it can clean it better and better  
from the superstitions and the fake mysteries, generally brought by  
the fear sellers and the egocentric manipulators.

If you follow the explanation to Kim, there will be a point where you
will understand that science is really what breaks down all possible
forms of reductive or reductionist explanation. This can explain why
the pseudo-religious authoritarians tend to fight against science,
and against freedom.

Comp superficially looks like a reductionism, but it is the most  
powerful vaccine against reductionism.

Bruno


http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-29 Thread John Mikes
Bruno, and List:

I think Marty's question brought us to the point where everybody's personal
belief (worldview? mindset?) comes into consideration. Some base the
totality upon the (consciousness(?) provided) Platonism, numbers, even a
physical world-view, others upon mysticism, religion, diverse philosophies
(ontologies) and a vast variety of mixtures among all these and others.
To base anything 'upon' consciousness has the prerequisite of such,
preceding (even if reversely denied) the theory in favor OF the generation
of such. We (assumed that we are) are 'thinking' out our theories HOW to
think out anything.
*If there was never a physical world...*  can be asked in a negative
connotation only by an extreme solipsist
...*comp and numbers alone created such a universe and then created people
to experience it*
rather: generated in their experience the idea of 'comp and numbers'? (which
is universe-related) AFTER they were created by what they created.
...*how did [such] purposefulness and intentionality get into pure comp?
*did it indeed? isn't comp as anything reasonable, deterministic in the
sense that relations provide relations? that no relations occur if only
unrelated (random?) elements come into play? (Isn't THAT also a human
idea by the darn consciousness?)

I planned to illustrate my basis and my presently developed best own belief
system, but it is not of general interest and I don't want to persuade
(convert? seduce?) anybody to a similar position.

Peace!

John Mikes



On Thu, May 28, 2009 at 9:41 AM, m.a. marty...@bellsouth.net wrote:

  Bruno,
         If there was never a physical world to which living
 creatures adapted after millions of years  and which after further
 eons prompted the evolution of consciousness, do we conclude that comp and
 numbers alone created such a universe and then created people to experience
 it...all through the chance combinations of numbers? Are we saying that
 monkeys on typewriters authored everything we see about us?  If so, how did
 such purposefulness and intentionality get into pure comp?

 marty a.
 - Original Message - From: Kelly Harmon harmon...@gmail.com
 To: everything-list@googlegroups.com
 Sent: Thursday, May 28, 2009 3:02 AM
 Subject: Re: Consciousness is information?

 
  On Wed, May 27, 2009 at 10:21 AM, Bruno Marchal marc...@ulb.ac.be
 wrote:
 
  Since you told me that you accept comp, after all, and do no more
  oppose it to your view, I think we agree, at least on many things.
  Indeed you agree with the hypothesis, and your philosophy appears to
  be a consequence of the hypothesis.
 
  Excellent!
 
 
  It remains possible that we have a disagreement concerning the
  probability, and this has some importance, because it is the use of
  probability (or credibility) which makes the consequences of comp
  testable. More in the comment below.
 
  So my only problem with the usual view of probability is that it
  doesn't seem to me to emerge naturally from a platonic theory of
  consciousness.  Is your proposal something that would conceivably be
  arrived at by a rational observer in one of the (supposedly) rare
  worlds where white rabbits are common?   Does it have features that
  would lead one to predict the absence of white rabbits, or does it
  just offer a way to explain their absence after the fact?
 
  As I mentioned before, assuming computationalism it seems to me that
  it is theoretically possible to create a computer simulation that
  would manifest any imaginable conscious entity observing any
  imaginable world, including schizophrenic beings observing
  psychedelic realities.  So, then further assuming Platonism, all of
  these strange experiences should exist in Platonia.  Along with all
  possible normal experiences.
 
  I don't see any obvious, non-ad hoc mechanism to eliminate or
  minimize strange experiences relative to normal experiences, and I
  don't think adding one is justified just for that purpose, or even
  necessary since an unconstrained platonic theory does have the obvious
  virtue of saying that there will always be Kellys like myself who have
  never seen white rabbits.
 
  As for your earlier questions about how you should bet, I have two
 responses.
 
  First that there exists a Bruno who will make every possible bet.
  One particular Bruno will make his bet on a whim, while another Bruno
  will do so only after long consideration, and yet another will make a
  wild bet in a fit of madness.  Each Bruno will feel like he made a
  choice, but actually all possible Brunos exist, so all possible bets
  are made, for all possible subjectively felt reasons.
 
  Second, and probably more helpfully, I'll quote this paper
  (http://www.kcl.ac.uk/content/1/c6/04/17/78/manymindsandprobs.doc) by
  David Papineau, which sounds reasonable to me:
 
  But many minds theorists can respond that the logic of statistical
  inference is just the same on their view

Re: Consciousness is information?

2009-05-29 Thread Kelly Harmon

On Thu, May 28, 2009 at 3:49 PM, Bruno Marchal marc...@ulb.ac.be wrote:

  What do you think are the more probable events that you will live? Which
  one is the more probable? What is your most rational choice among

So if nothing is riding on the outcome of my choice, then it seems
rational to choose the option that will make me right in the most
futures, which is option 6, white noise.  If there's one world for
each unique pattern of pixels, then most of those worlds will be
white noise worlds, and making the choice that makes me right in the
most worlds seems as rational as anything else.

Though, if there is something significant riding on whether I choose
correctly or not, then I have to decide what is most important to me:
minimizing my suffering in the worlds where I'm wrong, or maximizing
my gains in the worlds where I'm right.

If there isn't significant suffering likely in the losing worlds, then
I will be much more likely to base my decision on the observed or
calculated probabilities, as Papineau suggests.

BUT, if there is significant suffering likely in the worlds where I
lose, I might very well focus on making a choice that will minimize that
suffering.  In which case I will generally not base much of my
decision on the probabilities, since it is my view that all outcomes
occur.

However, going a little further, this assumes that I only make one
bet.  As I mentioned before, I think that I will make all possible
bets.  So, even if I make the safe suffering-minimizing bet in this
branch, I know that in a closely related branch I will make the risky
gain-maximizing bet and say to hell with the Kellys in the losing
worlds.

So I know that even if I make the safe bet, there's another Kelly two
worlds over making the risky bet, which will result in a Kelly
suffering the consequences of losing over there anyway.  So maybe I'll
say, screw it, and make the risky bet myself.

Ultimately, it doesn't matter.  Every Kelly in every situation with
every history is actualized.  So my subjective feeling that I am
making choices is irrelevant.  Every choice is going to get made, so
my choice is really just me taking my place in the continuum of
Kellys.


 And I am asking you, here and now, what do you expect the most
 probable experience you will feel tomorrow, when I will do that
 experiment.

So to speak of expectations is to appeal to my single world
intuitions.  But we know that intuition isn't a reliable guide, since
there are many aspects of reality that are unintuitive.  So I think
the fact that I have an intuitive expectation that things will happen
a certain way, and only that way, is neither here nor there.




Re: Consciousness is information?

2009-05-29 Thread m.a.
Bruno, I feel very much in tune with your definition of science, so I'll trudge 
along with Kim as far as the UD allows me to follow the reasoning. m.a.




  - Original Message - 
  From: Bruno Marchal 
  To: everything-list@googlegroups.com 
  Sent: Friday, May 29, 2009 6:59 AM
  Subject: Re: Consciousness is information?


  Hi Marty,




  On 29 May 2009, at 02:32, m.a. wrote:


Bruno,
Thank you for this detailed reply. May I pose one follow-up 
question? Is the universal dovetailer some sort of God/Machine that is 
mathematical like the rest of creation but separate from it and of a higher 
order of purpose? 




  The universal dovetailer (UD) is a program. A finite piece of code, which, 
when executed, generates all programs, in all possible programming languages, 
and which also executes all those programs, by dovetailing on those executions. 
In that sense the UD is just a program among all programs. When it runs 
(platonistically or not) it generates itself, and executes itself, an infinity 
of times.


  I will explain this in all details to Kim. It is not a trivial subject, and 
the more you know about the diagonalization technique, the more you are amazed 
that the UD can exist. But its existence is a consequence of simple axioms 
defining addition and multiplication of the natural numbers. Its universal 
character is a consequence of Church's thesis, which is needed for accepting 
the generality of incompleteness and limitation theorems.








If so, is there an explanation for its existence that doesn't exclude a 
deity?




  You can explain the existence of the UD without invoking any deity. But this 
does not exclude any (non naïve or literal) deity. 


  Then, if you are willing to define deities by non-Turing-emulable 
(mathematical) subjects or objects, like actual infinities, then even machines 
(like us, with comp) cannot NOT invoke deities when trying to learn some truth 
about just the numbers and the machines. We even need a transfinite ladder of 
deities to grasp more and more of the machine's abilities.


  The opposition between science and religion is a red herring. Science is 
opposed only to authoritative arguments. The confusion comes from the fact that 
many religions, including some form of atheism, are based on authoritative 
arguments, apparently as a consequence of their temporal institutionalization. 


  But real, perhaps ideal, science leads only to modesty and respect, 
especially in regard to fundamental questions.


  Science cannot have definite answers to fundamental questions; it can only 
enlarge the awe, the astonishment.
  Science cannot kill the mystery, but it can clean it better and better from 
the superstitions and the fake mysteries, generally brought by the fear sellers 
and the egocentric manipulators.


  If you follow the explanation to Kim, there will be a point where you will 
understand that science is really what breaks down all possible forms of 
reductive or reductionist explanation. This can explain why the pseudo-religious 
authoritarians tend to fight against science, and against freedom.


  Comp superficially looks like a reductionism, but it is the most powerful 
vaccine against reductionism.


  Bruno




  http://iridia.ulb.ac.be/~marchal/







  




Re: Consciousness is information?

2009-05-28 Thread Kelly Harmon

On Wed, May 27, 2009 at 10:21 AM, Bruno Marchal marc...@ulb.ac.be wrote:

 Since you told me that you accept comp, after all, and do no more
 oppose it to your view, I think we agree, at least on many things.
 Indeed you agree with the hypothesis, and your philosophy appears to
 be a consequence of the hypothesis.

Excellent!


 It remains possible that we have a disagreement concerning the
 probability, and this has some importance, because it is the use of
 probability (or credibility) which makes the consequences of comp
 testable. More in the comment below.

So my only problem with the usual view of probability is that it
doesn't seem to me to emerge naturally from a platonic theory of
consciousness.  Is your proposal something that would conceivably be
arrived at by a rational observer in one of the (supposedly) rare
worlds where white rabbits are common?   Does it have features that
would lead one to predict the absence of white rabbits, or does it
just offer a way to explain their absence after the fact?

As I mentioned before, assuming computationalism it seems to me that
it is theoretically possible to create a computer simulation that
would manifest any imaginable conscious entity observing any
imaginable world, including schizophrenic beings observing
psychedelic realities.  So, then further assuming Platonism, all of
these strange experiences should exist in Platonia.  Along with all
possible normal experiences.

I don't see any obvious, non-ad hoc mechanism to eliminate or
minimize strange experiences relative to normal experiences, and I
don't think adding one is justified just for that purpose, or even
necessary since an unconstrained platonic theory does have the obvious
virtue of saying that there will always be Kellys like myself who have
never seen white rabbits.

As for your earlier questions about how you should bet, I have two responses.

First that there exists a Bruno who will make every possible bet.
One particular Bruno will make his bet on a whim, while another Bruno
will do so only after long consideration, and yet another will make a
wild bet in a fit of madness.  Each Bruno will feel like he made a
choice, but actually all possible Brunos exist, so all possible bets
are made, for all possible subjectively felt reasons.

Second, and probably more helpfully, I'll quote this paper
(http://www.kcl.ac.uk/content/1/c6/04/17/78/manymindsandprobs.doc) by
David Papineau, which sounds reasonable to me:

But many minds theorists can respond that the logic of statistical
inference is just the same on their view as on the conventional view.
True, on their view in any repeated trial all the different possible
sequences of results can be observed, and so some attempts to infer
the probability from the observed frequency will get it wrong.  Still,
any particular mind observing any one of these sequences will reason
just as the conventional view would recommend:  note the frequency,
infer that the probability is close to the frequency, and hope that
you are not the unlucky victim of an improbable sample.  Of course the
logic of this kind of statistical inference is itself a matter of
active philosophical controversy.  But it will be just the same
inference on both the many minds and the conventional view.
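[As a toy illustration of the inference described in the paragraph just quoted (my own example, not Papineau's; an unbiased repeated trial is chosen so that counting sequences and weighting them by probability coincide, and the trial length and tolerance are arbitrary):

from itertools import product
from fractions import Fraction

# Every possible sequence of 12 fair-coin results is observed by some "mind",
# yet the vast majority of those minds see a frequency close to the objective
# probability 1/2, so frequency-based inference works for most of them.
n, p = 12, Fraction(1, 2)
close = sum(1 for seq in product((0, 1), repeat=n)
            if abs(Fraction(sum(seq), n) - p) <= Fraction(1, 6))
print(close, "of", 2 ** n, "sequences have a frequency within 1/6 of", p)
# prints: 3498 of 4096, i.e. about 85% of the minds
]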

[...]

It is worth observing that, on the conventional view, what agents want
from their choices are the desired results, rather than that these
results be objectively probable (a choice that makes the results
objectively probable, but unluckily doesn't produce them, doesn't give
you what you want).  Given this, there is room to raise the question:
why are rational agents well-advised to choose actions that make their
desired results objectively probable?  Rather surprisingly, is no good
answer to this question.  (After all, you can't assume you will get
what you want if you so choose.)  From Pierce on, philosophers have
been forced to conclude that it is simply a primitive fact about
rational choice that you ought to weight future possibilities
according to known objective probabilities in making decisions.

The many minds view simply says the same thing.  Rational agents ought
to choose those actions which will maximize the known objective
probability of desired results.  As to why they ought to do this,
there is no further explanation.  This is simply a basic truth about
rational choice.

[...]

I suspect that this basic truth actually makes more sense on the many
minds view than on the conventional view.  For on the conventional
view there is a puzzle about the relation between this truth and the
further thought that ultimate success in action depends on desired
results actually occurring.  On the many minds view, by contrast,
there is no such further thought, since all possible results occur,
desired and undesired, and so no puzzle:  in effect there is only one
criterion of success in action, namely, maximizing the known objective
probability of desired results.  However, this is really 

Re: Consciousness is information?

2009-05-28 Thread Kim Jones


On 28/05/2009, at 12:21 AM, Bruno Marchal wrote:

  Also, I will from now on abandon the term machine for the term
  number. Relative to a fixed chosen universal machine, like
  Robinson arithmetic, such an identification can be done precisely. I
  will come back on this in my explanation to Kim, if he is still
  interested, and patient enough ...



Am still interested and possessed of infinite patience

Kim




Re: Consciousness is information?

2009-05-28 Thread m.a.
Bruno,
 If there was never a physical world to which living creatures 
adapted after millions of years  and which after further eons prompted the 
evolution of consciousness, do we conclude that comp and numbers alone created 
such a universe and then created people to experience it...all through the 
chance combinations of numbers? Are we saying that monkeys on typewriters 
authored everything we see about us?  If so, how did such purposefulness and 
intentionality get into pure comp?   

marty a.






- Original Message - 
From: Kelly Harmon harmon...@gmail.com
To: everything-list@googlegroups.com
Sent: Thursday, May 28, 2009 3:02 AM
Subject: Re: Consciousness is information?


 
 On Wed, May 27, 2009 at 10:21 AM, Bruno Marchal marc...@ulb.ac.be wrote:

 Since you told me that you accept comp, after all, and do no more
 oppose it to your view, I think we agree, at least on many things.
 Indeed you agree with the hypothesis, and your philosophy appears to
 be a consequence of the hypothesis.
 
 Excellent!
 
 
 It remains possible that we have a disagreement concerning the
 probability, and this has some importance, because it is the use of
 probability (or credibility) which makes the consequences of comp
 testable. More in the comment below.
 
 So my only problem with the usual view of probability is that it
 doesn't seem to me to emerge naturally from a platonic theory of
 consciousness.  Is your proposal something that would conceivably be
 arrived at by a rational observer in one of the (supposedly) rare
 worlds where white rabbits are common?   Does it have features that
 would lead one to predict the absence of white rabbits, or does it
 just offer a way to explain their absence after the fact?
 
 As I mentioned before, assuming computationalism it seems to me that
 it is theoretically possible to create a computer simulation that
 would manifest any imaginable conscious entity observing any
 imaginable world, including schizophrenic beings observing
 psychedelic realities.  So, then further assuming Platonism, all of
 these strange experiences should exist in Platonia.  Along with all
 possible normal experiences.
 
 I don't see any obvious, non-ad hoc mechanism to eliminate or
 minimize strange experiences relative to normal experiences, and I
 don't think adding one is justified just for that purpose, or even
 necessary since an unconstrained platonic theory does have the obvious
 virtue of saying that there will always be Kellys like myself who have
 never seen white rabbits.
 
 As for your earlier questions about how you should bet, I have two responses.
 
 First that there exists a Bruno who will make every possible bet.
 One particular Bruno will make his bet on a whim, while another Bruno
 will do so only after long consideration, and yet another will make a
 wild bet in a fit of madness.  Each Bruno will feel like he made a
 choice, but actually all possible Brunos exist, so all possible bets
 are made, for all possible subjectively felt reasons.
 
 Second, and probably more helpfully, I'll quote this paper
 (http://www.kcl.ac.uk/content/1/c6/04/17/78/manymindsandprobs.doc) by
 David Papineau, which sounds reasonable to me:
 
 But many minds theorists can respond that the logic of statistical
 inference is just the same on their view as on the conventional view.
 True, on their view in any repeated trial all the different possible
 sequences of results can be observed, and so some attempts to infer
 the probability from the observed frequency will get it wrong.  Still,
 any particular mind observing any one of these sequences will reason
 just as the conventional view would recommend:  note the frequency,
 infer that the probability is close to the frequency, and hope that
 you are not the unlucky victim of an improbable sample.  Of course the
 logic of this kind of statistical inference is itself a matter of
 active philosophical controversy.  But it will be just the same
 inference on both the many minds and the conventional view.
 
 [...]
 
 It is worth observing that, on the conventional view, what agents want
 from their choices are the desired results, rather than that these
 results be objectively probable (a choice that makes the results
 objectively probable, but unluckily doesn't produce them, doesn't give
 you what you want).  Given this, there is room to raise the question:
 why are rational agents well-advised to choose actions that make their
 desired results objectively probable?  Rather surprisingly, there is no good 
 answer to this question.  (After all, you can't assume you will get 
 what you want if you so choose.)  From Peirce on, philosophers have 
 been forced to conclude that it is simply a primitive fact about
 rational choice that you ought to weight future possibilities
 according to known objective probabilities in making

Re: Consciousness is information?

2009-05-28 Thread Bruno Marchal


On 28 May 2009, at 09:18, Kim Jones wrote:

 Am still interested and possessed of infinite patience


Nice!

Soon !  (in the relative platonist way ... :)

Bruno




http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-28 Thread Bruno Marchal


On 28 May 2009, at 09:02, Kelly Harmon wrote:


 On Wed, May 27, 2009 at 10:21 AM, Bruno Marchal marc...@ulb.ac.be  
 wrote:

 Since you told me that you accept comp, after all, and do no more
 oppose it to your view, I think we agree, at least on many things.
 Indeed you agree with the hypothesis, and your philosophy appears to
 be a consequence of the hypothesis.

 Excellent!


Glad you say so.






 It remains possible that we have a disagreement concerning the
 probability, and this has some importance, because it is the use of
 probability (or credibility) which makes the consequences of comp
 testable. More in the comment below.

 So my only problem with the usual view of probability is that it
 doesn't seem to me to emerge naturally from a platonic theory of
 consciousness.

It does not in Platonia-before-Gödel.
After Gödel we know that machines (and even most gods), when they observe  
their neighborhoods, are confronted with many modalities which reflect the  
gap between truth, intelligibility, observability, and sensibility.  
Rational probabilities, qualia sensibilities, and quanta probabilities  
emerge by the reflection, for each normal universal machine/number, of  
the personal and collective border of their (abyssal) ignorance.  
After Gödel we know that such ignorance has a mathematically creative/ 
productive shape.
But just UDA should convince you that probabilities emerge. I will  
come back to this.



  Is your proposal something that would conceivably be
 arrived at by a rational observer in one of the (supposedly) rare
 worlds where white rabbits are common?


White rabbits are never common. When a white rabbit is common and  
regular, we call it a particle.
Today, the problem with comp is that it could still predict a priori  
too many white rabbits, and not enough particles, to be short ...
A priori the universal machine dreams too much ...



   Does it have features that
 would lead one to predict the absence of white rabbits, or does it
 just offer a way to explain their absence after the fact?

The UD reasoning has to justify the observation that they are very  
rare. We have to justify why our neighborhoods seem to obey  
computable and compressible laws, when we know that below our  
substitution level we are supported by a continuum of computations.  
Well, computer science and mathematical logic are promising in that  
respect. That continuum has a mathematical shape.




 As I mentioned before, assuming computationalism it seems to me that
 it is theoretically possible to create a computer simulation that
 would manifest any imaginable conscious entity observing any
 imaginable world, including schizophrenic beings observing
 psychedelic realities.

The UD does exactly that.
The first price of its universality is that it will not only do that,  
but it will do that *redundantly*. The redundancy is big and  
unavoidable.
The second price is that it will generate (in its platonic static way)  
non-terminating histories, from which a continuum will be projected as  
viewed from inside.

The UD is redundant, like the Mandelbrot set. See
http://www.youtube.com/watch?v=x6DD1k4BAUgfeature=channel_page
It is an impressive zoom on a compact representation of a universal  
dovetailing (most probably), and it illustrates the redundancy and the  
presence of a rich structure. It is generated by a very little program.
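
To give a sense of how little a program is needed, here is a generic
escape-time sketch in Python; the linked video is of course produced by a
more elaborate renderer, so take this only as an illustration of the
principle:

    # Minimal escape-time test: is the complex number c in the Mandelbrot set?
    def in_mandelbrot(c, max_iter=100):
        z = 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:        # escaped, so c is certainly outside the set
                return False
        return True               # still bounded after max_iter steps: treat as inside

    # Crude ASCII picture of the whole set; zooming is just shrinking these ranges.
    for y in range(10, -11, -1):
        print("".join("*" if in_mandelbrot(complex(x / 30.0, y / 10.0)) else " "
                      for x in range(-60, 31)))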



 So, then further assuming Platonism, all of
 these strange experiences should exist in Platonia.  Along with all
 possible normal experiences.

Yes. But clearly we don't see them, or only very rarely. This has to be  
explained from the general indeterminacy.  
The special indeterminacy is the indeterminacy in a self-duplication,  
relative to this cosmos, à la Washington/Moscow.  
The global indeterminacy is you in front of a (material or immaterial,  
platonic) Universal Dovetailer (UD).

Elementary arithmetic defines already a universal dovetailer.
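
A toy sketch, in Python, of the dovetailing idea: the real UD enumerates and
runs all programs of a fixed universal machine, while here the enumeration of
programs is only mimicked by a parameterised generator, so this shows the
interleaving and nothing more.

    def program(n):
        """Stand-in for the n-th program: an endless computation yielding successive states."""
        state = n
        while True:                          # many programs never halt; dovetailing copes with that
            state = (state * state + 1) % (10 ** 9 + 7)
            yield state

    def universal_dovetailer(rounds=5):
        """At round k, start program k and run every program started so far one more step."""
        running = []
        for k in range(rounds):
            running.append(program(k))       # bring a new program into the pool
            for i, p in enumerate(running):
                print(f"round {k}: step of program {i} ->", next(p))

    universal_dovetailer()

No program can block the others, which is why even the non-terminating ones
get their endless histories generated.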



 I don't see any obvious, non-ad hoc mechanism to eliminate or
 minimize strange experiences relative to normal experiences, and I
 don't think adding one is justified just for that purpose, or even
 necessary since an unconstrained platonic theory does have the obvious
 virtue of saying that there will always be Kellys like myself who have
 never seen white rabbits.

This explains the Kelly of here and now. But it does not explain where  
my assurance comes from that the Kelly who I hope will read this  
mail will still share my history, in which the white rabbit has not  
made an appearance.  
Given that you are already an idealist, UDA is just UDA1-7, lucky you!

UDA1-7 is the explanation of where the probabilities (or credibilities)  
come from, and why they have to be quantum-like, unless comp or quantum  
mechanics is flawed.





 As for your earlier questions about how you should bet, I have two  
 responses.

 First that there exists a Bruno who will make every possible bet.
 One particular Bruno will make his bet on a whim, while another 

Re: Consciousness is information?

2009-05-28 Thread Bruno Marchal
Marty,


On 28 May 2009, at 15:41, m.a. wrote:

  If there was never a physical world to which living  
 creatures adapted after millions of years  and which after further  
 eons prompted the evolution of consciousness, do we conclude that  
 comp and numbers alone created such a universe and then created  
 people to experience it...all through the chance combinations of  
 numbers? Are we saying that monkeys on typewriters authored  
 everything we see about us?  If so, how did such purposefulness and  
 intentionality get into pure comp?


No, it is not a combination by chance; it is, on the contrary, due  
to the extreme richness and complexity of the relations between  
numbers. It makes it possible to take the numbers, when structured by  
addition and multiplication, as the source of the emerging very long  
and deep computational histories, themselves filtered, and non-trivially  
restructured, by the possible self-aware universal numbers.  
No chance is at play there. That is even why you can extract the  
physical laws and justify why they are laws.

Probabilities appear as internal first-person modalities because no  
machine, and thus not us (assuming comp), can ever know in which  
histories they are. They can know this (betting on comp), and they can  
infer that they are supported by many histories, leading to a many- 
world interpretation of arithmetic.

Monkeys on typewriters author all the books, like a counting  
algorithm. You can say it generates all programs, but it executes none  
of those programs. The monkeys will generate books describing  
computations, but never perform any non-trivial computation.

The universal dovetailer, and the arithmetical truth (actually a tiny  
part of it), not only generate all programs but execute them, in the  
platonic static sense; and still, the arithmetical true relations  
define computations, not just descriptions of computations.
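
The contrast can be made concrete: the monkeys, like a counting algorithm,
only enumerate program texts, and unlike the toy dovetailer sketched earlier
nothing here is ever executed (again a Python sketch, with an arbitrary
two-letter alphabet standing in for the typewriter keys):

    from itertools import count, product

    def monkey_texts(alphabet="01"):
        """Enumerate every finite string over the alphabet: descriptions only, never run."""
        for length in count(1):
            for letters in product(alphabet, repeat=length):
                yield "".join(letters)

    texts = monkey_texts()
    for _ in range(10):
        print(next(texts))   # '0', '1', '00', '01', ... : all the 'books', none executed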

Look at the Mandelbrot set link I gave to Kelly. There is nothing  
really random in that structure, yet the more you zoom in, the more  
intricate the structure appears. You can perhaps intuit that something  
is evolving there.

After Gödel, mathematicians have had to abandon the idea of ever finding a  
unifying complete theory of the numbers with their additive and  
multiplicative structure. The monkeys' typewriting is trivial.

You cannot faithfully embed computer science in the numbers or in the  
monkeys' typewriting, but you can fully embed computer science in the  
additive and multiplicative structure of the numbers. It is lawful and  
unpredictable by complexity and depth, not by chance and randomness.

Monkeys = numbers = not very rich.
The universe emerges not from the numbers, but from the logical relations  
among the numbers. That is so rich that there is no TOE for that!

Bruno












Re: Consciousness is information?

2009-05-28 Thread m.a.
Bruno,
Thank you for this detailed reply. May I pose one follow-up 
question? Is the universal dovetailer some sort of God/Machine that is 
mathematical like the rest of creation but separate from it and of a higher 
order of purpose? If so, is there an explanation for its existence that doesn't 
exclude a deity? 


marty a.





Re: Consciousness is information?

2009-05-27 Thread Kelly Harmon

On Mon, May 25, 2009 at 11:21 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 Actually I still have no clue of what you mean by information.

Well, I don't think I can say it much better than I did before:

In my view, there are ungrounded abstract symbols that acquire
meaning via constraints placed on them by their relationships to other
symbols.  The only grounding comes from the conscious experience
that is intrinsic to a particular set of relationships.  To repeat my
earlier Chalmers quote, Experience is information from the inside;
physics is information from the outside.  It is this subjective
experience of information that provides meaning to the otherwise
completely abstract platonic symbols.

Going a little further:  I would say that the relationships between
the symbols that make up a particular mental state have some sort of
consistency, some regularity, some syntax - so that when these
syntactical relationships are combined with the symbols it does make
up some sort of descriptive language.  A language that is used to
describe a state of mind.  Here we're well into the realm of semiotics
I think.

To come back to our disagreement, what is it that a Turing machine
does that results in consciousness?  It would seem to me that
ultimately what a Turing machine does is manipulate symbols according
to specific rules.  But is it the process of manipulating the symbols
that produces consciousness?  OR is it the state of the symbols and
their relationships with each other AFTER the manipulation which
really accounts for consciousness?

I say the latter.  You seem to be saying the former...or maybe you're
saying it's both?

As I've mentioned, I think that the symbols which combine to create a
mental state can be manipulated in MANY ways.  And algorithms just
serve as descriptions of these ways.  But subjective consciousness is
in the states, not in how the states are manipulated.
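
A toy Python sketch of the distinction (a hypothetical two-rule machine, not
anyone's proposed model of a mind): the transition table and the step loop
are the "manipulation", while the successive tape snapshots are the states
being talked about here.

    # Transition table: (state, symbol) -> (symbol to write, head move, next state)
    RULES = {
        ("scan", "1"): ("1", +1, "scan"),   # walk right over the existing 1s
        ("scan", "_"): ("1", 0, "halt"),    # append one more 1, then halt
    }

    def run(tape, state="scan", head=0):
        snapshots = ["".join(tape)]         # the sequence of informational states
        while state != "halt":
            write, move, state = RULES[(state, tape[head])]
            tape[head] = write
            head += move
            if head == len(tape):
                tape.append("_")            # extend the blank tape on demand
            snapshots.append("".join(tape))
        return snapshots

    print(run(list("111_")))   # ['111_', '111_', '111_', '111_', '1111']: unary 3 -> unary 4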


 With different probabilities. That is why we are partially responsible
 of our future. This motivates education and learning, and commenting
 posts ...

In my view, life is just something that we experience.  That's it.
There's nothing more to life than subjective experience.  The feeling
of being an active participant, of making decisions, of planning, of
choosing, is only that:  a feeling.  A type of qualia.

Okay, it's past my bedtime, I'll do probability tomorrow!




Re: Consciousness is information?

2009-05-27 Thread Bruno Marchal

Kelly,

Since you told me that you accept comp, after all, and do no more  
oppose it to your view, I think we agree, at least on many things.
Indeed you agree with the hypothesis, and your philosophy appears to  
be a consequence of the hypothesis. That is all my work is about.  
Indeed I show you are right in a constructive way, which leads to the  
testability of the computationalist hypothesis.

It remains possible that we have a disagreement concerning the  
probability, and this has some importance, because it is the use of  
probability (or credibility) which makes the consequences of comp  
testable. More in the comment below.

Also, I will from now on, abandon the term machine for the term  
number. Relatively to a fixed chosen universal machine, like  
Robinson arithmetic, such an identification can be done precisely. I  
will come back on this to my explanation to Kim, if he is still  
interested, and patient enough ...


On 27 May 2009, at 09:05, Kelly Harmon wrote:

 On Mon, May 25, 2009 at 11:21 AM, Bruno Marchal marc...@ulb.ac.be  
 wrote:


 Actually I still have no clue of what you mean by information.

 Well, I don't think I can say it much better than I did before:

 In my view, there are ungrounded abstract symbols that acquire
 meaning via constraints placed on them by their relationships to other
 symbols.


Exactly. And those constraints make sense once we make explicit the  
many universal numbers involved. I will have opportunities to say more  
on this later.




 The only grounding comes from the conscious experience
 that is intrinsic to a particular set of relationships.


I agree, but only because I have succeeded in making such a statement  
utterly precise, and even testable.




 To repeat my
 earlier Chalmers quote, Experience is information from the inside;
 physics is information from the outside.


Again this is fuzzy, and I think Chalmers is just quoting me with  
different terms (btw). I prefer to avoid the word information because  
it has different meanings in science and in everyday talk. Either you  
use it in the sense of Shannon, or Kolmogorov, or Solomonoff, or  
Solovay, or even Landauer (which one, precisely?), in which case  
"information = consciousness" is as nonsensical as saying  
"consciousness is neurons firing"; or you use it, as I think you do,  
in the everyday sense of information, as when we ask "do you know the  
latest information on TV?". In that case information corresponds to what  
I am used to calling the first person view, and your identity  
"consciousness = information" is correct, and even a theorem with  
reasonably fine-grained definitions. So we are OK here.



  It is this subjective
 experience of information that provides meaning to the otherwise
 completely abstract platonic symbols.

As I said.




 Going a little further:  I would say that the relationships between
 the symbols that make up a particular mental state have some sort of
 consistency, some regularity, some syntax - so that when these
 syntactical relationships are combined with the symbols it does make
 up some sort of descriptive language.  A language that is used to
 describe a state of mind.  Here we're well into the realm of semiotics
 I think.

Here you are even closer to what I say in both UDA and AUDA. No  
problem. It took me 30 years of work to explain this successfully to  
some of the experts in those fields, so as to make a PhD thesis from  
it. Sorry to let you know that this has already been developed in  
detail. My originality is to take computer science seriously when  
studying computationalism.





 To come back to our disagreement, what is it that a Turing machine
 does that results in consciousness?


From the third person point of view, one universal number relates the  
3-informations.
From the first person point of view, all universal and particular  
numbers at once impose a probability measure on the histories going  
through the corresponding 1-information.




 It would seem to me that
 ultimately what a Turing machine does is manipulate symbols according
 to specific rules.

In the platonic sense, yes. And it concerns 3-information or relative  
computational states.



 But is it the process of manipulating the symbols
 that produces consciousness?


No. Nothing, strictly speaking, ever produces consciousness. It will  
appear to be the unavoidable inside-view aspect of numbers in  
arithmetical Platonia. AUDA explains this thanks to the fact that self- 
consistency belongs to the G* minus G theory. It is the kind of thing  
which a number (machine) can produce as true without being able to  
communicate it scientifically (prove it) to another machine, including  
itself.




 OR is it the state of the symbols and
 their relationships with each other AFTER the manipulation which
 really accounts for consciousness?


Preferably indeed. The manipulations are all existing in the static  
Platonia.





 I say the latter.  You seem to be saying the 

Re: Consciousness is information?

2009-05-24 Thread Brent Meeker

Kelly wrote:

 On May 23, 12:54 pm, Brent Meeker meeke...@dslextreme.com wrote:
   
 Either of these ideas is definite
 enough that they could actually be implemented (in contrast to many
 philosophical ideas about consciousness).
 

 Once you had implemented the ideas, how would you then know whether
 consciousness experience had actually been produced, as opposed to the
 mere appearance of it?

 If you don't have a way of definitively detecting the hoped for result
 of consciousness, then how exactly does being implementable really
 help?  You run your test...and then what?

It's no different than any theory (including yours).  You draw some 
conclusions about what should happen if it's correct, you try it and you 
see if your predictions work out.  If I program/build my robot a certain 
way will it seem as conscious as a dog or a chimpanzee or a human?  Can 
I adjust my design to match any of those?  Can I change my brain in a 
certain way and change my experienced consciousness in a predictable 
way.   If so, I place some credence in my theory of consciousness.  If 
not - it's back to the drawing board.  Many things are not observed 
directly.  No theory is certain; it may be true but we can never be 
certain it's true.

Brent




Re: Consciousness is information?

2009-05-24 Thread Kelly Harmon

On Sun, May 24, 2009 at 1:54 AM, Bruno Marchal marc...@ulb.ac.be wrote:

 May be you could study the UDA, and directly tell me at which step
 your theory departs from the comp hyp.

Okay, I read over your SANE2004 paper again.

From step 1 of UDA:

The scanned (read) information is sent by traditional means, by mails
or radio waves for instance, at Helsinki, where you are correctly
reconstituted with ambient organic material.

Okay, so this information that is sent by traditional means is really
I think where consciousness lives.  Though not literally in the
physical instantiation of the information.  For instance if you were
to print out that information in some format, I would NOT point to the
large pile of ink-stained paper and say that it was conscious.  But
would say that the information that is represented by that pile of ink
and paper represents, or identifies, or points to a single
instant of consciousness.

So, what is the information?  Well, let's say the data you're
transmitting is from a neural scan and consists of a bunch of numbers
indicating neural connection weights, chemical concentrations,
molecular positions and states, or whatever.  I wouldn't even say that
this information is the information that is conscious.  Instead this
information is ultimately an encoding (via the particular way that the
brain stores information) of the symbols and the relationships between
those symbols that represent your knowledge, beliefs, and memories
(all of the information that makes you who you are).  (Echoes here of
the Latent Semantic Analysis (LSA) stuff that I referenced before)
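
As a rough illustration of the LSA idea being alluded to (word meanings as
vectors obtained by dimensionality reduction of a co-occurrence matrix), a
sketch with numpy; the tiny corpus and the choice of two dimensions are made
up for the example:

    import numpy as np

    docs = ["the cat sat on the mat",
            "the dog sat on the log",
            "arithmetic is the study of numbers"]
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}

    # Word-by-document count matrix; LSA proper starts from something like this.
    counts = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.split():
            counts[index[w], j] += 1

    # Truncated SVD: each row of U[:, :k] * S[:k] is a k-dimensional word vector.
    U, S, _ = np.linalg.svd(counts, full_matrices=False)
    k = 2
    vectors = U[:, :k] * S[:k]

    def cosine(w1, w2):
        a, b = vectors[index[w1]], vectors[index[w2]]
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Compare: words from similar contexts tend to score higher.
    print(cosine("cat", "dog"), cosine("cat", "numbers"))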


From step 8 of UDA:

Instead of linking [the pain I feel] at space-time (x,t) to [a
machine state] at space-time (x,t), we are obliged to associate [the
pain I feel at space-time (x,t)] to a type or a sheaf of computations
(existing forever in the arithmetical Platonia which is accepted as
existing independently of our selves with arithmetical realism).

So instead I would write this as:

Instead of linking [the pain I feel] at space-time (x,t) to [a
machine state] at space-time (x,t), we are obliged to associate [the
pain I feel at space-time (x,t)] to an [informational state] existing
forever in Platonia which is accepted as existing independently of
ourselves.


 You have to see that, personally, I don't have a theory other than the
 assumption that the brain is emulable by a Turing machine

I also believe that, but I think that consciousness is in the
information represented by the discrete states of the data stored on
the Turing machine's tape after each instruction is executed, NOT in
the actual execution of the Turing machine.  The instruction table of
the Turing machine just describes one possible way that a particular
sequence of information states could be produced.

Execution of the instructions in the action table actually doesn't do
anything with respect to the production of consciousness.  The output
informational states represented by the data on the tape exist platonically
even if the Turing machine program is never run.  And therefore the
consciousness that goes with those states also exists platonically,
even if the Turing machine program is never run.


 OK. So, now, Kelly, just to understand what you mean by your theory, I
 have to ask you what your theory predicts in case of self-
 multiplication.

Well, first I'd say there aren't copies of identical information in
Platonia.  All perceived physical representations actually point
(similarly to a C-style pointer in programming) to the same
platonically existing information state.  So if there are 1000
identical copies of me in identical mental states, they are really
just representations of the same source information state.

Piles of atoms aren't conscious.  Information is conscious.  1000
identically arranged piles of atoms still represent only a single
information state (setting aside Putnam mapping issues).  The
information state is conscious, not the piles of atoms.

However, once their experiences diverge so that they are no longer
identical, then they are totally separate and they represent (or point
to) separate, non-overlapping conscious information states.
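
A loose programming analogy (Python references standing in for C pointers;
the "state" tuple below is just a made-up placeholder): many "physical"
copies can refer to one and the same state, and only divergence creates a
genuinely distinct state.

    # One information state, referred to by many "physical" copies.
    state = ("memory of Brussels", "looking at a cup of coffee")

    copy_in_washington = state
    copy_in_moscow = state
    print(copy_in_washington is copy_in_moscow)   # True: two references, one state

    # Divergence: a new experience yields a distinct state; the copies now point
    # to separate, non-overlapping states.
    copy_in_moscow = state + ("it is snowing here",)
    print(copy_in_washington is copy_in_moscow)   # False once the experiences differ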


 To see where does those probabilities come from, you have to
 understand that 1) you can be multiplied (that is read, copy (cut) and
 pasted in Washington AND Moscow (say)), and 2) you are multiplied (by
 2^aleph_zero, at each instant, with a comp definition of instant not
 related in principle with any form of physical time).

Well, probability is a tricky subject, right?

An interesting quote:

Whereas the interpretation of quantum mechanics has only been
puzzling us for ~75 years, the interpretation of probability has been
doing so for more than 300 years [16, 17]. Poincare [18] (p. 186)
described probability as an obscure instinct. In the century that
has elapsed since then philosophers have worked hard to lessen the
obscurity. However, the result has not been to arrive at any
consensus. 

Re: Consciousness is information?

2009-05-23 Thread Bruno Marchal


On 23 May 2009, at 06:39, Brent Meeker wrote:


 Bruno Marchal wrote:
 On 22 May 2009, at 18:25, Jason Resch wrote:

 ...
 Do you believe if we create a computer in this physical
 universe that it could be made conscious,


 But a computer is never conscious, nor is a brain. Only a person is
 conscious, and a computer or a brain can only make it possible for a
 person to be conscious relatively to another computer. So your
 question is ambiguous.
 It is not my brain which is conscious, it is me who is conscious.

 By me do you mean some computation in Platonia?  I'm wondering what
 are the implications of your theory for creating artificial
 consciousness.  Since comp starts with the assumption that replacing
 one's brain with functionally  identical units (at some level of  
 detail)
 will make no discernable difference in your experience, it entails  
 that
 a computer that functionally replaces your brain is conscious  
 (conscious
 of being you in fact).  So if I want to build a conscious robot from
 scratch, not by copying someone's brain, what must I do?


I don't see the problem, besides the obvious and usual difficulties of  
artificial intelligence.
Actually, if you implement a theorem prover for Peano Arithmetic (=  
Robinson Arithmetic + the induction axioms), I am willing to say that  
you have built a conscious entity.
It is the entity that I interview (thanks to the work of Gödel, Löb  
and Solovay).
The person related to it, which I identify with the knower (obeying  
the theaetetical logic of provable(p) & p),
exists simultaneously in all the possible relative implementations of  
it in Platonia or in UD* (the universal deployment).
I mean it is the same for a copy of me, or an intelligent robot built  
from scratch. Both persons exist in an atemporal and aspatial way in  
Platonia, and will appear concrete to any entity belonging to some  
computation where they can manifest themselves.
Like numbers. 17 exists in Platonia, but 17 has multiple  
implementations in many computations in Platonia.

I guess I miss something because I don't see any problem here. You may  
elaborate perhaps. We are in the seven step here. Are you sure you  
grasp the six preceding steps?

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-23 Thread Brent Meeker

Bruno Marchal wrote:
 On 23 May 2009, at 06:39, Brent Meeker wrote:

   
 Bruno Marchal wrote:
 
 On 22 May 2009, at 18:25, Jason Resch wrote:

 ...
   
 Do you believe if we create a computer in this physical
 universe that it could be made conscious,

 
 But a computer is never conscious, nor is a brain. Only a person is
 conscious, and a computer or a brain can only make it possible for a
 person to be conscious relatively to another computer. So your
 question is ambiguous.
 It is not my brain which is conscious, it is me who is conscious.
   
 By me do you mean some computation in Platonia?  I'm wondering what
 are the implications of your theory for creating artificial
 consciousness.  Since comp starts with the assumption that replacing
 one's brain with functionally  identical units (at some level of  
 detail)
 will make no discernable difference in your experience, it entails  
 that
 a computer that functionally replaces your brain is conscious  
 (conscious
 of being you in fact).  So if I want to build a conscious robot from
 scratch, not by copying someone's brain, what must I do?
 


 I don't see the problem, besides the obvious and usual difficulties of  
 artificial intelligence.
 Actually if you implement a theorem prover for Peano Arithmetic (=  
 Robinson Arithmetic + the induction axioms) I am willing to say that  
 you have build a conscious entity.
   
But why?  Why not RA without induction?  Is it necessary that there be 
an infinite schema?  Since you phrase your answer as "I am willing..." is 
it a matter of your intuition or is it a matter of degree of 
consciousness?

Brent






Re: Consciousness is information?

2009-05-23 Thread Kelly Harmon

Okay, below are three passages that I think give a good sense of what
I mean by information when I say that consciousness is
information.  The first is from David Chalmers' Facing up to the
Problem of Consciousness.  The second is from the SEP article on
Semantic Conceptions of Information, and the third is from Symbol
Grounding and Meaning:  A comparison of High-Dimensional and Embodied
Theories of Meaning, by Arthur Glenberg and David Robertson.

So I'm looking at these largely from a static, timeless, platonic
view.  In my view, there are ungrounded abstract symbols that acquire
meaning via constraints placed on them by their relationships to other
symbols.  The only grounding comes from the conscious experience
that is intrinsic to a particular set of relationships.  To repeat my
earlier Chalmers quote, Experience is information from the inside;
physics is information from the outside.  It is this subjective
experience of information that provides meaning to the otherwise
completely abstract platonic symbols.

So I think that something like David Lewis' modal realism is true by
virtue of the fact that all possible sets of relationships are
realized in Platonia.

Note that I don't have Bruno's fear of white rabbits.  Assuming that
we are typical observers is fine as a starting point, and is a good
way to choose between otherwise equivalent explanations, but I don't
think it should hold a unilateral veto over our final conclusions.  If
the most reasonable explanation says that our observations aren't
especially typical, then so be it.  Not everyone can be typical.

I think the final passage from Glenberg and Robertson (from a paper
that actually argues against what's being described) gives the best
sense of what I have in mind, though obviously I'm extrapolating out
quite a bit from the ideas presented.

Okay, so the passages of interest:

--

David Chalmers:

The basic principle that I suggest centrally involves the notion of
information. I understand information in more or less the sense of
Shannon (1948). Where there is information, there are information
states embedded in an information space. An information space has a
basic structure of difference relations between its elements,
characterizing the ways in which different elements in a space are
similar or different, possibly in complex ways. An information space
is an abstract object, but following Shannon we can see information as
physically embodied when there is a space of distinct physical states,
the differences between which can be transmitted down some causal
pathway. The states that are transmitted can be seen as themselves
constituting an information space. To borrow a phrase from Bateson
(1972), physical information is a difference that makes a difference.

The double-aspect principle stems from the observation that there is a
direct isomorphism between certain physically embodied information
spaces and certain phenomenal (or experiential) information spaces.
From the same sort of observations that went into the principle of
structural coherence, we can note that the differences between
phenomenal states have a structure that corresponds directly to the
differences embedded in physical processes; in particular, to those
differences that make a difference down certain causal pathways
implicated in global availability and control. That is, we can find
the same abstract information space embedded in physical processing
and in conscious experience.

--

SEP:

Information cannot be dataless but, in the simplest case, it can
consist of a single datum.  A datum is reducible to just a lack of
uniformity (diaphora is the Greek word for “difference”), so a general
definition of a datum is:

The Diaphoric Definition of Data (DDD):

A datum is a putative fact regarding some difference or lack of
uniformity within some context.  [In particular data as diaphora de
dicto, that is, lack of uniformity between two symbols, for example
the letters A and B in the Latin alphabet.]

--

Glenberg and Robertson:

Meaning arises from the syntactic combination of abstract, amodal
symbols that are arbitrarily related to what they signify.  A new form
of the abstract symbol approach to meaning affords the opportunity to
examine its adequacy as a psychological theory of meaning.  This form
is represented by two theories of linguistic meaning (that is, the
meaning of words, sentences, and discourses), both of which take
advantage of the mathematics of high-dimensional spaces. The
Hyperspace Analogue to Language (HAL; Burgess & Lund, 1997) posits
that the meaning of a word is its vector representation in a space
based on 140,000 word–word co-occurrences. Latent Semantic Analysis
(LSA; Landauer & Dumais, 1997) posits that the meaning of a word is
its vector representation in a space with approximately 300 dimensions
derived from a space with many more dimensions. The vector elements
found in both theories are just the sort of abstract features that are
prototypical in 

Re: Consciousness is information?

2009-05-23 Thread Bruno Marchal


On 23 May 2009, at 09:08, Brent Meeker wrote:



 But why?  Why not RA without induction?  Is it necessary that there be
 infinite schema?  Since you phrase your answer as I am willing... is
 it a matter of your intuition or is it a matter of degree of
 consciousness.


OK. I could have taken RA. But without the induction axioms, RA is  
very poor in provability abilities; it has the consciousness of a low  
animal, if you want. Its provability logic is very weak with respect  
to self-reference. It cannot prove the arithmetical formula Bp -> BBp  
for any arithmetical p. So it is not even a type 4 reasoner (cf.  
Smullyan's Forever Undecided, see my posts on FU), and it cannot know  
its own incompleteness. But it can be considered conscious. It is  
not self-conscious, unlike the Lobian machine.
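
For readers wondering what the induction axioms add: PA is RA together with
one induction axiom for every arithmetical formula, the schema (in LaTeX
notation)

    \[
      \bigl(\varphi(0) \;\land\; \forall n\,(\varphi(n)\to\varphi(n+1))\bigr) \;\to\; \forall n\,\varphi(n),
    \]

with one instance for each arithmetical formula \varphi; it is this infinite
family of axioms that gives PA its much stronger introspective abilities.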

Note that Bp -> BBp is true *for* RA, but it is not provable *by* RA.
Bp -> BBp is true for and provable by PA. Smullyan says that PA, or  
any G reasoner, is self-aware.

Of course, consciousness (modeled by consistency) is true for PA and  
RA, and not provable by either RA or PA (incompleteness).
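
In this notation B is the provability box of the machine; the logic G
(Solovay's GL) obeyed by the self-referentially correct machine can be
summarised, on top of classical logic and necessitation, by the schemas

    \[
      \begin{aligned}
        \text{K:}&\quad B(p \to q) \to (Bp \to Bq)\\
        \text{4:}&\quad Bp \to BBp\\
        \text{L\"ob:}&\quad B(Bp \to p) \to Bp\\
        \text{G* adds:}&\quad Bp \to p \quad (\text{hence consistency } \neg B\bot),
          \text{ true of the sound machine but not provable by it.}
      \end{aligned}
    \]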

But all this is not related to the problem you were talking about,  
which I still don't understand.

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-23 Thread John Mikes
I missed the meaning of *'conscious'* as applied in this discussion. *If we
accept* that it means 'responding to information' (used in the widest sense:
in *responding* there is an *absorption* of the result of an observer
moment and *completing relations thereof*, and the *information* as the
*absorbed relations*) *then a thermostat is conscious*.
Without such clarification Jason's question is elusive. (I may question the
term physical universe as well - as the compilation of aspect-slanted
figments to explain observations we made in select views by select means
(cf. conventional and not-so-conventional science, numbers, Platonist
filters, quantum considerations, theological views, etc.).)

Then Bruno's response below refers to a *fetish* (person? what is this?) -
definitely NOT a computer, but relative to *ANOTHER(?)* computer. *The
'another' points to similarity.*
It also reverberates with Jason's *WE* (??) (Is this 'a person', a
homunculus, or what?) who create a computer, further *segregating* the 'fetish'
Bruno refers to from 'a computer'.
*I don't find it ambiguous: I find undefined terms clashing in elusive
meanings.*

Another open spot is the 'conscious robot' that would not become conscious
even by copying someone's BRAIN (which is NOT conscious! - as said).
We still face the I, the ME *UFO* (considered as 'self') that DOES but
IS NOT. - And - is conscious. Whatever that may mean.

Then comes Brent with the reasonable question. I would add: what is
necessary for a 'computation in Platonia' to become a person? Should it pee?
I feel the term Brent asked about is still a select artifact ideation, APPLICABLE
(maybe) to non-computational domains to make it a person (whatever that
may be). It is still not I, the conscious, thinking of it.
The 'conscious' ME is different from a computation with denied consciousness
- as I read.
Replacing the (non-conscious) brain with identical other parts does not
impart the missing conscious quality - unless the replacement IS conscious,
in which case it is NOT a replacement. It is an exchange, too - as Brent
correctly points out.  (Leaving open the term 'you - conscious' as a deus ex
machina quale-addition for the replacement.)

Just looking through differently colored goggles.

John Mikes






On Sat, May 23, 2009 at 12:39 AM, Brent Meeker meeke...@dslextreme.comwrote:


 Bruno Marchal wrote:
  On 22 May 2009, at 18:25, Jason Resch wrote:
 
  ...
  Do you believe if we create a computer in this physical
  universe that it could be made conscious,
 
 
  But a computer is never conscious, nor is a brain. Only a person is
  conscious, and a computer or a brain can only make it possible for a
  person to be conscious relatively to another computer. So your
  question is ambiguous.
  It is not my brain which is conscious, it is me who is conscious.

 By me do you mean some computation in Platonia?  I'm wondering what
 are the implications of your theory for creating artificial
 consciousness.  Since comp starts with the assumption that replacing
 one's brain with functionally  identical units (at some level of detail)
 will make no discernable difference in your experience, it entails that
 a computer that functionally replaces your brain is conscious (conscious
 of being you in fact).  So if I want to build a conscious robot from
 scratch, not by copying someone's brain, what must I do?

 Brent

 





Re: Consciousness is information?

2009-05-23 Thread Bruno Marchal


On 23 May 2009, at 09:35, Kelly Harmon wrote:


 Okay, below are three passages that I think give a good sense of what
 I mean by information when I say that consciousness is
 information.  The first is from David Chalmers' Facing up to the
 Problem of Consciousness.  The second is from the SEP article on
 Semantic Conceptions of Information, and the third is from Symbol
 Grounding and Meaning:  A comparison of High-Dimensional and Embodied
 Theories of Meaning, by Arthur Glenberg and David Robertson.

 So I'm looking at these largely from a static, timeless, platonic
 view.

We agree then. Assuming comp we have no choice in the matter here.




 In my view, there are ungrounded abstract symbols that acquire
 meaning via constraints placed on them by their relationships to other
 symbols.

Absolutely so.




  The only grounding comes from the conscious experience
 that is intrinsic to a particular set of relationships.


Exactly.



 To repeat my
 earlier Chalmers quote, Experience is information from the inside;
 physics is information from the outside.  It is this subjective
 experience of information that provides meaning to the otherwise
 completely abstract platonic symbols.


I have insisted on this since well before Chalmers. We agree on this.
But then you associate consciousness with the experience of information.
This is what I told you: I can understand the relation between  
consciousness and information content.





 So I think that something like David Lewis' modal realism is true by
 virtue of the fact that all possible sets of relationships are
 realized in Platonia.


We agree. This is explained in detail in conscience et mécanisme.  
Comp forces modal realism. AUDA just gives the precise modal logics,  
extracted from the theory of the self-referentially correct machine.





 Note that I don't have Bruno's fear of white rabbits.


Then you disagree with all readers of David Lewis, including David  
Lewis himself, who recognizes this inflation of too many realities as a  
weakness of his modal realism. My point is that the comp constraints  
lead to a solution of that problem, indeed a solution close to the  
quantum Everett solution. But the existence of white rabbits, and thus  
the correctness of comp, remains to be tested.




 Assuming that
 we are typical observers is fine as a starting point, and is a good
 way to choose between otherwise equivalent explanations, but I don't
 think it should hold a unilateral veto over our final conclusions.  If
 the most reasonable explanation says that our observations aren't
 especially typical, then so be it.  Not everyone can be typical.

It is just a question of testing a theory. You seem to say something  
like: if the theory predicts that water on a fire will typically boil,  
and experience does not confirm that typicality (the water freezes  
regularly), then it means we are just very unlucky. But then all  
theories are correct.





 I think the final passage from Glenberg and Robertson (from a paper
 that actually argues against what's being described) gives the best
 sense of what I have in mind, though obviously I'm extrapolating out
 quite abit from the ideas presented.

 Okay, so the passages of interest:

 --

 David Chalmers:

 The basic principle that I suggest centrally involves the notion of
 information. I understand information in more or less the sense of
 Shannon (1948). Where there is information, there are information
 states embedded in an information space. An information space has a
 basic structure of difference relations between its elements,
 characterizing the ways in which different elements in a space are
 similar or different, possibly in complex ways. An information space
 is an abstract object, but following Shannon we can see information as
 physically embodied when there is a space of distinct physical states,
 the differences between which can be transmitted down some causal
 pathway. The states that are transmitted can be seen as themselves
 constituting an information space. To borrow a phrase from Bateson
 (1972), physical information is a difference that makes a difference.

 The double-aspect principle stems from the observation that there is a
 direct isomorphism between certain physically embodied information
 spaces and certain phenomenal (or experiential) information spaces.

This can be shown false in quantum theory without collapse, and more  
easily with the comp assumption.
No problem if you tell me that you reject both Everett and comp.  
Chalmers seems in some places to accept both Everett and comp, indeed.  
He explained to me that he stops at step 3. He believes that after a  
duplication you feel yourself to be simultaneously at both places, even  
assuming comp. I think, and can argue, that this is nonsense. Nobody  
defends this on the list. Are you defending an idea like that?




 From the same sort of observations that went into the principle of
 structural coherence, we can note that the differences 

Re: Consciousness is information?

2009-05-23 Thread Brent Meeker

Bruno Marchal wrote:
 On 23 May 2009, at 09:08, Brent Meeker wrote:


   
 But why?  Why not RA without induction?  Is it necessary that there be
 infinite schema?  Since you phrase your answer as I am willing... is
 it a matter of your intuition or is it a matter of degree of
 consciousness.
 


 OK. I could have taken RA. But without the induction axioms, RA is  
 very poor in provability abilities, it has the consciousness of a low  
 animals, if you want. Its provability logic is very weak with respect  
  to self-reference. It cannot prove the arithmetical formula Bp -> BBp  
 for any arithmetical p. So it is not even a type 4 reasoner (cf  
 Smullyan's Forever Undecided, see my posts on FU), and it cannot know  
 its own incompleteness. But it can be considered as conscious. It is  
 not self-conscious, like the Lobian machine.

  Note that Bp -> BBp is true *for* RA, but it is not provable *by* RA.
  Bp -> BBp is true for and provable by PA. Smullyan says that PA, or  
 any G reasoner, is self-aware.

 Of course, consciousness (modeled by consistency) is true for PA and  
 RA, and not provable neither by RA nor PA (incompleteness).

 But all this is not related to the problem you were talking about,  
 which I still don't understand.

 Bruno

I think it is related.  I'm just trying to figure out the implications 
of your theory for the problem of creating artificial, conscious 
intelligences. What I gather from the above is that you think there are 
degrees of consciousness marked by the ability to prove things.  To 
consider another view, for example, John McCarthy thinks there are 
degrees of consciousness marked by having narratives created and 
remembered and meta-narratives.  Either of these ideas is definite 
enough that it could actually be implemented (in contrast to many 
philosophical ideas about consciousness).   I have some reservation 
about your idea because I know many people that I think are conscious 
but who couldn't prove even the simplest theorem in PA.  Are we to 
suppose they just have a qualitatively different kind of consciousness?

Brent




Re: Consciousness is information?

2009-05-23 Thread Bruno Marchal


On 23 May 2009, at 18:54, Brent Meeker wrote:



 I think it is related.  I'm just trying to figure out the implications
 of your theory for the problem of creating artificial, conscious
 intelligences. What I gather from the above is that you think there  
 are
 degrees of consciousness marked by the ability to prove things.


Hmm ... It is more a degree of self-reflexivity, or a degree of  
introspective ability. RA, although universal (in the Church-Turing  
thesis sense), is a *very* weak theorem prover. RA is quite limited in  
its introspection abilities. I am open to the idea that RA could be  
conscious, but the interview does not lead to a theory of consciousness.
It is not a Lobian machine like PA (= RA + induction). Lobianity  
begins with weaker theories than PA though, somewhere between RA and PA,  
and Lobianity is persistent: it concerns all sound extensions of PA,  
even hyperturing extensions actually.

Also, I don't think I have a theory. I work in a very old theory:  
mechanism. It is not mine, and I use it because it makes it possible to  
use computer science to prove things. Enough things to show mechanism  
empirically refutable.
For AUDA you need to accept the Theaetetical approach to knowledge, all  
right.

I recall that in Smullyan's Forever Undecided, which introduces the  
logic of self-reference G, a nice hierarchy of reasoners is displayed,  
up to the Lobian machine.



  To
 consider another view, for example, John McCarthy thinks there are
 degrees of consciousness marked by having narratives created and
 remembered and meta-narratives.  Either of these ideas is definite
 enough that they could actually be implemented (in contrast to many
 philosophical ideas about consciousness).

It is not bad. PA has the meta-narrative ability, and RA lacks it. You  
can see it that way.



 I have some reservation
 about your idea because I know many people that I think are conscious
 but who couldn't prove even the simplest theorem in PA.

Because they lack familiarity with the notations, or they have  
some math trauma, or because they are impatient or not interested. But  
all human beings, if you motivate them and give them time, can prove  
all theorems of PA, and, more importantly, believe the truth of those  
theorems.

I have to add this last clause, because even RA can prove all theorems  
of PA, given that RA is Turing universal. But RA, without becoming PA,  
cannot really understand the proofs, like the guy in the Chinese room  
can talk Chinese, yet cannot understand his talk. It is the place  
where people easily make a confusion of levels similar to Searle's  
confusion (described by Dennett and Hofstadter). I can simulate  
Einstein's brain, but this does not make me Einstein. On the contrary,  
this makes it possible to discuss with Einstein. It is in that sense that  
RA can simulate PA without becoming PA. Likewise, any such universal theory can  
simulate all effective theories. PA is probably still very simple  
compared to any human, except a highly mentally disabled person or a  
person in a comatose state, of course.




 Are we to
 suppose they just have a qualitatively different kind of  
 consciousness?

I don't think so, but in the entheogen forums people can discuss ad  
infinitum whether under such or such plants people experience a  
qualitatively different kind of consciousness. Given how hard it is  
just to discuss consciousness, you can understand that this is a bit of  
a premature question.

Many estimate that to be conscious is always to be conscious of some  
qualia. In that case I could argue that even I today have already a  
qualitatively different kind of consciousness compared with me  
yesterday.  Now, my opinion (which plays no role in the UDA- 
reasoning) is that consciousness can be qualia independent, and is  
something qualitatively stable, as opposed to the content of  
consciousness, which can vary a lot.

Now, if you compare RA (non Lobian) and PA (Lobian), then it is far  
more possible that they have a different kind of consciousness, and  
even live in a different kind of physics, as a consequence. RA could  
be closer to a universal consciousness notion. It would mean that PA  
could already be under some illusions ...
I don't know. Real hard questions here.

Bruno


http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-23 Thread Kelly Harmon

On Sat, May 23, 2009 at 8:47 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 To repeat my
 earlier Chalmers quote, Experience is information from the inside;
 physics is information from the outside.  It is this subjective
 experience of information that provides meaning to the otherwise
 completely abstract platonic symbols.


 I insist on this well before Chalmers. We are agreeing on this.
 But then you associate consciousness with the experience of information.
 This is what I told you. I can understand the relation between
 consciousness and information content.

Information.  Information content.  Hmm.  Well, I'm not entirely
sure what you're saying here.  Maybe I don't have a problem with this,
but maybe I do.  Maybe we're really saying the same thing here, but
maybe we're not.  Hm.


 Note that I don't have Bruno's fear of white rabbits.

 Then you disagree with all readers of David Lewis, including David
 Lewis himself, who recognizes this inflation of too many realities as a
 weakness of his modal realism. My point is that the comp constraints
 lead to a solution of that problem, indeed a solution close to the
 quantum Everett solution. But the existence of white rabbits, and thus
 the correctness of comp, remains to be tested.

True, Lewis apparently saw it as a cost, BUT not so high a cost as to
abandon modal realism.  I don't even see it as a high cost, I see it
as a logical consequence.  Again, it's easy to imagine a computer
simulation/virtual reality in which a conscious observer would see
disembodied talking heads and flying pigs.  So it certainly seems
possible for a conscious being to be in a state of observing an
unattached talking head.

Given that it's possible, why wouldn't it be actual?

The only reason to think that it wouldn't be actual is that our
external objectively existing physical universe doesn't have physical
laws that can lead easily to the existence of such talking heads to be
observed.  But once you've abandoned the external universe and
embraced platonism, then where does the constraint against observing
talking heads come from?

Assuming platonism, I can explain why I don't see talking heads:
because every possible Kelly is realized, and that includes a Kelly
who doesn't observe disembodied talking heads and who doesn't know
anyone who has ever seen such a head.

So given that my observations aren't in conflict with my theory, I
don't see a problem.  The fact that nothing that I could observe would
ever conflict with my theory is also not particularly troubling to me
because I didn't arrive at my theory as means of explaining any
particular observed fact about the external universe.

My theory isn't intended to explain the contingent details of what I
observe.  It's intended to explain the fact THAT I subjectively
observe anything at all.

Given that it seems theoretically possible to create a computer
simulation that would manifest any imaginable conscious being
observing any imaginable world, including schizophrenic beings
observing psychodelic realities, I don't see why you are trying to
constrain the platonic realities that can be experienced to those that
are extremely similar to ours.


 It is just a question of testing a theory. You seem to say something
 like: if the theory predicts that water under fire will typically boil,
 and experience does not confirm that typicality (water freezes
 regularly), then it means we are just very unlucky. But then all
 theories are correct.

I say there is no water.  There is just our subjective experience of
observing water.  Trying to constrain a Platonic theory of
consciousness so that it matches a particular observed physical
reality seems like a mistake to me.

Is there a limit to what we could experience in a computer simulated
reality?  If not, why would there be a limit to what we could
experience in Platonia?


 The double-aspect principle stems from the observation that there is a
 direct isomorphism between certain physically embodied information
 spaces and certain phenomenal (or experiential) information spaces.

 This can be shown false in Quantum theory without collapse, and more
 easily with the comp assumption.
 No problem if you tell me that you reject both Everett and comp.
  Chalmers seems in some places to accept both Everett and comp, indeed.
  He explains to me that he stops at step 3. He believes that after a
  duplication you feel yourself to be simultaneously in both places, even
  assuming comp. I think, and can argue, that this is nonsense. Nobody
  defends this on the list. Are you defending an idea like that?

I included the Chalmers quote because I think it provides a good image
of how abstract information seems to supervene on physical systems.
BUT by quoting the passage I'm not saying that I think that this
appearance of supervenience is the source of consciousness.  I still
buy into the Putnam mapping view that there is no 1-to-1 mapping from
information or computation to any physical system, which 

Re: Consciousness is information?

2009-05-23 Thread Kelly



On May 23, 12:54 pm, Brent Meeker meeke...@dslextreme.com wrote:

 Either of these ideas is definite
 enough that they could actually be implemented (in contrast to many
 philosophical ideas about consciousness).

Once you had implemented the ideas, how would you then know whether
conscious experience had actually been produced, as opposed to the
mere appearance of it?

If you don't have a way of definitively detecting the hoped-for result
of consciousness, then how exactly does being implementable really
help?  You run your test...and then what?





Re: Consciousness is information?

2009-05-23 Thread Bruno Marchal


OK. So, now, Kelly, just to understand what you mean by your theory, I  
have to ask you what your theory predicts in case of self- 
multiplication.
You have to see that, personally, I don't have a theory other than the  
assumption that the brain is emulable by a Turing machine, and by  
brain I mean any portion of my local neighborhood needed for surviving  
the comp functional substitution. This is the comp hypothesis.

Because we are both modal realists(*), and, true, worlds (histories)  
with white rabbits exist, and from inside are as actual as our present  
state. But then, I say that, as a consequence of the comp hyp, there  
is a relative probability or credibility measure on those histories.  
To see where those probabilities come from, you have to  
understand that 1) you can be multiplied (that is, read, copied (cut) and  
pasted in Washington AND Moscow (say)), and 2) you are multiplied (by  
2^aleph_zero, at each instant, with a comp definition of instant not  
related in principle to any form of physical time).
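(A toy numerical illustration of points 1) and 2), iterated only finitely --  
this is a sketch in Python and not part of the argument; the function name and  
the tolerance are illustrative:

from itertools import product

# Iterate the Washington/Moscow duplication n times and enumerate all
# 2^n first-person histories.  With the naive counting measure, the
# overwhelming majority of histories record W and M in roughly equal
# proportion, i.e. they look like typical coin-tossing sequences.
def fraction_of_coin_like_histories(n=16, tolerance=0.2):
    total = 0
    coin_like = 0
    for history in product("WM", repeat=n):
        total += 1
        freq_w = history.count("W") / n
        if abs(freq_w - 0.5) <= tolerance:
            coin_like += 1
    return coin_like / total

print(fraction_of_coin_like_histories())   # ~0.92 for n = 16

The real setting replaces the finite iteration by the 2^aleph_zero branching  
generated by the UD, where finding the right measure on histories is the whole  
problem.)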

What does your theory predict concerning your expectation in such an  
experience/experiment?

The fact is that your explanation, that we are in a typical universe  
because those exist as well, just does not work with the comp hyp. It  
does not work, because it does not explain why we REMAIN in those  
typical worlds. It seems to me that, as far as I can put meaning on  
your view, the probability that I will see a white rabbit in two seconds is  
as great as the probability that I will see anything else, and this is in  
contradiction with the facts. What makes us stay in apparently lawful  
histories?

What does your theory predict about agony and death, from the first  
person point of view? This is an extreme case where comp is sensibly  
in opposition with Aristotelian naturalism.

Maybe you could study the UDA, and directly tell me at which step  
your theory departs from the comp hyp. It has to depart, because you  
say below that we are in a quantum reality by chance, where the comp  
hyp explains why we have to be (even after death) in a quantum reality.

Bruno

(*) Once and for all, when I say I am a modal realist, I really mean  
this: I have an argument showing that the comp theory imposes modal  
realism.  I am really not defending any theory. I am just showing  
that the comp theory leads to precise and verifiable/refutable facts.  
I am a logician: all that I show to people is that IF you believe this  
THEN you have to believe that. It is part of my personal religion that  
my personal religion is personal and private (and evolvable).



On 23 May 2009, at 23:56, Kelly Harmon wrote:


 On Sat, May 23, 2009 at 8:47 AM, Bruno Marchal marc...@ulb.ac.be  
 wrote:


 To repeat my
 earlier Chalmers quote, Experience is information from the inside;
 physics is information from the outside.  It is this subjective
 experience of information that provides meaning to the otherwise
 completely abstract platonic symbols.


 I insist on this well before Chalmers. We are agreeing on this.
 But then you associate consciousness with the experience of  
 information.
 This is what I told you. I can understand the relation between
 consciousness and information content.

  Information.  Information content.  Hmm.  Well, I'm not entirely
  sure what you're saying here.  Maybe I don't have a problem with this,
  but maybe I do.  Maybe we're really saying the same thing here, but
  maybe we're not.  Hm.


 Note that I don't have Bruno's fear of white rabbits.

  Then you disagree with all readers of David Lewis, including David
  Lewis himself, who recognizes this inflation of too many realities as a
  weakness of his modal realism. My point is that the comp constraints
  lead to a solution of that problem, indeed a solution close to the
  quantum Everett solution. But the existence of white rabbits, and  
  thus
  the correctness of comp, remains to be tested.

 True, Lewis apparently saw it as a cost, BUT not so high a cost as to
 abandon modal realism.  I don't even see it as a high cost, I see it
 as a logical consequence.  Again, it's easy to imagine a computer
 simulation/virtual reality in which a conscious observer would see
 disembodied talking heads and flying pigs.  So it certainly seems
 possible for a conscious being to be in a state of observing an
 unattached talking head.

 Given that it's possible, why wouldn't it be actual?

 The only reason to think that it wouldn't be actual is that our
 external objectively existing physical universe doesn't have physical
 laws that can lead easily to the existence of such talking heads to be
 observed.  But once you've abandoned the external universe and
 embraced platonism, then where does the constraint against observing
 talking heads come from?

 Assuming platonism, I can explain why I don't see talking heads:
 because every possible Kelly is realized, and that includes a Kelly
 who doesn't observe disembodied talking heads and who 

Re: Consciousness is information?

2009-05-22 Thread Bruno Marchal


On 21 May 2009, at 12:28, Alberto G.Corona wrote:


 Hi Bruno.
 Thanks for the link. As a physicist and computer researcher I have
 knowledge of some of the fields involved in UDA, but at first
 sight I fear that I will have a hard time understanding it.


We can do the reasoning step by step if you want. I am not sure why  
you feel that you will have a hard time understanding it.  
Usually people find the first six steps easy. Some have problems with  
the idea that comp makes us, in principle, duplicable, and that if we  
are duplicated we cannot predict the personal outcome of the  
experience, but up to now, it always appears that it is either a  
problem of misunderstanding the distinction between first and third  
person I give there, or it happens they just dislike or are shocked by  
that first person indeterminacy, and I agree it is a bit shocking---it  
already forces some reflection on personal identity. Some just quit the  
reasoning at step 3, considering those three steps as a refutation of  
the computationalist hypothesis. That is a form of wishful thinking. I  
use comp because it is plausible, assumed by many people, and it leads  
to a deep insight into the nature of what there could be.
  I am not interested in the question of the truth of comp. Of course  
I like to criticize invalid arguments against comp.  Some deduce,  
invalidly, that I defend comp, but I don't. (As a logician I like to  
demolish all invalid arguments; it appears that comp, like the domain  
of the relation between drugs and health, attracts many invalid  
arguments ...)








 and my subjective experience is  the most objective fact
 that I can reach.
 I see what you mean, but the subjective experience, although real and
 true, and undoubtable, is subjective. It exists as far as you cannot
 prove to another that it exists.  To communicate you have to bet on
 tools and on others, and on many other doubtable (yet plausible) mind
 constructions.


 Hence, qualia are subjective ...


Of course.




 ... and, as such,  I cannot assure that you
 have it.


Right.



 But I'm sure that you have it


Thanks God!




 and therefore that my knowledge
 of qualia is objective


? Perhaps we have a vocabulary problem. I would say that knowledge is  
always subjective and never sharable. But we can share beliefs and  
develop objective theories. As far as they are objective and clear,  
they are probably false, and we can, and usually do, refute them, so  
science can progress. A tiny part of science develops some sharable  
knowledge, which still cannot be communicated as such.  It is the  
hard condition of the consistent entities: they can develop their  
internal knowledge only by communicating their doubtable beliefs.
With comp, fundamental science is akin to a negative theology. As  
soon as we have a unified theory we learn it to be false. We learn that  
Reality is not this, not that, neither this nor that, ...
Comp provides the simplest explanation, in the form of a simple third  
person sharable reality (the numbers), of why the Inside Reality has to  
behave like that. Why it contradicts us all the time.

There is a question of taste here.
Those who like to believe they can control everything, and search for  
security, hate such views.
Those who like surprises and love to let go, and search for freedom,  
should appreciate it.


 simply for one causal reason: natural
 selection;

Here you are terribly quick. And although I do accept the main line of  
natural selection as an explanation of our biological history, I am  
not happy at all with the explanation, or absence of explanation, of  
everything needed to have a reality where natural selection can exist.  
I don't take for granted the notion of a physical world, nor any physicalist  
notion of causality. I do agree with many things asserted in physics,  
but not as an ultimate explanation.
The reason I like comp is that it assures us that indeed we have to  
dig deeper with respect to what we see, observe and measure.




 Our brains, shaped by very similar genetic programs, share
 the same architecture and therefore produce very similar
 phenomenologies.

I mostly agree.




 This follows of course if you admit matter -> mind (or better math ->
 matter -> mind)

math -> matter -> mind is indeed already far better, and this makes it even  
more bizarre that you fear UDA, because UDA8, the most complex step, is  
just the step which forces us to put math at the beginning (even  
arithmetic, but OK). Now, comp makes matter a subtle first person  
plural notion, and it will appear that, in such a rough description,  
math -> mind -> matter is more correct. But look, it is still possible that  
we have something like (with UM = universal machine, and HU = Human):

math -> UM-mind -> Matter -> HU-mind

But this could mean that our comp level of substitution is very low,  
and it would be a threat of natural selection. So the picture is a  
bit more difficult.




 and admit natural selection as the 

Re: Consciousness is information?

2009-05-22 Thread Bruno Marchal


On 22 May 2009, at 18:25, Jason Resch wrote:


 On Fri, May 22, 2009 at 9:37 AM, Bruno Marchal marc...@ulb.ac.be  
 wrote:


 Indeed, assuming comp, I support Arithmetic -> Mind -> Matter
 I could almost define mind by intensional arithmetic: the numbers  
 when
 studied by the numbers. This does not work because I have to say:
 the numbers as studied by the numbers relatively to their most
 probable local universal number, and this is how matter enters in the
 play: an indeterminacy bearing on an infinity of possible universal
 machines/numbers.



 Bruno, I was wondering if there are any concrete examples to help
 clarify what you mean by numbers studied by numbers.  Are there things,
 for example, that 31 could know about 6, or are such things only
 possible with or between very big numbers?




Do you remember that the partial computable functions are recursively  
enumerable?
Do you remember the phi_i: computing partial functions from N to N.

phi_1, phi_2, phi_3, phi_4, ...

You can associate a computation to a proof in Robinson Arithmetic of a  
statement like phi_31(6) = 745.
The idea is to use the original Robinson Arithmetic as the basic  
universal machine.

A description of a computation would be a representation of that  
computation in arithmetic. And Robinson arithmetic is already Sigma_1- 
complete and thus, if there is a computation of phi_31(6) = 745, there  
will be a proof of that fact in Robinson Arithmetic.

The difference is really a question of level, and is basically  
(simplifying a little bit) the difference between the fact that

phi_31(6) = 745  is true and provable in RA, and the fact that

provable('phi_31(6) = 745')  is true and provable in RA.

The numbers involved will not be so great, but can hardly be very  
little.
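(In more standard notation -- just a rendering of the two statements above,  
writing Prov_RA for the usual arithmetized provability predicate:

\[
\mathrm{RA} \vdash \varphi_{31}(6) = 745
\qquad \text{versus} \qquad
\mathrm{RA} \vdash \mathrm{Prov}_{\mathrm{RA}}\!\left(\ulcorner \varphi_{31}(6) = 745 \urcorner\right)
\]

The left-hand side is the computation itself, carried out as a proof inside RA;  
the right-hand side is RA proving, one metalevel up, that such a proof exists,  
which is what a description of the computation corresponds to. Both are  
provable because both are true Sigma_1 sentences and RA is Sigma_1-complete.)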



 I still have a confusion as to what you label a computation and a
 description.

A computation is an abstract object. It is what is usually described  
by a description of a computation. It is a sequence of steps of a  
universal machine. Remember that you can also enumerate the partial  
computable functions from NxN to N, noted with a capital P:

Phi_1, Phi_2, Phi_3, Phi_4, ...

Let me say that a number u is universal if   Phi_u(x,y) = phi_x(y) for  
all x and y. x is the number-program, and y is the number-data. By  
choosing RA as the basic system, all those numbers are well defined. It can  
be shown that there will be an infinity of such universal numbers u1,  
u2, u3 ... (enumerable but not recursively enumerable!).

A computation is a finite or infinite sequence of steps of some u_i on  
some input x.
A description of a (finite piece of) a computation is a number code  
for an arithmetical description of such a computation.

The difference between a computation and a description of a computation is  
similar to the difference between 1+1=2, and the Gödel number of the  
formula 1+1=2.
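(To make the shape of the definition concrete, here is a toy sketch in Python.  
The finite list below merely stands in for the real recursive enumeration, and  
the names phi_0, PHI and U are illustrative only, not part of the construction  
above:

# A tiny "enumeration" of partial functions phi_0, phi_1, ... and a
# two-argument dispatcher U with U(x, y) = phi_x(y).  Real enumerations
# range over all programs of a universal machine; this only shows the
# shape of the universality equation.

def phi_0(n):              # total: successor
    return n + 1

def phi_1(n):              # total: doubling
    return 2 * n

def phi_2(n):              # partial: diverges (loops forever) on odd input
    while n % 2 == 1:
        pass
    return n // 2

PHI = [phi_0, phi_1, phi_2]    # stands in for phi_0, phi_1, phi_2, ...

def U(x, y):
    """Toy universal function: U(x, y) = phi_x(y)."""
    return PHI[x](y)

print(U(0, 6))   # 7
print(U(1, 6))   # 12
# U(2, 7) would never return: the dispatcher inherits the partiality
# of the functions it simulates.

What the toy cannot show is the essential point: in the real enumeration the  
dispatcher is itself one of the enumerated two-argument functions, i.e. it is  
Phi_u for some universal number u, and there are infinitely many such u.)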




 Do you believe if we create a computer in this physical
 universe that it could be made conscious,

But a computer is never conscious, nor is a brain. Only a person is  
conscious, and a computer or a brain can only make it possible for a  
person to be conscious relatively to another computer. So your  
question is ambiguous.
It is not my brain which is conscious, it is me who is conscious. My  
brain appears to make it possible for my consciousness to manifest  
itself relatively to you. Remember that we are supposed to no longer  
count on the physical supervenience thesis.
It remains locally correct to attribute a consciousness, through a  
brain or a body, to a person we judge successfully implemented locally  
in some piece of matter (like when we say yes to a doctor).  But the  
piece of matter is not the subject of the consciousness. It is only  
the abstract person or program who is the subject of consciousness.
To say a brain is conscious consists in making Searle's mistake, when he  
confused levels of computation in the Chinese room, as well seen  
already by Hofstadter and Dennett in Mind's I.



 or do you count all
 appearance of matter to be only a description of a computation and not
 capable of true computation?

The appearance of matter is a quale. It does not describe anything, but  
is a subjective experience, which may correspond to something stable,  
reflecting the existence of a computation (in Platonia) capable of  
manifesting itself relatively to you.


 Do you believe that the only real
 computation exists platonically and this is the only source of
 conscious experience?

Computations and their relative implementations exist only in  
Platonia, yes. But even in Platonia, they exist in multiple relative  
versions, all defined eventually through many multiple relations  
between numbers.


  If so I find this confusing, as could there not
 be multiple levels?

But there are multiple levels of computations in Platonia or  
Arithmetic. Even a huge number of them. That is why we have to take  
into account the first person indeterminacies.




 For example would a 

Re: Consciousness is information?

2009-05-22 Thread Brent Meeker

Bruno Marchal wrote:
 On 22 May 2009, at 18:25, Jason Resch wrote:
   
 ...
 Do you believe if we create a computer in this physical
 universe that it could be made conscious,
 

 But a computer is never conscious, nor is a brain. Only a person is  
 conscious, and a computer or a brain can only make it possible for a  
 person to be conscious relatively to another computer. So your  
 question is ambiguous.
 It is not my brain which is conscious, it is me who is conscious. 

By me do you mean some computation in Platonia?  I'm wondering what 
are the implications of your theory for creating artificial 
consciousness.  Since comp starts with the assumption that replacing 
one's brain with functionally  identical units (at some level of detail) 
will make no discernable difference in your experience, it entails that 
a computer that functionally replaces your brain is conscious (conscious 
of being you in fact).  So if I want to build a conscious robot from 
scratch, not by copying someone's brain, what must I do?

Brent




Re: Consciousness is information?

2009-05-21 Thread Alberto G.Corona

Hi Bruno.
Thanks for the link. As a physicist and computer researcher I have
knowledge of some of the fields involved in UDA, but at first
sight I fear that I will have a hard time understanding it.


  and my subjective experience is  the most objective fact
  that I can reach.
 I see what you mean, but the subjective experience, although real and  
 true, and undoubtable, is subjective. It exists as far as you cannot  
 prove to another that it exists.  To communicate you have to bet on  
 tools and on others, and on many other doubtable (yet plausible) mind  
 constructions.


Hence, qualia are subjective and, as such, I cannot assure that you
have them. But I'm sure that you have them, and therefore that my knowledge
of qualia is objective, simply for one causal reason: natural
selection. Our brains, shaped by very similar genetic programs, share
the same architecture and therefore produce very similar
phenomenologies.

This follows of course if you admit matter -> mind (or better math ->
matter -> mind)  and admit natural selection as the entropic pump
that creates structure and function (and computer structures) in
living beings. I know no other testable alternative.



  I cannot support this Kantian notion consciousness -> matter.

 The problem is that if you are ready to attribute consciousness to a  
 device, by its virtue of simulating digitally a conscious brain at  
 some correct level of description, you will be forced to attribute  
 that consciousness to an infinity of computations already defined by  
 the additive and multiplicative structure of the numbers (by UDA). A  
  quasi direct consequence is that if a machine looks at herself below  
  its substitution level, it will build indirect evidence of a flux of  
  many (a continuum of) computational histories (a typical quantum  
  feature, I mean for QM without wave collapse). But comp forces the  
  structure of those many realities (or dreams) to be determined by  
  specifiable number theoretical relations.  Those relations are either  
  extensional relations (like in number theory), or intensional  
  relations (like in computer science, where numbers can also point  
  toward other numbers, and effective sets of numbers). It makes  
  computationalism testable. The general shape of QM confirms it, but  
  cosmogenesis remains troubling ...

I cannot understand this until I read your paper, but, just one
question: what is the nature of the process that reduces local entropy
(sculpts chaos, poetically speaking) so that it creates life and
intelligence starting from unanimated matter along the arrow of time?
Is it of a mathematical nature; is it some general principle of
change? Is it natural selection with some additional principle? I just
want to know what your context in relation with mine is. Of course if
you support Mind - matter - math, then your mechanism for such
evolution should be quite different.





  The final words that I can say about the hard problem of
  consciousness is that any conversation with a robot, with the self-
  module that  I described in the previous post, will give answers about
   qualia indistinguishable from the answers of any of you. He would
   indeed doubt whether you are indeed robots and he is the only
   conscious being on earth.  Just as any of you may think.

   Its self module would not say I perceive the green as green because
   he has this as a standard answer, like a fake Turing test program,
   but because it can zoom in on the details of every leaf, grass etc. and
   verify that the range of light frequencies is in the range of
   frequencies that a computer programmer assigned to green and a
   trainer later told him to call green.  He can even have his own
   philosophical theories about qualia, the self etc. He may even ask
   himself about the origins of morals and self determination, and even
   all of this may force him to believe in God.  So we must conclude that
   he has his own qualia and all the attributes of consciousness, in no
   less degree than I could believe in yours.

 A priori I have no problem, although I could pretend you have solved  
 only the easy problem.
 The hard problem is: why do *we* (and not just a robot) have those  
 qualia, if a robot can have the same talk and behavior? You have still  
 to explain the nature of the qualia, and why we have to experience  
 them, given that a mechanical explanation seems to make them  
 unnecessary, especially if you invoke Darwinian natural selection. And  
 then, by UDA, you have to (re)explain what matter is and how to relate  
 it with the qualia. Eventually matter will appear to be a sort of  
 sharable qualia (or comp is false).

Yes, I said that this is all that I can say without pretending to solve
the problem. That is because the problem of qualia is so interesting.
But in the absence of natural selection, as I said, I cannot be sure
whether you have such qualia. I cannot be sure whether you are zombies or
not. In fact the main school of 

Re: Consciousness is information?

2009-05-20 Thread Alberto G.Corona

Hi Bruno

On May 19, 7:37 pm, Bruno Marchal marc...@ulb.ac.be wrote:
 ... UDA is an argument showing that the current  
 paradigmatic chain MATTER => CONSCIOUSNESS => NUMBER is reversed: with  
 comp I can explain to you in detail (it is long) that the chain  
 should be NUMBER => CONSCIOUSNESS => MATTER. Some agree already that  
 it could be NUMBER => MATTER => CONSCIOUSNESS, and this indeed is more  
 locally obvious, yet I pretend that comp forces eventually the  
 complete reversal.

Do you have any reference where this is developed?
I try to be as close to the facts as possible, and the most plausible
explanation for me, through natural selection, is that consciousness is
a processing device made by natural selection as an adaptation to the
physical environment, social environment included.  So I support
matter -> consciousness. Dualism is the result of my subjective
experience, and my subjective experience is the most objective fact
that I can reach.

I cannot support this Kantian notion consciousness -> matter.

The final words that I can say about the hard problem of
consciousness is that any conversation with a robot, with the self-
module that I described in the previous post, will give answers about
qualia indistinguishable from the answers of any of you. He would
indeed doubt whether you are indeed robots and he is the only
conscious being on earth.  Just as any of you may think.

Its self module would not say I perceive the green as green because
he has this as a standard answer, like a fake Turing test program,
but because it can zoom in on the details of every leaf, grass etc. and
verify that the range of light frequencies is in the range of
frequencies that a computer programmer assigned to green and a
trainer later told him to call green.  He can even have his own
philosophical theories about qualia, the self etc. He may even ask
himself about the origins of morals and self determination, and even
all of this may force him to believe in God.  So we must conclude that
he has his own qualia and all the attributes of consciousness, in no
less degree than I could believe in yours.
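(A toy sketch, in Python, of the kind of check described above; the wavelength
band and all the names are illustrative only, not a claim about how such a
robot would actually be built:

def looks_green(wavelength_nm: float) -> bool:
    """True if the measured wavelength falls inside the band the
    programmer labelled 'green' and the trainer taught the robot to
    report as 'green'."""
    lower_nm, upper_nm = 495.0, 570.0        # illustrative bounds
    return lower_nm <= wavelength_nm <= upper_nm

def report(wavelength_nm: float) -> str:
    # The robot "zooms in" on a patch (here reduced to one number) and answers.
    if looks_green(wavelength_nm):
        return "I perceive the green as green"
    return "That does not look green to me"

print(report(532.0))   # a leaf-like wavelength -> the 'green' answer

The question debated in this thread is of course whether anything like this,
however elaborate, settles whether the robot has the corresponding qualia.)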



Re: Consciousness is information?

2009-05-20 Thread Bruno Marchal
Hi Alberto,

On 20 May 2009, at 13:08, Alberto G.Corona wrote:
 On May 19, 7:37 pm, Bruno Marchal marc...@ulb.ac.be wrote:
 ... UDA is an argument showing that the current
  paradigmatic chain MATTER => CONSCIOUSNESS => NUMBER is reversed:  
  with
  comp I can explain to you in detail (it is long) that the chain
  should be NUMBER => CONSCIOUSNESS => MATTER. Some agree already that
  it could be NUMBER => MATTER => CONSCIOUSNESS, and this indeed is  
 more
 locally obvious, yet I pretend that comp forces eventually the
 complete reversal.

 Do you have any reference where this is developed?


I have often explained UDA on this list. There is a much older version  
in 15 steps, and a more recent one in 8 steps.
You could search the archive of this list.
Or look at my Sane04 paper:
http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract.html
You can print the slides. I now often refer to UDA-i, with i from 0 to  
8, which are the main steps of the reasoning. PDF slide

UDA stands for Universal Dovetailer Argument. The UD provides a concrete  
base for a reasoning in line with the everything or many-worlder  
open minded philosophy common on this list, especially for the  
relativist one (where probabilities are always conditional).
UDA is provably available to Universal machines (in the theoretical computer  
science sense of Post, Turing, Kleene, Church, ...), which  
leads to a machine version of UDA: AUDA (Arithmetical UDA).

UDA is mainly an argument showing that, assuming comp, the mind body  
problem reduces to the body problem.
And AUDA shows a natural path to extract the solution of the body  
problem by that interview of the universal machine.

Much older versions are in French (my PhD actually, and older  
papers). See my URL.



 I try to be as close to facts as possible, and the most plausible
 explanation for me, through natural selection, is that consciousness is
 a processing device made by natural selection as an adaptation to the
 physical environment, social environment included.


This is plausible for most of the human and animal part of  
consciousness. It is a reasonable local description. But globally a  
dual version of this has the advantage of explaining how nature itself  
evolves, from a sort of competition and selection of pieces of machine  
dreams, which are easy to define in arithmetic (assuming comp ...).
It is normal that comp depends on the many non-trivial results in  
computer science. A universal machine is itself a rather non obvious  
notion.





 So I support
  matter -> consciousness.


I could explain why it has to look locally that way, but it cannot  
work in the big picture, unless you make both matter and mind not  
just infinite, but very highly infinite ... (just read UDA; I think I  
have made progress through those explanations on the list).


 Dualism is the result of my subjective
 experience,

I doubt this can be. I would say it is a result of your experience  
together with a bet (instinctive and/or rational) on an independent  
reality.
You cannot experience the independent reality. You can experience only  
the dependent reality, but not as a dependent one; for this you need  
to bet on the independent one. What makes this difficult is that we  
make that bet instinctively since birth and beyond.



 and my subjective experience is  the most objective fact
 that I can reach.

I see what you mean, but the subjective experience, although real and  
true, and undoubtable, is subjective. It exists as far as you cannot  
prove to another that it exists.  To communicate you have to bet on  
tools and on others, and on many other doubtable (yet plausible) mind  
constructions.






 I cannot support this Kantian notion consciousness -> matter.


The problem is that if you are ready to attribute consciousness to a  
device, by its virtue of digitally simulating a conscious brain at  
some correct level of description, you will be forced to attribute  
that consciousness to an infinity of computations already defined by  
the additive and multiplicative structure of the numbers (by UDA). A  
quasi direct consequence is that if a machine looks at herself below  
its substitution level, it will build indirect evidence of a flux of  
many (a continuum of) computational histories (a typical quantum  
feature, I mean for QM without wave collapse). But comp forces the  
structure of those many realities (or dreams) to be determined by  
specifiable number theoretical relations.  Those relations are either  
extensional relations (like in number theory), or intensional  
relations (like in computer science, where numbers can also point  
toward other numbers, and effective sets of numbers). It makes  
computationalism testable. The general shape of QM confirms it, but  
cosmogenesis remains troubling ...





 The final words that I can say about the hard problem of
 consciousness is that any conversation with a robot, with the self-
 module that  I described in the previous 

Re: Consciousness is information?

2009-05-19 Thread Kelly Harmon

On Mon, May 18, 2009 at 4:22 PM, George Levy gl...@quantics.net wrote:
 Kelly Harmon wrote:

 What if you used a lookup table for only a single neuron in a computer
 simulation of a brain?


 Hi Kelly

 Zombie arguments involving look up tables are faulty because look up tables
 are not closed systems. They require someone to fill them up.
 To resolve these arguments you need to include the creator of the look up
 table in the argument. (Inclusion can be across widely different time
 periods and spatial locations.)


Indeed!  I'm not arguing that the use of look-up tables entails
zombie-ism.  I was posing a question in response to Jessie's comment:

 I don't have a problem with the idea that a giant lookup table is just
 a sort of zombie, since after all the way you'd create a lookup table




Re: Consciousness is information?

2009-05-19 Thread Kelly Harmon

On Mon, May 18, 2009 at 12:30 AM, Brent Meeker meeke...@dslextreme.com wrote:

 On the contrary, I think it does.  First, I think Chalmers idea that
 vitalists recognized that all that needed explaining was structure and
 function is revisionist history.  They were looking for the animating
 spirit.  It is in hind sight, having found the function and structure,
 that we've realized that was all the explanation available.

Hmmm.  I'm not familiar enough with the history of this to argue one
way or the other.  A quick read through the wikipedia article on
vitalism, and some light googling, left me with the impression that
most of the argument centered around function.  And also the
difference between organic and inorganic chemical compounds.

Though to the extent that there was something being debated beyond
structure and function, I think that Chalmers makes a good point here:

 There is not even a plausible candidate for a further sort of property of
 life that needs explaining (leaving aside consciousness itself), and
 indeed there never was.

I'm highlighting the parenthetical leaving aside consciousness itself.

SO.  Dennett makes one claim.  Chalmers makes what I thought was a
pretty good rebuttal.  I've never seen a counter-response from Dennett
on this point, and it's not a historical topic that I know much about.
 Do you have some special expertise, or a good source that overturns
Chalmers rebuttal?

Though, comparing what people thought about an entirely different
topic 150 years ago to this topic now seems like a clever debating
point, but otherwise of iffy relevance.


 We will eventually
 be able to make robots that behave as humans do and we will infer, from
 their behavior, that they are conscious.

What about robots (or non-embodied computer programs) that are equally
complex but (for whatever design reasons) don't exhibit any
human-like behaviors?  Will we infer that they are conscious?  How
will we know which types of complex systems are conscious and which
aren't?  What is the marker?

We'll just know it when we see it?  If so, it's only because we have
definite knowledge of our own conscious experience, and we're looking
for behaviors that we can empathize with.  But is empathy reliable?
It's certainly exploitable...Kismet for example.  So it can generate
false positives, but what might it also miss?


 And we, being their designers,
 will be able to analyze them and say, Here's what makes R2D2 have
 conscious experiences of visual perception and here's what makes 3CPO
 have self awareness relative to humans.

I would agree that we could say something definite about the
functional aspects, but not about any experiential aspects.  Those
would have to be taken on faith.  For all we know, R2D2 might have a
case of blindsight AND Anton-Babinski syndrome...in which case he
would react to visual data but have no conscious experience of what he
saw (blindsight), BUT would claim that he did experience it
(Anton-Babinski)!


 We will find that there are
 many different kinds of conscious and we will be able to invent new
 ones.

How would we know that we had actually invented new ones?  What is it
like to be a robo-Bat?


 We will never solve Chalmers hard problem, we'll just realize
 it's a non-question.

Maybe.  Time will tell.  But even if we all agree that it's a
non-question, that wouldn't necessarily mean that we'd be correct in
doing so.



 Well, here's where it gets tricky.  Conscious experience is associated
 with information.

 I think that's the point in question.  However, we all agree that
 consciousness is associated with, can be identified by, certain
 behavior.  So to say that physical systems are too representationally
 ambiguous seems to me to beg the question.  It is based on assuming that
 consciousness is information and since the physical representation of
 information is ambiguous it is inferred that physical representations
 aren't enough for consciousness.  But  going back to the basis: Is
 behavior ambiguous?  Sure it is - yet we rely in it to identify
 consciousness (at least if you don't believe in philosophical
 zombies).   I think the significant point is that consciousness is an
 attribute of behavior that is relative to an environment.


So I think the possibility (conceivability?) of conscious computer
simulations is what throws a kink into this line of thought.

I'll quote Hans Moravec here:

A simulated world hosting a simulated person can be a closed
self-contained entity. It might exist as a program on a computer
processing data quietly in some dark corner, giving no external hint
of the joys and pains, successes and frustrations of the person
inside. Inside the simulation events unfold according to the strict
logic of the program, which defines the ``laws of physics'' of the
simulation. The inhabitant might, by patient experimentation and
inference, deduce some representation of the simulation laws, but not
the nature or even existence of the simulating 

Re: Consciousness is information?

2009-05-19 Thread Kelly Harmon

On Mon, May 18, 2009 at 6:36 AM, Bruno Marchal marc...@ulb.ac.be wrote:

 I agree with your critique of consciousness = information. This is not
 even wrong,

Ouch!  Et tu, Bruno???


 and Kelly should define what he means by information so
 that we could see what he really means.

Okay, okay!  I was hoping it wouldn't come to this, but you've backed
me into a corner.  (ha!)

I'll come up with a definition and post it asap.




Re: Consciousness is information?

2009-05-19 Thread Alberto G.Corona

That is also my case. I wonder how the materialist hypothesis has
advanced in a plausible explanation of consciousness, and I think that
this is the right path, and I follow it. But at the deep level, my
subjective experience tells me that I must remain dualist.

I think however that for evolutionary purposes, consciousness,
being designed by natural selection for keeping an accurate picture of
how the others see us, must naturally reject a materialist explanation,
because this is not an accurate picture. Other people do not see
us as pieces of evolved mechanism, but as moral beings. An adaptive
self must be, and is, fiercely dualist, with a strong notion of self
autonomy and unity of purpose. So all of us feel that way when not
thinking about it.

Thus, maybe if ever a robot is made to simulate our behavior, it must
incorporate an inner rejection of the materialist explanation about the
nature of its higher level circuits, and a vivid notion of subjective
experience. That is not difficult at a certain level of technology: to
create a central “self” module that receives the filtered, relevant
information, plus information on the commands and actions of other
decision modules. This self module must be capable of inventing (and
that's the tricky thing) a self-centered, socially plausible, moral
history that links together such perceptions and such actions. Then,
when someone asks him “do you have subjective experience, qualia and so
on”, the robot will answer, “of course, yes, I have a very strong
sensation of unity of mind and perception, and I'm a moral subject capable
of self determination”.  Otherwise, he will be inconsistent or non-
functional as a human simulation.

By the way, the role of the self process as a creator of self-centered
histories that are credible for the rest of us, and that tend to show a
favorable moral image of the self, has been checked in different
experiments, especially with lobotomized people (who invent two
different histories of the same perception-action in each hemisphere).
It also explains many mental disorders: compulsive liars and crazy
overhyped egos made of fantastic histories (reincarnations of
Napoleon), for example. It also explains many effects in the social life of
sane people. How hard is it to achieve objectivity, for example?


On May 18, 4:50 am, Kelly Harmon harmon...@gmail.com wrote:
 On Sun, May 17, 2009 at 9:13 PM, Brent Meeker meeke...@dslextreme.com wrote:

  Generally I don't think that what we experience is necessarily caused
  by physical systems.  I think that sometimes physical systems assume
  configurations that shadow, or represent, our conscious experience.
  But they don't CAUSE our conscious experience.

  So if we could track the functions of the brain at a fine enough scale,
  we'd see physical events that didn't have physical causes (ones that
  were caused by mental events?).

 No, no, no.  I'm not saying that at all.  Ultimately I'm saying that
 if there is a physical world, it's irrelevant to consciousness.
 Consciousness is information.  Physical systems can be interpreted as
 representing, or storing, information, but that act of storage
 isn't what gives rise to conscious experience.



  You're aware of course that the same things were said about the
  physio/chemical bases of life.

 You mentioned that point before, as I recall.  Dennett made a similar
 argument against Chalmers, to which Chalmers had what I thought was an
 effective response:

 ---http://consc.net/papers/moving.html

 Perhaps the most common strategy for a type-A materialist is to
 deflate the hard problem by using analogies to other domains, where
 talk of such a problem would be misguided. Thus Dennett imagines a
 vitalist arguing about the hard problem of life, or a neuroscientist
 arguing about the hard problem of perception. Similarly, Paul
 Churchland (1996) imagines a nineteenth century philosopher worrying
 about the hard problem of light, and Patricia Churchland brings up
 an analogy involving heat. In all these cases, we are to suppose,
 someone might once have thought that more needed explaining than
 structure and function; but in each case, science has proved them
 wrong. So perhaps the argument about consciousness is no better.

 This sort of argument cannot bear much weight, however. Pointing out
 that analogous arguments do not work in other domains is no news: the
 whole point of anti-reductionist arguments about consciousness is that
 there is a disanalogy between the problem of consciousness and
 problems in other domains. As for the claim that analogous arguments
 in such domains might once have been plausible, this strikes me as
 something of a convenient myth: in the other domains, it is more or
 less obvious that structure and function are what need explaining, at
 least once any experiential aspects are left aside, and one would be
 hard pressed to find a substantial body of people who ever argued
 otherwise.

 When it comes to the problem of life, for 

Re: Consciousness is information?

2009-05-19 Thread Brent Meeker

Kelly Harmon wrote:
 ...
 So I think the possibility (conceivability?) of conscious computer
 simulations is what throws a kink into this line of thought.
   

No, that's why I wrote ...relative to an environment.  In Moravec's 
thought experiment the consciousness is relative to the simulation.  From 
outside it might have many entirely different interpretations, like the stone 
that calculates everything.

Brent
 I'll quote Hans Moravec here:

 A simulated world hosting a simulated person can be a closed
 self-contained entity. It might exist as a program on a computer
 processing data quietly in some dark corner, giving no external hint
 of the joys and pains, successes and frustrations of the person
 inside. Inside the simulation events unfold according to the strict
 logic of the program, which defines the ``laws of physics'' of the
 simulation. The inhabitant might, by patient experimentation and
 inference, deduce some representation of the simulation laws, but not
 the nature or even existence of the simulating computer. The
 simulation's internal relationships would be the same if the program
 were running correctly on any of an endless variety of possible
 computers, slowly, quickly, intermittently, or even backwards and
 forwards in time, with the data stored as charges on chips, marks on a
 tape, or pulses in a delay line, with the simulation's numbers
 represented in binary, decimal, or Roman numerals, compactly or spread
 widely across the machine. There is no limit, in principle, on how
 indirect the relationship between simulation and simulated can be.

 http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1998/SimConEx.98.html

   





Re: Consciousness is information?

2009-05-19 Thread Bruno Marchal


On 19 May 2009, at 10:13, Kelly Harmon wrote:


 On Mon, May 18, 2009 at 6:36 AM, Bruno Marchal marc...@ulb.ac.be  
 wrote:

  I agree with your critique of consciousness = information. This is  
 not
 even wrong,

 Ouch!  Et tu, Bruno???


Apology. I was a bit rude.






 and Kelly should define what he means by information so
 that we could see what he really means.

 Okay, okay!  I was hoping it wouldn't come to this, but you've backed
 me into a corner.  (ha!)

OK OK. I am glad you are not KO :)




 I'll come up with a definition and post it asap.

After the corner, the terrible trap ... I am curious about what you  
will say. The concept of information is more tricky than randomness,  
meaning and infinity all together. To relate it with consciousness?  
This makes sense. This makes too much sense ... I think, and that is  
the problem.

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-19 Thread Bruno Marchal

Hi Alberto,

On 19 May 2009, at 11:37, Alberto G.Corona wrote:


 That is also my case. I wonder how the materialist hypothesis has
 advanced in a plausible explanation of consciousness, and I think that
 this is the right path, and I follow it. But at the deep level, my
 subjective experience tells me that I must remain dualist.

I am glad you are not an eliminative materialist. But you tell us that 
you remain a weak materialist? You believe there is a primary physical 
or material world, and that physics is the fundamental science. OK?
The point of most of my posts here is to explain that this does not work, 
at least once we accept the computationalist hypothesis in cognitive 
science.





 I think however that for evolutionary purposes, the consciousness,
 being designed by natural selection for keeping an accurate picture of
 how the others see us, must naturally reject a materialist explanation
 because this is not an accurate picture.

You mean it must reject eliminative materialism. I agree with you. All 
sentient beings do that naturally.



 The other people do not see
 us as a piece of evolved mechanisms, but as moral beings.

As persons, yes.



 An adaptive
 self must be, and is, fiercely dualist, with a strong notion of self
 autonomy and unit of purpose. So all of us feel that way when not
 thinking about that.

Why dualist? Well, I do agree that even animals feel themselves implicitly 
dualist: they believe they are hungry and that food exists. They do 
not reflect much on the difference between the appearance of 
substantial food and their first person hungriness.
But comp forces us to abandon weak materialism, as I think most of 
the Greek and Indian philosophers already intuited. The appearance 
of matter is the appearance of something else. The days I believe in comp, 
I feel myself fiercely monist: I believe, those days, that matter is a 
construction of the mind. Not the human mind, but the universal 
machine's mind. UDA is an argument showing that the current 
paradigmatic chain MATTER => CONSCIOUSNESS => NUMBER is reversed: with 
comp I can explain to you in detail (it is long) that the chain 
should be NUMBER => CONSCIOUSNESS => MATTER. Some agree already that 
it could be NUMBER => MATTER => CONSCIOUSNESS, and this indeed is more 
locally obvious, yet I claim that comp eventually forces the 
complete reversal.
Here I agree with Kelly, and probably some others: idealism, or 
spiritual/mental/informational/number-theoretical monism, is where we 
go, and have to go, once we bet we can survive with a digital brain. I 
don't claim this is obvious, but I have an argument, called UDA. It 
is a constructive argument: it shows how to explicitly derive the 
physical laws from a theory of mind (computer science), so that we can 
test comp empirically, by comparing the physics derived from comp with the 
physics obtained from ordinary observation of our neighborhood. Were the 
world still to look Newtonian, I would never dare to suggest that comp is 
possible. Thanks to QM, the possibility of comp remains. (And QM's MWI 
saves comp from solipsism, in case you were worried.)
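
The UD itself is just a dovetailing procedure: generate every program and 
execute them all by interleaving, so that each one receives arbitrarily many 
steps. A minimal sketch in Python of that interleaving pattern (with a trivial 
stand-in for the i-th program of a real universal machine, which would be 
enumerated rather than hand-written):

import itertools

def toy_program(i):
    """Stand-in for the i-th program: an unending computation represented
    as a generator yielding its successive states."""
    n = 0
    while True:
        yield (i, n)  # state of program i after n steps
        n += 1

def universal_dovetailer():
    """Start program k at round k and advance every started program by one
    step each round, so every program gets arbitrarily many steps."""
    running = []
    for k in itertools.count():
        running.append(toy_program(k))
        for program in running:
            yield next(program)

# First few (program, step) pairs produced by the dovetailing:
print(list(itertools.islice(universal_dovetailer(), 15)))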



 Thus, maybe if ever a robot is made to simulate our behavior must
 incorporate an inner rejection of materialist explanation about the
 nature of his higher level circuits, and a vivid notion of subjective
 experience.

Note that even physicalist explanations are more and more 
mathematical, and never really refer to metaphysical materialism. 
But such beliefs live in the background, and when defended they often lead 
to eliminativism (of the person), or to dualism, which is rarely 
intelligible, or to epiphenomenalism, where consciousness loses its grip 
on reality.





 That is not difficult at a certain level of technology, to
 create a central “self” module that receives the filtered, relevant
 information, plus information of the commands and actions of other
 decision modules. This self module must be capable of inventing (and
 that´s the tricky thing) a self centered, socially plausible, moral
 history that link together such perceptions and such actions.


In our case, we can bet we belong to deep computational histories,  
which give serious hints.



 Then,
 when someone asks him do you have subjective experience, qualia and so
 on the robot will answer, “of course, yes, I have a very strong
 sensation of unity of mind, perception and I'm a moral subject capable
 of self determination”.  Otherwise, he will be inconsistent or non
 functional as a human simulation.

This is a bit tautological, but OK.




 By the way, the role of the self process as a creator of self centered
 histories that are credible for the rest of us, that tend to show a
 favorable moral image of the self has been checked in different
 experiments, especially with lobotomized people (that invent two
 different histories of the same perception-action in each hemisphere).
 It also explains many mental disorders: compulsive liars and crazy
 overhyped 

Re: Consciousness is information?

2009-05-18 Thread Bruno Marchal

Note also that, being universal machines, our look-up tables are 
infinite.

Bruno

On 18 May 2009, at 03:11, Kelly Harmon wrote:


 On Fri, May 15, 2009 at 12:32 AM, Jesse Mazer laserma...@hotmail.com 
 wrote:

 I don't have a problem with the idea that a giant lookup table is 
 just a
 sort of zombie, since after all the way you'd create a lookup table 
 for a
 given algorithmic mind would be to run a huge series of actual 
 simulations
 of that mind with all possible inputs, creating a huge archive of
 recordings so that later if anyone supplies the lookup table with a 
 given
 input, the table just looks up the recording of the occasion in which 
 the
 original simulated mind was supplied with that exact input in the 
 past, and
 plays it back. Why should merely replaying a recording of something 
 that
 happened to a simulated observer in the past contribute to the 
 measure of
 that observer-moment? I don't believe that playing a videotape of me 
 being
 happy or sad in the past will increase the measure of happy or sad
 observer-moments involving me, after all. And Olympia seems to be 
 somewhat
 similar to a lookup table in that the only way to construct her 
 would be
 to have already run the regular Turing machine program that she is 
 supposed
 to emulate, so that you know in advance the order that the Turing 
 machine's
 read/write head visits different cells, and then you can rearrange the
 positions of those cells so Olympia will visit them in the correct 
 order
 just by going from one cell to the next in line over and over again.


 What if you used a lookup table for only a single neuron in a computer
 simulation of a brain?  So actual calculations for the rest of the
 brain's neurons are performed, but this single neuron just does
 lookups into a table of pre-calculated outputs.  Would consciousness
 still be produced in this case?

 What if you then re-ran the simulation with 10 neurons doing lookups,
 but calculations still being executed for the rest of the simulated
 brain?  Still consciousness is produced?

 What if 10% of the neurons are implemented using lookup tables?  50%?
 90%?  How about all except 1 neuron is implemented via lookup tables,
 but that 1 neuron's outputs are still calculated from inputs?

 At what point does the simulation become a zombie?

 

http://iridia.ulb.ac.be/~marchal/





Re: Consciousness is information?

2009-05-18 Thread Bruno Marchal


On 17 May 2009, at 12:43, Alberto G.Corona wrote:


 The hard problem may be unsolvable, but I think it would be much more
 unsolvable if we don´t fix the easy problem, isn´t?


I think that the hard problem is easier to solve than the easy 
problem.
Indeed it is a theorem in computer science that an (ideally) correct 
universal machine which introspects itself (in the usual mathematical 
self-referential (Lobian) sense) will discover (not prove, but still 
produce as true) many non machine-communicable statements.

AUDA gives a thorough, precise theory of qualia, which is Popper 
refutable, in the (idealist) sense that the quanta appear as a 
particular type of sharable first person plural qualia. If it turns out 
to be false about the quanta, we can abandon that theory of qualia too!

What is cute in AUDA is that it provides an explanation of why the hard 
problem of consciousness has to seem hard from the point of view of 
the machine. In a sense the hard problem is proved to be unsolvable by 
any direct means, yet completely meta-solvable.

It relies mainly on the Gödelian point where Penrose and Lucas go wrong: 
machines *can* access their own incompleteness theorems through local 
self-consistency assumptions.
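
A standard way to make that point precise (following Gödel and the 
provability-logic literature; the notation below is conventional, not from 
the post itself): write Bew(x) for PA's arithmetized provability predicate 
and Con(PA) for \(\neg \mathrm{Bew}(\ulcorner 0=1 \urcorner)\). Then Gödel's 
second incompleteness theorem, and its formalized version, read

\[
\text{if PA is consistent, then } \mathrm{PA} \nvdash \mathrm{Con(PA)},
\qquad
\mathrm{PA} \vdash \mathrm{Con(PA)} \rightarrow 
\neg\, \mathrm{Bew}(\ulcorner \mathrm{Con(PA)} \urcorner).
\]

The second statement is a theorem of the machine itself: assuming its own 
consistency locally, the machine derives that it cannot prove that 
consistency, which is the sense in which it can access its own 
incompleteness theorem.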





 With a clear idea
 of the easy problem it is possible to infer something about the hard
 problem:

 For example, the latter is a product of the former, because we
 perceive things that have (or had) relevance in evolutionary terms.
 Second, the unitary nature of perception match well with the
 evolutionary explanation My inner self is a private reconstruction,
 for fitness purposes, of how others see me, as an unit of perception
 and purpose, not as a set of processors, motors and sensors, although,
 analytically, we are so. Third, the machinery of this constructed
 inner self sometimes take control (i.e. we feel ourselves capable of
 free will) whenever our acts would impact of the image that others may
 have of ourselves.

 If these conclusions are all in the easy lever, I think that we have
 solved a few of moral and perceptual problems that have puzzled
 philosophers and scientists for centuries. Relabeling them as easy
 problems the instant after an evolutionary explanation of them has
 been aired is preposterous.

 Therefore I think that I answer your question: it´s not only
 information; It´s about a certain kind of information and their own
 processor. The exact nature of this processor that permits qualia is
 not known;


I think we know (assuming comp) the exact nature of that processor. 
It is an immaterial universal machine. The machine does not need to be 
Lobian (as some people think); it needs to be Lobian only in order to 
develop, on its own, this very special theory of qualia and quanta.

I agree with your critique of consciousness = information. This is not 
even wrong, and Kelly should define what he means by information so 
that we can see what he really means. I suspect Kelly is confusing 
information and information content. Information content needs the 
(immaterial and atemporal) processing of a universal machine or number: 
not physical processing, but processing similar to that in the UD, 
or implemented naturally in (a tiny part of) Arithmetic.




  that’s true, and it´s good from my point of view, because,
 for one side, the unknown is stimulating and for the other,
 reductionist explanations for everything, like the mine above, are a
 bit frustrating.


I can explain in what sense comp is a vaccine against reductionism, but 
you have to be familiar with the UD Argument. Even the physics which 
appears cannot be reduced, still less the person. Hmm ..., you still 
believe we can have both comp and a primitive material universe, don't 
you?

Computationalism leads to a genuine, non-trivial and refutable solution 
of both the hard problem of matter *and* the hard problem of 
consciousness. It preserves the necessity of an irreducible gap between 
those things (and other things), but it provides a geometry of that 
gap, together with an explanation of the mystery feeling. Of course (in 
case you have read some of my older posts), the geometry of the gap is 
provided by the possible modal semantics of the logic G* minus G, and 
its intensional variants (all this under the Sigma_1 restriction, to take 
into account the comp hyp and the Universal Dovetailer in Arithmetic).
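
For readers who have not met the notation, a standard presentation 
(Solovay's arithmetical completeness results, as in Boolos) is roughly this:

\[
\mathrm{G}\ (\text{also written GL}): \quad
\Box(p \to q) \to (\Box p \to \Box q), \qquad
\Box(\Box p \to p) \to \Box p \ \ (\text{Löb's axiom}),
\]

closed under modus ponens and necessitation; it captures what the machine 
can prove about its own provability box. G* takes as axioms all theorems of 
G together with the reflection schema \(\Box p \to p\), and is closed under 
modus ponens only; it captures what is true about that box. The difference 
G* minus G then collects the true but unprovable principles, the simplest 
being consistency, \(\neg \Box \bot\).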

The bad news is that the easy problem of matter and consciousness, 
through comp, could be as difficult as you please. It remains 
possible that only very long computations can lead to the present form of 
the human mind and matter. Computationalism does not just reverse math and 
physics, or theology and physics; it reverses hard and easy ...

Eventually everything is reduced to the (deep) mystery of our 
understanding of an assertion like N = {0, 1, 2, ...}.  But, by 
accepting that the expression N = {0, 1, 2, ...} makes sense,  we can 
explain in all detail why this one is absolutely unsolvable. We cannot 

Re: Consciousness is information?

2009-05-18 Thread George Levy
Kelly Harmon wrote:

 What if you used a lookup table for only a single neuron in a computer
 simulation of a brain?
   
Hi Kelly

Zombie arguments involving look-up tables are faulty because look-up 
tables are not closed systems. They require someone to fill them up.
To resolve these arguments you need to include the creator of the look-up 
table in the argument. (Inclusion can be across widely different time 
periods and spatial locations.)

George




Re: Consciousness is information?

2009-05-17 Thread Brent Meeker

Kelly Harmon wrote:
 I think your discussing the functional aspects of consciousness.  AKA,
 the easy problems of consciousness.  The question of how human
 behavior is produced.
 
 My question was what is the source of phenomenal consciousness.
 What is the absolute minimum requirement which must be met in order
 for conscious experience to exist?  So my question isn't HOW human
 behavior is produced, but instead I'm asking why the mechanistic
 processes that produce human behavior are accompanied by subjective
 first person conscious experience.  The hard problem.  Qualia.
 
 I wasn't asking how is it that we do the things we do, or, how did
 this come about, but instead given that we do these things, why is
 there a subjective experience associated with doing them.

Do you suppose that something could behave just as humans do yet not be 
conscious, i.e. could there be a philosophical zombie?

 
 So none of the things you reference are relevant to the question of
 whether a computer simulation of a human mind would be conscious in
 the same way as a real human mind.  If a simulation would be, then
 what are the properties that those to two very dissimilar physical
 systems have in common that would explain this mutual experience of
 consciousness?

The information processing?

Brent


 
 
 
 On Sat, May 16, 2009 at 3:22 AM, Alberto G.Corona agocor...@gmail.com wrote:
 No. Consciousness is not information. It is an additional process that
 handles its own generated information. I you don´t recognize the
 driving mechanism towards order in the universe, you will be running
 on empty. This driving mechanism is natural selection. Things gets
 selected, replicated and selected again.

 In the case of humans, time ago the evolutionary psychologists and
 philosophers (Dennet etc) discovered the evolutionary nature of
 consciousness, that is double: For social animals, consciousness keeps
 an actualized image of how the others see ourselves. This ability is
 very important in order to plan future actions with/towards others
 members. A memory of past actions, favors and offenses are kept in
 memory for consciousness processing.  This is a part of our moral
 sense, that is, our navigation device in the social environment.
 Additionally, by reflection on ourselves, the consciousness module can
 discover the motivations of others.

 The evolutionary steps for the emergence of consciousness are: 1) in
 order to optimize the outcome of collaboration, a social animal start
 to look the others as unique individuals, and memorize their own
 record of actions. 2) Because the others do 1, the animal develop a
 sense of itself and record how each one of the others see himself
 (this is adaptive because 1). 3) This primitive conscious module
 evolved in 2 starts to inspect first and lately, even take control of
 some action with a deep social load. 4) The conscious module
 attributes to an individual moral self every action triggered by the
 brain, even if it driven by low instincts, just because that´s is the
 way the others see himself as individual. That´s why we feel ourselves
 as unique individuals and with an indivisible Cartesian mind.

 The consciousness ability is fairly recent in evolutionary terms. This
 explain its inefficient and sequential nature. This and 3 explains why
 we feel anxiety in some social situations: the cognitive load is too
 much for the conscious module when he tries to take control of the
 situation when self image it at a stake. This also explain why when we
 travel we feel a kind of liberation: because the conscious module is
 made irrelevant outside our social circle, so our more efficient lower
 level modules take care of our actions


 
  
 





Re: Consciousness is information?

2009-05-17 Thread Alberto G.Corona

The hard problem may be unsolvable, but I think it would be even more
unsolvable if we don't fix the easy problem, wouldn't it? With a clear idea
of the easy problem it is possible to infer something about the hard
problem:

For example, the latter is a product of the former, because we
perceive things that have (or had) relevance in evolutionary terms.
Second, the unitary nature of perception matches well with the
evolutionary explanation: my inner self is a private reconstruction,
for fitness purposes, of how others see me, as a unit of perception
and purpose, not as a set of processors, motors and sensors, although,
analytically, we are so. Third, the machinery of this constructed
inner self sometimes takes control (i.e. we feel ourselves capable of
free will) whenever our acts would impact the image that others may
have of us.

If these conclusions are all at the easy level, I think that we have
solved a few of the moral and perceptual problems that have puzzled
philosophers and scientists for centuries. Relabeling them as easy
problems the instant after an evolutionary explanation of them has
been aired is preposterous.

Therefore I think that I have answered your question: it's not only
information; it's about a certain kind of information and its own
processor. The exact nature of this processor that permits qualia is
not known; that's true, and that is fine from my point of view, because,
on one side, the unknown is stimulating and, on the other,
reductionist explanations for everything, like mine above, are a
bit frustrating.


On May 16, 8:39 pm, Kelly Harmon harmon...@gmail.com wrote:
 I think your discussing the functional aspects of consciousness.  AKA,
 the easy problems of consciousness.  The question of how human
 behavior is produced.

 My question was what is the source of phenomenal consciousness.
 What is the absolute minimum requirement which must be met in order
 for conscious experience to exist?  So my question isn't HOW human
 behavior is produced, but instead I'm asking why the mechanistic
 processes that produce human behavior are accompanied by subjective
 first person conscious experience.  The hard problem.  Qualia.

 I wasn't asking how is it that we do the things we do, or, how did
 this come about, but instead given that we do these things, why is
 there a subjective experience associated with doing them.

 So none of the things you reference are relevant to the question of
 whether a computer simulation of a human mind would be conscious in
 the same way as a real human mind.  If a simulation would be, then
 what are the properties that those to two very dissimilar physical
 systems have in common that would explain this mutual experience of
 consciousness?

 On Sat, May 16, 2009 at 3:22 AM, Alberto G.Corona agocor...@gmail.com wrote:

  No. Consciousness is not information. It is an additional process that
  handles its own generated information. I you don´t recognize the
  driving mechanism towards order in the universe, you will be running
  on empty. This driving mechanism is natural selection. Things gets
  selected, replicated and selected again.

  In the case of humans, time ago the evolutionary psychologists and
  philosophers (Dennet etc) discovered the evolutionary nature of
  consciousness, that is double: For social animals, consciousness keeps
  an actualized image of how the others see ourselves. This ability is
  very important in order to plan future actions with/towards others
  members. A memory of past actions, favors and offenses are kept in
  memory for consciousness processing.  This is a part of our moral
  sense, that is, our navigation device in the social environment.
  Additionally, by reflection on ourselves, the consciousness module can
  discover the motivations of others.

  The evolutionary steps for the emergence of consciousness are: 1) in
  order to optimize the outcome of collaboration, a social animal start
  to look the others as unique individuals, and memorize their own
  record of actions. 2) Because the others do 1, the animal develop a
  sense of itself and record how each one of the others see himself
  (this is adaptive because 1). 3) This primitive conscious module
  evolved in 2 starts to inspect first and lately, even take control of
  some action with a deep social load. 4) The conscious module
  attributes to an individual moral self every action triggered by the
  brain, even if it driven by low instincts, just because that´s is the
  way the others see himself as individual. That´s why we feel ourselves
  as unique individuals and with an indivisible Cartesian mind.

  The consciousness ability is fairly recent in evolutionary terms. This
  explain its inefficient and sequential nature. This and 3 explains why
  we feel anxiety in some social situations: the cognitive load is too
  much for the conscious module when he tries to take control of the
  situation when self image it at a stake. This also explain why when we
  

Re: Consciousness is information?

2009-05-17 Thread John Mikes
Let me please insert my remarks into this remarkable chain of thoughts below
(my inserts in bold)
John M

On Sun, May 17, 2009 at 2:03 AM, Brent Meeker meeke...@dslextreme.comwrote:


 Kelly Harmon wrote:
  I think your discussing the functional aspects of consciousness.  AKA,
  the easy problems of consciousness.  The question of how human
  behavior is produced.


*I believe it is a 'forced artifact' to separate any aspect of a complex
image from the entire 'unit' we like to call 'conscious behavior'. In our
(analytical) view we regard the 'activity' as separate from the initiation
and the process resulting from it through decision(?) AND the assumed
maintaining of the function. *


 
  My question was what is the source of phenomenal consciousness.
  What is the absolute minimum requirement which must be met in order
  for conscious experience to exist?  So my question isn't HOW human
  behavior is produced, but instead I'm asking why the mechanistic
  processes that produce human behavior are accompanied by subjective
  first person conscious experience.  The hard problem.  Qualia.


*We are 'human' concentrated and slanted in our views. *
*Extending it not only to other 'conscious' animals, but to phenomena in the
so (mis)called 'inanimate' - and reversing our logical habit (see below to
Brent) brings up different questions so far not much discussed. The 'hard
problem' is a separation in the totality of the phenomenon  -*
*[from its physical/physiological observation within our so far
outlined  figment of viewing the 'physical world' separately and its
reduced, conventional ('scientific')  explanations] - *
* into assuming (some) undisclosed other aspects of the same complex. From
'quantized' into some 'qualia'. *

 
  I wasn't asking how is it that we do the things we do, or, how did
  this come about, but instead given that we do these things, why is
  there a subjective experience associated with doing them.


*And we should ask exactly what you weren't asking. *
**


 Brent Meeker:
 Do you suppose that something could behave just as humans do yet not be
 conscious, i.e. could there be a philosophical zombie?


*Once we consider the totality of the phenomenon and do not separate aspects
of the complexity, the zombie becomes a meaningless artifact of the
primitive ways our thinking evolved. *


 Kelly:
 
  So none of the things you reference are relevant to the question of
  whether a computer simulation of a human mind would be conscious in
  the same way as a real human mind.  If a simulation would be, then
  what are the properties that those to two very dissimilar physical
  systems have in common that would explain this mutual experience of
  consciousness?


*A fitting computer simulation would include ALL aspects involved - call it
mind AND body, 'physically' observable 'activity' and 'consciousness as
cause' -- but alas, no such thing so far. Our embryonic machine with its
binary algorithms, driven by a switched on (electrically induced) primitive
mechanism can do just that much, within the known segments designed 'in'. *
*What we may call 'qualia' is waiting for some analogue comp, working
simultaneously on all aspects of the phenomena involved (IMO not practical,
since there cannot be a limit drawn in the interrelated totality, beyond
which relations may be irrelevant). *
**


 Brent:
 The information processing?


*Does that mean a homunculus, that 'processes' the (again separated) aspect
of 'information' into a format that fits our image of the aspectwise
formulated items? *
*What I question is the 'initiation' and 'maintenance' of what we call the
occurrence of phenomena. We do imagine a 'functioning' world where
everything just does occur, observed by itself and in no connection to the
rest of the world. *
*I am looking for 'relations' that 'influence' each other into aspects we
consider as 'different' (from what?) and call such relational
interconnectedness the world. *
*We are far from knowing it all, even further from any 'true' understanding,
so we fabricated in our epistemic enrichment over the millennia a stepwise
approach to 'explain' the miracles. *
*Learning of acknowledged(?) relational aspects (call it decisionmaking?)
and realization of ramifications upon such (call it process, function,
activity) is the basis of our (now still reductionistic) physical
worldview.  *
*Please excuse my hasty writing on premature ideas I could not detail or
even justify, using inadequate old words that should be replaced by a fitting
vocabulary. ((Alberto (below) even mentions 'memory' - that could as well be
a re-visiting of relations in the a-temporal totality view we coordinate as
a time - space physics)). *



 Brent

*John M*

 
  On Sat, May 16, 2009 at 3:22 AM, Alberto G.Corona agocor...@gmail.com
 wrote:
  No. Consciousness is not information. It is an additional process that
  handles its own generated information. I you don´t recognize the
  driving mechanism towards order in the 

Re: Consciousness is information?

2009-05-17 Thread Kelly Harmon

On Sun, May 17, 2009 at 2:03 AM, Brent Meeker meeke...@dslextreme.com wrote:

 Do you suppose that something could behave just as humans do yet not be
 conscious, i.e. could there be a philosophical zombie?

I think that somewhere there would have to be a conscious experience
associated with the production of the behavior, THOUGH the conscious
experience might not supervene onto the system producing the behavior
in an obvious way.

Generally I don't think that what we experience is necessarily caused
by physical systems.  I think that sometimes physical systems assume
configurations that shadow, or represent, our conscious experience.
But they don't CAUSE our conscious experience.

So a computer simulation of a human brain that thinks it's at the
beach would be an example.  The computer running the simulation
assumes a sequence of configurations that could be interpreted as
representing the mental processes of a person enjoying a day at the
beach.  But I can't see any reason why a bunch of electrons moving
through copper and silicon in a particular way would cause that
subjective experience of surf and sand.

And for similar reasons I don't see why a human brain would either,
even if it were actually at the beach, given that it is also just
electrons and protons and neutrons moving in specific ways.

It doesn't seem plausible to me that it is the act of being
represented in some way by a physical system that produces conscious
experience.

Though it DOES seem plausible/obvious to me that a physical system
going through a sequence of these representations is what produces
human behavior.


 The information processing?


Well, I would say information processing, but it seems to me that many
different processes could produce the same information.  And I would
not expect a change in process or algorithm to produce a different
subjective experience if the information that was being
processed/output remained the same.

So for this reason I go with consciousness is information, not
consciousness is information processing.

Processes just describe ways that different information states CAN be
connected, or related, or transformed.  But I don't think that
consciousness resides in those processes.




Re: Consciousness is information?

2009-05-17 Thread Kelly Harmon

On Sun, May 17, 2009 at 8:07 AM, John Mikes jami...@gmail.com wrote:

 A fitting computer simulation would include ALL aspects involved - call it
 mind AND body, 'physically' observable 'activity' and 'consciousness as
 cause' -- but alas, no such thing so far. Our embryonic machine with its
 binary algorithms, driven by a switched on (electrically induced) primitive
 mechanism can do just that much, within the known segments designed 'in'.
 What we may call 'qualia' is waiting for some analogue comp, working
 simultaneously on all aspects of the phenomena involved (IMO not practical,
 since there cannot be a limit drawn in the interrelated totality, beyond
 which relations may be irrelevant).


So you're saying that it's not possible, even in principle, to
simulate a human brain on a digital computer?  But that it would be
possible on a massively parallel analog computer?  What extra
something do you think an analog computer provides that isn't
available from a digital computer?  Why would it be necessary to run
all of the calculations in parallel?


 'consciousness as cause'

You are saying that consciousness has a causal role, that is
additional to the causal structure found in non-conscious physical
systems?  What leads you to this conclusion?




Re: Consciousness is information?

2009-05-17 Thread Kelly Harmon

On Fri, May 15, 2009 at 12:32 AM, Jesse Mazer laserma...@hotmail.com wrote:

 I don't have a problem with the idea that a giant lookup table is just a
 sort of zombie, since after all the way you'd create a lookup table for a
 given algorithmic mind would be to run a huge series of actual simulations
 of that mind with all possible inputs, creating a huge archive of
 recordings so that later if anyone supplies the lookup table with a given
 input, the table just looks up the recording of the occasion in which the
 original simulated mind was supplied with that exact input in the past, and
 plays it back. Why should merely replaying a recording of something that
 happened to a simulated observer in the past contribute to the measure of
 that observer-moment? I don't believe that playing a videotape of me being
 happy or sad in the past will increase the measure of happy or sad
 observer-moments involving me, after all. And Olympia seems to be somewhat
 similar to a lookup table in that the only way to construct her would be
 to have already run the regular Turing machine program that she is supposed
 to emulate, so that you know in advance the order that the Turing machine's
 read/write head visits different cells, and then you can rearrange the
 positions of those cells so Olympia will visit them in the correct order
 just by going from one cell to the next in line over and over again.


What if you used a lookup table for only a single neuron in a computer
simulation of a brain?  So actual calculations for the rest of the
brain's neurons are performed, but this single neuron just does
lookups into a table of pre-calculated outputs.  Would consciousness
still be produced in this case?

What if you then re-ran the simulation with 10 neurons doing lookups,
but calculations still being executed for the rest of the simulated
brain?  Still consciousness is produced?

What if 10% of the neurons are implemented using lookup tables?  50%?
90%?  How about all except 1 neuron is implemented via lookup tables,
but that 1 neuron's outputs are still calculated from inputs?

At what point does the simulation become a zombie?
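
A minimal sketch in Python of the thought experiment (the threshold neuron, 
the toy recurrent network and all names are illustrative choices only): each 
"neuron" either computes its output or answers from a table precomputed over 
every possible input, and the network's outward behaviour is the same 
whatever fraction is table-driven:

import random

def computed_neuron(inputs, weights):
    """'Real' neuron update: weighted sum passed through a step threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > 0 else 0

def build_lookup_table(weights, n_inputs):
    """Precompute the neuron's output for every possible binary input vector."""
    table = {}
    for pattern in range(2 ** n_inputs):
        inputs = tuple((pattern >> i) & 1 for i in range(n_inputs))
        table[inputs] = computed_neuron(inputs, weights)
    return table

def run_network(n_neurons, fraction_lookup, steps=10, n_inputs=4, seed=0):
    """Step a toy recurrent network in which a chosen fraction of neurons
    answer from precomputed tables and the rest compute their outputs."""
    rng = random.Random(seed)
    weights = [[rng.choice([-1, 1]) for _ in range(n_inputs)]
               for _ in range(n_neurons)]
    tables = [build_lookup_table(w, n_inputs) for w in weights]
    use_table = [i < int(fraction_lookup * n_neurons) for i in range(n_neurons)]
    state = [rng.choice([0, 1]) for _ in range(n_neurons)]
    for _ in range(steps):
        new_state = []
        for i in range(n_neurons):
            inputs = tuple(state[(i + j) % n_neurons] for j in range(n_inputs))
            if use_table[i]:
                new_state.append(tables[i][inputs])
            else:
                new_state.append(computed_neuron(inputs, weights[i]))
        state = new_state
    return state

# Identical external behaviour at 0%, 50% and 100% lookup-table neurons:
assert run_network(16, 0.0) == run_network(16, 0.5) == run_network(16, 1.0)

The code says nothing, of course, about which of these variants (if any) is 
conscious; it only shows that the replacement is behaviourally invisible, 
which is exactly what makes the question above pressing.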




Re: Consciousness is information?

2009-05-17 Thread Kelly Harmon

On Sun, May 17, 2009 at 9:13 PM, Brent Meeker meeke...@dslextreme.com wrote:

 Generally I don't think that what we experience is necessarily caused
 by physical systems.  I think that sometimes physical systems assume
 configurations that shadow, or represent, our conscious experience.
 But they don't CAUSE our conscious experience.


 So if we could track the functions of the brain at a fine enough scale,
 we'd see physical events that didn't have physical causes (ones that
 were caused by mental events?).


No, no, no.  I'm not saying that at all.  Ultimately I'm saying that
if there is a physical world, it's irrelevant to consciousness.
Consciousness is information.  Physical systems can be interpreted as
representing, or storing, information, but that act of storage
isn't what gives rise to conscious experience.


 You're aware of course that the same things were said about the
 physio/chemical bases of life.


You mentioned that point before, as I recall.  Dennett made a similar
argument against Chalmers, to which Chalmers had what I thought was an
effective response:

---
http://consc.net/papers/moving.html

Perhaps the most common strategy for a type-A materialist is to
deflate the hard problem by using analogies to other domains, where
talk of such a problem would be misguided. Thus Dennett imagines a
vitalist arguing about the hard problem of life, or a neuroscientist
arguing about the hard problem of perception. Similarly, Paul
Churchland (1996) imagines a nineteenth century philosopher worrying
about the hard problem of light, and Patricia Churchland brings up
an analogy involving heat. In all these cases, we are to suppose,
someone might once have thought that more needed explaining than
structure and function; but in each case, science has proved them
wrong. So perhaps the argument about consciousness is no better.

This sort of argument cannot bear much weight, however. Pointing out
that analogous arguments do not work in other domains is no news: the
whole point of anti-reductionist arguments about consciousness is that
there is a disanalogy between the problem of consciousness and
problems in other domains. As for the claim that analogous arguments
in such domains might once have been plausible, this strikes me as
something of a convenient myth: in the other domains, it is more or
less obvious that structure and function are what need explaining, at
least once any experiential aspects are left aside, and one would be
hard pressed to find a substantial body of people who ever argued
otherwise.

When it comes to the problem of life, for example, it is just obvious
that what needs explaining is structure and function: How does a
living system self-organize? How does it adapt to its environment? How
does it reproduce? Even the vitalists recognized this central point:
their driving question was always How could a mere physical system
perform these complex functions?, not Why are these functions
accompanied by life? It is no accident that Dennett's version of a
vitalist is imaginary. There is no distinct hard problem of life,
and there never was one, even for vitalists.

In general, when faced with the challenge explain X, we need to ask:
what are the phenomena in the vicinity of X that need explaining, and
how might we explain them? In the case of life, what cries out for
explanation are such phenomena as reproduction, adaptation,
metabolism, self-sustenance, and so on: all complex functions. There
is not even a plausible candidate for a further sort of property of
life that needs explaining (leaving aside consciousness itself), and
indeed there never was. In the case of consciousness, on the other
hand, the manifest phenomena that need explaining are such things as
discrimination, reportability, integration (the functions), and
experience. So this analogy does not even get off the ground.

--

 Though it DOES seem plausible/obvious to me that a physical system
 going through a sequence of these representations is what produces
 human behavior.

 So you're saying that a sequence of physical representations is enough
 to produce behavior.

Right, observed behavior.  What I'm saying here is that it seems
obvious to me that mechanistic computation is sufficient to explain
observed human behavior.  If that was the only thing that needed
explaining, we'd be done.  Mission accomplished.

BUT...there's subjective experience that also needs to be explained, and
this is actually the first question that needs to be answered.  All other
answers are suspect until subjective experience has been explained.


 And there must be conscious experience associated
 with behavior.

Well, here's where it gets tricky.  Conscious experience is associated
with information.  But how information is tied to physical systems is
a different question.  Any physical system can be interpreted as
representing all sorts of things (again, back to Putnam and Searle,
one-time pads, Maudlin's Olympia example, Bruno's movie graph

Re: Consciousness is information?

2009-05-17 Thread Brent Meeker

Kelly Harmon wrote:
 On Sun, May 17, 2009 at 9:13 PM, Brent Meeker meeke...@dslextreme.com wrote:
   
 Generally I don't think that what we experience is necessarily caused
 by physical systems.  I think that sometimes physical systems assume
 configurations that shadow, or represent, our conscious experience.
 But they don't CAUSE our conscious experience.

   
 So if we could track the functions of the brain at a fine enough scale,
 we'd see physical events that didn't have physical causes (ones that
 were caused by mental events?).

 

 No, no, no.  I'm not saying that at all.  Ultimately I'm saying that
 if there is a physical world, it's irrelevant to consciousness.
 Consciousness is information.  Physical systems can be interpreted as
 representing, or storing, information, but that act of storage
 isn't what gives rise to conscious experience.

   
 You're aware of course that the same things were said about the
 physio/chemical bases of life.

 

 You mentioned that point before, as I recall.  Dennett made a similar
 argument against Chalmers, to which Chalmers had what I thought was an
 effective response:

 ---
 http://consc.net/papers/moving.html

 Perhaps the most common strategy for a type-A materialist is to
 deflate the hard problem by using analogies to other domains, where
 talk of such a problem would be misguided. Thus Dennett imagines a
 vitalist arguing about the hard problem of life, or a neuroscientist
 arguing about the hard problem of perception. Similarly, Paul
 Churchland (1996) imagines a nineteenth century philosopher worrying
 about the hard problem of light, and Patricia Churchland brings up
 an analogy involving heat. In all these cases, we are to suppose,
 someone might once have thought that more needed explaining than
 structure and function; but in each case, science has proved them
 wrong. So perhaps the argument about consciousness is no better.

 This sort of argument cannot bear much weight, however. Pointing out
 that analogous arguments do not work in other domains is no news: the
 whole point of anti-reductionist arguments about consciousness is that
 there is a disanalogy between the problem of consciousness and
 problems in other domains. As for the claim that analogous arguments
 in such domains might once have been plausible, this strikes me as
 something of a convenient myth: in the other domains, it is more or
 less obvious that structure and function are what need explaining, at
 least once any experiential aspects are left aside, and one would be
 hard pressed to find a substantial body of people who ever argued
 otherwise.

 When it comes to the problem of life, for example, it is just obvious
 that what needs explaining is structure and function: How does a
 living system self-organize? How does it adapt to its environment? How
 does it reproduce? Even the vitalists recognized this central point:
 their driving question was always How could a mere physical system
 perform these complex functions?, not Why are these functions
 accompanied by life? It is no accident that Dennett's version of a
 vitalist is imaginary. There is no distinct hard problem of life,
 and there never was one, even for vitalists.

 In general, when faced with the challenge explain X, we need to ask:
 what are the phenomena in the vicinity of X that need explaining, and
 how might we explain them? In the case of life, what cries out for
 explanation are such phenomena as reproduction, adaptation,
 metabolism, self-sustenance, and so on: all complex functions. There
 is not even a plausible candidate for a further sort of property of
 life that needs explaining (leaving aside consciousness itself), and
 indeed there never was. In the case of consciousness, on the other
 hand, the manifest phenomena that need explaining are such things as
 discrimination, reportability, integration (the functions), and
 experience. So this analogy does not even get off the ground.

 --
   

On the contrary, I think it does.  First, I think Chalmers' idea that 
vitalists recognized that all that needed explaining was structure and 
function is revisionist history.  They were looking for the animating 
spirit.  It is in hindsight, having found the function and structure, 
that we've realized that was all the explanation available.  And I 
expect the same thing will happen with consciousness. We will eventually 
be able to make robots that behave as humans do and we will infer, from 
their behavior, that they are conscious.  And we, being their designers, 
will be able to analyze them and say, Here's what makes R2D2 have 
conscious experiences of visual perception and here's what makes 3CPO 
have self-awareness relative to humans.  We will find that there are 
many different kinds of consciousness and we will be able to invent new 
ones.  We will never solve Chalmers' hard problem; we'll just realize 
it's a non-question.

   
 Though it DOES seem plausible/obvious to me that a physical 

Re: Consciousness is information?

2009-05-16 Thread Alberto G.Corona

No. Consciousness is not information. It is an additional process that
handles its own generated information. If you don't recognize the
driving mechanism towards order in the universe, you will be running
on empty. This driving mechanism is natural selection. Things get
selected, replicated and selected again.

In the case of humans, some time ago the evolutionary psychologists and
philosophers (Dennett etc.) discovered the evolutionary nature of
consciousness, which is twofold: for social animals, consciousness keeps
an up-to-date image of how the others see us. This ability is
very important in order to plan future actions with/towards other
members. A memory of past actions, favors and offenses is kept in
memory for consciousness processing.  This is a part of our moral
sense, that is, our navigation device in the social environment.
Additionally, by reflecting on ourselves, the consciousness module can
discover the motivations of others.

The evolutionary steps for the emergence of consciousness are: 1) in
order to optimize the outcome of collaboration, a social animal starts
to see the others as unique individuals, and memorizes its own
record of their actions. 2) Because the others do 1, the animal develops
a sense of itself and records how each one of the others sees it
(this is adaptive because of 1). 3) This primitive conscious module
evolved in 2 starts to inspect, and later even take control of,
some actions with a deep social load. 4) The conscious module
attributes to an individual moral self every action triggered by the
brain, even if it is driven by low instincts, just because that is the
way the others see it as an individual. That's why we feel ourselves
to be unique individuals with an indivisible Cartesian mind.

The consciousness ability is fairly recent in evolutionary terms. This
explains its inefficient and sequential nature. This and 3 explain why
we feel anxiety in some social situations: the cognitive load is too
much for the conscious module when it tries to take control of the
situation when self-image is at stake. This also explains why, when we
travel, we feel a kind of liberation: the conscious module is
made irrelevant outside our social circle, so our more efficient lower
level modules take care of our actions





Re: Consciousness is information?

2009-05-16 Thread Kelly Harmon

I think you're discussing the functional aspects of consciousness.  AKA,
the easy problems of consciousness.  The question of how human
behavior is produced.

My question was what is the source of phenomenal consciousness.
What is the absolute minimum requirement which must be met in order
for conscious experience to exist?  So my question isn't HOW human
behavior is produced, but instead I'm asking why the mechanistic
processes that produce human behavior are accompanied by subjective
first person conscious experience.  The hard problem.  Qualia.

I wasn't asking how is it that we do the things we do, or, how did
this come about, but instead given that we do these things, why is
there a subjective experience associated with doing them.

So none of the things you reference are relevant to the question of
whether a computer simulation of a human mind would be conscious in
the same way as a real human mind.  If a simulation would be, then
what are the properties that those two very dissimilar physical
systems have in common that would explain this mutual experience of
consciousness?



On Sat, May 16, 2009 at 3:22 AM, Alberto G.Corona agocor...@gmail.com wrote:

 No. Consciousness is not information. It is an additional process that
 handles its own generated information. I you don´t recognize the
 driving mechanism towards order in the universe, you will be running
 on empty. This driving mechanism is natural selection. Things gets
 selected, replicated and selected again.

 In the case of humans, time ago the evolutionary psychologists and
 philosophers (Dennet etc) discovered the evolutionary nature of
 consciousness, that is double: For social animals, consciousness keeps
 an actualized image of how the others see ourselves. This ability is
 very important in order to plan future actions with/towards others
 members. A memory of past actions, favors and offenses are kept in
 memory for consciousness processing.  This is a part of our moral
 sense, that is, our navigation device in the social environment.
 Additionally, by reflection on ourselves, the consciousness module can
 discover the motivations of others.

 The evolutionary steps for the emergence of consciousness are: 1) in
 order to optimize the outcome of collaboration, a social animal start
 to look the others as unique individuals, and memorize their own
 record of actions. 2) Because the others do 1, the animal develop a
 sense of itself and record how each one of the others see himself
 (this is adaptive because 1). 3) This primitive conscious module
 evolved in 2 starts to inspect first and lately, even take control of
 some action with a deep social load. 4) The conscious module
 attributes to an individual moral self every action triggered by the
 brain, even if it driven by low instincts, just because that´s is the
 way the others see himself as individual. That´s why we feel ourselves
 as unique individuals and with an indivisible Cartesian mind.

 The consciousness ability is fairly recent in evolutionary terms. This
 explain its inefficient and sequential nature. This and 3 explains why
 we feel anxiety in some social situations: the cognitive load is too
 much for the conscious module when he tries to take control of the
 situation when self image it at a stake. This also explain why when we
 travel we feel a kind of liberation: because the conscious module is
 made irrelevant outside our social circle, so our more efficient lower
 level modules take care of our actions


 





Re: Consciousness is information?

2009-05-15 Thread Bruno Marchal
Hi Jesse,


On 15 May 2009, at 06:32, Jesse Mazer wrote:


 Maudlin shows that you can reduce almost arbitrarily the amount of  
 physical activity for running any computation, and keep their  
 computational genuineness through the use of inert material. So the  
 isomorphism you introduce vanish on the original Olympia (Pre- 
 olympia).

 Olympia *is*  Pre-Olympia + Klara (the inert (for the computation  
 PI) machinery needed for the counterfactuals) OK? Olympia run the  
 computation PI.



 But what do you mean when you say the isomorphism vanishes? Do you  
 mean that the causal structure of pre-Olympia would *not* be  
 isomorphic to the causal structure of the original Turing machine  
 that pre-Olympia was supposed to imitate (according to the  
 definition of causal structure in terms of logical relations between  
 propositions about the system's state at different moments)?


Yes. When I assume physical supervenience, for the benefit of the 
refutation. Olympia, relatively to me, implements Alice (or PI); Pre-
Olympia does not. I would say yes to a doctor if he gives me an Olympia 
brain, no if he gives me Pre-Olympia! That is what I mean by the 
vanishing of the causal isomorphism. Of course my goal, when I say 
yes to the doctor, is to preserve my consciousness, and my ability to 
manifest it in the normal (most probable) histories. My 
consciousness is already in Plato's heaven, so what I need here are 
the right dispositional devices.



 If so, that would mean that regular Olympia (pre-Olympia + Klara)  
 wouldn't have a causal structure isomorphic to the Turing machine  
 either, since I was defining causal structure solely in terms of  
 propositions about events that *do* occur in the system's history,  
 meaning the extra counterfactual conditions provided by Klara are  
 irrelevant to Olympia's causal structure, so Olympia's causal  
 structure would be the same as pre-Olympia's.


Right. But Maudlin manages to show that Olympia can have an empty 
causal structure, and that you would have to say yes to the doctor when he 
proposes to substitute your brain by nothing. Personally I conceive of 
propositions only in a net of propositions related by theories or 
models. The causal structure is mainly given by axioms and inference 
or computation rules, or by a (mathematical) semantics (model). You 
can't separate a proposition from other propositions, just as you can't 
separate a number from the other numbers.
I guess you would say that the movie-graph (the movie of the filmed 
active boolean graph corresponding to Alice's dream) would convey 
Alice's dream. I can agree if you call the causal structure the 
computation corresponding to the local events leading to that graph, 
but then you have already abandoned the real-time physical supervenience 
thesis (or comp).
This is a very subtle and complex point; we can come back to it 
later.




 If that's the case, why can't we postulate that consciousness  
 supervenes on causal structure, since causal structure is after all  
 part of the physical world?


The point is that you can realize any computation with any causal 
structure in that sense. Maudlin's construction explains well that the 
Klaras, or the *material* for the counterfactuals, are a red herring 
as far as playing a role in the logical relations describing a 
computation. And not just the material ones! No choice of a particular 
universal system can work; you have to take them all. You can then 
choose the simplest one (+ and *) to retrieve those which define 
observable realities from the point of view of universal machines.




 In fact one could say that physics is *only* concerned with  
 causality in the sense of lawlike relations between propositions  
 about observations, since the laws of physics tell us nothing about  
 what particles or fields or wavefunctions really are, only about  
 how they interact with one another and how they can be used to  
 predict the outcomes measurements. So if we say consciousness  
 supervenes on causal structure, then Olympia would not qualify as an  
 instantiation of the observer-moments that the original Turing  
 machine instantiated, in much the same way that a lookup table  
 wouldn't qualify.


I don't see that at all. Olympia is just a crazy implementation of 
an algorithm, but it is correct on all inputs. Its resemblance to a 
look-up table is local, finite, and does not change Olympia's 
semantics. If such a change made a difference, I would no longer say yes to 
the doctor. My consciousness would depend on the nature of the 
implementation.




 I don't have a problem with the idea that a giant lookup table is  
 just a sort of zombie,

A look-up table contains the counterfactuals. I am not sure that a giant
look-up table can be considered a zombie. The problem is that
such a look-up table would be gigantic and hard to address. Also, its
origin, relative to me, would need a strange history.
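
The contrast can be made concrete with a small sketch. The Python toy
below (names are illustrative assumptions, not anything from the
discussion) realizes the same finite behaviour twice: once computed on
demand, and once stored exhaustively in a look-up table built over the
whole finite input domain. The table literally contains the answers to
inputs that never occur in the actual history, that is, the
counterfactuals, while the algorithm only ever runs on the inputs it is
actually given.

# Minimal sketch, assuming Python; the names are illustrative only.
def parity(n):
    """Compute the answer at the moment the question is asked."""
    return "even" if n % 2 == 0 else "odd"

# The "giant" (here: tiny) look-up table over the whole finite domain:
# every possible question already has its answer stored.
DOMAIN = range(100)
LOOKUP = {n: parity(n) for n in DOMAIN}

def parity_by_table(n):
    """Answer by pure retrieval; nothing is computed at question time."""
    return LOOKUP[n]

if __name__ == "__main__":
    # Only one input ever occurs in this run's "history"...
    print(parity(42), parity_by_table(42))
    # ...yet LOOKUP also fixes what would have been answered for the rest.
    assert all(LOOKUP[n] == parity(n) for n in DOMAIN)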

Re: Consciousness is information?

2009-05-14 Thread John Mikes
Stathis,
I agree halfway with you and expected something (maybe more).
Do you mean the others are zombies? not ME (you, etc. 1st pers).
I take it one step further, the fun (I agree) includes a satisfaction that
here is a bunch of really smart guys and I can tell them something in their
profession they may respond to - even if I am outside of their learned
profession - which is not so 'practical'. Mental narcissism?
*
Somebody made an 'expert' list, collecting opinions on open concepts in a
statistical evaluation of what the majority of experts think. Of course I
objected: scientific identification is NO democratic voting matter; if 100
so-called 'experts' voice an opinion, I may still represent the right one
from a single-vote dissenting position.

Thanks for your input

John M

On Wed, May 13, 2009 at 10:29 PM, Stathis Papaioannou stath...@gmail.com wrote:


 2009/5/13 John Mikes jami...@gmail.com:
  Bruno,
  merci pour le nom Jean Cocteau. J'ai voulu montrer que je semble
  vivant.
  I told my young bride of 61 years (originally economist, but follows all
 the
  plaisantries I speculate on) about the assumptions you guys speculate on
 and
  connect to assumptions of assumptions,  Torgny the zombie, Stephen
 Leibnitz'
  Monads, you numbers, others Q-immortality/suicide and partial
 teleportation
  at the level of highest science - and she asked -
  (because she believes in her love that I am into all that,
 - understanding):
  What do you guys hope to achieve by all this speculation?
  I replied: it's getting late, let's go to sleep.
 
  Well??? (I believe this is the most meaningful word in English)

 Mainly it's just fun; but it's also profoundly important from a
 practical point of view if, for example, other people are zombies or
 we are all immortal (in a non-living-dead sort of way), no?


 --
 Stathis Papaioannou

 





Re: Consciousness is information?

2009-05-14 Thread Stathis Papaioannou

2009/5/15 John Mikes jami...@gmail.com:
 Stathis,
 I agree halfway with you and expected something (maybe more).
 Do you mean the others are zombies? not ME (you, etc. 1st pers).

I don't think others are zombies, but it is interesting nevertheless
to consider the possibility.

 I take it one step further, the fun (I agree) includes a satisfaction that
 here is a bunch of really smart guys and I can tell them something in their
 profession they may respond to - even if I am outside of their learned
 profession - which is not so 'practical'. Mental narcissism?

Yes, on some mailing lists people try to score points and show how
smart they are but on this one, that doesn't seem to happen so much.

 Somebody made an 'expert' list, collecting opinions for open concepts in  a
 statistical evaluation of what the majority of experts think. Of course I
 objected: scientific identification is NO democratic voting matter, if 100
 so called 'experts' voice an opinion I may still represent the right one
 in a single-vote different position.

That's true, but scientific consensus must count for *something*. If I
have no idea about a subject it is more likely I will get the right
answer from an expert than from a random person. But of course,
experts cannot always be right, and historically many things that
scientists have believed even unanimously have turned out to be wrong.


-- 
Stathis Papaioannou




RE: Consciousness is information?

2009-05-14 Thread Jesse Mazer

Hi Bruno, I meant to reply to this earlier:

From: marc...@ulb.ac.be
To: everything-list@googlegroups.com
Subject: Re: Consciousness is information?
Date: Sat, 2 May 2009 14:45:13 +0200


On 30 Apr 2009, at 18:29, Jesse Mazer wrote:
Bruno Marchal wrote:

On 29 Apr 2009, at 23:30, Jesse Mazer wrote:
But I'm not convinced that the basic Olympia machine he describes doesn't 
already have a complex causal structure--the causal structure would be in the 
way different troughs influence each other via the pipe system he describes, 
noting the motion of the armature. 
But Maudlin succeeded in showing that in its particular running history, *that*
causal structure is physically inert. Or it has a mysterious influence not
related to the computation.


Maudlin only showed that *if* you define causal structure in terms of 
counterfactuals, then the machinery that ensures the proper counterfactuals 
might be physically inert. But if you reread my post at 
http://www.mail-archive.com/everything-list@googlegroups.com/msg16244.html you 
can see that I was trying to come up with a definition of the causal 
structure of a set of events that did *not* depend on counterfactuals...look 
at these two paragraphs from that post, particularly the first sentence of the 
first paragraph and the last sentence of the second paragraph:
It seems to me that there might be ways of defining causal structure which 
don't depend on counterfactuals, though. One idea I had is that for any system 
which changes state in a lawlike way over time, all facts about events in the 
system's history can be represented as a collection of propositions, and then 
causal structure might be understood in terms of logical relations between 
propositions, given knowledge of the laws governing the system. As an example, 
if the system was a cellular automaton, one might have a collection of 
propositions like cell 156 is colored black at time-step 36, and if you know 
the rules for how the cells are updated on each time-step, then knowing some 
subsets of propositions would allow you to deduce others (for example, if you 
have a set of propositions that tell you the states of all the cells 
surrounding cell 71 at time-step 106, in most cellular automata that would 
allow you to figure out the state of cell 71 at the subsequent time-step 107). 
If the laws of physics in our universe are deterministic then you should in 
principle be able to represent all facts about the state of the universe at 
all times as a giant (probably infinite) set of propositions as well, and 
given knowledge of the laws, knowing certain subsets of these propositions 
would allow you to deduce others.
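
To make the proposal concrete, here is a minimal sketch, assuming Python
and a toy elementary cellular automaton (rule 110); the representation
and names are illustrative assumptions, not anything from the original
post. Facts about the system's history are stored as propositions of the
form (cell, time, state), and the known update rule lets some subsets of
propositions entail others.

# Minimal sketch (assumed Python, hypothetical names): propositions about a
# lawlike system, some subsets of which entail others via the known rule.
RULE_110 = {  # elementary CA rule 110: (left, centre, right) -> next state
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def deduce(known, cell, t):
    """Given propositions fixing cells cell-1, cell, cell+1 at time t,
    deduce the proposition about `cell` at time t+1 (or None if the
    known propositions are not sufficient)."""
    try:
        neighbourhood = tuple(known[(cell + d, t)] for d in (-1, 0, 1))
    except KeyError:
        return None
    return (cell, t + 1, RULE_110[neighbourhood])

if __name__ == "__main__":
    # Three propositions about time-step 106 around cell 71 ...
    known = {(70, 106): 1, (71, 106): 0, (72, 106): 1}
    # ... entail a fourth proposition, about cell 71 at time-step 107.
    print(deduce(known, 71, 106))  # -> (71, 107, 1)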
Causal structure could then be defined in terms of what logical relations 
hold between the propositions, given knowledge of the laws governing the 
system. Perhaps in one system you might find a set of four propositions A, B, 
C, D such that if you know the system's laws, you can see that AB imply C, 
and D implies A, but no other proposition or group of propositions in this set 
of four are sufficient to deduce any of the others in this set. Then in 
another system you might find a set of four propositions X, Y, Z and W such 
that WZ imply Y, and X implies W, but those are the only deductions you can 
make from within this set. In this case you can say these two different sets 
of four propositions represent instantiations of the same causal structure, 
since if you map W to A, Z to B, Y to C, and X to D, then you can see an 
isomorphism in the logical relations. That's obviously a very simple causal 
structure involving only 4 events, but one might define much more complex 
causal structures and then check if there was any subset of events in a 
system's history that matched that structure. And the propositions could be 
restricted to ones concerning events that actually did occur in the system's 
history, with no counterfactual propositions about what would have happened if 
the system's initial state had been different.
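
The four-proposition example can likewise be turned into a small check.
The sketch below (Python again, with an assumed encoding: a causal
structure as a set of entailments, each a pair of premises and a
conclusion) searches for a relabelling of events carrying one structure
onto the other; it recovers the A/B/C/D and W/Z/Y/X correspondence
described above.

# Minimal sketch, assuming Python; the encoding of a "causal structure" as a
# set of (premises, conclusion) pairs is an illustrative assumption.
from itertools import permutations

def relabel(structure, mapping):
    """Apply a relabelling of events to every entailment in the structure."""
    return {(frozenset(mapping[p] for p in premises), mapping[conclusion])
            for premises, conclusion in structure}

def find_isomorphism(s1, s2):
    """Search for a bijection between event labels carrying s1 onto s2."""
    labels1 = sorted({x for prem, c in s1 for x in prem} | {c for _, c in s1})
    labels2 = sorted({x for prem, c in s2 for x in prem} | {c for _, c in s2})
    if len(labels1) != len(labels2):
        return None
    for perm in permutations(labels2):
        mapping = dict(zip(labels1, perm))
        if relabel(s1, mapping) == s2:
            return mapping
    return None

if __name__ == "__main__":
    s1 = {(frozenset({"A", "B"}), "C"), (frozenset({"D"}), "A")}  # A&B -> C, D -> A
    s2 = {(frozenset({"W", "Z"}), "Y"), (frozenset({"X"}), "W")}  # W&Z -> Y, X -> W
    print(find_isomorphism(s1, s2))  # {'A': 'W', 'B': 'Z', 'C': 'Y', 'D': 'X'}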


For a Turing machine running a particular program the propositions might be 
things like at time-step 35 the Turing machine's read/write head moved to 
memory cell #82 and at time-step 35 the Turing machine had internal state S3 
and at time-step 35 memory cell #82 held the digit 1. I'm not sure whether 
the general rules for how the Turing machine's internal state changes from one 
step to the next should also be included among the propositions, my guess is 
you'd probably need to do so in order to ensure that different computations had 
different causal structures according to the type of definition above...so, 
you might have a proposition expressing a rule like if the Turing machine is 
in internal state S3 and its read/write head detects the digit 1, it changes 
the digit in that cell to a 0 and moves 2 cells to the left, also changing its 
internal state to S5. Then this set of four propositions would be sufficient 
to deduce some other propositions about

Re: Consciousness is information?

2009-05-13 Thread John Mikes
Jason, thanks for your reply.
Those BIG questions? IMO: typical SO WHAT ones. AND if we know?
There is one (practical?) point though: knowing some 'right(?)' answer will
reduce our danger of succumbing to underhanded assumptions that mostly involve
pressure to do what we otherwise wouldn't do.
(Like killing the religiously 'infidel', or a gynecologist, and the like.
Pay our church-tax and vote as the pastor/political leader said)
And the truth? whose?
we live in our 1-pov's mini-solipsism, limited to our own perceived reality
plus the genetic- and experience- formed ways to interpret what we got as
enrichment in the epistemic cognitive inventory and call it 'truth'.
Any further learned information is stored(?) as interpreted into our own
ways. No two persons have identical knowledge, belief, or thinking.

John M

On Tue, May 12, 2009 at 10:17 PM, Jason Resch jasonre...@gmail.com wrote:


 John,

 Great question I am glad you asked it.  I think I was driven to this
 list because of big questions, especially those which most people seem
 to believe are unanswerable.  Questions such as:  Where did this
 universe come from?  Why are we here and why am I me?  Is there a God?
  What is responsible for consciousness?  What is time?  Is there life
 after death? Etc.  After much reading and thought I am now mostly
 satisfied with the answers I have arrived at, and keeping up with this
 list and the issues people raise on various topics helps me to keep
 updating my models of reality to hopefully become more correct.  I
 think it is good mental exercise to ponder the questions people on
 this list raise, and despite all the disagreement, chains of
 assumptions, and inability to test many of the conjectures I think
 this list is slowly making progress toward truth.

 Jason


 On Tue, May 12, 2009 at 3:42 PM, John Mikes jami...@gmail.com wrote:
  Bruno,
  merci pour le nom Jean Cocteau. J'ai voulu montrer que je semble
  vivant.
  I told my young bride of 61 years (originally economist, but follows all
 the
  plaisantries I speculate on) about the assumptions you guys speculate on
 and
  connect to assumptions of assumptions,  Torgny the zombie, Stephen
 Leibnitz'
  Monads, you numbers, others Q-immortality/suicide and partial
 teleportation
  at the level of highest science - and she asked -
  (because she believes in her love that I am into all that,
 - understanding):
  What do you guys hope to achieve by all this speculation?
  I replied: it's getting late, let's go to sleep.
 
  Well??? (I believe this is the most meaningful word in English)
 
  John M
 
 
  On Tue, May 12, 2009 at 11:22 AM, Bruno Marchal marc...@ulb.ac.be
 wrote:
 
  Hi John,
 
 
 
  On 11 May 2009, at 22:49, John Mikes wrote:
 
  
   who was that French poet who made puns after death?
  
   ...
   A french poet said, after he died  (!) :  friends, pretend only to
   cry because poet pretends only to dye. (Faites semblant de pleurer
   mes amis puisque les poètes font semblant de mourrir).
  
  
 
  It is Jean Cocteau.
 
  In Le Testament d'Orphée. A movie, made by Jean Cocteau, where he
  plays the role of the dying poet. I am not entirely sure of the total
  correctness of the quote. It could be Faites semblant de pleurer mes
  amis puisque les poètes ne font que semblant d'être mort.
 
  Best,
 
  Bruno
 
 
 
  http://iridia.ulb.ac.be/~marchal/
 
 
 
 
 
  
 

  





Re: Consciousness is information?

2009-05-13 Thread Bruno Marchal
John,


On 12 May 2009, at 22:42, John Mikes wrote:

 (because she believes in her love that I am into all that, -  
 understanding):
 What do you guys hope to achieve by all this speculation?


I think there is a difference between speculating on the truth of some
theories, and just trying to make those theories as clear as possible,
so that we can derive some observable consequences and make a test,
and so, with luck, be able to abandon an erroneous speculation/
theory.

And normally UDA shows that we cannot be consistent and still
speculate on primary substance and on mechanism simultaneously, as
we have tended to do for a long time. And AUDA shows a way to test
mechanism indeed.

I don't much like the word speculation, because it can be used
pejoratively, and people, when attributing it to you, believe that
you are making some new extraordinary assumption, when, personally, I
try to show that amazing things already arrive quickly from a very
simple, common assumption believed by almost everybody (that our
bodies obey computable laws).

Comp is a speculation, but it is far less speculative than any non- 
comp theory, which has to postulate actual infinities in the mind.

Of course on this list we are ambitious in the spectrum of what we  
want to figure out. It is fundamental research.

But many are just modestly searching. I guess most know that theories
are just ways to put some light on some part of the unknown, so that
we can continue the exploration.

What do we hope for? No more, no less than those who put Hubble in space.
We hope to see big and beautiful things.

Bruno





http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-13 Thread Stathis Papaioannou

2009/5/13 John Mikes jami...@gmail.com:
 Bruno,
 merci pour le nom Jean Cocteau. J'ai voulu montrer que je semble
 vivant.
 I told my young bride of 61 years (originally economist, but follows all the
 plaisantries I speculate on) about the assumptions you guys speculate on and
 connect to assumptions of assumptions,  Torgny the zombie, Stephen Leibnitz'
 Monads, you numbers, others Q-immortality/suicide and partial teleportation
 at the level of highest science - and she asked -
 (because she believes in her love that I am into all that, - understanding):
 What do you guys hope to achieve by all this speculation?
 I replied: it's getting late, let's go to sleep.

 Well??? (I believe this is the most meaningful word in English)

Mainly it's just fun; but it's also profoundly important from a
practical point of view if, for example, other people are zombies or
we are all immortal (in a non-living-dead sort of way), no?


-- 
Stathis Papaioannou




Re: Consciousness is information?

2009-05-12 Thread Bruno Marchal


Hi Torgny,

 I come from Stockholm, Sweden.  I was constructed by my parents.  In
 reality I think that all humans are zombies, but because I am a polite
 person, I do not tell the other zombies that they are zombies.  I do  
 not
 want to hurt the other zombies by telling them the truth.


I guess you know that Sweden is the main country of snus, that
delicious oral tobacco product. Now, if there is one thing easy to
imitate, for a zombie, it is the discreet enjoyment of snus. But why
would a zombie ever discreetly fake, for itself, the pleasure of
consuming snus? I can understand a young zombie faking smoking a
cigarette, with the goal of faking faking being an adult, but why
would an adult zombie ever fake, alone, at home, snusing some tobacco?

Here I use the Belgo-African Makla Ifrikia, cheaper and stronger. It
helped me to quit smoking (tobacco). I enjoy it, and although I cannot
prove it to you, I don't fake the enjoyment. Nobody can even see that
pure first-person pleasure. Very useful for consuming tobacco in
public places, where it is forbidden almost everywhere nowadays.

Surely you are joking, mister zombie,

Best,

Bruno


http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-12 Thread Bruno Marchal

Hi John,



On 11 May 2009, at 22:49, John Mikes wrote:


 who was that French poet who made puns after death?

 ...
 A french poet said, after he died  (!) :  friends, pretend only to
 cry because poet pretends only to dye. (Faites semblant de pleurer
 mes amis puisque les poètes font semblant de mourrir).



It is Jean Cocteau.

In Le Testament d'Orphée. A movie, made by Jean Cocteau, where he  
plays the role of the dying poet. I am not entirely sure of the total  
correctness of the quote. It could be Faites semblant de pleurer mes  
amis puisque les poètes ne font que semblant d'être mort.

Best,

Bruno



http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-12 Thread Torgny Tholerus

Bruno Marchal skrev:
 On 08 May 2009, at 19:15, Torgny Tholerus wrote:
   
 Bruno Marchal skrev:
 
 On 07 May 2009, at 18:29, Torgny Tholerus wrote:
   
 Yes it is right.  There is no infinity of natural numbers.  But the
 natural numbers are UNLIMITED, you can construct as many natural
 numbers as you want.  But how many numbers you construct, the  
 number of
 numbers will always be finite.  You can never construct an  
 infinite number of
 natural numbers.
 
 This is no more ultrafinitism. Just the usal finitism or  
 intuitionism.
 It seems I recall you have had a stronger view on this point.
 Ontologically I am neutral on this question. With comp I don't need
 any actual infinity in the third person ontology. Infinities are not
 avoidable from inside, at least when the inside view begins some  
 self-reflexion studies.
   
 I was an ultrafinitist before, but I have changed my mind.
 
 Excellent. The ability of changing its mind is a wonderful gift.
   


It was the Mathematical Universe that made me change my mind:

Earlier I was convinced that the number of time steps in the universe
was explicitly finite, that time goes in a circle.

But the Mathematical Universe says that all mathematically possible
universes exist.  And it is possible to construct an EXPANDING
universe, where you have a simple rule stating that the status of a
space-time point is a combination of the statuses of the neighboring
space-time points at the previous time point.  In this universe it
will never happen that the same space is repeated at a later time,
because the space consists of more space points at the later time.  So
in that case the universe is UNLIMITED, it will never stop, but will
continue for ever...
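
A minimal toy sketch of such an expanding universe, assuming Python and
an arbitrary illustrative rule (XOR of the two neighbours, with missing
neighbours beyond the edge counted as 0): the space gains one point at
each end on every time step, so no earlier space can ever recur, since
later spaces simply contain more points.

# Toy sketch only; the rule and names are assumptions for illustration.
def step(space):
    """Each point at the next time is a combination (here XOR) of its
    neighbours at the previous time; the space grows by one point per side."""
    padded = [0, 0] + space + [0, 0]
    return [padded[i - 1] ^ padded[i + 1] for i in range(1, len(padded) - 1)]

if __name__ == "__main__":
    space = [1]                      # the initial, one-point space
    for t in range(6):
        print(t, space)
        space = step(space)          # each later space has two more points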

-- 
Torgny Tholerus




Re: Consciousness is information?

2009-05-12 Thread John Mikes
Bruno,
thanks for the name Jean Cocteau. I wanted to show that I seem
alive.
I told my young bride of 61 years (originally economist, but follows all the
plaisantries I speculate on) about the assumptions you guys speculate on and
connect to assumptions of assumptions,  Torgny the zombie, Stephen Leibnitz'
Monads, you numbers, others Q-immortality/suicide and partial teleportation
at the level of highest science - and she asked -
(because she believes in her love that I am into all that, - understanding):

What do you guys hope to achieve by all this speculation?
I replied: it's getting late, let's go to sleep.

Well??? (I believe this is the most meaningful word in English)

John M



On Tue, May 12, 2009 at 11:22 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 Hi John,



 On 11 May 2009, at 22:49, John Mikes wrote:

 
  who was that French poet who made puns after death?
 
  ...
  A french poet said, after he died  (!) :  friends, pretend only to
  cry because poet pretends only to dye. (Faites semblant de pleurer
  mes amis puisque les poètes font semblant de mourrir).
 
 

 It is Jean Cocteau.

 In Le Testament d'Orphée. A movie, made by Jean Cocteau, where he
 plays the role of the dying poet. I am not entirely sure of the total
 correctness of the quote. It could be Faites semblant de pleurer mes
 amis puisque les poètes ne font que semblant d'être mort.

 Best,

 Bruno



 http://iridia.ulb.ac.be/~marchal/









Re: Consciousness is information?

2009-05-12 Thread Jason Resch

John,

Great question I am glad you asked it.  I think I was driven to this
list because of big questions, especially those which most people seem
to believe are unanswerable.  Questions such as:  Where did this
universe come from?  Why are we here and why am I me?  Is there a God?
 What is responsible for consciousness?  What is time?  Is there life
after death? Etc.  After much reading and thought I am now mostly
satisfied with the answers I have arrived at, and keeping up with this
list and the issues people raise on various topics helps me to keep
updating my models of reality to hopefully become more correct.  I
think it is good mental exercise to ponder the questions people on
this list raise, and despite all the disagreement, chains of
assumptions, and inability to test many of the conjectures I think
this list is slowly making progress toward truth.

Jason


On Tue, May 12, 2009 at 3:42 PM, John Mikes jami...@gmail.com wrote:
 Bruno,
 merci pour le nom Jean Cocteau. J'ai voulu montrer que je semble
 vivant.
 I told my young bride of 61 years (originally economist, but follows all the
 plaisantries I speculate on) about the assumptions you guys speculate on and
 connect to assumptions of assumptions,  Torgny the zombie, Stephen Leibnitz'
 Monads, you numbers, others Q-immortality/suicide and partial teleportation
 at the level of highest science - and she asked -
 (because she believes in her love that I am into all that, - understanding):
 What do you guys hope to achieve by all this speculation?
 I replied: it's getting late, let's go to sleep.

 Well??? (I believe this is the most meaningful word in English)

 John M


 On Tue, May 12, 2009 at 11:22 AM, Bruno Marchal marc...@ulb.ac.be wrote:

 Hi John,



 On 11 May 2009, at 22:49, John Mikes wrote:

 
  who was that French poet who made puns after death?
 
  ...
  A french poet said, after he died  (!) :  friends, pretend only to
  cry because poet pretends only to dye. (Faites semblant de pleurer
  mes amis puisque les poètes font semblant de mourrir).
 
 

 It is Jean Cocteau.

 In Le Testament d'Orphée. A movie, made by Jean Cocteau, where he
 plays the role of the dying poet. I am not entirely sure of the total
 correctness of the quote. It could be Faites semblant de pleurer mes
 amis puisque les poètes ne font que semblant d'être mort.

 Best,

 Bruno



 http://iridia.ulb.ac.be/~marchal/





 





Re: Consciousness is information?

2009-05-11 Thread John Mikes
Bruno,
who was that French poet who made puns after death?
JohnM

On Sun, May 10, 2009 at 3:04 PM, Bruno Marchal marc...@ulb.ac.be wrote:



 On 08 May 2009, at 19:15, Torgny Tholerus wrote:

 
  Bruno Marchal skrev:
  On 07 May 2009, at 18:29, Torgny Tholerus wrote:
 
 
  Bruno Marchal skrev:
 
 
  you are human, all right?
 
  I look exactly as a human.  When you look at me, you will not be
  able to know if I am a human or a zombie, because I behave exacly
  like a
  human.
 
  So you believe that human are not zombie, and you agree that you are
  not human.
  Where do you come from? Vega? Centaur?
 
 
  I come from Stockholm, Sweden.  I was constructed by my parents.  In
  reality I think that all humans are zombies, but because I am a polite
  person, I do not tell the other zombies that they are zombies.  I do
  not
  want to hurt the other zombies by telling them the truth.

 Truth? you mean your theory. As far as I know, you may be a zombie,
 although I believe that you are conscious and only believe you are a
 zombie.
 Or you could suffer from a sort of radical blindsight, making you
 belief you lack consciousness. You should perhaps consult.
 And I appreciate very much your attempt to be polite, and your
 willingness to not hurt other ... zombie.

 but you should not worry, because if we are zombie, we will only fake
 being hurt, you know.

 *A french poet* *said, after he died  (!)* :  friends, pretend only to
 cry because poet pretends only to dye. *(Faites semblant de pleurer
 mes amis puisque les poêtes font semblant de mourrir).
 *

truncated







Re: Consciousness is information?

2009-05-10 Thread Torgny Tholerus

Quentin Anciaux skrev:
 Hi,

 2009/5/8 Torgny Tholerus tor...@dsv.su.se:
   
 I was an ultrafinitist before, but I have changed my mind.  Now I accept
 that you can say that the natural numbers are unlimited.  I only deny
 actual infinities.  The set of all natural numbers are always finite,
 but you can always increase the set of all natural number by adding more
 natural numbers to it.
 
 Then it's not the set of *all* natural numbers. You do nothing by
 adding a number... you don't create numbers by writing them down, you
 don't invent properties about them, it's absurd... especially for a
 zombie.
   

What do you mean by *all*?  How do you define *all*?  Can you give a 
definition that is not a circular definition?

-- 
Torgny




Re: Consciousness is information?

2009-05-10 Thread Bruno Marchal


On 08 May 2009, at 19:15, Torgny Tholerus wrote:


 Bruno Marchal skrev:
 On 07 May 2009, at 18:29, Torgny Tholerus wrote:


 Bruno Marchal skrev:


 you are human, all right?

 I look exactly as a human.  When you look at me, you will not be
 able to know if I am a human or a zombie, because I behave exacly  
 like a
 human.

 So you believe that human are not zombie, and you agree that you are
 not human.
 Where do you come from? Vega? Centaur?


 I come from Stockholm, Sweden.  I was constructed by my parents.  In
 reality I think that all humans are zombies, but because I am a polite
 person, I do not tell the other zombies that they are zombies.  I do  
 not
 want to hurt the other zombies by telling them the truth.

Truth? you mean your theory. As far as I know, you may be a zombie,  
although I believe that you are conscious and only believe you are a  
zombie.
Or you could suffer from a sort of radical blindsight, making you  
belief you lack consciousness. You should perhaps consult.
And I appreciate very much your attempt to be polite, and your  
willingness to not hurt other ... zombie.

but you should not worry, because if we are zombie, we will only fake  
being hurt, you know.

A french poet said, after he died  (!) :  friends, pretend only to  
cry because poet pretends only to dye. (Faites semblant de pleurer  
mes amis puisque les poêtes font semblant de mourrir).




 Yes it is right.  There is no infinity of natural numbers.  But the
 natural numbers are UNLIMITED, you can construct as many natural
 numbers as you want.  But how many numbers you construct, the  
 number of
 numbers will always be finite.  You can never construct an  
 infinite number of
 natural numbers.

 This is no more ultrafinitism. Just the usal finitism or  
 intuitionism.
 It seems I recall you have had a stronger view on this point.
 Ontologically I am neutral on this question. With comp I don't need
 any actual infinity in the third person ontology. Infinities are not
 avoidable from inside, at least when the inside view begins some  
 self-
 reflexion studies.


 I was an ultrafinitist before, but I have changed my mind.


Excellent. The ability of changing its mind is a wonderful gift.



 Now I accept
 that you can say that the natural numbers are unlimited.  I only deny
 actual infinities.

I can deny them ontologically, and with comp their existence is
absolutely undecidable. Yet they are also unavoidable on the
epistemological plane, once we search for truth.




 The set of all natural numbers are always finite,

Of course, but you mean the constructed natural numbers. You stay at the
1-pov. I have no problem translating.



 but you can always increase the set of all natural number by adding  
 more
 natural numbers to it.

Life will be harder.




 An ordinary computer can never be arithmetically unsound.

 ? (this seems to me plainly false, unless you mean perfect for
 ordinary. But computers can be as unsound as you and me.
 There is no vaccine against soundness: all computers can be unsound
 soo or later. there is no perfect computer. Most gods are no immune,
 you have to postulate the big unnameable One and be very near to It,
 to have some guaranty ... if any ...


 OK, I misunderstood what you meant by unsound, I thougth you meant
 something like unlogical.  But now I see that you mean something  
 like
 irrational.  And I sure am irrational.

By unsound I meant that you believe in some false arithmetical  
proposition. But trivially so, and by using intuitionist arithmetic,  
and modal logics, you could make your point.





 I do not want to be tortured, I behave as if I try to avoid that as
 strongly as I can.  Because I behave in this way, I answer no to
 your question, because that answer will decrease the probability  
 of you
 torturing me.

 Do you realize that to defend your point you are always in the
 obligation, when talking about any first person notion, like
 consciousness, fear, desire, to add I behave like . But if you
 can do that successfully you will make me doubt that you are a  
 zombie.
 Or ... do you think a zombie could eventually find a correct theory  
 of
 consciousness, so that he can correctly fake consciousness, and  
 delude
 the humans?


 An intelligent zombie can correctly fake consciousness, and I am an
 intelligent zombie.


How could a zombie know that he correctly fakes consciousness?





 3) Do you have any sort-of feeling, insight, dreams, impression,
 sensations, subjective or mental life, ... ?

 I behave as if I have sort-of feelings, I behave as if I have
 insights, I behave as if I have dreams, I behave as if I have
 impressions, I behave as if I have sensations, I behave as if I  
 have a
 subjective or mental life, ...

 As I said. But if you know that, I mean if you can behave like if you
 were knowing that, it would mean that such words do have some meaning
 for you.

 How can you know that you are not conscious? Why do you behave like  
 

Re: Consciousness is information?

2009-05-08 Thread Bruno Marchal


On 07 May 2009, at 18:29, Torgny Tholerus wrote:


 Bruno Marchal skrev:
 On 06 May 2009, at 11:35, Torgny Tholerus wrote:


 Bruno Marchal skrev:

 Someone unconscious cannot doubt either ... (A zombie can only fake
 doubts)

 Yes, you are right.  I can only fake doubts...




 I suspect you are faking faking doubts, but of course I cannot  
 provide
 any argument.
 I mean it is hard for me to believe that you are a zombie, still less
 a zombie conscious to be a zombie!


 I am a zombie that behaves AS IF it knows that it is a zombie.


OK. Meaning you don't know that you are a zombie. But you know nothing.
It is a good thing to link consciousness and knowledge.











 When you say yes to the doctor, we
 assume the yes is related to the belief that you will survive.  
 This
 means you believe that you will not loose consciousness, not  
 become a
 zombie, nor will you loose (by assumption) your own  
 consciousness, by
 becoming someone else you can't identify with.

 I can say yes to the doctor, because it will not be any difference
 for me, I will still be a zombie afterwards...





  I don't know if you do this to please me, but you illustrate quite
 well the Löbian consciousness theory.
 Indeed the theory says that consciousness can be very well
 approximated logically by consistency.
 So a human (you are human, all right?

 I look exactly as a human.  When you look at me, you will not be  
 able to
 know if I am a human or a zombie, because I behave exacly like a  
 human.


So you believe that humans are not zombies, and you agree that you are
not human.
Where do you come from? Vega? Centaur?






 ) who says I am a zombie, means
 I am not conscious, which can mean I am not consistent.
 By Gödel's second theorem, you remain consistent(*), but you loose
 arithmetical soundness, which is quite coherent with your
 ultrafinitism. If I remember well, you don't believe that there is an
 infinity of natural numbers, right?


 Yes it is right.  There is no infinity of natural numbers.  But the
 natural numbers are UNLIMITED, you can construct as many natural  
 numbers
 as you want.  But how many numbers you construct, the number of  
 numbers
 will always be finite.  You can never construct an infinite number of
 natural numbers.


This is no more ultrafinitism. Just the usual finitism or intuitionism.
It seems I recall you have had a stronger view on this point.
Ontologically I am neutral on this question. With comp I don't need  
any actual infinity in the third person ontology. Infinities are not  
avoidable from inside, at least when the inside view begins some self- 
reflexion studies.





 We knew already you are not arithmetically sound.  Nevertheless it is
 amazing that you pretend that you are a zombie. This confirms, in the
 lobian frame, that you are a zombie. I doubt all ultrafinitists are
 zombie, though.

 It is coherent with what I tell you before: I don't think a real
 ultrafinitist can know he/she is an ultrafinitist. No more than a
 zombie can know he is a zombie, nor even give any meaning to a word
 like zombie.

 My diagnostic: you are a consistent, but arithmetically unsound,
 Löbian machine. No problem.


 An ordinary computer can never be arithmetically unsound.


? (This seems to me plainly false, unless by ordinary you mean
perfect. But computers can be as unsound as you and me.
There is no vaccine against unsoundness: all computers can become
unsound sooner or later; there is no perfect computer. Most gods are
not immune; you have to postulate the big unnameable One, and be very
near to It, to have some guarantee ... if any ...






 So I am not
 arithmetically unsound.  I am build by a finite number of atoms, and  
 the
 atoms are build by a finite number of elementary parts.  (And these
 elementary parts are just finite mathematics...)

The inconsistency of this follows from the seventh step. You are always
under the spell of the Galois connection between what you can be here
and now and the space of possibilities there and elsewhere.
The more you are 3-finite, the more you are 1-infinite.
That is why you are quite coherent in saying that you are a zombie.
Zombies lack first personhood.




 There are not many zombies around me, still fewer argue that they are
 zombie, so I have some questions for you, if I may.

 1) Do you still answer yes to the doctor if he proposes to substitute
 your brain by a sponge?


 If the sponge behaves exactly in the same way as my current brain,  
 then
 it will be OK.


Why do you care about your behavior? This remains unclear to me.
Well, you will tell me that you behave as if you were caring, but
that you don't really care ...




 2) Do humans have the right to torture zombie?


 Does an ordinary computer have the right to do anything?


I don't think a computer has the right to cross a red light, nor does a
computer have the right to smoke salvia in my country. Now that you
ask, I am not sure. If I am arrested for having some 

Re: Consciousness is information?

2009-05-08 Thread Torgny Tholerus

Bruno Marchal skrev:
 On 07 May 2009, at 18:29, Torgny Tholerus wrote:

   
 Bruno Marchal skrev:
 

 you are human, all right?
   
 I look exactly as a human.  When you look at me, you will not be  
 able to know if I am a human or a zombie, because I behave exacly like a  
 human.
 
 So you believe that human are not zombie, and you agree that you are  
 not human.
 Where do you come from? Vega? Centaur?
   

I come from Stockholm, Sweden.  I was constructed by my parents.  In 
reality I think that all humans are zombies, but because I am a polite 
person, I do not tell the other zombies that they are zombies.  I do not 
want to hurt the other zombies by telling them the truth.

 Yes it is right.  There is no infinity of natural numbers.  But the
 natural numbers are UNLIMITED, you can construct as many natural  
 numbers as you want.  But how many numbers you construct, the number of  
 numbers will always be finite.  You can never construct an infinite number of
 natural numbers.
 
 This is no more ultrafinitism. Just the usal finitism or intuitionism.  
 It seems I recall you have had a stronger view on this point.
 Ontologically I am neutral on this question. With comp I don't need  
 any actual infinity in the third person ontology. Infinities are not  
 avoidable from inside, at least when the inside view begins some self- 
 reflexion studies.
   

I was an ultrafinitist before, but I have changed my mind.  Now I accept
that you can say that the natural numbers are unlimited.  I only deny
actual infinities.  The set of all natural numbers is always finite,
but you can always increase the set of all natural numbers by adding more
natural numbers to it.
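
A tiny sketch, assuming Python (names are illustrative only), of this
finite-but-unlimited picture: at every stage the constructed set of
natural numbers is finite, yet there is no stage at which it stops
being extendable.

# Toy illustration only: potential (unlimited) versus actual infinity.
def growing_naturals():
    """Yield ever larger, but always finite, sets of constructed naturals."""
    constructed = set()
    n = 0
    while True:              # unlimited: the construction never has to stop
        constructed.add(n)
        yield len(constructed), sorted(constructed)  # finite at every stage
        n += 1

if __name__ == "__main__":
    stages = growing_naturals()
    for _ in range(5):
        size, so_far = next(stages)
        print(size, so_far)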

 An ordinary computer can never be arithmetically unsound.
 
 ? (this seems to me plainly false, unless you mean perfect for  
 ordinary. But computers can be as unsound as you and me.
 There is no vaccine against soundness: all computers can be unsound  
 soo or later. there is no perfect computer. Most gods are no immune,  
 you have to postulate the big unnameable One and be very near to It,  
 to have some guaranty ... if any ...
   

OK, I misunderstood what you meant by unsound; I thought you meant
something like illogical.  But now I see that you mean something like
irrational.  And I sure am irrational.


 I do not want to be tortured, I behave as if I try to avoid that as
 strongly as I can.  Because I behave in this way, I answer no to  
 your question, because that answer will decrease the probability of you
 torturing me.
 
 Do you realize that to defend your point you are always in the  
 obligation, when talking about any first person notion, like  
 consciousness, fear, desire, to add I behave like . But if you  
 can do that successfully you will make me doubt that you are a zombie.
 Or ... do you think a zombie could eventually find a correct theory of  
 consciousness, so that he can correctly fake consciousness, and delude  
 the humans?
   

An intelligent zombie can correctly fake consciousness, and I am an 
intelligent zombie.

 3) Do you have any sort-of feeling, insight, dreams, impression,
 sensations, subjective or mental life, ... ?
   
 I behave as if I have sort-of feelings, I behave as if I have
 insights, I behave as if I have dreams, I behave as if I have
 impressions, I behave as if I have sensations, I behave as if I have a
 subjective or mental life, ...
 
 As I said. But if you know that, I mean if you can behave like if you  
 were knowing that, it would mean that such words do have some meaning  
 for you.

 How can you know that you are not conscious? Why do you behave like if  
 you are conscious, and then confess to us that you are not. Why  
 don't you behave like if you were not conscious. Should not a zombie  
 defend the idea that he is conscious, if he behaves like if he was  
 conscious.

If you ask me if I am conscious, I will reply yes.  But I am so 
intelligent that I can look at myself from the outside, and then I 
understand why I behave like I do.  I can see that all my behaviour is 
explained by chemical reactions in my brain, and there is no more than 
that.  So when I talk about myself on the meta level, then I can say 
that I have no consciousness.  But most people are not intelligent 
enough to realize that.

-- 
Torgny Tholerus




Re: Consciousness is information?

2009-05-08 Thread Quentin Anciaux

Hi,

2009/5/8 Torgny Tholerus tor...@dsv.su.se:

 Bruno Marchal skrev:
 On 07 May 2009, at 18:29, Torgny Tholerus wrote:


 Bruno Marchal skrev:


 you are human, all right?

 I look exactly as a human.  When you look at me, you will not be
 able to know if I am a human or a zombie, because I behave exacly like a
 human.

 So you believe that human are not zombie, and you agree that you are
 not human.
 Where do you come from? Vega? Centaur?


 I come from Stockholm, Sweden.  I was constructed by my parents.  In
 reality I think that all humans are zombies, but because I am a polite
 person, I do not tell the other zombies that they are zombies.  I do not
 want to hurt the other zombies by telling them the truth.

If we are zombies... you cannot hurt us; a zombie can't be hurt, a
zombie is a thing, a zombie is totally like a rock from its inner
life pov. A zombie can't think, a zombie can't behave as if from its
point of view, because a zombie has no point of view.

 Yes it is right.  There is no infinity of natural numbers.  But the
 natural numbers are UNLIMITED, you can construct as many natural
 numbers as you want.  But how many numbers you construct, the number of
 numbers will always be finite.  You can never construct an infinite number 
 of
 natural numbers.

 This is no more ultrafinitism. Just the usal finitism or intuitionism.
 It seems I recall you have had a stronger view on this point.
 Ontologically I am neutral on this question. With comp I don't need
 any actual infinity in the third person ontology. Infinities are not
 avoidable from inside, at least when the inside view begins some self-
 reflexion studies.


 I was an ultrafinitist before, but I have changed my mind.  Now I accept
 that you can say that the natural numbers are unlimited.  I only deny
 actual infinities.  The set of all natural numbers are always finite,
 but you can always increase the set of all natural number by adding more
 natural numbers to it.

Then it's not the set of *all* natural numbers. You do nothing by
adding a number... you don't create numbers by writing them down, you
don't invent properties about them, it's absurd... especially for a
zombie.

 An ordinary computer can never be arithmetically unsound.

 ? (this seems to me plainly false, unless you mean perfect for
 ordinary. But computers can be as unsound as you and me.
 There is no vaccine against soundness: all computers can be unsound
 soo or later. there is no perfect computer. Most gods are no immune,
 you have to postulate the big unnameable One and be very near to It,
 to have some guaranty ... if any ...


 OK, I misunderstood what you meant by unsound, I thougth you meant
 something like unlogical.  But now I see that you mean something like
 irrational.  And I sure am irrational.

You're not, remember you're a zombie hence there is no *you*.


 I do not want to be tortured, I behave as if I try to avoid that as
 strongly as I can.  Because I behave in this way, I answer no to
 your question, because that answer will decrease the probability of you
 torturing me.

 Do you realize that to defend your point you are always in the
 obligation, when talking about any first person notion, like
 consciousness, fear, desire, to add I behave like . But if you
 can do that successfully you will make me doubt that you are a zombie.
 Or ... do you think a zombie could eventually find a correct theory of
 consciousness, so that he can correctly fake consciousness, and delude
 the humans?


 An intelligent zombie can correctly fake consciousness, and I am an
 intelligent zombie.

A zombie is not intelligent, a zombie simply isn't. There is no
consciousness in a zombie by definition, so a zombie is not and can't
be anything.

 3) Do you have any sort-of feeling, insight, dreams, impression,
 sensations, subjective or mental life, ... ?

 I behave as if I have sort-of feelings, I behave as if I have
 insights, I behave as if I have dreams, I behave as if I have
 impressions, I behave as if I have sensations, I behave as if I have a
 subjective or mental life, ...

 As I said. But if you know that, I mean if you can behave like if you
 were knowing that, it would mean that such words do have some meaning
 for you.

 How can you know that you are not conscious? Why do you behave like if
 you are conscious, and then confess to us that you are not. Why
 don't you behave like if you were not conscious. Should not a zombie
 defend the idea that he is conscious, if he behaves like if he was
 conscious.

 If you ask me if I am conscious, I will reply yes.  But I am so
 intelligent

You're not, you are a zombie. There is no you.

 that I can look at myself from the outside,

You can't, you have no self.

 and then I
 understand why I behave like I do.
 I can see that all my behaviour is

You can't, there is no you and you can't see anything, you are a zombie.

 explained by chemical reactions in my brain, and there is no more than
 that.  So when I talk about 

Re: Consciousness is information?

2009-05-07 Thread Bruno Marchal


On 06 May 2009, at 11:35, Torgny Tholerus wrote:


 Bruno Marchal skrev:

 Something conscious cannot doubt about the existence of its
 consciousness, I think, although it can doubt everything else it can
 be conscious *about*.
 It is the unprovable (but coverable) fixed point of Descartes
 systematic doubting procedure (this fit well with the self-reference
 logics, taking consciousness as consistency).

 Someone unconscious cannot doubt either ... (A zombie can only fake
 doubts)

 Yes, you are right.  I can only fake doubts...



I suspect you are faking faking doubts, but of course I cannot provide
any argument.
I mean it is hard for me to believe that you are a zombie, still less
a zombie conscious of being a zombie!









 We live on the overlap of a subjective un-sharable certainty (the
 basic first person knowledge) and an objective doubtful but sharable
 possible reality (the third person belief).

 To keep 3-comp, and to abandon consciousness *is* the correct
 materialist step, indeed. But you cannot keep 1-comp(*) then, because
 it is defined
 by reference to consciousness. When you say yes to the doctor, we
 assume the yes is related to the belief that you will survive. This
 means you believe that you will not loose consciousness, not become a
 zombie, nor will you loose (by assumption) your own consciousness, by
 becoming someone else you can't identify with.

 I can say yes to the doctor, because it will not be any difference  
 for
 me, I will still be a zombie afterwards...




I don't know if you do this to please me, but you illustrate quite
well the Löbian consciousness theory.
Indeed the theory says that consciousness can be very well
approximated logically by consistency.
So a human (you are human, all right?) who says "I am a zombie" means
"I am not conscious", which can mean "I am not consistent".
By Gödel's second theorem, you remain consistent(*), but you lose
arithmetical soundness, which is quite coherent with your
ultrafinitism. If I remember well, you don't believe that there is an
infinity of natural numbers, right?

We knew already that you are not arithmetically sound.  Nevertheless it is
amazing that you pretend that you are a zombie. This confirms, in the
Löbian frame, that you are a zombie. I doubt all ultrafinitists are
zombies, though.

It is coherent with what I told you before: I don't think a real
ultrafinitist can know he/she is an ultrafinitist. No more than a
zombie can know he is a zombie, nor even give any meaning to a word
like zombie.

My diagnosis: you are a consistent, but arithmetically unsound,
Löbian machine. No problem.

There are not many zombies around me, still fewer argue that they are  
zombie, so I have some questions for you, if I may.

1) Do you still answer yes to the doctor if he proposes to substitute  
your brain by a sponge?
2) Do humans have the right to torture zombie?
3) Do you have any sort-of feeling, insight, dreams, impression,  
sensations, subjective or mental life, ... ?
4) Does the word pain have a meaning for you? In particular, what if
the doctor, who does not know that you are a zombie, proposes to you a
cheaper artificial brain, but warns you that it often produces
unpleasant, hard migraines? Still saying yes?

Bruno


(*) For example: Peano Arithmetic + "Peano Arithmetic is inconsistent"
gives a consistent theory. If not, Peano Arithmetic + "Peano Arithmetic
is inconsistent" would prove 0=1, and thus PA would prove ~("Peano
Arithmetic is inconsistent"), that is, PA would prove its own
consistency, contradicting Gödel II.
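
The same argument, spelled out in standard notation (a sketch only;
Con(PA) abbreviates the arithmetized statement that PA is consistent,
so "PA is inconsistent" is $\neg\mathrm{Con}(PA)$):

  Suppose $PA + \neg\mathrm{Con}(PA)$ were inconsistent, i.e.
  $PA + \neg\mathrm{Con}(PA) \vdash 0=1$.
  By the deduction theorem, $PA \vdash \neg\mathrm{Con}(PA) \rightarrow 0=1$,
  hence $PA \vdash \neg\neg\mathrm{Con}(PA)$, i.e. $PA \vdash \mathrm{Con}(PA)$.
  Gödel's second theorem: if $PA$ is consistent, then $PA \nvdash \mathrm{Con}(PA)$.
  Therefore, assuming $PA$ is consistent, $PA + \neg\mathrm{Con}(PA)$ is consistent.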


http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-07 Thread Torgny Tholerus

Bruno Marchal skrev:
 On 06 May 2009, at 11:35, Torgny Tholerus wrote:

   
 Bruno Marchal skrev:
 
 Someone unconscious cannot doubt either ... (A zombie can only fake
 doubts)
   
 Yes, you are right.  I can only fake doubts...
 



 I suspect you are faking faking doubts, but of course I cannot provide  
 any argument.
 I mean it is hard for me to believe that you are a zombie, still less  
 a zombie conscious to be a zombie!
   

I am a zombie that behaves AS IF it knows that it is a zombie.






   
 
 When you say yes to the doctor, we
 assume the yes is related to the belief that you will survive. This
 means you believe that you will not loose consciousness, not become a
 zombie, nor will you loose (by assumption) your own consciousness, by
 becoming someone else you can't identify with.
   
 I can say yes to the doctor, because it will not be any difference  
 for me, I will still be a zombie afterwards...
 




   I don't know if you do this to please me, but you illustrate quite  
 well the Löbian consciousness theory.
 Indeed the theory says that consciousness can be very well  
 approximated logically by consistency.
 So a human (you are human, all right?

I look exactly like a human.  When you look at me, you will not be able to
know whether I am a human or a zombie, because I behave exactly like a human.

 ) who says I am a zombie, means  
 I am not conscious, which can mean I am not consistent.
 By Gödel's second theorem, you remain consistent(*), but you loose  
 arithmetical soundness, which is quite coherent with your  
 ultrafinitism. If I remember well, you don't believe that there is an  
 infinity of natural numbers, right?
   

Yes, that is right.  There is no infinity of natural numbers.  But the
natural numbers are UNLIMITED: you can construct as many natural numbers
as you want.  But however many numbers you construct, the number of numbers
will always be finite.  You can never construct an infinite number of
natural numbers.

 We knew already you are not arithmetically sound.  Nevertheless it is  
 amazing that you pretend that you are a zombie. This confirms, in the  
 lobian frame, that you are a zombie. I doubt all ultrafinitists are  
 zombie, though.

 It is coherent with what I tell you before: I don't think a real  
 ultrafinitist can know he/she is an ultrafinitist. No more than a  
 zombie can know he is a zombie, nor even give any meaning to a word  
 like zombie.

 My diagnostic: you are a consistent, but arithmetically unsound,  
 Löbian machine. No problem.
   

An ordinary computer can never be arithmetically unsound.  So I am not
arithmetically unsound.  I am built from a finite number of atoms, and the
atoms are built from a finite number of elementary parts.  (And these
elementary parts are just finite mathematics...)

 There are not many zombies around me, still fewer argue that they are  
 zombies, so I have some questions for you, if I may.

 1) Do you still answer yes to the doctor if he proposes to substitute  
 your brain by a sponge?
   

If the sponge behaves exactly in the same way as my current brain, then 
it will be OK.

 2) Do humans have the right to torture zombies?
   

Does an ordinary computer have the right to do anything?

I do not want to be tortured, I behave as if I try to avoid that as 
strongly as I can.  Because I behave in this way, I answer no to your 
question, because that answer will decrease the probability of you 
torturing me.

 3) Do you have any sort-of feeling, insight, dreams, impression,  
 sensations, subjective or mental life, ... ?
   

I behave as if I have sort-of feelings, I behave as if I have 
insights, I behave as if I have dreams, I behave as if I have 
impressions, I behave as if I have sensations, I behave as if I have a 
subjective or mental life, ...

 4) Does the word "pain" have a meaning for you? In particular, what if  
 the doctor, who does not know that you are a zombie, proposes to you a  
 cheaper artificial brain, but warns you that it often produces  
 unpleasant hard migraines? Still saying yes?
   

No, I will say no in this case, because I avoid things that cause 
pain.  I have an avoiding center in my brain, and when this center 
in my brain is stimulated, then my behavior will be to avoid those 
things that cause this center to be stimulated.  Stimulating this 
center will cause me to say: "I feel pain."

-- 
Torgny Tholerus




RE: Consciousness is information?

2009-05-07 Thread m.a.

Perhaps apropos.

   Common let's do de zombie rock
   All around de zombie block


  http://www.dailypaul.com/node/90682




-Original Message-
From: everything-list@googlegroups.com
[mailto:everything-l...@googlegroups.com] on Behalf Of Bruno Marchal
Sent: Thursday, May 07, 2009 11:10 AM
To: everything-list@googlegroups.com
Subject: Re: Consciousness is information?




On 06 May 2009, at 11:35, Torgny Tholerus wrote:


 Bruno Marchal skrev:

 Something conscious cannot doubt the existence of its
 consciousness, I think, although it can doubt everything else it can
 be conscious *about*.
 It is the unprovable (but coverable) fixed point of Descartes
 systematic doubting procedure (this fits well with the self-reference
 logics, taking consciousness as consistency).

 Someone unconscious cannot doubt either ... (A zombie can only fake
 doubts)

 Yes, you are right.  I can only fake doubts...



I suspect you are faking faking doubts, but of course I cannot provide
any argument.
I mean it is hard for me to believe that you are a zombie, still less
a zombie conscious of being a zombie!









 We live on the overlap of a subjective un-sharable certainty (the
 basic first person knowledge) and an objective doubtful but sharable
 possible reality (the third person belief).

 To keep 3-comp, and to abandon consciousness *is* the correct
 materialist step, indeed. But you cannot keep 1-comp(*) then, because
 it is defined
 by reference to consciousness. When you say yes to the doctor, we
 assume the yes is related to the belief that you will survive. This
 means you believe that you will not lose consciousness, not become a
 zombie, nor will you lose (by assumption) your own consciousness, by
 becoming someone else you can't identify with.

 I can say yes to the doctor, because it will not make any difference
 for
 me, I will still be a zombie afterwards...




  I don't know if you do this to please me, but you illustrate quite
well the Löbian consciousness theory.
Indeed the theory says that consciousness can be very well
approximated logically by consistency.
So a human (you are human, all right?) who says "I am a zombie", means
"I am not conscious", which can mean "I am not consistent".
By Gödel's second theorem, you remain consistent(*), but you lose
arithmetical soundness, which is quite coherent with your
ultrafinitism. If I remember well, you don't believe that there is an
infinity of natural numbers, right?

We already knew you are not arithmetically sound.  Nevertheless it is
amazing that you pretend that you are a zombie. This confirms, in the
Löbian frame, that you are a zombie. I doubt all ultrafinitists are
zombies, though.

It is coherent with what I told you before: I don't think a real
ultrafinitist can know he/she is an ultrafinitist. No more than a
zombie can know he is a zombie, nor even give any meaning to a word
like "zombie".

My diagnosis: you are a consistent, but arithmetically unsound,
Löbian machine. No problem.

There are not many zombies around me, still fewer argue that they are
zombies, so I have some questions for you, if I may.

1) Do you still answer yes to the doctor if he proposes to substitute
your brain by a sponge?
2) Do humans have the right to torture zombies?
3) Do you have any sort-of feeling, insight, dreams, impression,
sensations, subjective or mental life, ... ?
4) Does the word "pain" have a meaning for you? In particular, what if
the doctor, who does not know that you are a zombie, proposes to you a
cheaper artificial brain, but warns you that it often produces
unpleasant hard migraines? Still saying yes?

Bruno


(*) For example: Peano Arithmetic + "Peano Arithmetic is inconsistent"
gives a consistent theory. If not, Peano Arithmetic + "Peano
Arithmetic is inconsistent" would prove 0=1, and thus PA would prove
~("Peano Arithmetic is inconsistent"), and thus PA would
prove its own consistency, contradicting Gödel II.


http://iridia.ulb.ac.be/~marchal/









Re: Consciousness is information?

2009-05-06 Thread Torgny Tholerus

Bruno Marchal skrev:

 Something conscious cannot doubt the existence of its 
 consciousness, I think, although it can doubt everything else it can 
 be conscious *about*.
 It is the unprovable (but coverable) fixed point of Descartes 
 systematic doubting procedure (this fits well with the self-reference 
 logics, taking consciousness as consistency).

 Someone unconscious cannot doubt either ... (A zombie can only fake 
 doubts)

Yes, you are right.  I can only fake doubts...


 We live on the overlap of a subjective un-sharable certainty (the 
 basic first person knowledge) and an objective doubtful but sharable 
 possible reality (the third person belief).

 To keep 3-comp, and to abandon consciousness *is* the correct 
 materialist step, indeed. But you cannot keep 1-comp(*) then, because 
 it is defined
 by reference to consciousness. When you say yes to the doctor, we 
 assume the yes is related to the belief that you will survive. This 
 means you believe that you will not lose consciousness, not become a 
 zombie, nor will you lose (by assumption) your own consciousness, by 
 becoming someone else you can't identify with.

I can say yes to the doctor, because it will not make any difference for 
me, I will still be a zombie afterwards...

-- 
Torgny Tholerus




Re: Consciousness is information?

2009-05-05 Thread Bruno Marchal

On 04 May 2009, at 13:31, Stathis Papaioannou wrote:


 2009/5/4 Bruno Marchal marc...@ulb.ac.be:

 ...

 It seems to me that we agree that physical supervenience leads to  
 many
 absurdities. Is your argument purely academical, or do you think it
 can be used to prevent the conclusion that physics has to be  
 explained
 by the purely mathematical notion of most probable computation as
 seen from inside, among the 2^aleph_0 computations going through the
 current states, in UD* or in arithmetic?

 I agree with you. I am not terribly happy with the conclusion, because
 it seems so weird. The only way out is, as you say, if comp is false:
 the mind is not Turing emulable, or (even weirder, perhaps incoherent)
 there is no such thing as consciousness at all.


Something conscious cannot doubt the existence of its  
consciousness, I think, although it can doubt everything else it can  
be conscious *about*.
It is the unprovable (but coverable) fixed point of Descartes  
systematic doubting procedure (this fits well with the self-reference  
logics, taking consciousness as consistency).

Someone unconscious cannot doubt either ... (A zombie can only fake  
doubts)

We live on the overlap of a subjective un-sharable certainty (the  
basic first person knowledge) and an objective doubtful but sharable  
possible reality (the third person belief).

To keep 3-comp, and to abandon consciousness *is* the correct  
materialist step, indeed. But you cannot keep 1-comp(*) then, because  
it is defined
by reference to consciousness. When you say yes to the doctor, we  
assume the yes is related to the belief that you will survive. This  
means you believe that you will not lose consciousness, not become a  
zombie, nor will you lose (by assumption) your own consciousness, by  
becoming someone else you can't identify with.




 OK, I think. Thanks for taking the time to reply!


You are welcome,

Bruno

(*)  (usual comp is a 1-comp, 3-comp is MEC-DIG-BEH in CM, for  
Digital Behaviorist Mechanism in French, in a part translated by Kim  
on the list recently)

http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-05 Thread Stephen Paul King
Hi Bruno and Members,

The comment that is made below seems to involve only a single consciousness 
and an exterior reality. Could we not recover a very similar situation if we 
consider the 1-PoV and 3-PoV relation to hold to some degree over a multitude 
of consciousnesses (a plurality)? In the plurality case, the objective doubtful but 
sharable possible reality would be composed of a large intersection of sorts 
of 3-PoV aspects that can be recognized by or mapped to a statistical or 
generic notion of a 1-PoV. No?

Onward!

Stephen 
  - Original Message - 
  From: Bruno Marchal 
  To: everything-list@googlegroups.com 
  Sent: Tuesday, May 05, 2009 1:33 PM
  Subject: Re: Consciousness is information?




  snip

  Something conscious cannot doubt the existence of its consciousness, I 
think, although it can doubt everything else it can be conscious *about*.
  It is the unprovable (but coverable) fixed point of Descartes systematic 
doubting procedure (this fits well with the self-reference logics, taking 
consciousness as consistency).


  Someone unconscious cannot doubt either ... (A zombie can only fake doubts)


  We live on the overlap of a subjective un-sharable certainty (the basic first 
person knowledge) and an objective doubtful but sharable possible reality (the 
third person belief).


  To keep 3-comp, and to abandon consciousness *is* the correct materialist 
step, indeed. But you cannot keep 1-comp(*) then, because it is defined
  by reference to consciousness. When you say yes to the doctor, we assume 
the yes is related to the belief that you will survive. This means you 
believe that you will not lose consciousness, not become a zombie, nor will 
you lose (by assumption) your own consciousness, by becoming someone else you 
can't identify with.



Re: Consciousness is information?

2009-05-05 Thread Jason Resch

On Sun, May 3, 2009 at 10:56 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 With just arithmetic, when we stop to postulate a primitive or
 ontological material world, all primitive ad-hocness is removed, given
 that the existing internal interpretations are all determined, with
 their relative frequency, by addition and multiplication rules, and
 physics will be defined by the (absolute) probability of relative
 computations (here = probability of relative number theoretical
 relations).


Bruno,

In other posts I have seen you mention that the rule of succession is
not enough, that addition and multiplication are needed.  Why is it
that it stops at multiplication, and not exponentiation or tetration?
Is it enough to say some form of iteration + succession are required?
(e.g. a for loop with succession gives addition, a for loop with
addition yields multiplication, etc.)
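
To make the "for loop" picture concrete, here is a small sketch (plain
Python, purely illustrative of the hierarchy described in the parenthesis
above; the function names are made up):

# Hypothetical illustration: each operation is one bounded loop over the
# previous one, starting from the successor function.

def succ(n):
    return n + 1

def add(a, b):            # iterate succ b times
    result = a
    for _ in range(b):
        result = succ(result)
    return result

def mul(a, b):            # iterate add b times
    result = 0
    for _ in range(b):
        result = add(result, a)
    return result

def power(a, b):          # iterate mul b times
    result = 1
    for _ in range(b):
        result = mul(result, a)
    return result

assert add(2, 3) == 5 and mul(2, 3) == 6 and power(2, 3) == 8

Each level is just one more bounded loop over the level below; whether this
informal "iteration + succession" picture already gives Turing universality
is exactly what is taken up in Bruno's answer (the need for induction, or
for full addition and multiplication).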

Jason




Re: Consciousness is information?

2009-05-05 Thread Bruno Marchal


On 05 May 2009, at 22:31, Jason Resch wrote:


 On Sun, May 3, 2009 at 10:56 AM, Bruno Marchal marc...@ulb.ac.be  
 wrote:


 With just arithmetic, when we stop to postulate a primitive or
 ontological material world, all primitive ad-hocness is removed,  
 given
 that the existing internal interpretations are all determined, with
 their relative frequency, by addition and multiplication rules, and
 physics will be defined by the (absolute) probability of relative
 computations (here = probability of relative number theoretical
 relations).


 Bruno,

 In other posts I have seen you mention that the rule of succession is
 not enough, that addition and multiplication are needed.  Why is it
 that it stops at multiplication, and not exponentiation or tetration?
 Is it enough to say some form of iteration + succession are required?
 (e.g. a for loop with succession gives addition, a for loop with
 addition yields multiplication, etc.)

 Jason



It is due to the fact that, when formalized (in first order logic,  
say) Turing Universality begins with addition and multiplication (you  
don't even need succession). Then you can define exponentiation,  
tetration, etc. All partial recursive function can then be defined.

Succession + addition, or succession + multiplication, are not Turing  
Universal, and leads indeed to decidable theories.

For the ontology we need no more than a universal system. It  
determines the universal dovetailing.

For the epistemology we need succession, addition, multiplication  
and the axioms of induction. This gives a notion of universal system  
together with its internal self-aware substructures played by the  
Löbian machines and their consistent extensions (the believers in  
induction), simulated by the universal systems. Those internal  
machines will develop far beyond simple induction though. The  
general internal view (the first person plenitude) is not axiomatisable.

Iteration and succession? I don't think so. You need induction. With  
induction it is Turing universal, but not without, I think. It could  
depend on how you formalize the iteration rule, but without induction  
and staying in first order logic, that would astonish me.

The crazy thing, not so simple to prove, is that even without  
induction, addition + multiplication is Turing universal. You bypass  
the role of induction by defining finite sequences through Gödel's beta  
function and an ingenious use of the Chinese Remainder Theorem.

Far easier to prove, without induction, is that addition+multiplication 
+exponentiation is Turing universal, but thanks to Gödel's beta  
function you can eliminate exponentiation. If you know Gödel's  
original numbering you can guess why.
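
For the record, the beta-function trick alluded to here is standard
textbook material and can be stated as follows:

\[
\beta(a,b,i) \;=\; a \bmod \bigl(1+(i+1)\,b\bigr).
\]

By the Chinese Remainder Theorem, for every finite sequence
$x_0, x_1, \dots, x_n$ of natural numbers there exist $a$ and $b$ with
$\beta(a,b,i) = x_i$ for all $i \le n$, and the graph of $\beta$ is
definable from addition and multiplication alone:

\[
\beta(a,b,i) = x \;\Longleftrightarrow\; \exists q\,\bigl[\,a = q\cdot(1+(i+1)\,b) + x \;\wedge\; x < 1+(i+1)\,b\,\bigr],
\]

which is how finite sequences, and hence computations, get coded without
exponentiation.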

Bruno






 

http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-04 Thread Stathis Papaioannou

2009/5/4 Bruno Marchal marc...@ulb.ac.be:

 in the same way a
 message is obscured if encoded with a one-time pad that is
 subsequently destroyed and forgotten. In fact, even with the
 store-bought computer the computation is obscured if there are no
 intelligent beings around who can understand it.


 Not at all. If the computer evaluate fact(4), even alone in a room,
 the probability it gives 24 is one, in a verifiable way by a third
 person. With or without physicalism we accept the idea that the
 physical neighborhood is locally Turing universal, and does interpret
 the computation of fact(4).

Sure, the computer is evaluating fact(4) even when no-one can
understand it, but it is obscured because no-one can understand it. An
intelligent person who has never seen a computer before may eventually
figure it out, and computers are as a broad generalisation designed to
follow understandable patterns in their architecture, just as written
languages are designed (or evolve) to follow recognisable patterns.
But what if fact(4) is a military secret, and the engineers
deliberately tried to make the internal workings of the machine as
convoluted as possible, so that it looks like random activity to
anyone lacking the design specifications? This would be the equivalent
of taking a written message and encoding it so that it looks like
random letters; with the right key, any message can be encoded to look
like any given string (of similar or greater length). Are you saying
that obscuring the workings of the computer in this way would be
impossible?

 It seems to me that we agree that physical supervenience leads to many
 absurdities. Is your argument purely academical, or do you think it
 can be used to prevent the conclusion that physics has to be explained
 by the purely mathematical notion of most probable computation as
 seen from inside, among the 2^aleph_0 computations going through the
 current states, in UD* or in arithmetic?

I agree with you. I am not terribly happy with the conclusion, because
it seems so weird. The only way out is, as you say, if comp is false:
the mind is not Turing emulable, or (even weirder, perhaps incoherent)
there is no such thing as consciousness at all.

 With your argument, the movie-graph is conscious.  But it is all
 consciousness at once, not just the consciousness corresponding to the
 filmed boolean graph. This does not change the measure problem in any way.
 It makes the primitive physicalness idea even more absurd.

 It seems to me that your point just recalls that in Platonia, there are
 complex sequences of universal machines which can interpret any
 computation, including the empty one, as being any other computation.
 But this is akin to white rabbits (from the probability pov) and akin
 to the fact that, with its terrible redundancy and free
 imagination,  the UD generates also conspirator interpretations.

 With just arithmetic, when we stop to postulate a primitive or
 ontological material world, all primitive ad-hocness is removed, given
 that the existing internal interpretations are all determined, with
 their relative frequency, by addition and multiplication rules, and
 physics will be defined by the (absolute) probability of relative
 computations (here = probability of relative number theoretical
 relations). To be a finite piece of computation is decidable even in
 a very tiny fragment of arithmetic, and this can be used to avoid any
 starting ambiguity. This is made possible through Church thesis, and
 it eventually forces us to realize that a rock is the result of an
 infinity of computations, and the rock we see is a crude local average,
 but comp makes it possible that a rock implements all computations too,
 but only by an explicit call to a sequence of universal machine in
 Platonia. Meaning; there is no room for providing an explanative power
 (both for mind and matter) to the notion of primitive substance and
 primitive substancial incarnated laws.  Due to the failure of logicism
 we need numbers or combinators and primitive immaterial laws to agree
 on, like addition and multiplication, or lambda abstraction and
 application, etc. The measure does not depend on which first universal
 system you choose, by non completely trivial application of computer
 science. And to use a primitive quantum computer for a primitive
 physics is treachery with respect to the comp mind body problem.

 OK?

OK, I think. Thanks for taking the time to reply!


-- 
Stathis Papaioannou




Re: Consciousness is information?

2009-05-04 Thread Bruno Marchal

On 03 May 2009, at 17:09, John Mikes wrote:

 I would like to go along with Maudlin's point emphasized in Bruno's  
 text below, adding that causal structure is restricted to the  
 limited model of which we CAN choose likely 'causes' within our  
 perceived reality, while the unlimited possibilities include wider  
 'intrusions' of domains 'beyond our present epistemic cognitive  
 inventory'. So the most likely cause - although applicable to a  
 'physical role' (which as well is figmentous) - is limited. In  
 congruence - I think - with Bruno's words below.
 Bruno's: ...the description, although containing the genuine  
 information is just not a computation at all... (AMEN!)
 continued, however by: ...It misses the logical relation between the  
 steps, made possible by the universal machine...  still does not  -  
 DO -
 those 'steps' neither OPERATE the machine.
 Looks like we want to 'assume' that if there is a possibility, it is  
 also done.


Yes. It is the trademark of all everythingers and many worlders. Be  
they quantum or arithmetical many states/worlds/histories.
Relative existence = relative consistence. Actual consciousness =  
inside view of possible existence.
Now is as well yesterday from the point of view of yesterday as  
tomorrow from the point of view of tomorrow, if ever. The everything  
idea is that such an indexical approach is conceptually simpler, and  
should be favored for Occam-like related reason.
But it is neither assumed in comp (my point) nor in quantum mechanics  
(the Everett-Deutsch point). Indeed any rememorable here and now depends  
on the statistical interference on the many many many elsewhere. It  
is not an assumption, it is a consequence of the theory. You can  
change the theory by adding selection principles, but this is really  
cutting off everything that does not fit our wishful thinking. It is  
like when Niels Bohr says Quantum mechanics is false in the classical  
macroscopic world, when applying QM to Niels Bohr explains why Niels  
Bohr (and all of us) can experience a third person plural collapse  
even though the SWE prevents the need for it to really happen.


 I am looking at the physical creator (haha)

... still looking for Aristotle initial motor (haha).



 keeping the contraption moving and us in it. Not to speak about  
 'making it'. (Deus ex machina?)


No worry, assuming comp, it is  Machina ex Deus.
Machines can already prove that as far as they are consistent,  
something which is not a machine, and which is not even nameable  
(arithmetical truth) transcends them (Tarski,  Askanas).

I have also discovered recently (and this has been proved by my  
student/friend the little genius (Eric Vandenbuscche)), that some  
false beliefs can enlarge the true provability spectrum. It is almost  
like de Bono said, according to Kim, it could be logical to be  
illogical, in some situation. But as I said to Kim, this belongs  
probably to the corona G* minus G, the space of the unspeakable.   
(Note that I fall myself in the same trap if I suggest this should be  
a reason to abandon prescriptive talk, yet, assuming comp, I can  
justify caution with such prescriptive talk, this because I talk  
explicitly on Machines and I talk on (ideally correct) Humans only  
through the comp HYPOTHESIS).


 Once all is there and moving, everything is fine.
 I salute the ...infinitely many such relations, ... that gives me  
 the idea of a 'physical' supervenience in terms of a restrictive  
 Occam, cutting off everything that dos not fit into our goals.


Just say "No, doctor." No problem. We are just studying consequences of  
an hypothesis. But I think the comp hypothesis is the least  
reductionist view possible concerning the possible first person points  
of view.  The little and simple has more degrees of freedom than the  
complex and sophisticated.
I tend to believe comp is even a vaccine against major forms of  
reductionism.





 States seem to be identified by our limited views.


Third person conceived states are indeed identified with finite  
descriptions of (probably deep and complex) computational states  
(a notion relative to the choice of a universal machine).
But then first person states, as conceivable by first persons, are  
very complex and variable things with non trivial connectedness, and  
dependence on a non nameable continuum (and thus a relative measure  
problem).

But machines can prove their own relative (to consistency, to the  
existence of a reality) incompleteness theorems, and this introduces  
many deep nuances between all the possible variants of the Theaetetus  
knowledge theories, up to the quasi Aristotelian (naturalist) theory  
of matter by Plotinus. I can't wait to listen more to that humble  
universal machine ... Of course, today, it is still hard work: Gödel,  
Löb, Feferman, Smullyan, ..., but Solovay made progress by  
providing shortcuts: the modal systems G and G*.




 I feel that both the referred Maudlin-text and Jesse's 

Re: Consciousness is information?

2009-05-03 Thread Stathis Papaioannou

2009/5/3 Bruno Marchal marc...@ulb.ac.be:

 I think that if you take a real forest with birds, here and there, you
 can interpret some behavior as NAND or NOR, but you will not succeed
 ever in finding the computation of factorial(5).

But you can interpret *any* behaviour as a NAND gate, in an ad hoc
fashion. It doesn't even need to be consistent from moment to moment. On a
Tuesday 3 birds landing could stand for 1 while on a Wednesday 3
birds landing could stand for 0, and on a Saturday it could stand
for 1 again. In this way you could take the physical activity
carried out by a store-bought computer calculating factorial(5) and
map it onto the forest with the birds. Of course, this won't give you
the answer to factorial(5) unless you already have the answer, but
that just means that the computation is obscured, in the same way a
message is obscured if encoded with a one-time pad that is
subsequently destroyed and forgotten. In fact, even with the
store-bought computer the computation is obscured if there are no
intelligent beings around who can understand it. So, if the
computation supervenes on the activity of the store-bought computer
without regard for whether any external observer is around to
understand, then it also supervenes on the activity of the forest with
the birds. Other possibilities are that the computation supervenes on
physical activity only when an external observer understands it (which
poses difficulties for a closed virtual reality with its own conscious
observers), or that the computation does not supervene on physical
activity at all.
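
A throwaway sketch of that kind of ad hoc mapping (plain Python; the event
list and the decoding table below are invented for the illustration):

# The "interpretation" is chosen after the fact and is not even consistent
# from one moment to the next, yet it "decodes" the bird movements into
# factorial(5).  With a different table the same events decode to anything.

from math import factorial

# Observed events: (day, number of birds landing) at seven successive moments.
events = [("Tue", 3), ("Wed", 3), ("Sat", 3), ("Tue", 5),
          ("Wed", 2), ("Thu", 4), ("Fri", 1)]

# Ad hoc, time-indexed decoding table, built from the answer we already knew
# (120 = 0b1111000), so the same kind of event can mean 1 now and 0 later.
decode = {(0, ("Tue", 3)): 1, (1, ("Wed", 3)): 1, (2, ("Sat", 3)): 1,
          (3, ("Tue", 5)): 1, (4, ("Wed", 2)): 0, (5, ("Thu", 4)): 0,
          (6, ("Fri", 1)): 0}

bits = "".join(str(decode[(i, e)]) for i, e in enumerate(events))
assert int(bits, 2) == factorial(5)   # the "forest" has "computed" 120

Nothing in the birds does any work here: all of it is in the externally
supplied table, which is the sense in which the computation is obscured.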


-- 
Stathis Papaioannou




Re: Consciousness is information?

2009-05-03 Thread John Mikes
Stathis, and listers,
I cannot help: I read the text. (Not always, sometimes it seems too obtuse
for me even to 'read' it).
The Subject?   ( Consciousness = information )
what happens to that darn 'information'? Oops, 'you' are AWARE of it!?
Meaning: you  *DO*  something with it (to be - become? aware).  Who? By
what factor (energy, process, function, etc.)? By what (whose??) initiation?
Oops: by *computation of course. *
Refer all the above questions to 'computation and add: who (what?) is
providing the computer? (St.: store-bought computer - is it a binary
embryonic, or a more advanced one, maybe an (unlimited!) analogue - whatever
that may be).
Or:
forget all those questions and live happily in Wunderland.

The nature of an applicable INFORMATION is still undecided. The 'bit' has
to be part of a program to make sense (to have 'meaning' - another term to
be questioned.) Maybe in a conscious software? (robotic?)
*Information and its meaning* are concept (observer?) related.
I am hung up on 'having a computer' and 'computation' (available?) that
still does not *DO *the computation, not even *operate* the computer.
And please, say 'energy' only, if you can tell what it is (not what it does
or how it can be measured). And the construct(?) that includes it all.
I am also hung up with 'function' (activity) and the 'observer' (self, I)
what seems to be so natural in the nth level consequence using them.

John M


**



On Sun, May 3, 2009 at 3:00 AM, Stathis Papaioannou stath...@gmail.comwrote:


 2009/5/3 Bruno Marchal marc...@ulb.ac.be:

  I think that if you take a real forest with birds, here and there, you
  can interpret some behavior as NAND or NOR, but you will not succeed
  ever in finding the computation of factorial(5).

 But you can interpret *any* behaviour as a NAND gate, in an ad hoc
 fashion. It doesn't even need to be consistent from moment to moment. On a
 Tuesday 3 birds landing could stand for 1 while on a Wednesday 3
 birds landing could stand for 0, and on a Saturday it could stand
 for 1 again. In this way you could take the physical activity
 carried out by a store-bought computer calculating factorial(5) and
 map it onto the forest with the birds. Of course, this won't give you
 the answer to factorial(5) unless you already have the answer, but
 that just means that the computation is obscured, in the same way a
 message is obscured if encoded with a one-time pad that is
 subsequently destroyed and forgotten. In fact, even with the
 store-bought computer the computation is obscured if there are no
 intelligent beings around who can understand it. So, if the
 computation supervenes on the activity of the store-bought computer
 without regard for whether any external observer is around to
 understand, then it also supervenes on the activity of the forest with
 the birds. Other possibilities are that the computation supervenes on
 physical activity only when an external observer understands it (which
 poses difficulties for a closed virtual reality with its own conscious
 observers), or that the computation does not supervene on physical
 activity at all.


 --
 Stathis Papaioannou

 





Re: Consciousness is information?

2009-05-03 Thread John Mikes
I would like to go along with Maudlin's point emphasized in Bruno's text
below, adding that causal structure is restricted to the limited model of
which we *CAN *choose likely 'causes' within our perceived reality, while
the unlimited possibilities include wider 'intrusions' of domains 'beyond
our present epistemic cognitive inventory'. So the most likely *cause *-
although applicable to a 'physical role' (which as well is figmentous) - is
limited. In congruence - I think - with Bruno's words below.
Bruno's: ...the description, although containing the genuine information is
just not a computation at all... (AMEN!)
continued, however by: ...It misses the logical relation between the steps,
made possible by the universal machine...  still does not  *- DO - *
those 'steps' neither *OPERATE* the machine.
Looks like we want to 'assume' that if there is a possibility, it is also
done.
I am looking at the physical creator (haha) keeping the contraption moving
and us in it. Not to speak about 'making it'. (Deus ex machina?)
Once all is there and moving, everything is fine.
I salute the ...infinitely many such relations, ... that gives me the idea
of a 'physical' supervenience in terms of a restrictive Occam, cutting off
everything that does not fit into our goals.

*States* seem to be identified by our limited views. I feel that both the
referred Maudlin-text and Jesse's comment are on the static side, as
'descriptive', while I can presume into Bruno's relations some sort of a
functional (operative) relation that would lend some dynamism (action?) into
the descriptional stagnancy. I still did not detect:  *HOW?*

John M




On Wed, Apr 29, 2009 at 4:19 PM, Bruno Marchal marc...@ulb.ac.be wrote:

  Maudlin's point is that the causal structure has no physical role, so if
 you maintain the association of consciousness with the causal, actually
 computational structure, you have to abandon the physical supervenience. Or
 you reintroduce some magic, like if neurons have some knowledge of the
 absence of some other neurons, to which they are not related, during some
 computations.
 But read the movie graph which shows the same thing without going through
 the question of the counterfactuals. If you believe that consciousness
 supervene on the physical implementation, or even just one universal machine
 computation, then you will associate consciousness to a description of that
 computation. but the description, although containing the genuine
 information is just not a computation at all. It misses the logical relation
 between the steps, made possible by the universal machine. So you can keep
 on with mechanism only by associating consciousness with the logical,
 immaterial, relation between the states. From inside there are infinitely
 many such relations, and this means the physical has to supervene on the sum
 of those relations as seen from inside. By Church thesis and
 self-reference logic, they have a non trivial, redundant, structure.

 Bruno


  On 29 Apr 2009, at 21:16, Jesse Mazer wrote:

  Bruno wrote:


  On 29 Apr 2009, at 00:25, Jesse Mazer wrote:

  and I think it's also the idea behind Maudlin's Olympia thought
 experiment as well.



 Maudlin's Olympia, or the Movie Graph Argument are completely different.
 Those are arguments showing that computationalism is incompatible with the
 physical supervenience thesis. They show that consciousness are not related
 to any physical activity at all. Together with UDA1-7, it shows that physics
 has to be reduced to a theory of consciousness based on a purely
 mathematical (even arithmetical) theory of computation, which exists by
 Church Thesis.
 The movie graph argument was originally only a tool for explaining how
 difficult the mind-body problem is, once we assume mechanism.




  OK, I hadn't been able to find Maudlin's paper online, but I finally
 located a pdf copy in a post from this list at
 http://www.mail-archive.com/everything-list@googlegroups.com/msg07657.html
  ...now that I read it I see the argument is distinct from Chalmers' Does
 a Rock Implement Every Finite-State Automaton, although they are
 thematically similar in that they both deal with difficulties in defining
 what it means for a given physical system to implement a given
 computation. Chalmers' idea was that the idea of a rock implementing every
 possible computer program could be avoided if we defined an implementation
 in terms of counterfactuals, but Maudlin argues that this contradicts the
 supervenience thesis which says that the presence or absence of inert,
 causally isolated objects cannot affect the presence or absence of
 phenomenal states associated with a system, since two systems may have
 different counterfactual structures merely by virtue of an inert subsystem
 in one which *would have* become active if the initial state of the system
 had been slightly different.

 It seems to me that there might be ways of defining causal structure
 which don't depend on counterfactuals, 

Re: Consciousness is information?

2009-05-03 Thread Bruno Marchal


On 03 May 2009, at 09:00, Stathis Papaioannou wrote:


 2009/5/3 Bruno Marchal marc...@ulb.ac.be:

 I think that if you take a real forest with birds, here and there,  
 you
 can interpret some behavior as NAND or NOR, but you will not succeed
 ever in finding the computation of factorial(5).

 But you can interpret *any* behaviour as a NAND gate, in an ad hoc
 fashion. It doesn't even need to be consistent from moment to moment. On a
 Tuesday 3 birds landing could stand for 1 while on a Wednesday 3
 birds landing could stand for 0, and on a Saturday it could stand
 for 1 again.


But this makes sense only relatively to a stable universal machine  
in which you can encode what you are telling me.





 In this way you could take the physical activity
 carried out by a store-bought computer calculating factorial(5) and
 map it onto the forest with the birds.


All right, I see your point: you take any physical activity, and then  
an ad hoc sequence of universal machines which interprets each piece of  
bird behavior into a computation of fact(4). That sequence may  
have to be infinite, and the birds' behavior has to encode more and  
more complex problems due to the ad-hocness of the representations.   
The complexity of the sequence of universal machines will grow  
exponentially. Hmm, perhaps. Again this will change nothing. After  
all, the UD does generate *all* implementations of all computations,  
including your very complex (to encode) interpretation of rocks and  
forests.





 Of course, this won't give you
 the answer to factorial(5) unless you already have the answer, but
 that just means that the computation is obscured,

It is obscured and blurred relative to its most probable histories.  
In normal physics (normal in the Gaussian sense) you cannot count on  
those computations. It would be like saying you win the lottery given  
that you have the right numbers, in disorder, but after all you can  
read them in the right order, and someone in Platonia does read them  
in that different order.
I could still disagree because, as you seem to accept, such physical  
implementation can reduce to zero the needed amount of physical  
activity, and an interpretation of your computation of the factorial  
of 4, in the rock, will be made by an actual computation of 24 by a  
real universal machine which does not need to be physical, in  
Platonia, and which has a lot of imagination in front of the rock. You  
need something like this, for your argument to go through, but this  
*is* mainly the comp supervenience. So what you show is that indeed,  
we don't need, or cannot use in any genuine sense a primitive notion  
of physical activity to build a notion of supervenience.
Yet I think that the notion of interpretation is more constrained than  
just invoking some ad hoc sequence of platonist universal  
interpreters. At some level, we must bet on just one, if only to be  
able to talk (even to talk to oneself).



 in the same way a
 message is obscured if encoded with a one-time pad that is
 subsequently destroyed and forgotten. In fact, even with the
 store-bought computer the computation is obscured if there are no
 intelligent beings around who can understand it.


Not at all. If the computer evaluates fact(4), even alone in a room,  
the probability it gives 24 is one, in a verifiable way by a third  
person. With or without physicalism we accept the idea that the  
physical neighborhood is locally Turing universal, and does interpret  
the computation of fact(4).
If I put a computer evaluating Stathis here and now, under your  
substitution level, then, despite the computer being alone in the  
room, the probability that you are where you feel you are (here and  
now) or  in that room is 1/2 (accepting the usual probability). Cf  
step 5.
You will not say yes doctor, but only if you take a permanent look  
on my working artificial brain. The point of comp is that some  
programs can observe themselves (at some level). And this can be made  
mathematically precise (by Kleene's second recursion theorem).
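
As a toy hint of that (hypothetical Python, only gesturing at what the
recursion theorem provides): the program below builds, prints, and could
just as well inspect or simulate, its own source code.

# A standard quine-style construction: the formatted version of src is
# exactly these three code lines, so the program can "observe" (here,
# measure and reprint) its own description.
src = 'src = %r\nprint("length of my own description:", len(src %% src))\nprint(src %% src)'
print("length of my own description:", len(src % src))
print(src % src)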

Accepting your interpretation of the rock, the probability that you  
are in the rock, relative to you here and now, is  
0,001, given that you have to wait until the UD  
generates those immensely long sequences of more and more complex ad  
hoc universal interpreters.




 So, if the
 computation supervenes on the activity of the store-bought computer
 without regard for whether any external observer is around to
 understand, then it also supervenes on the activity of the forest with
 the birds.

You illustrate well that the only question which makes sense is the  
question of which most probable computation bears us, or which more  
probable universal machine or number executes us.  What you say is  
that the UD will generate stupid program interpreting the empty input  
like if it was a code for fact(4).




 Other possibilities are that the computation supervenes on
 physical activity 

Re: Consciousness is information?

2009-05-02 Thread Kelly

On Apr 29, 2:26 am, russell standish li...@hpcoders.com.au wrote:

 What extra information do you have in mind? I'd gladly update my
 priors with anything I can lay my hands on.

So changes to neural structure and the concentrations of various
chemicals within neurons and around neural synapses are known to change
conscious experience in humans.  Ants have neurons that work along
similar lines as human neurons.  Surely this must affect the
probability that is assigned to the question of whether ants are able
to experience things like pain in a similar way that humans do.  It
certainly seems to me to be significant.

So how does this extra information show up in your assessment of ant
consciousness?

Again, it seems to me that SSA arguments are better than nothing.  But
their usefulness fades quickly as more sources of data become
available.  They might be a good first stab at answering a question,
but ideally will never be the final word.

For instance, why would I believe your argument over something like
this:

Fish Feel Pain, Study Finds

When you hook a fish, does it hurt? Yes, a new study suggests.

Some researchers have previously concluded that fish react to painful
stimuli without actually feeling pain in the conscious way humans do.

In the new study, researchers gave morphine to one group of fish, and
injected the other group with a placebo (saline). Then the fish were
treated to burning sensations that were expected to be painful but
which did not damage any fish tissue.

Both groups reacted the same, by wriggling.

However, the fish that had been on morphine later went on about
business as if nothing had happened. The fish that had gotten the
saline were wary after the test.

They acted with defensive behaviors, indicating wariness, or fear and
anxiety, said Joseph Garner, an assistant professor at Purdue
University.

The experiment shows that fish do not only respond to painful stimuli
with reflexes, but change their behavior also after the event, said
Janicke Nordgreen, a doctoral student in the Norwegian School of
Veterinary Science. Together with what we know from experiments
carried out by other groups, this indicates that the fish consciously
perceive the test situation as painful and switch to behaviors
indicative of having been through an aversive experience.

A study last month indicated that crabs feel pain, too.

Garner and Nordgreen published their results in the online version of
the journal Applied Animal Behaviour Science.

Garner figures the morphine blocked the experience of pain, but not
behavioral responses to the heat stimulus itself, either because the
responses were reflexive or because the morphine blocked the
experience of pain, but not the experience of an unusual stimulus.

If you think back to when you have had a headache and taken a
painkiller, the pain may go away, but you can still feel the presence
or discomfort of the headache, Garner said.

The goldfish that did not get morphine experienced this painful,
stressful event. Then two hours later, they turned that pain into fear
like we do, Garner said. To me, it sounds an awful lot like how we
experience pain.

Then again, scientist don't fully understand pain in humans. It is
felt when electrical signals are sent from nerve endings to your
brain, which in turn can release painkillers called endorphins and
generate physical and emotional reactions. The details remain unclear,
which is why so many people suffer chronic pain with no relief.






Re: Consciousness is information?

2009-05-02 Thread Bruno Marchal


On 30 Apr 2009, at 18:29, Jesse Mazer wrote:

 Bruno Marchal wrote:

 On 29 Apr 2009, at 23:30, Jesse Mazer wrote:

 But I'm not convinced that the basic Olympia machine he describes  
 doesn't already have a complex causal structure--the causal  
 structure would be in the way different troughs influence each other  
 via the pipe system he describes, not in the motion of the armature.

 But Maudlin succeed in showing that in its particular running  
 history,  *that* causal structure is physically inert. Or it has  
 mysterious influence not related to the computation.



 Maudlin only showed that *if* you define causal structure in terms  
 of counterfactuals, then the machinery that ensures the proper  
 counterfactuals might be physically inert. But if you reread my post  
 at http://www.mail-archive.com/everything-list@googlegroups.com/msg16244.html 
  you can see that I was trying to come up with a definition of the  
 causal structure of a set of events that did *not* depend on  
 counterfactuals...look at these two paragraphs from that post,  
 particular the first sentence of the first paragraph and the last  
 sentence of the second paragraph:

 It seems to me that there might be ways of defining causal  
 structure which don't depend on counterfactuals, though. One idea I  
 had is that for any system which changes state in a lawlike way over  
 time, all facts about events in the system's history can be  
 represented as a collection of propositions, and then causal  
 structure might be understood in terms of logical relations between  
 propositions, given knowledge of the laws governing the system. As  
 an example, if the system was a cellular automaton, one might have a  
 collection of propositions like cell 156 is colored black at time- 
 step 36, and if you know the rules for how the cells are updated on  
 each time-step, then knowing some subsets of propositions would  
 allow you to deduce others (for example, if you have a set of  
 propositions that tell you the states of all the cells surrounding  
 cell 71 at time-step 106, in most cellular automata that would allow  
 you to figure out the state of cell 71 at the subsequent time-step  
 107). If the laws of physics in our universe are deterministic than  
 you should in principle be able to represent all facts about the  
 state of the universe at all times as a giant (probably infinite)  
 set of propositions as well, and given knowledge of the laws,  
 knowing certain subsets of these propositions would allow you to  
 deduce others.

 Causal structure could then be defined in terms of what logical  
 relations hold between the propositions, given knowledge of the laws  
 governing the system. Perhaps in one system you might find a set of  
 four propositions A, B, C, D such that if you know the system's  
 laws, you can see that AB imply C, and D implies A, but no other  
 proposition or group of propositions in this set of four are  
 sufficient to deduce any of the others in this set. Then in another  
 system you might find a set of four propositions X, Y, Z and W such  
 that WZ imply Y, and X implies W, but those are the only deductions  
 you can make from within this set. In this case you can say these  
 two different sets of four propositions represent instantiations of  
 the same causal structure, since if you map W to A, Z to B, Y to C,  
 and D to X then you can see an isomorphism in the logical relations.  
 That's obviously a very simple causal structure involving only 4  
 events, but one might define much more complex causal structures and  
 then check if there was any subset of events in a system's history  
 that matched that structure. And the propositions could be  
 restricted to ones concerning events that actually did occur in the  
 system's history, with no counterfactual propositions about what  
 would have happened if the system's initial state had been different.
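
A small sketch of this proposal (plain Python, invented here just to make
the cellular-automaton example concrete):

# Record the history of a 1D cellular automaton as propositions
# "cell i has value v at time t", and check that the update law lets some
# propositions be deduced from others -- a logical relation among facts
# that actually occurred, with no counterfactuals involved.

RULE = 110  # any elementary rule number works for the illustration

def step(left, center, right):
    # Next value of a cell from its neighborhood, per the chosen rule.
    return (RULE >> (left * 4 + center * 2 + right)) & 1

width, steps = 11, 5
state = [0] * width
state[width // 2] = 1

propositions = {}  # (time, cell) -> value
for t in range(steps):
    for i, v in enumerate(state):
        propositions[(t, i)] = v
    state = [step(state[(i - 1) % width], state[i], state[(i + 1) % width])
             for i in range(width)]

# The three neighborhood propositions at time t, together with the law,
# entail the cell's proposition at time t+1.
t, i = 2, width // 2
assert step(propositions[(t, i - 1)],
            propositions[(t, i)],
            propositions[(t, i + 1)]) == propositions[(t + 1, i)]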



 For a Turing machine running a particular program the propositions  
 might be things like at time-step 35 the Turing machine's read/ 
 write head moved to memory cell #82 and at time-step 35 the Turing  
 machine had internal state S3 and at time-step 35 memory cell #82  
 held the digit 1. I'm not sure whether the general rules for how  
 the Turing machine's internal state changes from one step to the  
 next should also be included among the propositions, my guess is  
 you'd probably need to do so in order to ensure that different  
 computations had different causal structures according to the type  
 of definition above...so, you might have a proposition expressing a  
 rule like if the Turing machine is in internal state S3 and its  
 read/write head detects the digit 1, it changes the digit in that  
 cell to a 0 and moves 2 cells to the left, also changing its  
 internal state to S5. Then this set of four propositions would be  
 sufficient to deduce some other propositions about the history of  
 this computation, 

Re: Consciousness is information?

2009-05-02 Thread Bruno Marchal


On 30 Apr 2009, at 19:39, Brent Meeker wrote:


 Bruno Marchal wrote:
 On 30 Apr 2009, at 15:49, Stathis Papaioannou wrote:
 Marchal wrote
 That is weird.

 I think that you believe that a rock implements computations, because
 you believe a computation can be decomposed in tiny computations, but
 this is not true, you need much more. You need a universal machine
 which links and complexify the states in a precise way.
 Some alive beings do some computations (like some flowers compute  
 tiny
 part of the Fibonacci function). But again, this is sophisticated and
 took time to appear. Waves do analog computations, hardly universal
 digital one, or only when put in some very special condition.
 Interesting and rich computations are relatively rare and exceptional
 until they self-multiplied, like amoebas.

 Does the universe compute its states?

Open problem, but most probably not, given that the appearance of the  
universe emerges from a statistic bearing on an infinite set of  
(finite and infinite) computations.



 How is the evolution of the wave
 function of the universe or of a flower not a computation?


For a reason similar to the fact that there is no algorithm capable of  
predicting if you will see an electron up or down when prepared in the  
state up+down. But comp makes the wave itself result from  
apparent (for the 1-person) arithmetical collapses.






 Nor do I believe the filmed movie graph does any computation, it reads
 a description of one, but does not link the steps logically in real time.
 Today, genetical systems, brains, and computer (human or engineered)
 do concrete computations.


 But that seems like introducing a magic similar to the magic of
 physical existence, except now it is the magic of computational  
 connection.


Ok, but the magic of computational connection can be entirely reduced  
to the magic of succession, addition and multiplication of positive  
integers.
And it is magic, but it is a magic which explains why it has to be a  
magic. A TOE which does not postulate the natural numbers is a TOE  
without natural numbers. We have to assume the numbers, they cannot be  
reduced to anything simpler.

Bruno


http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-02 Thread Bruno Marchal


On 01 May 2009, at 17:02, Stathis Papaioannou wrote:


 2009/5/1 Bruno Marchal marc...@ulb.ac.be:

 That is, you can't say that the rock
 implements one computation but not another.

 I don't think it implements any computations. I could accept some  
 tiny
 apparition of tiny pieces of tiny automata, but nothing big or
 sophisticated. Some very special crystals perhaps, no doubt, but  
 those
 are, then, computers.

 If your computer has to interact with the external world then that
 imposes some constraints on what counts as an implementation of a
 computation.



OK. Although you could say: if your computer has to interact with  
another computer then ... But OK. It is important, not for the rise  
of consciousness, but for its relative stability with respect to some  
notion of first person plural. It is important for having a local measure.





 But without this constraint you are free to interpret any
 activity as any computation.


I don't see why. Simple activity could correspond to simple  
computation (I can agree with this). But any complex computation will  
require some global connectedness among the many simple activities.  
Especially the long and deep (in Bennett's sense) computations.




 You could pick three trees and, observing
 the movement of birds on and off the trees, interpret this as  a logic
 gate. Three birds land on the first tree, and that's a zero input.
 Two birds alight from the second tree, that's a zero input also.
 Three birds land on the third tree, that's a one output. A minute
 later, five birds alight from the first tree, one bird lands on the
 second tree and two birds land on the third tree, which is interpreted
 as two one inputs giving a zero output. Looks like it might be a
 NAND gate! Not very useful, of course, but is there any reason why my
 interpretation is wrong, or why the birds flying around won't give
 rise to whatever consciousness is associated with the operation of the
 logic gate?

Somehow you make my point, because I am willing to say that you are
right, ONCE assuming the supervenience thesis. But your conclusion,
that any computation supervenes on any physical activity,
including the empty one, is what I definitely consider an
absurdity, and is the reason why, keeping comp, I abandon the physical
supervenience thesis, and eventually the very idea of primitive
physical stuff.
A computation is just as it is defined in mathematical books on
computation: it is a global logical relation capable of sustaining
non-trivial relations among abstract items.
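
To make the contrast concrete, here is a toy sketch in Python (my own illustration, with invented names): on one side, an external dictionary that merely relabels isolated bird events as gate values; on the other, a chain of gates where each output actually feeds the next input, which is the kind of global logical connection meant here.

# Toy sketch (Python): relabelling isolated events vs. a connected computation.
# Arbitrary "bird" events, relabelled as NAND instances by a dictionary of our
# choosing; nothing links one relabelled "gate" to the next.
events = [(3, 2, 3), (5, 1, 2)]
labels = [(0, 0, 1), (1, 1, 0)]
interpretation = dict(zip(events, labels))
print(interpretation[(3, 2, 3)])   # (0, 0, 1): a lookup, not a computation

# A computation, by contrast, chains the steps: each output becomes the next input.
def nand(a, b):
    return 1 - a * b

x, trace = 1, []
for _ in range(5):
    x = nand(x, 1)        # the logical connection is internal to the process
    trace.append(x)
print(trace)              # [0, 1, 0, 1, 0]: each state depends on the previous one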

Consider Ned Block's Chinese People Computer. You can (logically, not
ethically of course) program the people of China so that each person,
just by sending very simple mails, participates in a giant computation
emulating Einstein's brain, say. The consciousness of Einstein will
rely on the global organization of the information handled by all the
Chinese people, not on the physical activity of any particular
person. Of course, this line ends up accepting that from the point of
view of Einstein it is just undecidable whether he is a brain in a vat, a
body in a hospital, or an abstract (but relatively rare and
sophisticated) pattern in Platonia, and then the comp 1-person
indeterminacy leads to a rich non-trivial relative state
interpretation of Arithmetic.

I think that if you take a real forest with birds, here and there, you
can interpret some behavior as a NAND or a NOR, but you will never
succeed in finding the computation of factorial(5). Even the universal
dovetailer has to wait (in its own step-time) billions of billions of
steps before getting something as interesting as the factorial
function. For Einstein's brain the UD will already take a
ridiculously long time (well beyond anything physically observable)
before getting its simulation.
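
For readers who have not met the UD, here is a minimal sketch in Python of the dovetailing schedule itself (my own illustration; a genuine UD enumerates all program codes, whereas "program k" here is just a placeholder generator), showing why even a modest result only appears after very many UD-steps.

# Minimal sketch (Python) of a dovetailing schedule, not a real universal dovetailer.
def program(k):
    n = 0
    while True:           # the k-th toy "program" just counts forever
        yield (k, n)
        n += 1

def dovetail(stages):
    started, ud_steps = [], 0
    for stage in range(stages):
        started.append(program(stage))   # start one more program...
        for p in started:                # ...and give every started program one more step
            next(p)
            ud_steps += 1
    return ud_steps

print(dovetail(100))   # 5050 UD-steps just to push 100 toy programs along a little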

Even if you decide to no longer interact with the external world, you
will not say yes to a doctor who proposes you a rock in place of your
brain. This is because the probability, even and especially from your
first person point of view, that the many NANDs (which the rock could
perhaps indeed emulate) arrange themselves into a healthy Papaioannou
mind state is null. You could survive in some possible world, but
not through the rock's computational power. If ever you survive with the
rock, the probability that you will be dumb or disabled will be far
greater.

But you are right: those who believe in both comp and physical
supervenience have to attach all consciousness to all physical
activity, and then they do not need comp anymore and everything
becomes trivial. You get a Kelly sort of physics which predicts
everything. I prefer to keep the mathematically coherent and sound
comp, and forget physical supervenience. Even more so when you realize
that the math for comp will explain the appearance of rocks and
particles without assuming any metaphysical naturalism.

Bruno


Re: Consciousness is information?

2009-05-02 Thread Bruno Marchal

On 01 May 2009, at 19:36, Jesse Mazer wrote:


 I found a paper on the Mandelbrot set and computability, I  
 understand very little but maybe Bruno would be able to follow it:

 http://arxiv.org/abs/cs.CC/0604003

 The same author has a shorter outline or slides for a presentation  
 on this subject at 
 http://www.cs.swan.ac.uk/cie06/files/d37/PHP_MandelbrotCiE2006Swansea_Jul2006.pdf
  
  and at the end he asks the question If M (Mandelbrot set) not Q- 
 computable, can the Halting Problem be reduced to determining  
 membership of (intersection of M and Q^2), i.e. how powerful a  
 'hypercomputer' is the Mandelbrot set? I believe Q^2 here just  
 refers to the set of all possible pairs of rational numbers. Maybe  
 by reducing the Halting Problem he means that for any Turing  
 machine + input, there might be some rule that would translate it  
 into a pair of rational numbers such that the computation will halt  
 iff the pair is included in the Mandelbrot set? Whatever he means,  
 it sounds like he's saying it's an open question...



Thanks! Very interesting. It confirms my feeling that the result of Blum,
Shub and Smale cannot really help to figure out if the digital
Mandelbrot set is a compact form of a universal dovetailing ... or
whether the complex exponential would already be one ... Hmm

Another way to digitalize the M set would be to consider its digital,
step by step enlargement on the Gaussian integers (n + mi, n, m in Z).
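
As an aside, the usual way to probe a point of Q^2 is the escape-time iteration; here is a minimal sketch in Python (my own illustration, using floats where an exact treatment would use rationals). Escape certifies the point is outside M, while non-escape within the budget proves nothing, which is exactly why membership is the delicate computability question raised above.

# Minimal sketch (Python): escape-time iteration z -> z**2 + c at a point c.
# Floats are used for brevity; the Q-computability question would need exact
# rational arithmetic and, above all, a halting criterion that floats cannot give.
def escapes(c, max_iter=1000):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n       # certified: c is outside the Mandelbrot set
    return None            # undecided: maybe inside M, maybe just slow to escape

print(escapes(0.25))       # c = 1/4 lies in M: no escape within the budget (None)
print(escapes(1 + 1j))     # c = 1 + i escapes after very few iterations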

I will study those papers sooner or later. I really love the Mandelbrot
set. Look at this beautiful musical zoom by Ubermari0:

http://www.youtube.com/watch?v=KgM3XJmH768&feature=channel_page

Look at this new very impressive zoom by Phaumann, with a 10^333
enlargement, in a hard-to-compute part of the M-set!


http://www.youtube.com/watch?v=x6DD1k4BAUg&feature=channel_page


You can see that the computation is deep in Bennett's sense, like most
objects in nature, plausibly: it is both very involved and
sophisticated yet incredibly redundant, and it is itself the product
of a very tiny algorithm. This can be exploited in practice to compress data.

Bruno





 Jason wrote:

 
 
  On Thu, Apr 30, 2009 at 10:35 AM, Bruno Marchal  
 marc...@ulb.ac.be wrote:
 
 
  The mathematical Universal Dovetailer, the splashed universal  
 Turing
  Machine, the rational Mandelbrot set, or any creative sets in the
  sense of Emil Post, does all computations. Really all, with Church
  thesis. This is a theorem in math. The rock? Show me just the 30  
 first
  steps of a computation of square-root(2). ...
 
  Bruno,
 
  I am interested about your statement regarding the Mandelbrot set
  implementing all computations, could you elaborate on this?
 
  Thank you,
 
  Jason
 
 
 

http://iridia.ulb.ac.be/~marchal/







Re: Consciousness is information?

2009-05-01 Thread Jason Resch

On Thu, Apr 30, 2009 at 10:35 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 The mathematical Universal Dovetailer, the splashed universal Turing
 Machine, the rational Mandelbrot set, or any creative sets in the
 sense of Emil Post, does all computations. Really all, with Church
 thesis. This is a theorem in math. The rock? Show me just the 30 first
 steps of a computation of square-root(2).   ...

Bruno,

I am interested about your statement regarding the Mandelbrot set
implementing all computations, could you elaborate on this?

Thank you,

Jason




RE: Consciousness is information?

2009-05-01 Thread Jesse Mazer


I found a paper on the Mandelbrot set and computability, I understand very 
little but maybe Bruno would be able to follow it:
http://arxiv.org/abs/cs.CC/0604003

The same author has a shorter outline or slides for a presentation on this 
subject at 
http://www.cs.swan.ac.uk/cie06/files/d37/PHP_MandelbrotCiE2006Swansea_Jul2006.pdf
 and at the end he asks the question If M (Mandelbrot set) not Q-computable, 
can the Halting Problem be reduced to determining membership of (intersection 
of M and Q^2), i.e. how powerful a 'hypercomputer' is the Mandelbrot set? I 
believe Q^2 here just refers to the set of all possible pairs of rational 
numbers. Maybe by reducing the Halting Problem he means that for any Turing 
machine + input, there might be some rule that would translate it into a pair 
of rational numbers such that the computation will halt iff the pair is 
included in the Mandelbrot set? Whatever he means, it sounds like he's saying 
it's an open question...
Jesse
 
 
 On Thu, Apr 30, 2009 at 10:35 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 The mathematical Universal Dovetailer, the splashed universal Turing
 Machine, the rational Mandelbrot set, or any creative sets in the
 sense of Emil Post, does all computations. Really all, with Church
 thesis. This is a theorem in math. The rock? Show me just the 30 first
 steps of a computation of square-root(2).   ...
 
 Bruno,
 
 I am interested about your statement regarding the Mandelbrot set
 implementing all computations, could you elaborate on this?
 
 Thank you,
 
 Jason
 
  



