Re: Olympia's Beautiful and Profound Mind

2005-05-16 Thread Stathis Papaioannou
 from these to mind-body 
interaction. Even had they found a technically plausible solution to their 
puzzle, mind-body interaction would presumably still have been regarded as 
secondary to body-body interaction. We have reversed that priority.
   One might not expect mind-body duality as a mere philosophical problem 
to address any urgent need outside of philosophy. Nevertheless we have 
offered solutions to the following practical problems that could be 
construed as particular applications of our general solution to Descartes' 
mind-body problem, broadly construed to allow scarecrows and everything 
else to have minds.

   Those are his own words!

Stephen
- Original Message - From: Bruno Marchal [EMAIL PROTECTED]
To: Stephen Paul King [EMAIL PROTECTED]
Cc: Stathis Papaioannou [EMAIL PROTECTED]; 
everything-list@eskimo.com
Sent: Sunday, May 15, 2005 10:18 AM
Subject: Re: Olympia's Beautiful and Profound Mind


On 15 May 2005, at 15:40, Stephen Paul King wrote:
   Two points: I am pointing out that the non-interactional idea of 
computation and any form of monism will fail to account for the 
necessity of 1st person viewpoints.
You know that the necessity of 1st person viewpoints is what I
consider the most easily explained (through the translation of the
Theaetetus in arithmetic or in any language of a lobian machine).
You refer to a paper as hard and technical as my thesis. You should
explain why you still believe the 1st person is dismissed in comp or in
any monism.
Also, Pratt seems to me a monist, and his mathematical dualism does not
address the main question in the philosophy of mind/cognitive science. His
paper is interesting but could hardly be referred to as an authority on
those questions at this stage.
Bruno
http://iridia.ulb.ac.be/~marchal/



Re: Olympia's Beautiful and Profound Mind

2005-05-15 Thread Stathis Papaioannou
I appreciate that there are genuine problems in the theory of computation as 
applied to intelligent and/or conscious minds. However, we know that 
intelligent and conscious minds do in fact exist, running on biological 
hardware. The situation is a bit like seeing an aeroplane in the sky then 
trying to figure out the physics of heavier-than-air flight; if you prove 
that it's impossible, then there has to be something wrong with your proof.

If it does turn out that the brain is not Turing emulable, what are the 
implications of this? Could we still build a conscious machine with 
appropriate soldering and coding, or would we have to surrender to dualism/ 
an immaterial soul/ Roger Penrose or what?

--Stathis Papaioannou
From: Stephen Paul King [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
CC: everything-list@eskimo.com
Subject: Re: Olympia's Beautiful and Profound Mind
Date: Sat, 14 May 2005 20:41:04 -0400
Dear Lee,
   Let me use your post to continue our offline conversation here for the 
benefit of all.

   Is the idea of a computation well-founded or non-well-founded? Usually 
TMs and other finite (or infinite!) state machines are assumed to have a 
well-founded set of states, such that there are no circularities or infinite 
sequences in their specifications. See:

http://www.answers.com/topic/non-well-founded-set-theory
   One of the interesting features that arises when we consider whether it 
is possible to faithfully represent 1st person experiences of the world 
- "being in the world," as Sartre wrote - in terms of computationally 
generated simulations is that circularities arise almost everywhere.

   Jon Barwise, Peter Wegner and others have pointed out that the usual 
notions of computation fail to properly take into consideration the 
necessity of dealing with this issue, and have been operating in a state of 
denial about a crucial aspect of the notion of conscious awareness: how can 
an a priori specifiable computation contain an internal representational 
model of itself that is dependent on its choices and interactions with 
others, when these others are not specified within the computation?

http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/b/Barwise:Jon.html
http://www.cs.brown.edu/people/pw/
   Another aspect of this is the problem of concurrency.
http://www.cs.auckland.ac.nz/compsci340s2c/lectures/lecture10.pdf
http://boole.stanford.edu/abstracts.html
   I am sure that I am being a foolish tyro in this post. ;-)
Kindest regards,
Stephen
- Original Message - From: Lee Corbin [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: everything-list@eskimo.com
Sent: Saturday, May 14, 2005 2:00 AM
Subject: RE: Olympia's Beautiful and Profound Mind

Hal writes
We had some discussion of Maudlin's paper on the everything-list in 1999.
I summarized the paper at http://www.escribe.com/science/theory/m898.html 
.
Subsequent discussion under the thread title "implementation" followed
...
I suggested a flaw in Maudlin's argument at
http://www.escribe.com/science/theory/m1010.html with followup
http://www.escribe.com/science/theory/m1015.html .

In a nutshell, my point was that Maudlin fails to show that physical
supervenience (that is, the principle that whether a system is
conscious or not depends solely on the physical activity of the system)
is inconsistent with computationalism.
It seemed to me that he made a leap at the end.
(In fact, I argued that the new computation is very plausibly conscious,
but that doesn't even matter, because it is sufficient to consider that
it might be, in order to see that Maudlin's argument doesn't go through.
To repair his argument it would be necessary to prove that the altered
computation is unconscious.)
I know that Hal participated in a discussion on Extropians in 2002 or 2003
concerning Giant Look-Up Tables. I'm surprised that either in the course
of those discussions he didn't mention Maudlin's argument, or that I have
forgotten it.
Doesn't it all seem of a piece?  We have, again, an entity that either
does not compute its subsequent states (or, as Jesse Mazer points out,
does so in a way that looks suspiciously like a recording of an actual
prior calculation).
The GLUT was a device that seemed to me to do the same thing, that is,
portray subsequent states without engaging in bona fide computations.
Is all this really the same underlying issue, or not?
Lee



Re: Olympia's Beautiful and Profound Mind

2005-05-15 Thread Stephen Paul King
Dear Stathis,
   Two points: I am pointing out that the non-interactional idea of 
computation and any form of monism will fail to account for the necessity 
of 1st person viewpoints. I am advocating a form of dualism, a process 
dualism based on the work of Vaughan Pratt.

http://boole.stanford.edu/pub/ratmech.pdf
   What is interesting about this form of dualism is that the dual aspects 
become identical to each other (automorphic?) in the limit of infinite Minds 
and Bodies. I don't have time to explain the details of this right now, but 
your fears can be assuaged: there is no coherent notion of an immaterial 
soul nor of a mindless body. An example of the former is a complete atomic 
Boolean algebra that cannot be instantiated physically by any means, and an 
example of the latter is found in considering a physical object that has no 
possible representation.

Stephen
- Original Message - 
From: Stathis Papaioannou [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: everything-list@eskimo.com
Sent: Sunday, May 15, 2005 8:32 AM
Subject: Re: Olympia's Beautiful and Profound Mind


I appreciate that there are genuine problems in the theory of computation 
as applied to intelligent and/or conscious minds. However, we know that 
intelligent and conscious minds do in fact exist, running on biological 
hardware. The situation is a bit like seeing an aeroplane in the sky then 
trying to figure out the physics of heavier-than-air flight; if you prove 
that it's impossible, then there has to be something wrong with your proof.

If it does turn out that the brain is not Turing emulable, what are the 
implications of this? Could we still build a conscious machine with 
appropriate soldering and coding, or would we have to surrender to 
dualism/ an immaterial soul/ Roger Penrose or what?

--Stathis Papaioannou



Re: Olympia's Beautiful and Profound Mind

2005-05-15 Thread Bruno Marchal
On 15 May 2005, at 15:40, Stephen Paul King wrote:
   Two points: I am pointing out that the non-interactional idea of 
computation and any form of monism will fail to account for the 
necessity of 1st person viewpoints.
You know that the necessity of 1st person viewpoints is what I 
consider the most easily explained (through the translation of the 
Theaetetus in arithmetic or in any language of a lobian machine).
You refer to a paper as hard and technical as my thesis. You should 
explain why you still believe the 1st person is dismissed in comp or in 
any monism.
Also, Pratt seems to me a monist, and his mathematical dualism does not 
address the main question in the philosophy of mind/cognitive science. His 
paper is interesting but could hardly be referred to as an authority on 
those questions at this stage.

Bruno
http://iridia.ulb.ac.be/~marchal/



Re: Olympia's Beautiful and Profound Mind

2005-05-15 Thread Stephen Paul King
Dear Bruno,
   As for your showing of the necessity of a 1st person viewpoint, I still 
do not understand your argument, and that is a failure on my part. ;-) As to 
Pratt's ideas, let me quote directly from one of his papers:

http://boole.stanford.edu/pub/ratmech.pdf
   Some of the questions however remain philosophically challenging even 
today. A central tenet of Cartesianism is mind-body dualism, the principle 
that mind and body are the two basic substances of which reality is 
constituted. Each can exist separately, body as realized in inanimate 
objects and lower forms of life, mind as realized in abstract concepts and 
mathematical certainties. According to Descartes the two come together only 
in humans, where they undergo causal interaction, the mind reflecting on 
sensory perceptions while orchestrating the physical motions of the limbs 
and other organs of the body.

   The crucial problem for the causal interaction theory of mind and body 
was its mechanism: how did it work?

   Descartes hypothesized the pineal gland, near the center of the brain, 
as the seat of causal interaction. The objection was raised that the mental 
and physical planes were of such a fundamentally dissimilar character as to 
preclude any ordinary notion of causal interaction. But the part about a 
separate yet joint reality of mind and body seemed less objectionable, and 
various commentators offered their own explanations for the undeniably 
strong correlations of mental and physical phenomena.

   Malebranche insisted that these were only correlations and not true 
interactions, whose appearance of interaction was arranged in every detail 
by God by divine intervention on every occasion of correlation, a theory 
that naturally enough came to be called occasionalism. Spinoza freed God 
from this demanding schedule by organizing the parallel behavior of mind and 
matter as a preordained apartheid emanating from God as the source of 
everything. Leibniz postulated monads, cosmic chronometers miraculously 
keeping perfect time with each other yet not interacting.

   These patently untestable answers only served to give dualism a bad 
name, and it gave way in due course to one or another form of monism: either 
mind or matter but not both as distinct real substances. Berkeley opined 
that matter did not exist and that the universe consisted solely of ideas. 
Hobbes ventured the opposite: mind did not exist except as an artifact of 
matter. Russell [Rus27] embraced neutral monism, which reconciled Berkeley's 
and Hobbes' viewpoints as compatible dual accounts of a common neutral 
Leibnizian monad.

   This much of the history of mind-body dualism will suffice as a 
convenient point of reference for the sequel. R. Watson's Britannica article 
[Wat86] is a conveniently accessible starting point for further reading. The 
thesis of this paper is that mind-body dualism can be made to work via a 
theory that we greatly prefer to its monist competitors. Reflecting an era 
of reduced expectations for the superiority of humans, we have implemented 
causal interaction not with the pineal gland but with machinery freely 
available to all classical entities, whether newt, pet rock, electron, or 
theorem (but not quantum mechanical wavefunction, which is sibling to if not 
an actual instance of our machinery).

and
   We have advanced a mechanism for the causal interaction of mind and 
body, and argued that separate additional mechanisms for body-body and 
mind-mind interaction can be dispensed with; mind-body interaction is all 
that is needed. This is a very different outcome from that contemplated by 
17th century Cartesianists, who took body-body and mind-mind interaction as 
given and who could find no satisfactory passage from these to mind-body 
interaction. Even had they found a technically plausible solution to their 
puzzle, mind-body interaction would presumably still have been regarded as 
secondary to body-body interaction. We have reversed that priority.
   One might not expect mind-body duality as a mere philosophical problem 
to address any urgent need outside of philosophy. Nevertheless we have 
offered solutions to the following practical problems that could be 
construed as particular applications of our general solution to Descartes' 
mind-body problem, broadly construed to allow scarecrows and everything else 
to have minds.

   Those are his own words!

Stephen
- Original Message - 
From: Bruno Marchal [EMAIL PROTECTED]
To: Stephen Paul King [EMAIL PROTECTED]
Cc: Stathis Papaioannou [EMAIL PROTECTED]; 
everything-list@eskimo.com
Sent: Sunday, May 15, 2005 10:18 AM
Subject: Re: Olympia's Beautiful and Profound Mind


On 15 May 2005, at 15:40, Stephen Paul King wrote:
   Two points: I am pointing out that the non-interactional idea of 
computation and any form of monism will fail to account for the 
necessity of 1st person viewpoints.
You know that the necessity of 1st person viewpoints is what I
consider

RE: Olympia's Beautiful and Profound Mind

2005-05-14 Thread Lee Corbin
Hal writes

 We had some discussion of Maudlin's paper on the everything-list in 1999.
 I summarized the paper at http://www.escribe.com/science/theory/m898.html .
 Subsequent discussion under the thread title "implementation" followed
 ...
 I suggested a flaw in Maudlin's argument at
 http://www.escribe.com/science/theory/m1010.html with followup
 http://www.escribe.com/science/theory/m1015.html .
 
 In a nutshell, my point was that Maudlin fails to show that physical
 supervenience (that is, the principle that whether a system is
 conscious or not depends solely on the physical activity of the system)
 is inconsistent with computationalism.

It seemed to me that he made a leap at the end.

 (In fact, I argued that the new computation is very plausibly conscious,
 but that doesn't even matter, because it is sufficient to consider that
 it might be, in order to see that Maudlin's argument doesn't go through.
 To repair his argument it would be necessary to prove that the altered
 computation is unconscious.)

I know that Hal participated in a discussion on Extropians in 2002 or 2003
concerning Giant Look-Up Tables. I'm surprised that either in the course
of those discussions he didn't mention Maudlin's argument, or that I have
forgotten it.

Doesn't it all seem of a piece?  We have, again, an entity that either
does not compute its subsequent states (or, as Jesse Mazer points out,
does so in a way that looks suspiciously like a recording of an actual
prior calculation).

The GLUT was a device that seemed to me to do the same thing, that is,
portray subsequent states without engaging in bona fide computations.

Is all this really the same underlying issue, or not?

Lee



RE: Olympia's Beautiful and Profound Mind

2005-05-14 Thread Lee Corbin
Jesse comments on Brian's remarkable and exceedingly valuable
explication (thanks, Brian!), even if some old-timers are
having déjà vu all over again, and are wondering if indeed
the universe isn't hopelessly cyclic after all.

  triggering tape locations. To make it even simpler, the read/write head can
  be replaced by an armature that moves from left to right triggering tape
  locations. We have a very lazy machine! Its name is Olympia.
 
 The main objection that comes to my mind is that in order to plan ahead of 
 time what number should be in each tape location before the armature begins 
 moving and flipping bits, you need to have already done the computation in 
 the regular way--so Olympia is not really computing anything, it's basically 
 just a playback device for showing us a *recording* of what happened during 
 the original computation.

It seems to me that you are exactly correct!  Admittedly I'm not
very articulate on this, partly because it's such a mystery, BUT:

It seems to me that there must be real information flow between
states of a real computation. When things are just laid out in a
way that circumvents this information flow, this causality, then
neither consciousness nor observer-moments obtain.

I admit that this is most peculiar. I admit that this may be
just another way of saying that time is mysterious. I admit
that it is logically possible that ditching the real universe
and regarding it as only a certain backwater of Platonia could
be correct. But so far:  I can't accept it, and partly for the
*moral* aspects that Jesse brings up later.

 I don't think Olympia contributes anything more to the measure
 of the observer-moment that was associated with the original 
 computation, any more than playing a movie showing the workings
 of each neuron in my brain would contribute to the measure of
 the observer-moment associated with what my brain was doing
 during that time.

Just so. But don't you find this difficult, as I do?  Don't
we need stronger arguments than these to counter those who
believe that Platonia explains everything?  Don't you also
feel that time and causality are linked here strongly, and
that somehow you and me and people like us seem to have a
faith that time and causality are real and independent of
such performances of Olympia, or performances by the
timeless Universal Dovetailer?

 But this would still be just a playback device, it wouldn't have the same 
 causal structure (although I don't know precisely how to define that term, 
 so this is probably the weakest part of my argument)

yes, we seem to have the same misgivings after all

 This actually brings up an interesting ethical problem. Suppose we put a 
 deterministic A.I. in a simulated environment with a simulated puppet body 
 whose motions are controlled by a string of numbers, and simulate what 
 happens with every possible input string to the puppet body of a certain 
 length.

An old idea, the Giant Look-Up Table, or GLUT, does what to me
amounts to the same thing.
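
For readers who missed those discussions, here is a toy Python sketch of the
GLUT idea (the stand-in brain function and all names are my own illustrative
assumptions): all of the genuine computing is done once, offline, to fill the
table, after which "playback" is pure retrieval.

from itertools import product

def brain(history):
    """Stand-in for a genuine computation mapping an input history to a
    response (illustrative only)."""
    return sum(history) % 2

MAX_LEN = 8
# All the genuine computing happens here, once, offline:
GLUT = {seq: brain(seq)
        for n in range(MAX_LEN + 1)
        for seq in product((0, 1), repeat=n)}

def playback(history):
    """The GLUT device: no computation at run time, only retrieval."""
    return GLUT[tuple(history)]

assert playback([1, 0, 1]) == brain((1, 0, 1))

The playback reproduces the behaviour exactly while performing none of the
intermediate computation, which is just why one may doubt it contributes
any observer-moments.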

 The vast majority of copies of the A.I. will observe the puppet 
 body flailing around in a completely random way, of course. But once we have 
 a database of what happened in the simulation with every possible input 
 string, we could then create an interactive playback device where I put on a 
 VR suit and my movements are translated into input strings for the puppet 
 body, and then the device responds by playing back the next bit of the 
 appropriate recording in its database. The question is, would it be morally 
 wrong for me to make the puppet body torture the A.I. in the simulation?

In my opinion: NO.  It would not be morally wrong, because (as
you too believe) there are no observer moments in the playback.
When subsequent events are *causally* calculated from prior ones,
then, and only then, can moral issues arise, because then, and
only then, does an entity either benefit or suffer.

 I'm really just playing back a recording of a simulation that already
 happened,

that's right!
 
 so it seems like I'm not contributing anything to the measure of painful 
 observer-moments for the A.I., and of course the fraction of histories where 
 the A.I. experienced the puppet body acting in a coherent manner would have 
 been extremely small.

Yes.

 I guess one answer to why it's still wrong, [IT IS???]  besides 
 the fact that simulated torture might have a corrupting effect on my own 
 mind,

yeah, but that's not relevant. Let's hypothesize that you're
a very strong adult and can watch, say, violent or really scary
films or other portrayals, and not be too disturbed

 is that there's bound to be some fraction of worlds where I was 
 tricked or deluded into believing the VR system was just playing back 
 recordings from a preexisting database, when in reality I'm actually 
 interacting with a computation being done in realtime, so if I engage in 
 torture I am at least slightly contributing to the measure of 
 observer-moments who 

RE: Olympia's Beautiful and Profound Mind

2005-05-14 Thread Brian Scurfield
Jesse wrote:

 The main objection that comes to my mind is that in order to plan ahead of
 time what number should be in each tape location before the armature
 begins moving and flipping bits, you need to have already done the
 computation in the regular way--so Olympia is not really computing
 anything, it's basically just a playback device for showing us a
 *recording* of what happened during the original computation. 

 I don't think Olympia contributes anything more to the measure of the
 observer-moment that was associated with the original computation, any
 more than playing a movie showing the workings of each neuron in my brain
 would contribute to the measure of the observer-moment associated with
 what my brain was doing during that time.

You have to be careful, however, that you are not maintaining that when we
perform a computation and then do it in exactly the same way again, we can
only consider the first run to be genuine. Taking note of the machine states
and tape configuration during each step of the first run does not mean that
the second run is any less of a computation. 

This aside, you are right that we need knowledge of the steps of the
original computation to construct Olympia. We could have just filmed the
steps of the original computation and played back the film, claiming that
the film is also performing a computation. I would agree that this would not
fit the bill. Information does not flow. But is it correct to say that
information does not flow in Olympia? Consider the following. Olympia's tape
configuration is causally dependent on the armature being in a certain
state; that is, being at a certain location. And because the tape is
multiply-located, what happens at one point can affect what is happening
at another point. Furthermore, Olympia goes through as many state changes as
the original machine and the machine state has been defined without
reference to the state of the tape and vice-versa. Lastly, unlike a film,
Olympia is sensitive to counterfactuals when in the unblocked state. Do
these indicate the flow of information? I'm tempted to say yes!
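
The film-versus-machine contrast can be made concrete in a few lines of
Python (the toy computation and all names here are illustrative assumptions,
nobody's actual proposal): a filmed run reproduces the original output
whatever input you offer it, while a machine that still consults its rules
responds correctly to an input the original run never saw.

def compute(bits):
    """The 'real' computation: flip every bit (a toy stand-in)."""
    return [b ^ 1 for b in bits]

original_input = [0, 1, 0]
recording = compute(original_input)     # filmed once, in advance

def film_player(bits):
    """A pure playback device: it ignores its input entirely."""
    return recording

def live_machine(bits):
    """Consults the rules on every run, so it is counterfactually sensitive."""
    return compute(bits)

counterfactual = [1, 1, 1]              # an input the original run never saw
print(film_player(counterfactual))      # [1, 0, 1] -- the film just replays
print(live_machine(counterfactual))     # [0, 0, 0] -- the machine responds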

Brian Scurfield




Re: Olympia's Beautiful and Profound Mind

2005-05-14 Thread Stephen Paul King
Dear Lee,
   Let me use your post to continue our offline conversation here for the 
benefit of all.

   Is the idea of a computation well-founded or non-well-founded? Usually 
TMs and other finite (or infinite!) state machines are assumed to have a 
well-founded set of states, such that there are no circularities or infinite 
sequences in their specifications. See:

http://www.answers.com/topic/non-well-founded-set-theory
   One of the interesting features that arises when we consider whether it 
is possible to faithfully represent 1st person experiences of the 
world - "being in the world," as Sartre wrote - in terms of computationally 
generated simulations is that circularities arise almost everywhere.

   Jon Barwise, Peter Wegner and others have pointed out that the usual 
notions of computation fail to properly take into consideration the 
necessity of dealing with this issue, and have been operating in a state of 
denial about a crucial aspect of the notion of conscious awareness: how can 
an a priori specifiable computation contain an internal representational 
model of itself that is dependent on its choices and interactions with 
others, when these others are not specified within the computation?

http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/b/Barwise:Jon.html
http://www.cs.brown.edu/people/pw/
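
The contrast can be sketched in a few lines of Python (my own toy
illustration of closed versus interactive computation, not Barwise's or
Wegner's formalism; all names are assumed): a classical computation is a
closed function of a pre-given input, while an interactive process consumes
choices from an environment that its own specification never mentions.

def closed_computation(x):
    """Classical picture: everything the machine will ever see is fixed
    in advance in its input."""
    return x * x

def interactive_machine():
    """Open process: the next state depends on stimuli supplied later by
    an environment that the program text never specifies."""
    state = 0
    while True:
        stimulus = yield state          # the 'other' lives outside the program
        state = (state + stimulus) % 7

m = interactive_machine()
next(m)            # start the process; it reports its initial state, 0
print(m.send(3))   # the environment chooses 3 -> prints 3
print(m.send(5))   # then 5 -> prints 1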
   Another aspect of this is the problem of concurrency.
http://www.cs.auckland.ac.nz/compsci340s2c/lectures/lecture10.pdf
http://boole.stanford.edu/abstracts.html
   I am sure that I am being a foolish tyro in this post. ;-)
Kindest regards,
Stephen
- Original Message - 
From: Lee Corbin [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: everything-list@eskimo.com
Sent: Saturday, May 14, 2005 2:00 AM
Subject: RE: Olympia's Beautiful and Profound Mind


Hal writes
We had some discussion of Maudlin's paper on the everything-list in 1999.
I summarized the paper at http://www.escribe.com/science/theory/m898.html 
.
Subsequent discussion under the thread title "implementation" followed
...
I suggested a flaw in Maudlin's argument at
http://www.escribe.com/science/theory/m1010.html with followup
http://www.escribe.com/science/theory/m1015.html .

In a nutshell, my point was that Maudlin fails to show that physical
supervenience (that is, the principle that whether a system is
conscious or not depends solely on the physical activity of the system)
is inconsistent with computationalism.
It seemed to me that he made a leap at the end.
(In fact, I argued that the new computation is very plausibly conscious,
but that doesn't even matter, because it is sufficient to consider that
it might be, in order to see that Maudlin's argument doesn't go through.
To repair his argument it would be necessary to prove that the altered
computation is unconscious.)
I know that Hal participated in a discussion on Extropians in 2002 or 2003
concerning Giant Look-Up Tables. I'm surprised that either in the course
of those discussions he didn't mention Maudlin's argument, or that I have
forgotten it.
Doesn't it all seem of a piece?  We have, again, an entity that either
does not compute its subsequent states (or, as Jesse Mazer points out,
does so in a way that looks suspiciously like a recording of an actual
prior calculation).
The GLUT was a device that seemed to me to do the same thing, that is,
portray subsequent states without engaging in bona fide computations.
Is all this really the same underlying issue, or not?
Lee






Olympia's Beautiful and Profound Mind

2005-05-13 Thread Brian Scurfield
Bruno recently urged me to read up on Tim Maudlin's movie-graph argument
against the computational hypothesis. I did so. Here is my version of the
argument.


According to the computational hypothesis, consciousness supervenes on brain
activity and the important level of organization in the brain is its
computational structure. So the same consciousness can supervene on two
different physical systems provided that they support the same computational
structure. For example, we could replace every neuron in your brain with a
functionally equivalent silicon chip and you would not notice the
difference.

Computational structure is an abstract concept. The machine table of a
Turing Machine does not specify any physical requirements and different
physical implementations of the same machine may not be comparable in terms
of the amount of physical activity each must engage in. We might enquire:
what is the minimal amount of physical activity that can support a given
computation, and, in particular, consciousness?

Consider that we have a physical Turing Machine that instantiates the
phenomenal state of a conscious observer. To do this, it starts with a
prepared tape and runs through a sequence of state changes, writing symbols
to the tape, and moving the read-write as it does so. It engages in a lot of
physical activity. By assumption, the phenomenal state supervenes on this
physical computational activity. Each time we run the machine we will get
the same phenomenal state.

Let's try to minimise the amount of computational activity that the Turing
Machine must engage in. We note that many possible pathways through the
machine state table are not used in our particular computation because
certain counterfactuals are not true. For example, on the first step, the
machine might actually go from S_0 to S_8 because the data location on the
tape contained 0. Had the tape contained a 1, it might have gone to S_10,
but this doesn't obtain because the 1 was not actually present.

So let's unravel the actual computational path taken by the machine when it
starts with the prepared tape. Here are the actual machine states and tape
locations at each step:

s_0   s_8   s_7   s_7   s_3   s_2 . . . s_1023
t_0   t_1   t_2   t_1   t_2   t_3 . . . t_2032

Re-label these as follows:

s_[0] s_[1] s_[2] s_[3] s_[4] s_[5] . . .s_[N]
t_[0] t_[1] t_[2] t_[3] t_[4] t_[5] . . .t_[N]

Note that t_[1] and t_[3] are the same tape location, namely t_1. Similarly,
t_[2] and t_[4] are both tape location t_2. These tape locations are
multiply-located.

The tape locations t_[0], t_[1], t_[2], ..., can be arranged in physical
sequence, provided a mechanism is supplied to link the multiply-located
locations. Thus t_[1] and t_[3] might be joined by a circuit that turns both
on when a 1 is written and both off when a 0 is written. Now when the
machine runs, it has to take account of the remapped tape locations when
computing what state to go into next. Nevertheless, the net-effect of all
this is that it just runs from left to right. 

If the machine just runs from left to right, why bother computing the state
changes? We could just arrange for each tape location to turn on (1 = on) or
off (0 = off) when the read/write head arrives. For example, if t_[2] would
have been turned on in the original computation, then there would be a local
mechanism that turns that location on when the read/write head arrives (note
that t_[4] would also turn on because it is linked to t_[2]). The state
S_[i] is then defined to occur when the machine is at tape location t_[i]
(this machine therefore undergoes as many state changes as the original
machine). Now we have a machine that just moves from left to right
triggering tape locations. To make it even simpler, the read/write head can
be replaced by an armature that moves from left to right triggering tape
locations. We have a very lazy machine! Its name is Olympia.
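
The construction is easy to mechanise. Here is a minimal Python sketch (the
transition table, names, and toy machine are illustrative assumptions, not
Maudlin's actual example): run an ordinary Turing machine, record the actual
trajectory of states and tape locations, group the multiply-located squares,
then replay the run as an armature sweep in which state S_[i] just means
"the armature is at location t_[i]".

def run_tm(table, tape, state="S0", max_steps=100):
    """Run an ordinary Turing machine and record the actual trajectory:
    (machine state, tape location, symbol written) at each step."""
    cells = dict(enumerate(tape))            # sparse tape, blank = 0
    head, trace = 0, []
    for _ in range(max_steps):
        key = (state, cells.get(head, 0))
        if key not in table:                 # no transition defined: halt
            break
        new_state, write, move = table[key]
        trace.append((state, head, write))   # the s_[i], t_[i] of the text
        cells[head] = write
        state = new_state
        head += 1 if move == "R" else -1
    return trace

def link_locations(trace):
    """Group re-labelled steps by physical square: steps i and j that visit
    the same square are the multiply-located t_[i], t_[j]."""
    links = {}
    for i, (_, pos, _) in enumerate(trace):
        links.setdefault(pos, []).append(i)
    return links

def olympia(trace, links):
    """The lazy machine: an armature sweeps left to right; stop i merely
    triggers the pre-arranged write, and all linked copies flip together.
    'State S_[i]' is nothing but 'the armature is at location t_[i]'."""
    cells = {}
    for i, (_, pos, write) in enumerate(trace):
        for j in links[pos]:                 # the linking circuit
            cells[j] = write
        yield i, dict(cells)                 # snapshot after stop i

# A toy machine that revisits square 0, so that square is multiply-located.
table = {("S0", 0): ("S1", 1, "R"),          # write 1, move right
         ("S1", 0): ("S2", 1, "L"),          # write 1, come back left
         ("S2", 1): ("S3", 0, "R")}          # revisit square 0, erase it
trace = run_tm(table, [0, 0])
for step, snapshot in olympia(trace, link_locations(trace)):
    print(step, snapshot)

The final stop flips both linked copies of the revisited square at once,
which is all the circuit between multiply-located cells amounts to; note
that nothing in olympia() ever consults the transition table.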

What, then, is the physical activity on which the phenomenal state
supervenes? It cannot be in the activity of the armature moving from
left to right. That doesn't seem to have the required complexity. Is it in
the turning on and off of the tape locations as the armature moves?
Again that does not seem to have the required degree of complexity.

It might be objected that in stripping out the computational pathway that we
did, we have neglected all the other pathways that could have been executed
but never in fact were. But what difference do these pathways make? We could
construct similar left-right machines for each of these pathways. These
machines would be triggered when a counterfactual occurs at a tape location.
The triggering mechanism is simple. If, say, t_[3] was originally on just
prior to the arrival of the read/write head but is now in fact off, then we
can freeze the original machine and arrange for another left-right machine
to start from that tape location. This triggering and freezing 

Re: Olympia's Beautiful and Profound Mind

2005-05-13 Thread Bruno Marchal
Thanks for that very nice summary. I let people think about it. We discussed it a long time ago on the Everything-list. A keyword for finding that discussion in the everything-list archive is "crackpot", as Jacques Mallah named the argument.

Good that we can come back to this, because we didn't conclude our old discussion, and for the new people in the list, as for the FOR-list people, it is a quite important step to figure out that the UDA is a "proof", not just an "argument". Well, at least I think so. Also, thanks to Maudlin's taking into account the necessity of the counterfactuals in the notion of computation, and thanks to another (more technical) paper by Hardegree, it is possible to use it to motivate an equivalent but technically different path toward an arithmetical quantum logic. I propose we talk about Hardegree later, but I give the reference now for those who are impatient ;) (also, compared to many papers on quantum logic, this one is quite readable, and constitutes perhaps a nice introduction to quantum logic, I would add, especially for Many-Worlders). Hardegree shows that the most standard implication connective available in quantum logic is formally (at least) equivalent to a Stalnaker-Lewis notion of counterfactual. That is the David Lewis of "On the Plurality of Worlds" and "Counterfactuals", two books which deserve some room on the shelf of FOR-listers and Everythingers, imo.

Also, I didn't know it, but the late David Lewis wrote a paper on Everett (communicated to me by Adrien Barton). Alas, I have not yet found the time to read it.

 Hardegree, G. M. (1976). The Conditional in Quantum Logic. In Suppes, P., editor, Logic and Probability in Quantum Mechanics, volume 78 of Synthese Library, pages 55-72. D. Reidel Publishing Company, Dordrecht-Holland.

Bruno

On 13 May 2005, at 09:50, Brian Scurfield wrote:

Bruno recently urged me to read up on Tim Maudlin's movie-graph argument
against the computational hypothesis. I did so. Here is my version of the
argument.


According to the computational hypothesis, consciousness supervenes on brain
activity and the important level of organization in the brain is its
computational structure. So the same consciousness can supervene on two
different physical systems provided that they support the same computational
structure. For example, we could replace every neuron in your brain with a
functionally equivalent silicon chip and you would not notice the
difference.

Computational structure is an abstract concept. The machine table of a
Turing Machine does not specify any physical requirements and different
physical implementations of the same machine may not be comparable in terms
of the amount of physical activity each must engage in. We might enquire:
what is the minimal amount of physical activity that can support a given
computation, and, in particular, consciousness?

Consider that we have a physical Turing Machine that instantiates the
phenomenal state of a conscious observer. To do this, it starts with a
prepared tape and runs through a sequence of state changes, writing symbols
to the tape, and moving the read-write head as it does so. It engages in a lot of
physical activity. By assumption, the phenomenal state supervenes on this
physical computational activity. Each time we run the machine we will get
the same phenomenal state.

Let's try to minimise the amount of computational activity that the Turing
Machine must engage in. We note that many possible pathways through the
machine state table are not used in our particular computation because
certain counterfactuals are not true. For example, on the first step, the
machine might actually go from S_0 to S_8 because the data location on the
tape contained 0. Had the tape contained a 1, it might have gone to S_10,
but this doesn't obtain because the 1 was not actually present.

So let's unravel the actual computational path taken by the machine when it
starts with the prepared tape. Here are the actual machine states and tape
locations at each step:

s_0   s_8   s_7   s_7   s_3   s_2 . . . s_1023
t_0   t_1   t_2   t_1   t_2   t_3 . . . t_2032

Re-label these as follows:

s_[0] s_[1] s_[2] s_[3] s_[4] s_[5] . . .s_[N]
t_[0] t_[1] t_[2] t_[3] t_[4] t_[5] . . .t_[N]

Note that t_[1] and t_[3] are the same tape location, namely t_1. Similarly,
t_[2] and t_[4] are both tape location t_2. These tape locations are
multiply-located.

The tape locations t_[0], t_[1], t_[2], ..., can be arranged in physical
sequence, provided a mechanism is supplied to link the multiply-located
locations. Thus t_[1] and t_[3] might be joined by a circuit that turns both
on when a 1 is written and both off when a 0 is written. Now when the
machine runs, it has to take account of the remapped tape locations when
computing what state to go into next. Nevertheless, the net-effect of all
this is that it just runs from left to right. 

If the machine just runs from left to right, why bother computing the state

Re: Olympia's Beautiful and Profound Mind

2005-05-13 Thread Hal Finney
We had some discussion of Maudlin's paper on the everything-list in 1999.
I summarized the paper at http://www.escribe.com/science/theory/m898.html .
Subsequent discussion under the thread title "implementation" followed
up; I will point to my posting at
http://www.escribe.com/science/theory/m962.html regarding Bruno's version
of Maudlin's result.

I suggested a flaw in Maudlin's argument at
http://www.escribe.com/science/theory/m1010.html with followup
http://www.escribe.com/science/theory/m1015.html .

In a nutshell, my point was that Maudlin fails to show that physical
supervenience (that is, the principle that whether a system is
conscious or not depends solely on the physical activity of the system)
is inconsistent with computationalism.  What he does show is that you
can change the computation implemented by a system without altering it
physically (by some definition).  But his desired conclusion does not
follow logically, because it is possible that the new computation is
also conscious.

(In fact, I argued that the new computation is very plausibly conscious,
but that doesn't even matter, because it is sufficient to consider that
it might be, in order to see that Maudlin's argument doesn't go through.
To repair his argument it would be necessary to prove that the altered
computation is unconscious.)

You can follow the thread and date index links off the messages above
to see much more discussion of the issue of implementation.

Hal Finney



RE: Olympia's Beautiful and Profound Mind

2005-05-13 Thread Brian Scurfield
Hal wrote:

 We had some discussion of Maudlin's paper on the everything-list in 1999.
 I summarized the paper at http://www.escribe.com/science/theory/m898.html
 .
 Subsequent discussion under the thread title "implementation" followed
 up; I will point to my posting at
 http://www.escribe.com/science/theory/m962.html regarding Bruno's version
 of Maudlin's result.

Thanks for those links.
 
 I suggested a flaw in Maudlin's argument at
 http://www.escribe.com/science/theory/m1010.html with followup
 http://www.escribe.com/science/theory/m1015.html .
 
 In a nutshell, my point was that Maudlin fails to show that physical
 supervenience (that is, the principle that whether a system is
 conscious or not depends solely on the physical activity of the system)
 is inconsistent with computationalism.  What he does show is that you
 can change the computation implemented by a system without altering it
 physically (by some definition).  But his desired conclusion does not
 follow logically, because it is possible that the new computation is
 also conscious.

So the system instantiates two different computations, when all things are
considered. The first instantiation is when the counterfactuals are enabled
(block removed) and the second instantiation is when the counterfactuals are
disabled (block added). Because there are two different computations, we
can't conclude that the second instantiation does not lead to a phenomenal
state of consciousness. Would you agree, though, that there does not
appear to be sufficient physical activity taking place in the second
instantiation to sustain phenomenal awareness? After all, Maudlin went to a
lot of trouble to construct a lazy machine! To carry out the second
computation, all that needs to happen is that the armature travel from left
to right emptying or filling troughs (or, as in my summary, triggering tape
locations). It is supposed to be transparently obvious from the lack of
activity that if it is conscious then it can't be as a result of physical
activity. Now you maintain it is conscious, so wherein lies the
consciousness?

 (In fact, I argued that the new computation is very plausibly conscious,
 but that doesn't even matter, because it is sufficient to consider that
 it might be, in order to see that Maudlin's argument doesn't go through.
 To repair his argument it would be necessary to prove that the altered
 computation is unconscious.)
 
 You can follow the thread and date index links off the messages above
 to see much more discussion of the issue of implementation.

OK, I'm making my way through those. Apologies to the list if the points I
raise have been covered previously.

Brian Scurfield



RE: Olympia's Beautiful and Profound Mind

2005-05-13 Thread Jesse Mazer
Brian Scurfield wrote:
Bruno recently urged me to read up on Tim Maudlin's movie-graph argument
against the computational hypothesis. I did so. Here is my version of the
argument.

According to the computational hypothesis, consciousness supervenes on 
brain
activity and the important level of organization in the brain is its
computational structure. So the same consciousness can supervene on two
different physical systems provided that they support the same 
computational
structure. For example, we could replace every neuron in your brain with a
functionally equivalent silicon chip and you would not notice the
difference.

Computational structure is an abstract concept. The machine table of a
Turing Machine does not specify any physical requirements and different
physical implementations of the same machine may not be comparable in terms
of the amount of physical activity each must engage in. We might enquire:
what is the minimal amount of physical activity that can support a given
computation, and, in particular, consciousness?
Consider that we have a physical Turing Machine that instantiates the
phenomenal state of a conscious observer. To do this, it starts with a
prepared tape and runs through a sequence of state changes, writing symbols
to the tape, and moving the read-write head as it does so. It engages in a lot 
of
physical activity. By assumption, the phenomenal state supervenes on this
physical computational activity. Each time we run the machine we will get
the same phenomenal state.

Let's try to minimise the amount of computational activity that the Turing
Machine must engage in. We note that many possible pathways through the
machine state table are not used in our particular computation because
certain counterfactuals are not true. For example, on the first step, the
machine might actually go from S_0 to S_8 because the data location on the
tape contained 0. Had the tape contained a 1, it might have gone to S_10,
but this doesn't obtain because the 1 was not actually present.
So let's unravel the actual computational path taken by the machine when it
starts with the prepared tape. Here are the actual machine states and tape
locations at each step:
s_0   s_8   s_7   s_7   s_3   s_2 . . . s_1023
t_0   t_1   t_2   t_1   t_2   t_3 . . . t_2032
Re-label these as follows:
s_[0] s_[1] s_[2] s_[3] s_[4] s_[5] . . .s_[N]
t_[0] t_[1] t_[2] t_[3] t_[4] t_[5] . . .t_[N]
Note that t_[1] and t_[3] are the same tape location, namely t_1. 
Similarly,
t_[2] and t_[4] are both tape location t_2. These tape locations are
multiply-located.

The tape locations t_[0], t_[1], t_[2], ..., can be arranged in physical
sequence, provided a mechanism is supplied to link the multiply-located
locations. Thus t_[1] and t_[3] might be joined by a circuit that turns both
on when a 1 is written and both off when a 0 is written. Now when the
machine runs, it has to take account of the remapped tape locations when
computing what state to go into next. Nevertheless, the net-effect of all
this is that it just runs from left to right.
If the machine just runs from left to right, why bother computing the state
changes? We could just arrange for each tape location to turn on (1 = on) 
or
off (0 = off) when the read/write head arrives. For example, if t_[2] would
have been turned on in the original computation, then there would be a 
local
mechanism that turns that location on when the read/write head arrives 
(note
that t_[4] would also turn on because it is linked to t_[2]). The state
S_[i] is then defined to occur when the machine is at tape location t_[i]
(this machine therefore undergoes as many state changes as the original
machine). Now we have a machine that just moves from left to right
triggering tape locations. To make it even simpler, the read/write head can
be replaced by an armature that moves from left to right triggering tape
locations. We have a very lazy machine! Its name is Olympia.
The main objection that comes to my mind is that in order to plan ahead of 
time what number should be in each tape location before the armature begins 
moving and flipping bits, you need to have already done the computation in 
the regular way--so Olympia is not really computing anything, it's basically 
just a playback device for showing us a *recording* of what happened during 
the original computation. I don't think Olympia contributes anything more to 
the measure of the observer-moment that was associated with the original 
computation, any more than playing a movie showing the workings of each 
neuron in my brain would contribute to the measure of the observer-moment 
associated with what my brain was doing during that time.

What, then, is the physical activity on which the phenomenal state
supervenes? It cannot be in the activity of the armature moving from
left to right. That doesn't seem to have the required complexity. Is it in
the turning on and off of the tape