Re: UDA query

2010-01-15 Thread Bruno Marchal


On 15 Jan 2010, at 03:52, Brent Meeker wrote:


Bruno Marchal wrote:


On 14 Jan 2010, at 09:01, Brent Meeker wrote:

I think there may be different kinds of consciousness, so a look-up-table (like Searle's Chinese Room) may be conscious but in a different way.


In a way distinguishable by the person? From its own (first person)  
perspective?


Also,

I don't think it makes sense to attribute consciousness to anything which "does" the computation, but only to the (abstract or immaterial) person supervening on the logical and arithmetical relations defining those computations (infinitely many exist).


Persons need only be self-referentially correct relative to their most probable computations.


I don't understand what "self-referentially correct" means nor in what
sense computations can be "theirs"?



A machine (number) x is self-referentially correct relative to a history/computation y if the propositions asserted by x are true relative to y.


Rough example: the machine is an altimeter in a plane. The plane is 500 miles above the ground. The altimeter asserts "1000 miles". Given that the altimeter *is* in the plane, it is not self-referentially correct relative to the most probable computation (already well approximated by Newton, best described (if ever) by the appropriate quantum fields, but (if comp is correct and the UDA valid) only correctly described by the fields emerging from the numbers, in the end).
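
A toy sketch of that relation, with purely illustrative names (nothing here is part of comp or the UDA itself): a "machine" asserts propositions about itself, and it is self-referentially correct relative to a history exactly when all of those assertions come out true in that history.

# Toy sketch: self-referential correctness relative to a history.
# All names are illustrative assumptions, not part of the theory.

def self_referentially_correct(assertions, history):
    """The 'machine' is reduced to the assertions it makes about itself;
    the 'history' to a table of facts. The machine is correct relative
    to the history iff every assertion it makes is true in it."""
    return all(history.get(key) == value for key, value in assertions.items())

# The altimeter example: the plane is 500 miles up, the broken altimeter asserts 1000.
history = {"altitude_miles": 500}
broken_altimeter = {"altitude_miles": 1000}
working_altimeter = {"altitude_miles": 500}

print(self_referentially_correct(broken_altimeter, history))   # False
print(self_referentially_correct(working_altimeter, history))  # True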



Despite the "self" in self-referentially correct, it is a third person  
form of self-reference, it is not the first person, but a first person  
can be attached to it by the Theaetetus definition as I attempt to  
sketch below.








Persons are conscious, not machines, nor computations, nor states, nor numbers, except in a metaphorical way.


So you take "person" as well as arithmetic to be fundamental.



Not at all. I take "person" as an important concept, even a key  
concept. But it is not "fundamental". It does not belong to the  
ontology. Matter and mind, histories and persons, realities and  
consciousness,  emerge already from addition and multiplication,  
assuming comp.


Digital Mechanism obviously assumes the existence of consciousness and persons. This has to be done, if only implicitly, by the doctor. It would be annoying, if not frightening, if the doctor told you that, after examining you, he has come to the conclusion that you are a zombie and that you will get a digital brain independently of your saying "yes" or "no".


So comp assumes persons, like it assumes some amount of consensual  
reality. But not necessarily as fundamental.


But then ...

... the UDA reasoning leads to the conclusion that, ONCE we assume comp, the best TOE we can ever dream of is elementary arithmetic. Or, you can choose your favorite universal system and consider its minimal first-order logical specification. Ontologically it is enough. Any Sigma_1-complete segment of arithmetic is enough. From the point of view of the "persons" inside, this will already be immeasurably *bigger*.


I prefer elementary arithmetic because it is believed, virtually, by all those who have been lucky enough to attend a good primary school. That is hardly the case for Java, Lisp, the combinators or quantum topology!


So what is a person?

In AUDA I opt for a minimalist conception. A first person is defined by its true beliefs. If it is a correct machine, it inherits a "theology" from the subtraction TARSKI (truth theory) minus GÖDEL (provability theory). That gives the 8 hypostases/universal person points of view. That theology is correct for all "correct Löbian numbers".


I limit myself to correct machines.
You may think that for such machines "true belief" = "belief". And you are, of course, correct.


But   (important "but"!) ...

... the machine is correct, and thus consistent, and so is prevented by Gödel from proving or believing its own correctness; so the machine neither knows nor believes that "true belief" = "belief", and the logic of the true beliefs of the machine is different from the machine's logic of beliefs.
All the 6 + 2 * infinity universal machine person points of view differentiate through that gap between truth and provability.
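
A loose toy analogy of that "Theaetetical" move, in code (hypothetical names only; the real construction lives in provability logic, not in a lookup of a truth table): knowledge is defined as belief conjoined with truth, and for a correct machine the two coincide extensionally, although the coincidence is only visible from outside the machine, where a truth predicate is available.

# Toy analogy (hypothetical names): "knowledge = true belief" for a correct machine.

facts = {"0 + 1 = 1": True, "1 + 1 = 3": False}   # the external truth table

beliefs = {"0 + 1 = 1"}    # a correct machine believes only true propositions

def believes(p):
    return p in beliefs

def true(p):
    return facts[p]

def knows(p):
    # Theaetetus: belief conjoined with truth.
    return believes(p) and true(p)

# Extensionally, knows(p) == believes(p) for this correct machine...
print(all(knows(p) == believes(p) for p in facts))   # True
# ...but the check uses the external 'facts' table (a truth predicate),
# which, by Tarski and Gödel, the machine cannot define or prove for itself.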


For the ontology, we need no more than Sigma_1 completeness. Like  
Robinson Arithmetic.


For the epistemology (and the unravelling of the internal views), we  
need no more than "provable Sigma_1 completeness". Like Peano  
Arithmetic, Zermelo-Fraenkel, etc.


It is a pure first person I am talking about here, like a soul before the fall. It is before getting itself entangled with deep probable universal histories, with many universal beings capable of sharing the most normal (probable) "video games".


No machine can represent its "first person notion". Correct machines will compulsively NOT believe they are machines. Yet they will understand (prove) that if they are correct machines, it is normal, even necessary, th

Re: UDA query

2010-01-14 Thread Brent Meeker

Bruno Marchal wrote:


On 14 Jan 2010, at 09:01, Brent Meeker wrote:

I think there may be different kinds of consciousness, so a 
look-up-table (like Searle's Chinese Room) may be conscious but in a 
different way.


In a way distinguishable by the person? From its own (first person) 
perspective?


Also,

I don't think it makes sense to attribute consciousness to anything 
which "does" the computation, but only to the (abstract or immaterial) 
person supervening on the logical and arithmetical relations defining 
those computations (infinitely many exist).


Persons need only be self-referentially correct relative to their most 
probable computations.


I don't understand what "self-referentially correct" means nor in what
sense computations can be "theirs"?



Persons are conscious, not machines, nor computations, nor states, nor 
numbers, except in a metaphorical way.


So you take "person" as well as arithmetic to be fundamental.

Brent


A universal machine, or number, plausibly inherits a notion of first 
person when the machine can, qua computatio, infer its own ignorance 
(the G-G* gap), that is, when the machine is Löbian (like Peano 
Arithmetic). Then a physics can be associated too (8 "hypostases" 
appear, or 6 + 2 * infinity, actually).


Is Peano Arithmetic conscious? No! That would be the same mistake. But 
by Löbianity it defines a "natural" (Theaetetical) first person view, 
and its physics and metaphysics (or else it is a metaphor or a 
shortcut).


Bruno

http://iridia.ulb.ac.be/~marchal/








Re: UDA query

2010-01-14 Thread Bruno Marchal


On 14 Jan 2010, at 09:01, Brent Meeker wrote:

I think there may be different kinds of consciousness, so a 
look-up-table (like Searle's Chinese Room) may be conscious but in a 
different way.


In a way distinguishable by the person? From its own (first person) 
perspective?


Also,

I don't think it makes sense to attribute consciousness to anything 
which "does" the computation, but only to the (abstract or immaterial) 
person supervening on the logical and arithmetical relations defining 
those computations (infinitely many exist).


Persons need only be self-referentially correct relative to their most 
probable computations.


Persons are conscious, not machines, nor computations, nor states, nor 
numbers, except in a metaphorical way.


A universal machine, or number, plausibly inherits a notion of first 
person when the machine can, qua computatio, infer its own ignorance 
(the G-G* gap), that is, when the machine is Löbian (like Peano Arithmetic). 
Then a physics can be associated too (8 "hypostases" appear, or 6 + 2 
* infinity, actually).


Is Peano Arithmetic conscious? No! That would be the same mistake. But 
by Löbianity it defines a "natural" (Theaetetical) first person view, 
and its physics and metaphysics (or else it is a metaphor or a 
shortcut).


Bruno

http://iridia.ulb.ac.be/~marchal/





Re: UDA query

2010-01-14 Thread Stathis Papaioannou
2010/1/14 Brent Meeker :

>> I think it would be enough for the AI to reproduce the I/O of the
>> whole brain in aggregate. That would involve computing a function
>> controlling each efferent nerve, accepting as data input from the
>> afferent nerves. The behaviour would have to be the same as the brain
>> for all possible inputs, otherwise the AI might fail the Turing test.
>>
>
> To have the same output for all possible inputs is a very strong condition
> and seems to go beyond functionalism.  Suppose (as seems likely) there are
> inputs that "crash" the brain (e.g. induce epileptic seizures).  Would the
> AI brain be less conscious because it didn't experience these seizures?
>  Passing or failing the Turing test is a rather crude measure - after all,
> the interlocutor might simply guess right.

It would depend on whether the aim was to reproduce a particular
person (which you would want if you were thinking of replacing your
own brain) or just a generic human-level intelligence. If we want to
reproduce a particular person, the I/O behaviour would be allowed to
vary as much as your behaviour might vary from day to day without
those who know you being alarmed. If we want to make a generic AI, the
allowed variation could be greater.

>> It's not clear if the modelling would have to be at the molecular,
>> cellular or some higher level in order to achieve this, but in any
>> case I expect that there would be many different programs that could
>> do the job even if the hardware and operating system are kept the
>> same. It could therefore be a case of multiple computations leading to
>> the same experience. Pinning down a thought to a location in time and
>> space would pose no more of a problem for the AI than for the brain.
>>
>
> Then among those AI brains with different computations but the same I/O, you
> would have to find the same OMs constituted by different sequences of
> computational steps.
>
> My intuition is that having the same O for "most" (some very large set of)
> I would be enough to instantiate consciousness - just not the same
> consciousness.  I think there may be different kinds of consciousness, so a
> look-up-table (like Searle's Chinese Room) may be conscious but in a
> different way.

Yes.

-- 
Stathis Papaioannou




Re: UDA query

2010-01-14 Thread Brent Meeker

Stathis Papaioannou wrote:

2010/1/14 Brent Meeker :

  

Yes, I can see that.  By aggregating the brain into one computation do you
mean replacing it with a synchronous digital computer whose program would
not only reproduce the I/O of individual neurons, but also the instantaneous
state of signals which were traveling between them (since presumably timing
is important to the neurons' function)?  Or do you mean replacing it with a
synchronous digital computer which produces the same I/O at the afferent and
efferent nerves?  In the former case, it seems that "thoughts" would be
distributed over many, not necessarily sequential, computational steps.  In
the latter it would not be possible to map the computational steps to
brain states at all since they are only required to be the same at the I/O,
and hence difficult to say what constituted a thought.
Given these two possible models of functionalism, I'm not clear on what
"the same computation" means.  Are these two doing the same computation
because they have the same I/O?  Over what range of I does the O have to be
the same - all possible?  all actually experienced?  those experienced in
the last 2 minutes?



I think it would be enough for the AI to reproduce the I/O of the
whole brain in aggregate. That would involve computing a function
controlling each efferent nerve, accepting as data input from the
afferent nerves. The behaviour would have to be the same as the brain
for all possible inputs, otherwise the AI might fail the Turing test.
  


To have the same output for all possible inputs is a very strong 
condition and seems to go beyond functionalism.  Suppose (as seems 
likely) there are inputs that "crash" the brain (e.g. induce epileptic 
seizures).  Would the AI brain be less conscious because it didn't 
experience these seizures?  Passing or failing the Turing test is a 
rather crude measure - after all, the interlocutor might simply guess right.



It's not clear if the modelling would have to be at the molecular,
cellular or some higher level in order to achieve this, but in any
case I expect that there would be many different programs that could
do the job even if the hardware and operating system are kept the
same. It could therefore be a case of multiple computations leading to
the same experience. Pinning down a thought to a location in time and
space would pose no more of a problem for the AI than for the brain.
  
Then among those AI brains with different computations but the same I/O, 
you would have to find the same OMs constituted by different sequences 
of computational steps.


My intuition is that having the same O for "most" (some very large set 
of) I would be enough to instantiate consciousness - just not the same 
consciousness.  I think there may be different kinds of consciousness, 
so a look-up-table (like Searle's Chinese Room) may be conscious but in 
a different way.
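
The contrast can be put crudely in code (an illustrative sketch only): two realisations with identical output over some finite range of inputs, one of which computes the answer while the other merely looks it up.

# Illustrative only: same I/O over a finite range, very different internals.

def computed_double(n):
    return 2 * n                                  # computes the answer

lookup_double = {n: 2 * n for n in range(1000)}   # Searle-style look-up table

inputs = range(1000)
print(all(computed_double(n) == lookup_double[n] for n in inputs))  # True
# Behaviourally indistinguishable over this range; the open question is
# whether the look-up version could be conscious "in a different way".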


Brent




Re: UDA query

2010-01-13 Thread Stathis Papaioannou
2010/1/14 Brent Meeker :

> Yes, I can see that.  By aggregating the brain into one computation do you
> mean replacing it with a synchronous digital computer whose program would
> not only reproduce the I/O of individual neurons, but also the instantaneous
> state of signals which were traveling between them (since presumably timing
> is important to the neurons' function)?  Or do you mean replacing it with a
> synchronous digital computer which produces the same I/O at the afferent and
> efferent nerves?  In the former case, it seems that "thoughts" would be
> distributed over many, not necessarily sequential, computational steps.  In
> the latter it would not be possible to map the computational steps to
> brain states at all since they are only required to be the same at the I/O,
> and hence difficult to say what constituted a thought.
> Given these two possible models of functionalism, I'm not clear on what
> "the same computation" means.  Are these two doing the same computation
> because they have the same I/O?  Over what range of I does the O have to be
> the same - all possible?  all actually experienced?  those experienced in
> the last 2 minutes?

I think it would be enough for the AI to reproduce the I/O of the
whole brain in aggregate. That would involve computing a function
controlling each efferent nerve, accepting as data input from the
afferent nerves. The behaviour would have to be the same as the brain
for all possible inputs, otherwise the AI might fail the Turing test.
It's not clear if the modelling would have to be at the molecular,
cellular or some higher level in order to achieve this, but in any
case I expect that there would be many different programs that could
do the job even if the hardware and operating system are kept the
same. It could therefore be a case of multiple computations leading to
the same experience. Pinning down a thought to a location in time and
space would pose no more of a problem for the AI than for the brain.
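
Schematically (a hypothetical sketch only, with placeholder dynamics; nothing here is a claim about how the real mapping would be computed), the requirement is just one big function from afferent inputs and internal state to efferent outputs:

# Hypothetical sketch of "reproducing the I/O of the whole brain in aggregate".
# The internals of step() are left open: molecular-level, cellular-level, or
# something coarser, as long as the observable I/O behaviour is the same.

class WholeBrainModel:
    def __init__(self, initial_state):
        self.state = initial_state   # whatever internal representation the model uses

    def step(self, afferent_inputs):
        """Map this tick's afferent nerve signals to efferent nerve signals,
        updating internal state. Many different programs could implement
        this, provided the I/O behaviour matches."""
        self.state = self.state + sum(afferent_inputs)   # placeholder dynamics
        return [self.state % 2 for _ in range(3)]        # placeholder efferent pattern

brain = WholeBrainModel(initial_state=0)
print(brain.step([1, 0, 1]))   # only the I/O, not the internals, is constrained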

-- 
Stathis Papaioannou




Re: UDA query

2010-01-13 Thread Brent Meeker

Stathis Papaioannou wrote:

2010/1/13 Brent Meeker :

  

You're asserting that neuron I/O replication is the "appropriate level" to
make "brain behavior" the same; and I tend to agree that would be sufficient
(though perhaps not necessary).  But that's preserving a particular
algorithm; one more specific than the Platonic computation of its
equivalence class.



Any algorithm would do, implemented on any hardware, as long as it did
the job. Ten engineers working independently on the problem would
probably come up with ten different solutions, even if they worked
with the same theoretical model of a neuron.

  

I suppose a Turing machine could perform the same
computation, but it would perform it very differently.  And I wonder how the
Turing machine would manage perception.  The organs of perception would have
their responses digitized into bit strings and these would be written to the
TM on different tapes?  I think this illustrates my point that, while
preservation of consciousness under the digital neuron substitution seems
plausible, there is still another leap in substituting an abstract
computation for the digital neurons.



There is a leap involved in eliminating the hardware but the first
step is establishing computationalism: that in principle you could
replace the brain with a digital computer and preserve the mind. If
the artificial neurons work as described then doesn't that prove this?

The level of the neuron is an arbitrary one. We could instead consider
replacing volumes of brain tissue with a computer-controlled device
that replicates the I/O behaviour at the surface of the volume, where
it interfaces with normal brain tissue, and expand the size of the
volume until the whole brain is replaced. One linear processor could
then do all the work, and it wouldn't matter what processor it was (as
long as it was fast enough and had enough memory), what language the
program was written in, or even what program it was. Multiple
realisability is a basic feature of functionalism.

  

Also, such an AI brain would not permit slicing the computations into
arbitrarily short time periods because there is communication time involved
and neurons run asynchronously.



The whole brain could be aggregated into one computation, and a
virtual environment could run as a subroutine.


  
Yes, I can see that.  By aggregating the brain into one computation do 
you mean replacing it with a synchronous digital computer whose program 
would not only reproduce the I/O of individual neurons, but also the 
instantaneous state of signals which were traveling between them (since 
presumably timing is important to the neurons' function)?  Or do you mean 
replacing it with a synchronous digital computer which produces the same 
I/O at the afferent and efferent nerves?  In the former case, it seems 
that "thoughts" would be distributed over many, not necessarily 
sequential, computational steps.  In the latter it would not be possible 
to map the computational steps to brain states at all since they are 
only required to be the same at the I/O, and hence difficult to say what 
constituted a thought. 

Given these two possible models of functionalism, I'm not clear on 
what "the same computation" means.  Are these two doing the same 
computation because they have the same I/O?  Over what range of I does 
the O have to be the same - all possible?  all actually experienced?  
those experienced in the last 2 minutes?
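
The question can be restated as a test (a toy formulation, nothing more): whether two systems count as "the same computation" under an I/O criterion is always relative to the set of inputs quantified over.

# Toy formulation of "over what range of I must the O agree?"

def io_equivalent(system_a, system_b, input_set):
    """I/O equivalence is relative to the input set we quantify over."""
    return all(system_a(i) == system_b(i) for i in input_set)

f = lambda x: 2 * x
g = lambda x: x + x if x < 10**6 else 0   # agrees with f only on "small" inputs

print(io_equivalent(f, g, range(100)))    # True: same over the inputs actually tried
print(io_equivalent(f, g, [10**6]))       # False: differs over "all possible" inputs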


Brent




Re: UDA query

2010-01-13 Thread Stathis Papaioannou
2010/1/13 Brent Meeker :

> You're asserting that neuron I/O replication is the "appropriate level" to
> make "brain behavior" the same; and I tend to agree that would be sufficient
> (though perhaps not necessary).  But that's preserving a particular
> algorithm; one more specific than the Platonic computation of its
> equivalence class.

Any algorithm would do, implemented on any hardware, as long as it did
the job. Ten engineers working independently on the problem would
probably come up with ten different solutions, even if they worked
with the same theoretical model of a neuron.

> I suppose a Turing machine could perform the same
> computation, but it would perform it very differently.  And I wonder how the
> Turing machine would manage perception.  The organs of perception would have
> their responses digitized into bit strings and these would be written to the
> TM on different tapes?  I think this illustrates my point that, while
> preservation of consciousness under the digital neuron substitution seems
> plausible, there is still another leap in substituting an abstract
> computation for the digital neurons.

There is a leap involved in eliminating the hardware but the first
step is establishing computationalism: that in principle you could
replace the brain with a digital computer and preserve the mind. If
the artificial neurons work as described then doesn't that prove this?

The level of the neuron is an arbitrary one. We could instead consider
replacing volumes of brain tissue with a computer-controlled device
that replicates the I/O behaviour at the surface of the volume, where
it interfaces with normal brain tissue, and expand the size of the
volume until the whole brain is replaced. One linear processor could
then do all the work, and it wouldn't matter what processor it was (as
long as it was fast enough and had enough memory), what language the
program was written in, or even what program it was. Multiple
realisability is a basic feature of functionalism.
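
A trivial sketch of that multiple-realisability point (illustrative only): one and the same input-output function realised by structurally quite different programs.

# Multiple realisability in miniature: three realisations of one function.

def surface_response_loop(signals):
    total = 0
    for s in signals:                          # explicit loop
        total += s
    return total

def surface_response_builtin(signals):
    return sum(signals)                        # library call

def surface_response_recursive(signals):
    if not signals:                            # recursion
        return 0
    return signals[0] + surface_response_recursive(signals[1:])

signals = [3, 1, 4, 1, 5]
print(surface_response_loop(signals)
      == surface_response_builtin(signals)
      == surface_response_recursive(signals))  # True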

> Also, such an AI brain would not permit slicing the computations into
> arbitrarily short time periods because there is communication time involved
> and neurons run asynchronously.

The whole brain could be aggregated into one computation, and a
virtual environment could run as a subroutine.


-- 
Stathis Papaioannou




Re: UDA query

2010-01-12 Thread m.a.
Interesting how the repeated copying and recopying of emails ends up 
resembling the typography of modern poetry. m.a.




[Quoted text from the earlier Brent Meeker / Stathis Papaioannou exchange on neuron replacement, mangled by repeated copying and re-wrapping; it appears intact in the messages below.]

Re: UDA query

2010-01-12 Thread Brent Meeker

Quentin Anciaux wrote:



[Deeply nested quote headers, mangled by repeated copying; no new content survives in this message.]

Re: UDA query

2010-01-12 Thread Quentin Anciaux
2010/1/12 Brent Meeker 

[Deeply nested quoted text from the earlier exchange, mangled by repeated copying; the message is truncated before any new content. The quoted material appears intact in the messages below.]

Re: UDA query

2010-01-12 Thread Brent Meeker

Quentin Anciaux wrote:



[Deeply nested quoted text from the earlier exchange, mangled by repeated copying; the message is truncated before any new content. The quoted material appears intact in the messages below.]

Re: UDA query

2010-01-12 Thread Quentin Anciaux
2010/1/12 Brent Meeker 

> Quentin Anciaux wrote:
>
>>
>>
>> 2010/1/12 Brent Meeker > meeke...@dslextreme.com>>
>>
>>
>>Stathis Papaioannou wrote:
>>
>>2010/1/12 Brent Meeker >>:
>>
>>
>>
>>I know.  I'm trying to see what exactly is being assumed
>>about the
>>computation being "the same".  Is it the same Platonic
>>algorithm?   Is it
>>one that has the same steps as described in FORTRAN, but
>>not those in LISP?
>> Is it just one that has the same input-output?  I think
>>these are questions
>>that have been bypassed in the "yes doctor" scenario.
>> Saying "yes" to the
>>doctor seems unproblematic when you think of replacing a
>>few neurons with
>>artificial ones - all you care about is the input-output.
>> But then when you
>>jump to replacing a whole brain maybe you care about the
>>FORTRAN/LISP
>>differences. Yet on this list there seems to be an
>>assumption that you can
>>just jump to the Platonic algorithm or even a Platonic
>>computation that's
>>independent of the algorithm.   Bruno pushes all this
>>aside by referring to
>>"at the appropriate level" and by doing all possible
>>algorithms.  But I'm
>>more interested in the question of what would I have to do
>>to make a
>>conscious AI.  Also, it is the assumption of a Platonic
>>computation that
>>allows one to slice it discretely into OMs.
>>
>>
>>Start by replacing neurons with artificial neurons which are
>>driven by
>>a computer program and whose defining characteristic is that
>>they copy
>>the I/O behaviour of biological neurons. The program has to
>>model the
>>internal workings of a neuron down to a certain level. It may
>>be that
>>the position and configuration of every molecule needs to be
>>modelled,
>>or it may be that shortcuts such as a single parameter for the
>>permeability of ion channels in the cell membrane make no
>>difference
>>to the final result. In any case, there are many possible programs
>>even if the same physical model of a neuron is used, and the same
>>basic program can be written in any language and implemented
>>on any
>>computer: all that matters is that the artificial neuron works
>>properly. (As an aside, we don't need to worry about whether these
>>artificial neurons are zombies, since that would lead to absurd
>>conclusions about the nature of consciousness.) From the
>>single neuron
>>we can progress to replacing the whole brain, the end result
>>being a
>>computer program interacting with the outside world through
>>sensors
>>and effectors. The program can be implemented in any way - any
>>language, any hardware - and the consciousness of the subject will
>>remain the same as long as the brain behaviour remains the same.
>>
>>
>>
>>You're asserting that neuron I/O replication is the "appropriate
>>level" to make "brain behavior" the same; and I tend to agree that
>>would be sufficient (though perhaps not necessary).  But that's
>>preserving a particular algorithm; one more specific than the
>>Platonic computation of its equivalence class.  I suppose a Turing
>>machine could perform the same computation, but it would perform
>>it very differently.  And I wonder how the Turing machine would
>>manage perception.  The organs of perception would have their
>>responses digitized into bit strings and these would be written to
>>the TM on different tapes?  I think this illustrates my point
>>that, while preservation of consciousness under the digital neuron
>>substitution seems plausible, there is still another leap in
>>substituting an abstract computation for the digital neurons.
>>
>>Also, such an AI brain would not permit slicing the computations
>>into arbitrarily short time periods because there is communication
>>time involved and neurons run asynchronously.
>>
>>
>> Yes you can, freeze the computation, dump memory... then load memory back,
>> and defreeze. If the time inside the computation is an internal feature (a
>> counter inside the program), the AI associated to the computation cannot
>> notice anything; if, on the other hand, the time inside the computation is
>> an input parameter from somewhere external, then it can notice... but I can
>> always encapsulate the whole thing and feed that external time from another
>> program or whatever.
>>
> That assumes that the AI brain is running synchronously, i.e. at a clock
> rate small

Re: UDA query

2010-01-12 Thread Brent Meeker

Quentin Anciaux wrote:



2010/1/12 Brent Meeker >


Stathis Papaioannou wrote:

2010/1/12 Brent Meeker mailto:meeke...@dslextreme.com>>:

 


I know.  I'm trying to see what exactly is being assumed
about the
computation being "the same".  Is it the same Platonic
algorithm?   Is it
one that has the same steps as described in FORTRAN, but
not those in LISP?
 Is it just one that has the same input-output?  I think
these are questions
that have been bypassed in the "yes doctor" scenario.
 Saying "yes" to the
doctor seems unproblematic when you think of replacing a
few neurons with
artificial ones - all you care about is the input-output.
 But then when you
jump to replacing a whole brain maybe you care about the
FORTRAN/LISP
differences. Yet on this list there seems to be an
assumption that you can
just jump to the Platonic algorithm or even a Platonic
computation that's
independent of the algorithm.   Bruno pushes all this
aside by referring to
"at the appropriate level" and by doing all possible
algorithms.  But I'm
more interested in the question of what would I have to do
to make a
conscious AI.  Also, it is the assumption of a Platonic
computation that
allows one to slice it discretely into OMs.
   



Start by replacing neurons with artificial neurons which are
driven by
a computer program and whose defining characteristic is that
they copy
the I/O behaviour of biological neurons. The program has to
model the
internal workings of a neuron down to a certain level. It may
be that
the position and configuration of every molecule needs to be
modelled,
or it may be that shortcuts such as a single parameter for the
permeability of ion channels in the cell membrane make no
difference
to the final result. In any case, there are many possible programs
even if the same physical model of a neuron is used, and the same
basic program can be written in any language and implemented
on any
computer: all that matters is that the artificial neuron works
properly. (As an aside, we don't need to worry about whether these
artificial neurons are zombies, since that would lead to absurd
conclusions about the nature of consciousness.) From the
single neuron
we can progress to replacing the whole brain, the end result
being a
computer program interacting with the outside world through
sensors
and effectors. The program can be implemented in any way - any
language, any hardware - and the consciousness of the subject will
remain the same as long as the brain behaviour remains the same.


 


You're asserting that neuron I/O replication is the "appropriate
level" to make "brain behavior" the same; and I tend to agree that
would be sufficient (though perhaps not necessary).  But that's
preserving a particular algorithm; one more specific than the
Platonic computation of its equivalence class.  I suppose a Turing
machine could perform the same computation, but it would perform
it very differently.  And I wonder how the Turing machine would
manage perception.  The organs of perception would have their
responses digitized into bit strings and these would be written to
the TM on different tapes?  I think this illustrates my point
that, while preservation of consciousness under the digital neuron
substitution seems plausible, there is still another leap in
substituting an abstract computation for the digital neurons.

Also, such an AI brain would not permit slicing the computations
into arbitrarily short time periods because there is communication
time involved and neurons run asynchronously.


Yes you can, freeze the computation, dump memory... then load memory 
back, and defreeze. If the time inside the computation is an internal 
feature (a counter inside the program), the AI associated to the 
computation cannot notice anything; if, on the other hand, the time 
inside the computation is an input parameter from somewhere external, 
then it can notice... but I can always encapsulate the whole thing and 
feed that external time from another program or whatever.
That assumes that the AI brain is running synchronously, i.e. at a clock 
rate small compared to c/R where R is the radius of the brain.  But I 
think the real brain runs asynchronously, so if the AI brain must do the 
simulation at a lower level to take account of transmission times,

Re: UDA query

2010-01-12 Thread Quentin Anciaux
2010/1/12 Brent Meeker 

> Stathis Papaioannou wrote:
>
>> 2010/1/12 Brent Meeker :
>>
>>
>>
>>> I know.  I'm trying to see what exactly is being assumed about the
>>> computation being "the same".  Is it the same Platonic algorithm?   Is it
>>> one that has the same steps as described in FORTRAN, but not those in
>>> LISP?
>>>  Is it just one that has the same input-output?  I think these are
>>> questions
>>> that have been bypassed in the "yes doctor" scenario.  Saying "yes" to
>>> the
>>> doctor seems unproblematic when you think of replacing a few neurons with
>>> artificial ones - all you care about is the input-output.  But then when
>>> you
>>> jump to replacing a whole brain maybe you care about the FORTRAN/LISP
>>> differences. Yet on this list there seems to be an assumption that you
>>> can
>>> just jump to the Platonic algorithm or even a Platonic computation that's
>>> independent of the algorithm.   Bruno pushes all this aside by referring
>>> to
>>> "at the appropriate level" and by doing all possible algorithms.  But I'm
>>> more interested in the question of what would I have to do to make a
>>> conscious AI.  Also, it is the assumption of a Platonic computation that
>>> allows one to slice it discretely into OMs.
>>>
>>>
>>
>> Start by replacing neurons with artificial neurons which are driven by
>> a computer program and whose defining characteristic is that they copy
>> the I/O behaviour of biological neurons. The program has to model the
>> internal workings of a neuron down to a certain level. It may be that
>> the position and configuration of every molecule needs to be modelled,
>> or it may be that shortcuts such as a single parameter for the
>> permeability of ion channels in the cell membrane make no difference
>> to the final result. In any case, there are many possible programs
>> even if the same physical model of a neuron is used, and the same
>> basic program can be written in any language and implemented on any
>> computer: all that matters is that the artificial neuron works
>> properly. (As an aside, we don't need to worry about whether these
>> artificial neurons are zombies, since that would lead to absurd
>> conclusions about the nature of consciousness.) From the single neuron
>> we can progress to replacing the whole brain, the end result being a
>> computer program interacting with the outside world through sensors
>> and effectors. The program can be implemented in any way - any
>> language, any hardware - and the consciousness of the subject will
>> remain the same as long as the brain behaviour remains the same.
>>
>>
>>
>>
> You're asserting that neuron I/O replication is the "appropriate level" to
> make "brain behavior" the same; and I tend to agree that would be sufficient
> (though perhaps not necessary).  But that's preserving a particular
> algorithm; one more specific than the Platonic computation of its
> equivalence class.  I suppose a Turing machine could perform the same
> computation, but it would perform it very differently.  And I wonder how the
> Turing machine would manage perception.  The organs of perception would have
> their responses digitized into bit strings and these would be written to the
> TM on different tapes?  I think this illustrates my point that, while
> preservation of consciousness under the digital neuron substitution seems
> plausible, there is still another leap in substituting an abstract
> computation for the digital neurons.
>
> Also, such an AI brain would not permit slicing the computations into
> arbitrarily short time periods because there is communication time involved
> and neurons run asynchronously.
>

Yes you can, freeze the computation, dump memory... then load memory back,
and defreeze. If the time inside the computation is an internal feature (a
counter inside the program), the AI associated to the computation cannot
notice anything; if, on the other hand, the time inside the computation is
an input parameter from somewhere external, then it can notice... but I can
always encapsulate the whole thing and feed that external time from another
program or whatever.

The fact that you can disrupt a computation and restart it with some
different parameters doesn't mean you can't restart it with *exactly* the
same parameters as when you froze it.
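
What is being described is ordinary checkpointing, sketched here as a toy (illustrative only, not a claim about real brain emulation): freeze, dump the state, reload exactly the same state, and the computation proceeds as if it had never been interrupted.

# Toy checkpointing sketch: dump the state, restore it, continue identically.
import copy

class Computation:
    def __init__(self):
        self.counter = 0          # "internal time" is just part of the state

    def step(self):
        self.counter += 1
        return self.counter

uninterrupted = Computation()
interrupted = Computation()

for _ in range(5):
    uninterrupted.step()
    interrupted.step()

snapshot = copy.deepcopy(interrupted)     # "freeze" and dump memory
interrupted = copy.deepcopy(snapshot)     # load memory back and "defreeze"

for _ in range(5):
    uninterrupted.step()
    interrupted.step()

print(uninterrupted.counter == interrupted.counter)   # True: nothing to notice from inside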

Quentin



>
> Brent


-- 
All those moments will be lost in time, like tears in rain.
Re: UDA query

2010-01-12 Thread Brent Meeker

Stathis Papaioannou wrote:

2010/1/12 Brent Meeker :

  

I know.  I'm trying to see what exactly is being assumed about the
computation being "the same".  Is it the same Platonic algorithm?   Is it
one that has the same steps as described in FORTRAN, but not those in LISP?
 Is it just one that has the same input-output?  I think these are questions
that have been bypassed in the "yes doctor" scenario.  Saying "yes" to the
doctor seems unproblematic when you think of replacing a few neurons with
artificial ones - all you care about is the input-output.  But then when you
jump to replacing a whole brain maybe you care about the FORTRAN/LISP
differences. Yet on this list there seems to be an assumption that you can
just jump to the Platonic algorithm or even a Platonic computation that's
independent of the algorithm.   Bruno pushes all this aside by referring to
"at the appropriate level" and by doing all possible algorithms.  But I'm
more interested in the question of what would I have to do to make a
conscious AI.  Also, it is the assumption of a Platonic computation that
allows one to slice it discretely into OMs.



Start by replacing neurons with artificial neurons which are driven by
a computer program and whose defining characteristic is that they copy
the I/O behaviour of biological neurons. The program has to model the
internal workings of a neuron down to a certain level. It may be that
the position and configuration of every molecule needs to be modelled,
or it may be that shortcuts such as a single parameter for the
permeability of ion channels in the cell membrane make no difference
to the final result. In any case, there are many possible programs
even if the same physical model of a neuron is used, and the same
basic program can be written in any language and implemented on any
computer: all that matters is that the artificial neuron works
properly. (As an aside, we don't need to worry about whether these
artificial neurons are zombies, since that would lead to absurd
conclusions about the nature of consciousness.) From the single neuron
we can progress to replacing the whole brain, the end result being a
computer program interacting with the outside world through sensors
and effectors. The program can be implemented in any way - any
language, any hardware - and the consciousness of the subject will
remain the same as long as the brain behaviour remains the same.


  
You're asserting that neuron I/O replication is the "appropriate level" 
to make "brain behavior" the same; and I tend to agree that would be 
sufficient (though perhaps not necessary).  But that's preserving a 
particular algorithm; one more specific than the Platonic computation of 
its equivalence class.  I suppose a Turing machine could perform the 
same computation, but it would perform it very differently.  And I 
wonder how the Turing machine would manage perception.  The organs of 
perception would have their responses digitized into bit strings and 
these would be written to the TM on different tapes?  I think this 
illustrates my point that, while preservation of consciousness under the 
digital neuron substitution seems plausible, there is still another leap 
in substituting an abstract computation for the digital neurons.


Also, such an AI brain would not permit slicing the computations into 
arbitrarily short time periods because there is communication time 
involved and neurons run asynchronously.


Brent




Re: UDA query

2010-01-12 Thread Stathis Papaioannou
2010/1/12 Brent Meeker :

> I know.  I'm trying to see what exactly is being assumed about the
> computation being "the same".  Is it the same Platonic algorithm?   Is it
> one that has the same steps as described in FORTRAN, but not those in LISP?
>  Is it just one that has the same input-output?  I think these are questions
> that have been bypassed in the "yes doctor" scenario.  Saying "yes" to the
> doctor seems unproblematic when you think of replacing a few neurons with
> artificial ones - all you care about is the input-output.  But then when you
> jump to replacing a whole brain maybe you care about the FORTRAN/LISP
> differences. Yet on this list there seems to be an assumption that you can
> just jump to the Platonic algorithm or even a Platonic computation that's
> independent of the algorithm.   Bruno pushes all this aside by referring to
> "at the appropriate level" and by doing all possible algorithms.  But I'm
> more interested in the question of what would I have to do to make a
> conscious AI.  Also, it is the assumption of a Platonic computation that
> allows one to slice it discretely into OMs.

Start by replacing neurons with artificial neurons which are driven by
a computer program and whose defining characteristic is that they copy
the I/O behaviour of biological neurons. The program has to model the
internal workings of a neuron down to a certain level. It may be that
the position and configuration of every molecule needs to be modelled,
or it may be that shortcuts such as a single parameter for the
permeability of ion channels in the cell membrane make no difference
to the final result. In any case, there are many possible programs
even if the same physical model of a neuron is used, and the same
basic program can be written in any language and implemented on any
computer: all that matters is that the artificial neuron works
properly. (As an aside, we don't need to worry about whether these
artificial neurons are zombies, since that would lead to absurd
conclusions about the nature of consciousness.) From the single neuron
we can progress to replacing the whole brain, the end result being a
computer program interacting with the outside world through sensors
and effectors. The program can be implemented in any way - any
language, any hardware - and the consciousness of the subject will
remain the same as long as the brain behaviour remains the same.
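
As a schematic sketch of that replacement scenario (all details hypothetical; the only real constraint is that the recorded I/O behaviour of the original neuron is reproduced):

# Hypothetical sketch: an artificial neuron defined solely by copying the
# I/O behaviour of the biological neuron it replaces.

class ArtificialNeuron:
    def __init__(self, threshold, weights):
        # The internal model could be molecular-level or a crude shortcut
        # (e.g. one parameter per ion channel); here, a bare threshold unit.
        self.threshold = threshold
        self.weights = weights

    def fire(self, inputs):
        """Return 1 (spike) or 0, given signals from upstream neurons."""
        activation = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if activation >= self.threshold else 0

# Replace one neuron, check it reproduces the recorded I/O of the original,
# then (in principle) repeat until the whole brain is replaced.
recorded_io = {(1, 0, 1): 1, (0, 0, 1): 0, (1, 1, 1): 1}
replacement = ArtificialNeuron(threshold=1.0, weights=[0.6, 0.1, 0.5])
print(all(replacement.fire(i) == o for i, o in recorded_io.items()))  # True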


-- 
Stathis Papaioannou




Re: UDA query

2010-01-11 Thread Brent Meeker

Quentin Anciaux wrote:



2010/1/11 Brent Meeker >


Stathis Papaioannou wrote:

2010/1/11 Brent Meeker mailto:meeke...@dslextreme.com>>:

 


But aren't you assuming that consciousness is produced by
the abstract
Platonic computation - rather than by the actual physical
process (which is
not the same) - in other words assuming the thing being
argued?
   



No, I'm at this point assuming only that consciousness is
produced by
the physical process. We can assume for simplicity that the two
machines M1 and M2 have similar architecture and similar operating
systems. Once the program is loaded into M2 from the disc, S2
proceeds
exactly the same as it would have had the computation been
allowed to
continue running on M1. Therefore, at least after the first few
milliseconds, the subjective content of S2 must be the same as it
would have been on the one machine. Could the subjective
content be
different at the transition between S1 and S2 if the
computation is
split up? If there is a subjective difference it won't be
something
the subject can notice because, later in the course of S2, he
can have
no memory of it.


But if you're only assuming that consciousness is produced by the
physical process then the process of downloading and uploading the
microstates and shifting the data into registers in the CPU and
memory could produce a difference in consciousness.  These are all
computations too, done by the operating system.  And why can't
there be memory of it in the sense that it effects some later
conscious state?  There are traces of the transfer process left on
the original computer, the disc, and the second computer. Some
subsequent program could retrieve these traces, as is done in
forensic cases.  If physical processes instantiate consciousness,
why shouldn't these make a difference.


Because those states are not part of the "computation" you sliced on 
the two computers.


They are not part of the abstract Platonic computation - but they are 
part of the physical computation.  So the question is, on which does 
consciousness depend?  My point is not to argue against Bruno's theory, 
but only to point out that saying "yes" to the doctor may not be the 
same as betting that consciousness=(Platonic) computation.  If the 
doctor proposed to replace your brain with an abstract computation you'd 
probably say "no".  If he proposed to replace you, your brain, and the 
whole world with which you will ever interact, i.e. a virtual you in a 
virtual world, would you say "yes"?  You'd probably wonder how he was 
going to compute that whole world with which you will interact.


And also assuming computationalism... Any implementation that does the 
job... effectively does the job. That means that while it's true there are 
additional steps in the two-computer case... it's just another 
*valid* implementation of the same computation as on one computer; 
assuming computationalism, that changes *nothing*, and arguing otherwise is 
denying computationalism (maybe it's right and computationalism is false).




It also can't be a difference that would disrupt the
completion of a task or thought that requires continuity of
consciousness spanning S1-S2, since again the subject cannot have any
evidence that such a disruption occurred.



Unless we have a theory of how consciousness is related to the
physical computation I don't think we can conclude that.  We
already know that subliminal perceptions can affect conscious
thoughts - so why not subliminal memories.

We don't, but what Bruno is showing is the consequences *if* we are 
Turing emulable.


Bruno gives himself the luxury of considering Turing emulability at 
arbitrarily low levels, including emulating your whole world.  In fact 
he sidesteps the doctor's problem above, by simply emulating all 
(arithmetically) possible worlds.
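
For concreteness, the dovetailing idea itself can be sketched in a few
lines of Python. This is only a toy: the "programs" below are trivial
stand-in generators rather than an enumeration of all Turing machines, and
the function names are invented.

def toy_program(n):
    """Stand-in for the n-th program: an endless computation."""
    state = 0
    while True:
        state += n + 1
        yield (n, state)

def dovetail(rounds):
    """In round k, start program k and advance programs 0..k one step each,
    so every step of every program is eventually executed."""
    running, trace = [], []
    for k in range(rounds):
        running.append(toy_program(k))
        for prog in running:
            trace.append(next(prog))
    return trace

if __name__ == "__main__":
    print(dovetail(4))

The scheduling is the whole point: every step of every program in the
enumeration is reached after finitely many rounds, so no program is ever
starved.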


But this thread started with my questioning the idea of discrete 
computational states, which are inherent in a Turing emulation, as being 
"thoughts" or "observer moments" and such moments having no order except 
something inherent based on their content.  I think that thoughts have 
duration in time and in computational steps and therefore can overlap 
with other thoughts and this can provide an ordering not dependent on 
the content of single computational states.  I find this more convincing 
because it doesn't rely on memories which in general are not part of 
consciousness.


As I said I'm interested in what it takes to make a conscious AI.  In 
terms of pure computational capacity I expect that producing human level 
consciousness may be within the capacity of

Re: UDA query

2010-01-11 Thread Brent Meeker

Stathis Papaioannou wrote:

2010/1/11 Brent Meeker :

  

No, I'm at this point assuming only that consciousness is produced by
the physical process. We can assume for simplicity that the two
machines M1 and M2 have similar architecture and similar operating
systems. Once the program is loaded into M2 from the disc, S2 proceeds
exactly the same as it would have had the computation been allowed to
continue running on M1. Therefore, at least after the first few
milliseconds, the subjective content of S2 must be the same as it
would have been on the one machine. Could the subjective content be
different at the transition between S1 and S2 if the computation is
split up? If there is a subjective difference it won't be something
the subject can notice because, later in the course of S2, he can have
no memory of it.
  

But if you're only assuming that consciousness is produced by the physical
process then the process of downloading and uploading the microstates and
shifting the data into registers in the CPU and memory could produce a
difference in consciousness.  These are all computations too, done by the
operating system.  And why can't there be memory of it in the sense that it
affects some later conscious state?  There are traces of the transfer
process left on the original computer, the disc, and the second computer.
Some subsequent program could retrieve these traces, as is done in forensic
cases.  If physical processes instantiate consciousness, why shouldn't these
make a difference?



It's taken for granted even by unsophisticated end users of computers
that the hardware won't affect the computation. A calculator
application wouldn't be much use if it gave a different answer
depending on what brand of machine it was running on. It wouldn't be
difficult to write a program that takes input from the environment,
including information on what sort of hardware it's running on, and in
that case there could be a difference between running S1 and S2 on the
one machine and running them on separate machines. A real time clock,
for example, would alert the subject to the fact that there had been a
discontinuity, and S2 would then *not* proceed the same in both cases.
However, this would not happen automatically: it would have to be
specifically programmed, and the hardware would have to be capable of
feeding the appropriate input into the program.

  

It also can't be a difference that would disrupt the
completion of a task or thought that requires continuity of
consciousness spanning S1-S2, since again the subject cannot have any
evidence that such a disruption occurred.

  

Unless we have a theory of how consciousness is related to the physical
computation I don't think we can conclude that.  We already know that
subliminal perceptions can affect conscious thoughts - so why not subliminal
memories.



The theory is that if the computation is the same then the
consciousness is the same, regardless of what hardware it is being
implemented on. 
I know.  I'm trying to see what exactly is being assumed about the 
computation being "the same".  Is it the same Platonic algorithm?   Is 
it one that has the same steps as described in FORTRAN, but not those in 
LISP?  Is it just one that has the same input-output?  I think these are 
questions that have been bypassed in the "yes doctor" scenario.  Saying 
"yes" to the doctor seems unproblematic when you think of replacing a 
few neurons with artificial ones - all you care about is the 
input-output.  But then when you jump to replacing a whole brain maybe 
you care about the FORTRAN/LISP differences. Yet on this list there 
seems to be an assumption that you can just jump to the Platonic 
algorithm or even a Platonic computation that's independent of the 
algorithm.   Bruno pushes all this aside by referring to "at the 
appropriate level" and by doing all possible algorithms.  But I'm more 
interested in the question of what would I have to do to make a 
conscious AI.  Also, it is the assumption of a Platonic computation that 
allows one to slice it discretely into OMs.



If you don't accept this then you don't accept
computationalism, 


I don't accept it.  I only entertain it.

Brent


for it is difficult to imagine a more drastic
hardware change than that involved in going from a biological brain to
a digital computer.


  






Re: UDA query

2010-01-11 Thread Quentin Anciaux
2010/1/11 Brent Meeker 

> Stathis Papaioannou wrote:
>
>> 2010/1/11 Brent Meeker :
>>
>>
>>
>>> But aren't you assuming that consciousness is produced by the abstract
>>> Platonic computation - rather than by the actual physical process (which
>>> is
>>> not the same) - in other words assuming the thing being argued?
>>>
>>>
>>
>> No, I'm at this point assuming only that consciousness is produced by
>> the physical process. We can assume for simplicity that the two
>> machines M1 and M2 have similar architecture and similar operating
>> systems. Once the program is loaded into M2 from the disc, S2 proceeds
>> exactly the same as it would have had the computation been allowed to
>> continue running on M1. Therefore, at least after the first few
>> milliseconds, the subjective content of S2 must be the same as it
>> would have been on the one machine. Could the subjective content be
>> different at the transition between S1 and S2 if the computation is
>> split up? If there is a subjective difference it won't be something
>> the subject can notice because, later in the course of S2, he can have
>> no memory of it.
>>
>
> But if you're only assuming that consciousness is produced by the physical
> process then the process of downloading and uploading the microstates and
> shifting the data into registers in the CPU and memory could produce a
> difference in consciousness.  These are all computations too, done by the
> operating system.  And why can't there be memory of it in the sense that it
> affects some later conscious state?  There are traces of the transfer
> process left on the original computer, the disc, and the second computer.
> Some subsequent program could retrieve these traces, as is done in forensic
> cases.  If physical processes instantiate consciousness, why shouldn't these
> make a difference?


Because those states are not part of the "computation" you sliced on the two
computers. And also assuming computationalism... Any implementation that
does the job... effectively does the job. That means that while it's true there
are additional steps in the two-computer case... it's just another *valid*
implementation of the same computation as on one computer; assuming
computationalism, that changes *nothing*, and arguing otherwise is denying
computationalism (maybe it's right and computationalism is false).

>
>
>  It also can't be a difference that would disrupt the
>> completion of a task or thought that requires continuity of
>> consciousness spanning S1-S2, since again the subject cannot have any
>> evidence that such a disruption occurred.
>>
>>
>
> Unless we have a theory of how consciousness is related to the physical
> computation I don't think we can conclude that.  We already know that
> subliminal perceptions can affect conscious thoughts - so why not subliminal
> memories.
>
We don't, but what Bruno is showing is the consequences *if* we are Turing
emulable. If we are Turing emulable, all your above objections are invalid
because your objections are pitched at a level way too high (they are completely
valid objections at the level you describe, but assuming comp, those are *still*
computed at a lower level and hence are *part* of a computation that
generates consciousness; see the generalized brain argument of Bruno).

Quentin



> Brent
>
>


-- 
All those moments will be lost in time, like tears in rain.



Re: UDA query

2010-01-10 Thread Stathis Papaioannou
2010/1/11 Brent Meeker :

>> No, I'm at this point assuming only that consciousness is produced by
>> the physical process. We can assume for simplicity that the two
>> machines M1 and M2 have similar architecture and similar operating
>> systems. Once the program is loaded into M2 from the disc, S2 proceeds
>> exactly the same as it would have had the computation been allowed to
>> continue running on M1. Therefore, at least after the first few
>> milliseconds, the subjective content of S2 must be the same as it
>> would have been on the one machine. Could the subjective content be
>> different at the transition between S1 and S2 if the computation is
>> split up? If there is a subjective difference it won't be something
>> the subject can notice because, later in the course of S2, he can have
>> no memory of it.
>
> But if you're only assuming that consciousness is produced by the physical
> process then the process of downloading and uploading the microstates and
> shifting the data into registers in the CPU and memory could produce a
> difference in consciousness.  These are all computations too, done by the
> operating system.  And why can't there be memory of it in the sense that it
> affects some later conscious state?  There are traces of the transfer
> process left on the original computer, the disc, and the second computer.
> Some subsequent program could retrieve these traces, as is done in forensic
> cases.  If physical processes instantiate consciousness, why shouldn't these
> make a difference?

It's taken for granted even by unsophisticated end users of computers
that the hardware won't affect the computation. A calculator
application wouldn't be much use if it gave a different answer
depending on what brand of machine it was running on. It wouldn't be
difficult to write a program that takes input from the environment,
including information on what sort of hardware it's running on, and in
that case there could be a difference between running S1 and S2 on the
one machine and running them on separate machines. A real time clock,
for example, would alert the subject to the fact that there had been a
discontinuity, and S2 would then *not* proceed the same in both cases.
However, this would not happen automatically: it would have to be
specifically programmed, and the hardware would have to be capable of
feeding the appropriate input into the program.
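
A small sketch of the distinction being drawn, with invented function
names: a computation that is a pure function of its inputs cannot reveal
which machine ran it or whether it was paused, while one that also samples
the environment can.

import platform
import time

def pure_sum(xs):
    # Same inputs -> same output, whatever brand of machine runs it.
    return sum(xs)

def environment_aware_sum(xs):
    # This version lets the "subject" notice hardware and discontinuities.
    return {
        "result": sum(xs),
        "hardware": platform.node(),  # hostname; differs between M1 and M2
        "wall_clock": time.time(),    # jumps if the run was paused
    }

if __name__ == "__main__":
    print(pure_sum([120, 280]))       # always 400
    print(environment_aware_sum([120, 280]))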

>> It also can't be a difference that would disrupt the
>> completion of a task or thought that requires continuity of
>> consciousness spanning S1-S2, since again the subject cannot have any
>> evidence that such a disruption occurred.
>>
>
> Unless we have a theory of how consciousness is related to the physical
> computation I don't think we can conclude that.  We already know that
> subliminal perceptions can affect conscious thoughts - so why not subliminal
> memories.

The theory is that if the computation is the same then the
consciousness is the same, regardless of what hardware it is being
implemented on. If you don't accept this then you don't accept
computationalism, for it is difficult to imagine a more drastic
hardware change than that involved in going from a biological brain to
a digital computer.


-- 
Stathis Papaioannou




Re: UDA query

2010-01-10 Thread Brent Meeker

Stathis Papaioannou wrote:

2010/1/11 Brent Meeker :

  

But aren't you assuming that consciousness is produced by the abstract
Platonic computation - rather than by the actual physical process (which is
not the same) - in other words assuming the thing being argued?



No, I'm at this point assuming only that consciousness is produced by
the physical process. We can assume for simplicity that the two
machines M1 and M2 have similar architecture and similar operating
systems. Once the program is loaded into M2 from the disc, S2 proceeds
exactly the same as it would have had the computation been allowed to
continue running on M1. Therefore, at least after the first few
milliseconds, the subjective content of S2 must be the same as it
would have been on the one machine. Could the subjective content be
different at the transition between S1 and S2 if the computation is
split up? If there is a subjective difference it won't be something
the subject can notice because, later in the course of S2, he can have
no memory of it. 


But if you're only assuming that consciousness is produced by the 
physical process then the process of downloading and uploading the 
microstates and shifting the data into registers in the CPU and memory 
could produce a difference in consciousness.  These are all computations 
too, done by the operating system.  And why can't there be memory of it 
in the sense that it affects some later conscious state?  There are 
traces of the transfer process left on the original computer, the disc, 
and the second computer. Some subsequent program could retrieve these 
traces, as is done in forensic cases.  If physical processes instantiate 
consciousness, why shouldn't these make a difference?



It also can't be a difference that would disrupt the
completion of a task or thought that requires continuity of
consciousness spanning S1-S2, since again the subject cannot have any
evidence that such a disruption occurred.
  


Unless we have a theory of how consciousness is related to the physical 
computation I don't think we can conclude that.  We already know that 
subliminal perceptions can affect conscious thoughts - so why not 
subliminal memories.


Brent





Re: UDA query

2010-01-10 Thread Stathis Papaioannou
2010/1/11 Brent Meeker :

> But aren't you assuming that consciousness is produced by the abstract
> Platonic computation - rather than by the actual physical process (which is
> not the same) - in other words assuming the thing being argued?

No, I'm at this point assuming only that consciousness is produced by
the physical process. We can assume for simplicity that the two
machines M1 and M2 have similar architecture and similar operating
systems. Once the program is loaded into M2 from the disc, S2 proceeds
exactly the same as it would have had the computation been allowed to
continue running on M1. Therefore, at least after the first few
milliseconds, the subjective content of S2 must be the same as it
would have been on the one machine. Could the subjective content be
different at the transition between S1 and S2 if the computation is
split up? If there is a subjective difference it won't be something
the subject can notice because, later in the course of S2, he can have
no memory of it. It also can't be a difference that would disrupt the
completion of a task or thought that requires continuity of
consciousness spanning S1-S2, since again the subject cannot have any
evidence that such a disruption occurred.


-- 
Stathis Papaioannou




Re: UDA query

2010-01-10 Thread Brent Meeker

Stathis Papaioannou wrote:

2010/1/11 Brent Meeker :

  

S1 and S2 can be precisely delimited as machine states but only more
loosely as mental states. This is because, as you say, there may be a
thought that spans S1 and S2, and is therefore partly generated by M1
and partly by M2. I don't see this as an issue since even if the
computer was just doing arithmetic it could be broken up and
distributed across two machines and the final answer would still be
the same.
  

The answer would be the same, but the computation would not.  So the person
with the AI brain might add up numbers the same, but have a different
conscious experience.  Consider for example your conscious experience at age
six when asked to add 120 and 280 as compared to how you do it now.



I was initially considering the case of a computer doing the
calculation directly, not generating a mind that does the calculation.
The computation would have to span the two machines, and it would
still be the same computation.
  


I suppose it could be "the same computation" in the Platonic sense that 
adding 2+2 is a computation, but as realized on two computers it 
couldn't be the same as realized on a single computer.  At a minimum it 
would take some extra steps to transfer data in the registers.
  

Similarly, if the subject in the virtual environment was
doing mental arithmetic he would still get the right answer despite
the physical discontinuity introduced mid-calculation, and how would
that be possible if the discontinuity caused a disruption in
consciousness?
  

Because addition, like most thought, is mostly unconscious?



I certainly have to think about it consciously. In the example you
gave I look at the 20 and the 80 and notice that they add to 100, 


How do you "notice" that?  Is it not an unconscious fact recalled into 
consciousness?



and
the 100 and 200 add to 300, so the answer is 120 + 280 = 100 + 300 =
400. If this thought was interrupted I might get the wrong answer, or
at the very least I would know it was interrupted. But the subject in
the proposed experiment by definition does not notice any
interruption, since S2 proceeds deterministically whether the
computation is on the one machine or spread over two machines


But aren't you assuming that consciousness is produced by the abstract 
Platonic computation - rather than by the actual physical process (which 
is not the same) - in other words assuming the thing being argued?


Brent




Re: UDA query

2010-01-10 Thread Stathis Papaioannou
2010/1/11 Brent Meeker :

>> S1 and S2 can be precisely delimited as machine states but only more
>> loosely as mental states. This is because, as you say, there may be a
>> thought that spans S1 and S2, and is therefore partly generated by M1
>> and partly by M2. I don't see this as an issue since even if the
>> computer was just doing arithmetic it could be broken up and
>> distributed across two machines and the final answer would still be
>> the same.
>
> The answer would be the same, but the computation would not.  So the person
> with the AI brain might add up numbers the same, but have a different
> conscious experience.  Consider for example your conscious experience at age
> six when asked to add 120 and 280 as compared to how you do it now.

I was initially considering the case of a computer doing the
calculation directly, not generating a mind that does the calculation.
The computation would have to span the two machines, and it would
still be the same computation.

>> Similarly, if the subject in the virtual environment was
>> doing mental arithmetic he would still get the right answer despite
>> the physical discontinuity introduced mid-calculation, and how would
>> that be possible if the discontinuity caused a disruption in
>> consciousness?
>
> Because addition, like most thought, is mostly unconscious?

I certainly have to think about it consciously. In the example you
gave I look at the 20 and the 80 and notice that they add to 100, and
the 100 and 200 add to 300, so the answer is 120 + 280 = 100 + 300 =
400. If this thought was interrupted I might get the wrong answer, or
at the very least I would know it was interrupted. But the subject in
the proposed experiment by definition does not notice any
interruption, since S2 proceeds deterministically whether the
computation is on the one machine or spread over two machines.
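
The same decomposition written out mechanically, as a toy illustration
(the function name is invented): splitting the addition into two stages,
with an arbitrary pause or even a change of machine between them, cannot
change the total.

def add_in_two_stages(a, b):
    # Stage 1: add the parts below 100 (the 20 and the 80).
    partial_low = (a % 100) + (b % 100)           # 100
    # ...the computation could be checkpointed or moved here...
    # Stage 2: add the hundreds (100 and 200) and combine.
    partial_high = (a - a % 100) + (b - b % 100)  # 300
    return partial_low + partial_high             # 400

assert add_in_two_stages(120, 280) == 120 + 280 == 400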

-- 
Stathis Papaioannou




Re: UDA query

2010-01-10 Thread Brent Meeker

Stathis Papaioannou wrote:

2010/1/11 Brent Meeker :

  

It seems that you're saying the observer would notice that something
odd had happened if his program were paused and restarted in the way
described, but how is that possible when S1 and S2 are identical
whether generated continuously or discontinuously?


  

I think you're assuming what is to be proven, i.e. that S1 and S2 are a)
states of consciousness, i.e. thoughts or "observer moments" and b) are
successive and contiguous without overlap.  Suppose that states of
consciousness have durations of 10msec (or 1e8 microstates of computation at
the appropriate level - I don't want to assume a transcendent continuous
time) and successive states overlap by 3msec.  Then identifying some 10msec
period as state S2 is arbitrary and generating it will only be identical
with what the brain did for the middle 4msec (where there was no overlap
with S1 or S3).  But, ex hypothesi, 4msec isn't enough to constitute an OM.



S1 and S2 can be precisely delimited as machine states but only more
loosely as mental states. This is because, as you say, there may be a
thought that spans S1 and S2, and is therefore partly generated by M1
and partly by M2. I don't see this as an issue since even if the
computer was just doing arithmetic it could be broken up and
distributed across two machines and the final answer would still be
the same. 


The answer would be the same, but the computation would not.  So the 
person with the AI brain might add up numbers the same, but have a 
different conscious experience.  Consider for example your conscious 
experience at age six when asked to add 120 and 280 as compared to how 
you do it now.



Similarly, if the subject in the virtual environment was
doing mental arithmetic he would still get the right answer despite
the physical discontinuity introduced mid-calculation, and how would
that be possible if the discontinuity caused a disruption in
consciousness?
  


Because addition, like most thought, is mostly unconscious?

Brent




Re: UDA query

2010-01-10 Thread Stathis Papaioannou
2010/1/11 Brent Meeker :

>> It seems that you're saying the observer would notice that something
>> odd had happened if his program were paused and restarted in the way
>> described, but how is that possible when S1 and S2 are identical
>> whether generated continuously or discontinuously?
>>
>>
>
> I think you're assuming what is to be proven, i.e. that S1 and S2 are a)
> states of consciousness, i.e. thoughts or "observer moments" and b) are
> successive and contiguous without overlap.  Suppose that states of
> consciousness have durations of 10msec (or 1e8 microstates of computation at
> the appropriate level - I don't want to assume a transcendent continuous
> time) and successive states overlap by 3msec.  Then identifying some 10msec
> period as state S2 is arbitrary and generating it will only be identical
> with what the brain did for the middle 4msec (where there was no overlap
> with S1 or S3).  But, ex hypothesi, 4msec isn't enough to constitute an OM.

S1 and S2 can be precisely delimited as machine states but only more
loosely as mental states. This is because, as you say, there may be a
thought that spans S1 and S2, and is therefore partly generated by M1
and partly by M2. I don't see this as an issue since even if the
computer was just doing arithmetic it could be broken up and
distributed across two machines and the final answer would still be
the same. Similarly, if the subject in the virtual environment was
doing mental arithmetic he would still get the right answer despite
the physical discontinuity introduced mid-calculation, and how would
that be possible if the discontinuity caused a disruption in
consciousness?


-- 
Stathis Papaioannou




Re: UDA query

2010-01-10 Thread Brent Meeker

Stathis Papaioannou wrote:

2010/1/10 Brent Meeker :

  

Suppose S1 is being generated by a virtual reality program on machine
M1, then after a minute the human operator saves the program and data
to disc and shuts down M1, walks over to machine M2, loads the data
from the disc and runs the program, which then generates S2. There is
a clear causal connection here even though M1 and M2 are separate
machines. Do you think there would be normal continuity of
consciousness in this case?

  

No, at least I can see reasons to doubt it.  Of course if the start-up of
the program on M2 were very fast it might not be very noticeable and a
rational person might still say "yes" to the doctor.  But that wouldn't
generalize to the infinitesimal "observer moment".



It seems that you're saying the observer would notice that something
odd had happened if his program were paused and restarted in the way
described, but how is that possible when S1 and S2 are identical
whether generated continuously or discontinuously?

  


I think you're assuming what is to be proven, i.e. that S1 and S2 are a) 
states of consciousness, i.e. thoughts or "observer moments" and b) are 
successive and contiguous without overlap.  Suppose that states of 
consciousness have durations of 10msec (or 1e8 microstates of 
computation at the appropriate level - I don't want to assume a 
transcendent continuous time) and successive states overlap by 3msec.  
Then identifying some 10msec period as state S2 is arbitrary and 
generating it will only be identical with what the brain did for the 
middle 4msec (where there was no overlap with S1 or S3).  But, ex 
hypothesi, 4msec isn't enough to constitute an OM.
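
The overlap point can be put in toy form (the numbers and names below are
invented; the 10/3 split merely echoes the 10msec/3msec figures above):

def observer_moments(microstates, window=10, overlap=3):
    """Split a stream of microstates into overlapping windows."""
    step = window - overlap
    return [microstates[i:i + window]
            for i in range(0, len(microstates) - window + 1, step)]

stream = list(range(30))      # stand-in for 30 successive machine microstates
oms = observer_moments(stream)
print(oms[0])                  # [0 .. 9]
print(oms[1])                  # [7 .. 16] - shares 3 microstates with oms[0]

Since successive windows share microstates, calling any particular window
"S2" is a matter of convention rather than something fixed by the
computation itself.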


Brent




Re: UDA query

2010-01-09 Thread Stathis Papaioannou
2010/1/10 Brent Meeker :

>> Suppose S1 is being generated by a virtual reality program on machine
>> M1, then after a minute the human operator saves the program and data
>> to disc and shuts down M1, walks over to machine M2, loads the data
>> from the disc and runs the program, which then generates S2. There is
>> a clear causal connection here even though M1 and M2 are separate
>> machines. Do you think there would be normal continuity of
>> consciousness in this case?
>>
>
> No, at least I can see reasons to doubt it.  Of course if the start-up of
> the program on M2 were very fast it might not be very noticeable and a
> rational person might still say "yes" to the doctor.  But that wouldn't
> generalize to the infinitesimal "observer moment".

It seems that you're saying the observer would notice that something
odd had happened if his program were paused and restarted in the way
described, but how is that possible when S1 and S2 are identical
whether generated continuously or discontinuously?


-- 
Stathis Papaioannou




Re: UDA query

2010-01-09 Thread Brent Meeker

Stathis Papaioannou wrote:

2010/1/8 Brent Meeker :

  

You've made this point in the past but I still don't understand it. If
S1 and S2 are periods of experience generated consecutively in your
brain in the usual manner, do you agree that you would still
experience them as consecutive if they were generated by chance by
causally disconnected processes?
  

No, I don't.  Of course if they had durations of seconds or minutes, I
would experience much the same thing.  But it is not at all convincing
to me that the experience at the beginning and end of the period would
be identical - and hence, in the limit of infinitesimal duration, i.e.
discrete states, I'm not sure what the experience would be, if any at all.



We should consider experiences of long duration, say a minute, before
going on to infinitesimals. I think you are saying that there is a
problem with the connection between S1 and S2 if they are generated by
causally disconnected processes, but not if they are generated in the
usual manner by causally connected processes. Is that right?
  


No.  I'm not sure that causal connection is enough - and in any case 
causality is hard to define in physics at a fundamental level where it 
seems to be time-symmetric and QM is unitary (one of the motivations for 
"everything" explanations).  I think the connection can be that S1 and 
S2 overlap, since at the level of substitution each one consists of many 
thousands of computation states.



Suppose S1 is being generated by a virtual reality program on machine
M1, then after a minute the human operator saves the program and data
to disc and shuts down M1, walks over to machine M2, loads the data
from the disc and runs the program, which then generates S2. There is
a clear causal connection here even though M1 and M2 are separate
machines. Do you think there would be normal continuity of
consciousness in this case?
  


No, at least I can see reasons to doubt it.  Of course if the start-up 
of the program on M2 were very fast it might not be very noticeable and 
a rational person might still say "yes" to the doctor.  But that 
wouldn't generalize to the infinitesimal "observer moment".



In a second experiment the operator finds when he gets to M2 that the
data on the disc is completely corrupted. The only information he can
be sure of is that the data comprised a maximum of n bits, this being
the capacity of the disc. Worried that he might be responsible for the
death of a conscious being, the operator decides to systematically
load into M2 all 2^n possible sets of data that the disc could have
contained. Do you think that this time there will be a discontinuity
between S1 and S2 when S2 is eventually generated?

  


I think there will be a difference except in the case where he has loaded 
S1 and S2 is generated from it.


Brent




Re: UDA query

2010-01-09 Thread Stathis Papaioannou
2010/1/8 Brent Meeker :

>> You've made this point in the past but I still don't understand it. If
>> S1 and S2 are periods of experience generated consecutively in your
>> brain in the usual manner, do you agree that you would still
>> experience them as consecutive if they were generated by chance by
>> causally disconnected processes?
>
> No, I don't.  Of course if they had durations of seconds or minutes, I
> would experience much the same thing.  But it is not at all convincing
> to me that the experience at the beginning and end of the period would
> be identical - and hence, in the limit of infinitesimal duration, i.e.
> discrete states, I'm not sure what the experience would be, if any at all.

We should consider experiences of long duration, say a minute, before
going on to infinitesimals. I think you are saying that there is a
problem with the connection between S1 and S2 if they are generated by
causally disconnected processes, but not if they are generated in the
usual manner by causally connected processes. Is that right?

Suppose S1 is being generated by a virtual reality program on machine
M1, then after a minute the human operator saves the program and data
to disc and shuts down M1, walks over to machine M2, loads the data
from the disc and runs the program, which then generates S2. There is
a clear causal connection here even though M1 and M2 are separate
machines. Do you think there would be normal continuity of
consciousness in this case?
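
For concreteness, the M1-to-M2 scenario is just the familiar
checkpoint-and-resume pattern. A minimal sketch in Python, with a trivial
stand-in for the virtual-reality program (all names are invented):

import pickle

def simulate(state, steps):
    """Trivial stand-in for the virtual-reality program."""
    for _ in range(steps):
        state["t"] += 1
        state["value"] = state["value"] * 2 + 1
    return state

# On M1: run for a while, then checkpoint to "disc" and shut down.
state = simulate({"t": 0, "value": 1}, steps=5)
with open("checkpoint.pkl", "wb") as f:
    pickle.dump(state, f)

# On M2: load the checkpoint and continue; the run proceeds exactly as it
# would have on M1.
with open("checkpoint.pkl", "rb") as f:
    restored = pickle.load(f)
print(simulate(restored, steps=5))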

In a second experiment the operator finds when he gets to M2 that the
data on the disc is completely corrupted. The only information he can
be sure of is that the data comprised a maximum of n bits, this being
the capacity of the disc. Worried that he might be responsible for the
death of a conscious being, the operator decides to systematically
load into M2 all 2^n possible sets of data that the disc could have
contained. Do you think that this time there will be a discontinuity
between S1 and S2 when S2 is eventually generated?
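
And the second experiment in miniature: for a toy "disc" of n bits the
operator's strategy is just an enumeration of all 2^n possible contents;
for a real disc n is astronomically large, so this is only a statement of
the idea.

from itertools import product

def all_possible_discs(n_bits):
    """Every bit string the corrupted disc could have contained."""
    return product((0, 1), repeat=n_bits)

print(sum(1 for _ in all_possible_discs(10)))  # 1024 == 2**10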


-- 
Stathis Papaioannou




Re: UDA query

2010-01-08 Thread russell standish
On Fri, Jan 08, 2010 at 11:00:19AM -0800, Johnathan Corgan wrote:
> 
> It's plausible that "observer moments" correspond to what are called
> "chaotic attractors" in complex systems theory.
> 

Well, attractors in general - they don't have to be chaotic (or "strange",
as the terminology actually has it). More likely the attractors are point
attractors or limit cycles, but are only metastable (they will be pushed out of
their basin of attraction by longer-range coupling within the brain).

Cheers

-- 


Prof Russell Standish                  Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052                       hpco...@hpcoders.com.au
Australia                              http://www.hpcoders.com.au





Re: UDA query

2010-01-08 Thread Brent Meeker

Johnathan Corgan wrote:

On Fri, Jan 8, 2010 at 10:03 AM, Brent Meeker  wrote:

  

Isn't it?  Bruno presents "comp" as equivalent to betting that replacing
your brain with a digital device at the appropriate level of substitution
will leave your stream of consciousness unaffected.  From this people are
inferring that the discrete states of this digital brain instantiate
"observer moments".  But suppose (which I consider likely) the digital brain
would have to have a cycle time of a billionth of a second or less.  I don't
think you believe you have a different conscious thought every billionth of
a second.  What it means is that "a state of your consciousness" corresponds
to a million or so successive states of the digital computation.  These
sets of a million states can then of course overlap.  So the idea of
discrete "observer moments" doesn't follow from "yes doctor".



It's plausible that "observer moments" correspond to what are called
"chaotic attractors" in complex systems theory.

The brain passes through a complex, dynamic trajectory of states.  A
stable attractor is a cycle of discrete states that repeats exactly,
in the case of a "limit cycle", or more often, retraces a similar but
not exact trajectory, in the case of a "chaotic attractor".  Chaotic
attractors are robust to perturbation, up to a point, and many complex
systems can be characterized by a succession of chaotic attractors
separated by rapid transitions driven by external perturbations
exceeding some threshold.  I use the term "meta-state" as a synonym
for chaotic attractor in this context.

My working hypothesis is that nervous systems developed into complex
systems capable of generating quasi-stable meta-states which were
evolutionarily advantageous, and over (evolutionary) time, were able
to reach a level of organization which eventually produced
consciousness.

In this model, brains are continuously cycling through patterns of
firing, which, absent external stimuli, are self-sustaining in some
sort quasi-stable chaotic fashion, or meta-state.  Sensory input of
various types may be "ignored" if it doesn't reach a threshold of
activation which tips the brain into a new meta-state.  Or, "novel"
sensations may drive the system into a new meta-state (dynamic cycle)
that corresponds to some classification of that input in the context
of whatever the current meta-state is.

Observer moments, then, correspond to some subset of meta-states in
the brain.  They aren't discrete states of zero duration, but
trajectories of states in a chaotic cycle.  A succession of these
meta-states would then make up a stream-of-consciousness.

As an aside, I strongly suspect that in practice, our sensory input
serves to constrain the brain into a (relatively) small set of
meta-states that has allowed us to survive in a harsh evolutionary
context, and produces what may be called "consensus reality" (I think
Bruno calls this 1st-person plural.)  Other chaotic systems do spend
most of their time in a small subset of possible states.  Yet there is
evidence that perturbing the brain in a variety of ways (fasting,
breathing exercises, meditation, religious contemplation, drugs,
disease, injury, etc.) can allow it to wander off into meta-states
that are quite subjectively different from the typical states
associated with "normal" functioning.

All of the above speculation could still hold true in a
non-physicalist, computationalism-based view of consciousness, where
one would replace "brain" with "computational substrate at appropriate
level of substitution."

Johnathan Corgan
  


That would correspond to my intuition about consciousness.  I remember 
reading in the '60s, when sensory deprivation experiments were the fad, 
that if one remained long enough in a sensory deprivation tank (more 
than about 45min) one's mind went into a loop.  I've not been able to 
find a reference to this, but that's what I remember.


Brent




Re: UDA query

2010-01-08 Thread Quentin Anciaux
2010/1/8 Brent Meeker 

> Quentin Anciaux wrote:
>
> 2010/1/8 Brent Meeker 
>
>> Quentin Anciaux wrote:
>>
>> 2010/1/8 Brent Meeker 
>>
>>> Quentin Anciaux wrote:

2010/1/8 Brent Meeker 

Stathis Papaioannou wrote:

2010/1/7 Brent Meeker :

A program that generates S2 as it were out of nowhere, with false
memories of an S1 that has not yet happened or may never happen, is a
perfectly legitimate program and the UD will generate it along with
all the others. If the UD is allowed to run forever, this program will
be a lower measure contributor to S2 than the program that generates
it sequentially;

How do you know this?

Why S2 is unlikely to appear out of nowhere is equivalent to the White
Rabbit problem in ensemble theories, which has been often discussed
over the years on this list. Russell's "Theory of Nothing" book
provides a summary. The general idea is that structures generated by
simpler algorithms have higher measure, and it is simpler to write a
program that computes a series of mental states iteratively than one
that computes a set of disconnected mental states from ad hoc data.

and similarly in any physicalist theory. But although S2 may guess
from such considerations that he is more likely to have been generated
sequentially, the point remains that there is nothing in the nature of
his experience to indicate this. That is, the fact that S2 remembers
S1 as being in the past and remembers a smooth transition from S1 to
S2 is no guarantee that S1 really did happen in the past, or even at
all.

We're assuming that thought is a kind of computation, a processing of
information.  And we're also assuming that this processing can consist
of static states placed in order.  So given two static states, what is
the relation that makes their ordering into a computational process?
One answer would be that they are successive states generated by some
program.  But you seem to reject that.  To say that S2 remembers S1
doesn't seem to answer the question because "remembering" is itself a
process, not a static state.  I tried to phrase it in terms of the
entropy, or information content, of S1 and S2 which would be a static
property - as for example, if S2 simply contained S1.  But that hardly
seems a proper representation of states of consciousness - I'm
certainly not conscious of my memories most of the time.  Even as I
type this I obviously remember how to type (though maybe not how to
spell :-) ) but I'm not conscious of it.

You've made this point in the past but I still don't understand it. If
S1 and S2 are periods of experience generated consecutively in your
brain in the usual manner, do you agree that you would still
experience them as consecutive if they were generated by chance by
causally disconnected processes?

No, I don't.  Of course if they had durations of seconds or minutes, I
would experience much the same thing.  But it is not at all convincing
to me that the experience at the beginning and end of the period would
be identical - and hence, in the limit of infinitesimal duration, i.e.
discrete states, I'm not sure what the experience would be, if any at
all.

The requirement would be only that the respective experiences have the
same subjective content in both cases. Memory is only one aspect of
subjective content, if an important one. If S1-S2 spans the typing of
a sentence, then both S1 and S2 have to remember how to type and what
the sentence they are typing is.

But here you have allowed S1 and S2 to be processes with significant
duration and even overlap.  They are no longer discrete, static
states.

It may seem to be unconscious but obviously it can't be completely
unconscious

Re: UDA query

2010-01-08 Thread Brent Meeker





Re: UDA query

2010-01-08 Thread Johnathan Corgan
On Fri, Jan 8, 2010 at 10:03 AM, Brent Meeker  wrote:

> Isn't it?  Bruno presents "comp" as equivalent to betting that replacing
> your brain with a digital device at the appropriate level of substitution
> will leave your stream of consciousness unaffected.  From this people are
> inferring that the discrete states of this digital brain instantiate
> "observer moments".  But suppose (which I consider likely) the digital brain
> would have to have a cycle time of a billionth of a second or less.  I don't
> think you believe you have a different conscious thought every billionth of
> a second.  What it means is that "a state of your consciousness" corresponds
> to a million or so successive states of the digital computation.  These
> sets of a million states can then of course overlap.  So the idea of
> discrete "observer moments" doesn't follow from "yes doctor".

It's plausible that "observer moments" correspond to what are called
"chaotic attractors" in complex systems theory.

The brain passes through a complex, dynamic trajectory of states.  A
stable attractor is a cycle of discrete states that repeats exactly,
in the case of a "limit cycle", or more often, retraces a similar but
not exact trajectory, in the case of a "chaotic attractor".  Chaotic
attractors are robust to perturbation, up to a point, and many complex
systems can be characterized by a succession of chaotic attractors
separated by rapid transitions driven by external perturbations
exceeding some threshold.  I use the term "meta-state" as a synonym
for chaotic attractor in this context.

My working hypothesis is that nervous systems developed into complex
systems capable of generating quasi-stable meta-states which were
evolutionarily advantageous, and over (evolutionary) time, were able
to reach a level of organization which eventually produced
consciousness.

In this model, brains are continuously cycling through patterns of
firing, which, absent external stimuli, are self-sustaining in some
sort quasi-stable chaotic fashion, or meta-state.  Sensory input of
various types may be "ignored" if it doesn't reach a threshold of
activation which tips the brain into a new meta-state.  Or, "novel"
sensations may drive the system into a new meta-state (dynamic cycle)
that corresponds to some classification of that input in the context
of whatever the current meta-state is.

Observer moments, then, correspond to some subset of meta-states in
the brain.  They aren't discrete states of zero duration, but
trajectories of states in a chaotic cycle.  A succession of these
meta-states would then make up a stream-of-consciousness.
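
As a cartoon of "meta-states as attractors", the simplest system showing
the behaviour described is the logistic map: with r = 3.5 it settles onto a
period-4 limit cycle, and small perturbations are absorbed back into the
same cycle. The map and its parameters are illustrative assumptions only,
not a model of the brain.

def logistic_step(x, r=3.5):
    return r * x * (1.0 - x)

def settle(x, steps=1000, r=3.5):
    """Iterate long enough for transients to die out."""
    for _ in range(steps):
        x = logistic_step(x, r)
    return x

x = settle(0.2)                 # land on the attractor
cycle = []
for _ in range(4):              # the period-4 limit cycle
    x = logistic_step(x)
    cycle.append(round(x, 4))
print(cycle)

# A small perturbation is absorbed: the trajectory falls back onto the
# same four values, which is the robustness attributed to meta-states.
print(round(settle(cycle[0] + 0.01), 4))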

As an aside, I strongly suspect that in practice, our sensory input
serves to constrain the brain into a (relatively) small set of
meta-states that has allowed us to survive in a harsh evolutionary
context, and produces what may be called "consensus reality" (I think
Bruno calls this 1st-person plural.)  Other chaotic systems do spend
most of their time in a small subset of possible states.  Yet there is
evidence that perturbing the brain in a variety of ways (fasting,
breathing exercises, meditation, religious contemplation, drugs,
disease, injury, etc.) can allow it to wander off into meta-states
that are quite subjectively different from the typical states
associated with "normal" functioning.

All of the above speculation could still hold true in a
non-physicalist, computationalism-based view of consciousness, where
one would replace "brain" with "computational substrate at appropriate
level of substitution."

Johnathan Corgan




Re: UDA query

2010-01-08 Thread Quentin Anciaux

Re: UDA query

2010-01-08 Thread Brent Meeker




Quentin Anciaux wrote:

  
  2010/1/8 Brent Meeker 
  Quentin
Anciaux wrote:

  
  
2010/1/8 Brent Meeker >
  
  
   Stathis Papaioannou wrote:
  
       2010/1/7 Brent Meeker >:
  
  
  
        
               A program that generates S2 as it were out of nowhere,
               with false
               memories of an S1 that has not yet happened or may
               never happen, is a
               perfectly legitimate program and the UD will generate
               it along with
               all the others. If the UD is allowed to run forever,
               this program will
               be a lower measure contributor to S2 than the program
               that generates
               it sequentially;
                    
           How do you know this?
              
  
       Why S2 is unlikely to appear out of nowhere is equivalent to
       the White
       Rabbit problem in ensemble theories, which has been often
       discussed
       over the years on this list. Russell's "Theory of Nothing" book
       provides a summary. The general idea is that structures
       generated by
       simpler algorithms have higher measure, and it is simpler to
       write a
       program that computes a series of mental states iteratively
       than one
       that computes a set of disconnected mental states from ad hoc
       data.
  
        
               and similarly in any physicalist theory. But although
               S2 may guess from such considerations that he is more
               likely to have
               been generated sequentially, the point remains that
               there is nothing
               in the nature of his experience to indicate this. That
               is, the fact
               that S2 remembers S1 as being in the past and
               remembers a smooth
               transition from S1 to S2 is no guarantee that S1
               really did happen in
               the past, or even at all.
                    
           We're assuming that thought is a kind of computation, a
           processing of
           information.  And we're also assuming that this processing
           can consist of
           static states placed in order.  So given two static
           states, what is the
           relation  that makes their ordering into a computational
           process?  One
           answer would be that they are successive states generated
           by some program.
           But you seem to reject that.  To say that S2 remembers S1
           doesn't seem to
           answer the question because "remembering" is itself a
           process, not a static
           state.  I tried to phrase it in terms of the entropy, or
           information
           content, of S1 and S2 which would be a static property -
           as for example, if
           S2 simply contained S1.  But that hardly seems a proper
           representation of
           states of consciousness - I'm certainly not conscious of
           my memories most of
           the time.  Even as I type this I obviously remember how to
           type (though
           maybe not how to spell :-) ) but I'm not conscious of it.
              
  
       You've made this point in the past but I still don't
       understand it. If
       S1 and S2 are periods of experience generated consecutively in
       your
       brain in the usual manner, do you agree that you would still be
       experience them as consecutive if they were generated by chance
by
       causally disconnected processes?
  
  
   No, I don't.  Of course if they had durations of seconds or minutes,
I
   would experience much the same thing.  But it is not at all
convincing
   to me that the experience at the beginning and end of the period
would
   be identical - and hence in the limit of infinitesimal duration,
   discrete states I'm not sure what the experience would be, if any
   at all.
  
  
       The requirement would be only that
       the respective experiences have the same subjective content in
       both
       cases. Memory is only one aspect of subjective content, if an
       important one. If S1-S2 spans the typing of a sentence, then
       both S1
       and S2 have to remember how to type and what the sentence they
are
       typing is.
  
  
   But here you have allowed S1 and S2 to be processes with significant
   duration and even overlap.  They are no longer discrete, static
   states.
  
  
       It may seem to be unconscious but obviously it can't be
       completely unconscious, otherwise it could be left out without
       making
       any difference. Your digestion is an example of a completely
       unconscious process that need not be taken into account in a
  

Re: UDA query

2010-01-08 Thread Quentin Anciaux
2010/1/8 Brent Meeker 

> Quentin Anciaux wrote:
>
>>
>>
>> 2010/1/8 Brent Meeker > meeke...@dslextreme.com>>
>>
>>
>>Stathis Papaioannou wrote:
>>
>>2010/1/7 Brent Meeker >>:
>>
>>
>>
>>A program that generates S2 as it were out of nowhere,
>>with false
>>memories of an S1 that has not yet happened or may
>>never happen, is a
>>perfectly legitimate program and the UD will generate
>>it along with
>>all the others. If the UD is allowed to run forever,
>>this program will
>>be a lower measure contributor to S2 than the program
>>that generates
>>it sequentially;
>>
>>How do you know this?
>>
>>
>>Why S2 is unlikely to appear out of nowhere is equivalent to
>>the White
>>Rabbit problem in ensemble theories, which has been often
>>discussed
>>over the years on this list. Russell's "Theory of Nothing" book
>>provides a summary. The general idea is that structures
>>generated by
>>simpler algorithms have higher measure, and it is simpler to
>>write a
>>program that computes a series of mental states iteratively
>>than one
>>that computes a set of disconnected mental states from ad hoc
>>data.
>>
>>
>>and similarly in any physicalist theory. But although
>>S2 may guess from such considerations that he is more
>>likely to have
>>been generated sequentially, the point remains that
>>there is nothing
>>in the nature of his experience to indicate this. That
>>is, the fact
>>that S2 remembers S1 as being in the past and
>>remembers a smooth
>>transition from S1 to S2 is no guarantee that S1
>>really did happen in
>>the past, or even at all.
>>
>>We're assuming that thought is a kind of computation, a
>>processing of
>>information.  And we're also assuming that this processing
>>can consist of
>>static states placed in order.  So given two static
>>states, what is the
>>relation  that makes their ordering into a computational
>>process?  One
>>answer would be that they are successive states generated
>>by some program.
>>But you seem to reject that.  To say that S2 remembers S1
>>doesn't seem to
>>answer the question because "remembering" is itself a
>>process, not a static
>>state.  I tried to phrase it in terms of the entropy, or
>>information
>>content, of S1 and S2 which would be a static property -
>>as for example, if
>>S2 simply contained S1.  But that hardly seems a proper
>>representation of
>>states of consciousness - I'm certainly not conscious of
>>my memories most of
>>the time.  Even as I type this I obviously remember how to
>>type (though
>>maybe not how to spell :-) ) but I'm not conscious of it.
>>
>>
>>You've made this point in the past but I still don't
>>understand it. If
>>S1 and S2 are periods of experience generated consecutively in
>>your
>>brain in the usual manner, do you agree that you would still be
>>experience them as consecutive if they were generated by chance by
>>causally disconnected processes?
>>
>>
>>No, I don't.  Of course if they had durations of seconds or minutes, I
>>would experience much the same thing.  But it is not at all convincing
>>to me that the experience at the beginning and end of the period would
>>be identical - and hence in the limit of infinitesimal duration,
>>discrete states I'm not sure what the experience would be, if any
>>at all.
>>
>>
>>The requirement would be only that
>>the respective experiences have the same subjective content in
>>both
>>cases. Memory is only one aspect of subjective content, if an
>>important one. If S1-S2 spans the typing of a sentence, then
>>both S1
>>and S2 have to remember how to type and what the sentence they are
>>typing is.
>>
>>
>>But here you have allowed S1 and S2 to be processes with significant
>>duration and even overlap.  They are no longer discrete, static
>>states.
>>
>>
>>It may seem to be unconscious but obviously it can't be
>>completely unconscious, otherwise it could be left out without
>>making
>>any difference. Your digestion is an example of a completely
>>unconscious process that need not be 

Re: UDA query

2010-01-07 Thread Brent Meeker

Quentin Anciaux wrote:



2010/1/8 Brent Meeker >


Stathis Papaioannou wrote:

2010/1/7 Brent Meeker mailto:meeke...@dslextreme.com>>:

 


A program that generates S2 as it were out of nowhere,
with false
memories of an S1 that has not yet happened or may
never happen, is a
perfectly legitimate program and the UD will generate
it along with
all the others. If the UD is allowed to run forever,
this program will
be a lower measure contributor to S2 than the program
that generates
it sequentially;
 


How do you know this?
   



Why S2 is unlikely to appear out of nowhere is equivalent to
the White
Rabbit problem in ensemble theories, which has been often
discussed
over the years on this list. Russell's "Theory of Nothing" book
provides a summary. The general idea is that structures
generated by
simpler algorithms have higher measure, and it is simpler to
write a
program that computes a series of mental states iteratively
than one
that computes a set of disconnected mental states from ad hoc
data.

 


and similarly in any physicalist theory. But although
S2 may guess from such considerations that he is more
likely to have
been generated sequentially, the point remains that
there is nothing
in the nature of his experience to indicate this. That
is, the fact
that S2 remembers S1 as being in the past and
remembers a smooth
transition from S1 to S2 is no guarantee that S1
really did happen in
the past, or even at all.
 


We're assuming that thought is a kind of computation, a
processing of
information.  And we're also assuming that this processing
can consist of
static states placed in order.  So given two static
states, what is the
relation  that makes their ordering into a computational
process?  One
answer would be that they are successive states generated
by some program.
But you seem to reject that.  To say that S2 remembers S1
doesn't seem to
answer the question because "remembering" is itself a
process, not a static
state.  I tried to phrase it in terms of the entropy, or
information
content, of S1 and S2 which would be a static property -
as for example, if
S2 simply contained S1.  But that hardly seems a proper
representation of
states of consciousness - I'm certainly not conscious of
my memories most of
the time.  Even as I type this I obviously remember how to
type (though
maybe not how to spell :-) ) but I'm not conscious of it.
   



You've made this point in the past but I still don't
understand it. If
S1 and S2 are periods of experience generated consecutively in
your
brain in the usual manner, do you agree that you would still be
experience them as consecutive if they were generated by chance by
causally disconnected processes?


No, I don't.  Of course if they had durations of seconds or minutes, I
would experience much the same thing.  But it is not at all convincing
to me that the experience at the beginning and end of the period would
be identical - and hence in the limit of infinitesimal duration,
discrete states I'm not sure what the experience would be, if any
at all.


The requirement would be only that
the respective experiences have the same subjective content in
both
cases. Memory is only one aspect of subjective content, if an
important one. If S1-S2 spans the typing of a sentence, then
both S1
and S2 have to remember how to type and what the sentence they are
typing is.


But here you have allowed S1 and S2 to be processes with significant
duration and even overlap.  They are no longer discrete, static
states.


It may seem to be unconscious but obviously it can't be
completely unconscious, otherwise it could be left out without
making
any difference. Your digestion is an example of a completely
unconscious process that need not be taken into account in a
simulation of your mind. Another example is your name: you may
have no
awareness at all of your name during S1-S2 so i

Re: UDA query

2010-01-07 Thread Quentin Anciaux
2010/1/8 Brent Meeker 

> Stathis Papaioannou wrote:
>
>> 2010/1/7 Brent Meeker :
>>
>>
>>
>>> A program that generates S2 as it were out of nowhere, with false
 memories of an S1 that has not yet happened or may never happen, is a
 perfectly legitimate program and the UD will generate it along with
 all the others. If the UD is allowed to run forever, this program will
 be a lower measure contributor to S2 than the program that generates
 it sequentially;


>>> How do you know this?
>>>
>>>
>>
>> Why S2 is unlikely to appear out of nowhere is equivalent to the White
>> Rabbit problem in ensemble theories, which has been often discussed
>> over the years on this list. Russell's "Theory of Nothing" book
>> provides a summary. The general idea is that structures generated by
>> simpler algorithms have higher measure, and it is simpler to write a
>> program that computes a series of mental states iteratively than one
>> that computes a set of disconnected mental states from ad hoc data.
>>
>>
>>
>>> and similarly in any physicalist theory. But although
 S2 may guess from such considerations that he is more likely to have
 been generated sequentially, the point remains that there is nothing
 in the nature of his experience to indicate this. That is, the fact
 that S2 remembers S1 as being in the past and remembers a smooth
 transition from S1 to S2 is no guarantee that S1 really did happen in
 the past, or even at all.


>>> We're assuming that thought is a kind of computation, a processing of
>>> information.  And we're also assuming that this processing can consist of
>>> static states placed in order.  So given two static states, what is the
>>> relation  that makes their ordering into a computational process?  One
>>> answer would be that they are successive states generated by some
>>> program.
>>> But you seem to reject that.  To say that S2 remembers S1 doesn't seem to
>>> answer the question because "remembering" is itself a process, not a
>>> static
>>> state.  I tried to phrase it in terms of the entropy, or information
>>> content, of S1 and S2 which would be a static property - as for example,
>>> if
>>> S2 simply contained S1.  But that hardly seems a proper representation of
>>> states of consciousness - I'm certainly not conscious of my memories most
>>> of
>>> the time.  Even as I type this I obviously remember how to type (though
>>> maybe not how to spell :-) ) but I'm not conscious of it.
>>>
>>>
>>
>> You've made this point in the past but I still don't understand it. If
>> S1 and S2 are periods of experience generated consecutively in your
>> brain in the usual manner, do you agree that you would still be
>> experience them as consecutive if they were generated by chance by
>> causally disconnected processes?
>>
>
> No, I don't.  Of course if they had durations of seconds or minutes, I
> would experience much the same thing.  But it is not at all convincing
> to me that the experience at the beginning and end of the period would
> be identical - and hence in the limit of infinitesimal duration, discrete
> states I'm not sure what the experience would be, if any at all.
>
>
>  The requirement would be only that
>> the respective experiences have the same subjective content in both
>> cases. Memory is only one aspect of subjective content, if an
>> important one. If S1-S2 spans the typing of a sentence, then both S1
>> and S2 have to remember how to type and what the sentence they are
>> typing is.
>>
>
> But here you have allowed S1 and S2 to be processes with significant
> duration and even overlap.  They are no longer discrete, static states.
>
>
>  It may seem to be unconscious but obviously it can't be
>> completely unconscious, otherwise it could be left out without making
>> any difference. Your digestion is an example of a completely
>> unconscious process that need not be taken into account in a
>> simulation of your mind. Another example is your name: you may have no
>> awareness at all of your name during S1-S2 so it could safely be left
>> out of the simulation, although at S3 when you reach the end of your
>> post and you need to sign it you need to remember what it is.
>>
>>
>>
>
> You are relying on the idea of a digital simulation which is described
> by a sequence of discrete states.  But in an actual realization of such
> a simulation the discrete states are realized by causal sequences in
> time which are not of infinitesimal duration and overlap.
>

This has no impact at the computational level; what is important is the
logical state, which is discrete. What is running on an actual computer is a
program... that the physical computer uses 3V or 1V or less, or that it can
handle 5*10^9 instructions per second or 5000, doesn't change that fact: the
program will run the same (apart from the (external) execution speed).
If consciousness is "digitalisable" then it follows that it is composed of
discrete states with no dur

Re: UDA query

2010-01-07 Thread Brent Meeker

Stathis Papaioannou wrote:

2010/1/7 Brent Meeker :

  

A program that generates S2 as it were out of nowhere, with false
memories of an S1 that has not yet happened or may never happen, is a
perfectly legitimate program and the UD will generate it along with
all the others. If the UD is allowed to run forever, this program will
be a lower measure contributor to S2 than the program that generates
it sequentially;
  

How do you know this?



Why S2 is unlikely to appear out of nowhere is equivalent to the White
Rabbit problem in ensemble theories, which has been often discussed
over the years on this list. Russell's "Theory of Nothing" book
provides a summary. The general idea is that structures generated by
simpler algorithms have higher measure, and it is simpler to write a
program that computes a series of mental states iteratively than one
that computes a set of disconnected mental states from ad hoc data.

  

and similarly in any physicalist theory. But although
S2 may guess from such considerations that he is more likely to have
been generated sequentially, the point remains that there is nothing
in the nature of his experience to indicate this. That is, the fact
that S2 remembers S1 as being in the past and remembers a smooth
transition from S1 to S2 is no guarantee that S1 really did happen in
the past, or even at all.
  

We're assuming that thought is a kind of computation, a processing of
information.  And we're also assuming that this processing can consist of
static states placed in order.  So given two static states, what is the
relation  that makes their ordering into a computational process?  One
answer would be that they are successive states generated by some program.
But you seem to reject that.  To say that S2 remembers S1 doesn't seem to
answer the question because "remembering" is itself a process, not a static
state.  I tried to phrase it in terms of the entropy, or information
content, of S1 and S2 which would be a static property - as for example, if
S2 simply contained S1.  But that hardly seems a proper representation of
states of consciousness - I'm certainly not conscious of my memories most of
the time.  Even as I type this I obviously remember how to type (though
maybe not how to spell :-) ) but I'm not conscious of it.



You've made this point in the past but I still don't understand it. If
S1 and S2 are periods of experience generated consecutively in your
brain in the usual manner, do you agree that you would still be
experience them as consecutive if they were generated by chance by
causally disconnected processes? 


No, I don't.  Of course if they had durations of seconds or minutes, I
would experience much the same thing.  But it is not at all convincing
to me that the experience at the beginning and end of the period would
be identical - and hence, in the limit of infinitesimal duration
(discrete states), I'm not sure what the experience would be, if any at all.



The requirement would be only that
the respective experiences have the same subjective content in both
cases. Memory is only one aspect of subjective content, if an
important one. If S1-S2 spans the typing of a sentence, then both S1
and S2 have to remember how to type and what the sentence they are
typing is. 


But here you have allowed S1 and S2 to be processes with significant
duration and even overlap.  They are no longer discrete, static states.


It may seem to be unconscious but obviously it can't be
completely unconscious, otherwise it could be left out without making
any difference. Your digestion is an example of a completely
unconscious process that need not be taken into account in a
simulation of your mind. Another example is your name: you may have no
awareness at all of your name during S1-S2 so it could safely be left
out of the simulation, although at S3 when you reach the end of your
post and you need to sign it you need to remember what it is.

  


You are relying on the idea of a digital simulation which is described
by a sequence of discrete states.  But in an actual realization of such
a simulation the discrete states are realized by causal sequences in
time which are not of infinitesimal duration and which overlap.

Brent





Re: UDA query

2010-01-07 Thread Nick Prince


On Jan 7, 12:09 pm, Bruno Marchal  wrote:
> Hi Nick,
>
> On 07 Jan 2010, at 01:39, Nick Prince wrote:
>
>
>
> > Hi Bruno
> > OK so there is a good deal of the technical stuff that I've got to
> > catch up on yet before I can interpret what you are saying  (although
> > I think I can understand why the everettian imperative based on comp +
> > UDA is there).
>
> Nice. It is already a big part.
>
> >  However if I could for the moment get an intuitive
> > understanding of what you mean by a consistent extension then perhaps
> > that would help with what Brent brought up.  From what I gather you
> > are saying our next observer moment is based not on the laws of
> > physics but on what possibilities the UD brings up in UD*.
>
> Our "next" first person observer moment. This comes simply from the  
> fact that the UD generates my current state (at the doctor  
> substitution level or below) an infinity of times. In each computation  
> I have a well defined third person next state, but my next 1-state is  
> defined olny statistically on all my next 3-states in all computations  
> going through my current state.
>
> > As an
> > analogy, in conways game of life, the next screen output display (=OM
> > for the little inhabitants) depends on the rules put into the cellular
> > automata (I know this only accounts for a single little universe here
> > and there would be an infinity of universal numbers for the real
> > universe etc, but lets try to keep it simple for the sake of clarity).
>
> OK, but the distinction between 1-state and 3-state forces us to NOT  
> make that simplification. You will encounter a problem.
>
> > So in this game any (little) laws of physics (regularities in the
> > game) are emergent and would become evident to a conscious entity that
> > arose in the game.
>
> Only if you implement the game in an already  "self-multiplying"  
> computations. If not, then, from the first person points of view of  
> the little entities appearing in your game, they will survive  
> somewhere else in the UD*. They will survive "here" (in your game)  
> only from *your* point of view. But "your reality" is a white rabbit  
> universe from *their* point of view.
> Of course, if you do it concretely, what you will build is most  
> probably a quantum object implementing the game of life, and as such,  
> it could gives the right measure. But this is "accidental" in the  
> reasoning, and based on the fact that we know already our  
> neighborhoods are quantum (and/or comp) multiplied.
>
> > So here is a case where physics (regularities in
> > the little world) arise from "a program".  Is there any simple way
> > this analogy or example  can be adapted to demonstrate how the
> > consistent extensions we experience come about.  Does it have
> > something to do with the prescription of the UD.  If not then how does
> > my existence pick its next consistent extension.
>
> It is really the consciousness which picks the consistent extension.  
> It is your consciousness in Moscow which will pick up the consistent  
> extension "Nick + "I am in Moscow"". Similarly, your consciousness in  
> Washington will pick the Washington consistent extension. All the  
> consistent extension are picked, that is why we have to isolate a  
> measure on those extensions.
>
> > It's all to do with
> > what makes extensions "consistent".
>
> Not really. A non consistent extension does not exist, simply. Unless  
> 0 = 1. In auda, we can see that some extension lead to a belief into  
> inconsistency: those are the cul-de-sac worlds. They are consistent  
> ("0 = 1" does not belong to them, but "provable ("0 = 1")" belongs to  
> them, and they are dead end, they have no consistent extensions.
>
> This is subtle and related to the second incompleteness theorem (and  
> Löb theorem). Consistency entails the consistency of inconsistency.  
> Provable(false) does not entails false, because we cannot prove our  
> consistency (if we are consistent).
>
> If we are inconsistent we can prove everything (including the false, 0  
> = 1).
> But if we prove our inconsistency, we still cannot prove everything.  
> We may prove  only that we can prove everything, and that is  
> different, and that difference eventually plays a key role.
>
> You may think to buy the Davis Dover book "The undecidable". It  
> contains the original paper by Gödel, Turing, Church, Rosser and  
> Kleene, and also the formidable paper by Post (which initiate the  
> whole recursion theory), and also its incredible 1920-24 anticipation  
> (up to my thesis!).  And it is cheap.
>
> http://www.amazon.com/Undecidable-Propositions-Unsolvable-Computable-...
>
> His little other Dover book "computability and unsolvability" is  
> rather nice too, but you don't need it if you have the Mendelson or  
> the Cutland book.
>
> > If it's not physics then it must
> > be something
>
> It is arithmetic.
>
> > and is there  a simple analogy that can help me to grasp
> > it?  I find

Re: UDA query

2010-01-07 Thread Stathis Papaioannou
2010/1/7 Brent Meeker :

> I think what I asked about is different from simply assuming idealism.  It
> is carrying your thread of reasoning a few steps further. Suppose Platonic
> objects exist.  Suppose computations, as Platonic objects, are enough to
> instantiate consciousness.  Suppose consciousness consists of discrete
> states of this computation.  Suppose the fact that the states are connected
> by the computation is irrelevant to their instantiation of consciousness.
> The states are themselves Platonic objects.  So if we assume Platonic
> objects exist we will already have assumed these states to exist and
> consciousness to have been instantiated by them - with no reference to
> computation.

That could be, and in fact it is probably closer to what Plato himself
meant. But mathematical objects seem to have a special status in that
they necessarily exist, whereas everything else (including God) exists
only contingently. You can't imagine the number 7 not existing or not
being prime. The special sense in which mathematical objects and
relationships "exist" (maybe not the right word) independently of any
material world is their Platonic realm, but it doesn't follow, having
accepted this, that other objects also exist in a separate Platonic
realm. However, if consciousness supervenes on computation and does
not require actual physical implementation of the computation, then
consciousness piggybacks on the Platonic existence of computation.


-- 
Stathis Papaioannou




Re: UDA query

2010-01-07 Thread Bruno Marchal

Hi Nick,

On 07 Jan 2010, at 01:39, Nick Prince wrote:



Hi Bruno
OK so there is a good deal of the technical stuff that I've got to
catch up on yet before I can interpret what you are saying  (although
I think I can understand why the everettian imperative based on comp +
UDA is there).


Nice. It is already a big part.




 However if I could for the moment get an intuitive
understanding of what you mean by a consistent extension then perhaps
that would help with what Brent brought up.  From what I gather you
are saying our next observer moment is based not on the laws of
physics but on what possibilities the UD brings up in UD*.


Our "next" first person observer moment. This comes simply from the  
fact that the UD generates my current state (at the doctor  
substitution level or below) an infinity of times. In each computation  
I have a well defined third person next state, but my next 1-state is  
defined olny statistically on all my next 3-states in all computations  
going through my current state.
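A minimal sketch of the dovetailing idea may make this concrete. The
"programs" below are trivial stand-in generators rather than an
enumeration of all Turing machines; only the interleaving matters:
every program, halting or not, receives unboundedly many steps.

```python
# Minimal dovetailer sketch.  program(k) is a trivial stand-in (a real UD would
# enumerate all partial computable functions); the structure that matters is the
# interleaving: at stage n, programs 0..n each get executed one more step, so no
# non-halting program can block the others, and every program gets arbitrarily
# many steps as the dovetailer runs on.
from itertools import count, islice

def program(k):
    n = 0
    while True:            # deliberately never halts, like many UD programs
        n += k + 1
        yield n            # "outputs" the multiples of k+1, one per step

def dovetail():
    running = []
    for stage in count():
        running.append(program(stage))       # launch one new program each stage
        for idx, proc in enumerate(running):
            yield stage, idx, next(proc)     # advance every started program one step

for stage, idx, out in islice(dovetail(), 15):
    print(f"stage {stage}: program {idx} emitted {out}")
```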






As an
analogy, in conways game of life, the next screen output display (=OM
for the little inhabitants) depends on the rules put into the cellular
automata (I know this only accounts for a single little universe here
and there would be an infinity of universal numbers for the real
universe etc, but lets try to keep it simple for the sake of clarity).


OK, but the distinction between 1-state and 3-state forces us to NOT  
make that simplification. You will encounter a problem.





So in this game any (little) laws of physics (regularities in the
game) are emergent and would become evident to a conscious entity that
arose in the game.


Only if you implement the game in an already "self-multiplying"
computation. If not, then, from the first person points of view of
the little entities appearing in your game, they will survive
somewhere else in the UD*. They will survive "here" (in your game)
only from *your* point of view. But "your reality" is a white rabbit
universe from *their* point of view.
Of course, if you do it concretely, what you will build is most
probably a quantum object implementing the Game of Life, and as such,
it could give the right measure. But this is "accidental" in the
reasoning, and based on the fact that we already know our
neighborhoods are quantum (and/or comp) multiplied.





So here is a case where physics (regularities in
the little world) arise from "a program".  Is there any simple way
this analogy or example  can be adapted to demonstrate how the
consistent extensions we experience come about.  Does it have
something to do with the prescription of the UD.  If not then how does
my existence pick its next consistent extension.


It is really the consciousness which picks the consistent extension.  
It is your consciousness in Moscow which will pick up the consistent  
extension "Nick + "I am in Moscow"". Similarly, your consciousness in  
Washington will pick the Washington consistent extension. All the
consistent extensions are picked; that is why we have to isolate a
measure on those extensions.




It's all to do with
what makes extensions "consistent".


Not really. A non-consistent extension does not exist, simply. Unless
0 = 1. In auda, we can see that some extensions lead to a belief in
inconsistency: those are the cul-de-sac worlds. They are consistent
("0 = 1" does not belong to them, but "provable ("0 = 1")" belongs to
them), and they are dead ends: they have no consistent extensions.


This is subtle and related to the second incompleteness theorem (and
Löb's theorem). Consistency entails the consistency of inconsistency.
Provable(false) does not entail false, because we cannot prove our
consistency (if we are consistent).


If we are inconsistent we can prove everything (including the false, 0
= 1).
But if we prove our inconsistency, we still cannot prove everything.
We may prove only that we can prove everything, and that is
different, and that difference eventually plays a key role.
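In standard notation, with the box read as arithmetical provability,
these two remarks are just the classical facts:

```latex
% Second incompleteness theorem, for a consistent r.e. theory T extending PA:
%   "consistency entails the consistency of inconsistency"
\mathrm{Con}(T) \;\Longrightarrow\; \mathrm{Con}\bigl(T + \neg\,\mathrm{Con}(T)\bigr)

% In the provability logic GL:
%   from a proof of falsity, everything is provable,
\vdash_{GL}\; \Box\bot \rightarrow \Box\varphi
%   but "provable(false)" does not entail false, since \neg\Box\bot (consistency)
%   is exactly what the machine cannot prove about itself:
\nvdash_{GL}\; \Box\bot \rightarrow \bot
```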


You may want to buy the Davis Dover book "The Undecidable". It
contains the original papers by Gödel, Turing, Church, Rosser and
Kleene, and also the formidable paper by Post (which initiated the
whole of recursion theory), and also its incredible 1920-24 anticipation
(up to my thesis!).  And it is cheap.


http://www.amazon.com/Undecidable-Propositions-Unsolvable-Computable-Functions/dp/0486432289

His other little Dover book, "Computability and Unsolvability", is
rather nice too, but you don't need it if you have the Mendelson or
the Cutland book.




If it's not physics then it must
be something


It is arithmetic.




and is there  a simple analogy that can help me to grasp
it?  I find I can always work out the technicalities better if I have
a "road map" or analogy to help.



Arithmetic defines all the lawful sequences of states. But from inside,
"1-persons" do not belong to any precise computations, but to an
inf

Re: UDA query

2010-01-07 Thread Stathis Papaioannou
2010/1/7 Brent Meeker :

>> A program that generates S2 as it were out of nowhere, with false
>> memories of an S1 that has not yet happened or may never happen, is a
>> perfectly legitimate program and the UD will generate it along with
>> all the others. If the UD is allowed to run forever, this program will
>> be a lower measure contributor to S2 than the program that generates
>> it sequentially;
>
> How do you know this?

Why S2 is unlikely to appear out of nowhere is equivalent to the White
Rabbit problem in ensemble theories, which has been often discussed
over the years on this list. Russell's "Theory of Nothing" book
provides a summary. The general idea is that structures generated by
simpler algorithms have higher measure, and it is simpler to write a
program that computes a series of mental states iteratively than one
that computes a set of disconnected mental states from ad hoc data.
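A purely numerical toy of this point, with invented program lengths and
a Solomonoff-style 2^-length prior standing in for "measure": an
iterative rule has roughly fixed size, while an ad hoc program must
carry each disconnected state as literal data, so its weight shrinks
exponentially with the number of states.

```python
# Toy weights under a 2**-length prior (all lengths below are invented stand-ins,
# not measurements of anything): an iterative generator of N states has roughly
# constant length, while an ad hoc program listing N disconnected states as raw
# data grows linearly with N, so its weight is exponentially smaller.
def weight(length_bits: int) -> float:
    return 2.0 ** -length_bits

N = 10                    # number of successive mental states to account for
bits_per_state = 30       # assumed cost of spelling out one state as data
iterative = 200           # assumed fixed size of a rule generating the states in order
ad_hoc = 100 + N * bits_per_state   # assumed: small loader plus the raw data

print("iterative program weight:", weight(iterative))
print("ad hoc    program weight:", weight(ad_hoc))
print("iterative is 2**%d times more probable" % (ad_hoc - iterative))
```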

>> and similarly in any physicalist theory. But although
>> S2 may guess from such considerations that he is more likely to have
>> been generated sequentially, the point remains that there is nothing
>> in the nature of his experience to indicate this. That is, the fact
>> that S2 remembers S1 as being in the past and remembers a smooth
>> transition from S1 to S2 is no guarantee that S1 really did happen in
>> the past, or even at all.
>
> We're assuming that thought is a kind of computation, a processing of
> information.  And we're also assuming that this processing can consist of
> static states placed in order.  So given two static states, what is the
> relation  that makes their ordering into a computational process?  One
> answer would be that they are successive states generated by some program.
> But you seem to reject that.  To say that S2 remembers S1 doesn't seem to
> answer the question because "remembering" is itself a process, not a static
> state.  I tried to phrase it in terms of the entropy, or information
> content, of S1 and S2 which would be a static property - as for example, if
> S2 simply contained S1.  But that hardly seems a proper representation of
> states of consciousness - I'm certainly not conscious of my memories most of
> the time.  Even as I type this I obviously remember how to type (though
> maybe not how to spell :-) ) but I'm not conscious of it.

You've made this point in the past but I still don't understand it. If
S1 and S2 are periods of experience generated consecutively in your
brain in the usual manner, do you agree that you would still
experience them as consecutive if they were generated by chance by
causally disconnected processes? The requirement would be only that
the respective experiences have the same subjective content in both
cases. Memory is only one aspect of subjective content, if an
important one. If S1-S2 spans the typing of a sentence, then both S1
and S2 have to remember how to type and what the sentence they are
typing is. It may seem to be unconscious but obviously it can't be
completely unconscious, otherwise it could be left out without making
any difference. Your digestion is an example of a completely
unconscious process that need not be taken into account in a
simulation of your mind. Another example is your name: you may have no
awareness at all of your name during S1-S2 so it could safely be left
out of the simulation, although at S3 when you reach the end of your
post and you need to sign it you need to remember what it is.


-- 
Stathis Papaioannou




Re: UDA query

2010-01-07 Thread Bruno Marchal


On 06 Jan 2010, at 20:18, Brent Meeker wrote:


Bruno Marchal wrote:


On 05 Jan 2010, at 19:59, Brent Meeker wrote:


Nick Prince wrote:
Is this because you think of your stream of consciousness as  
somehow

like a reel of film?  All the individual pictures could be cut from
the reel and laid out any which way but the implicit order is  
always

there.  I can understand this because all the spatio temporal
relationships for the actors in the film remain "normal" i.e obey  
the

laws of physics.


But there's the rub.  Why the laws of physics?  That's what somehow
needs to be explained.  Is there something about the UD that  
necessarily

generates law like sequences of states with high probability?



By definition, the UD "generates" all and only the (computable) law  
like sequences.


But only "law like" in the sense of being computable.  Not  
necessarily "law like" in conserving momentum in a 4-space with  
Lorentzian signature.


Yes. Other high level laws can emerge from the computable, note.



The UD executes all programs. It generates all the possible  
computations, those which terminate and those which don't terminate.
It is well defined mathematically, with respect to many equivalence  
results, closure results, Church thesis, etc.


Yes, I understand that.


A notion like "consistent extension" makes sense only for the  
"persons" relatively appearing "in" deeper computations, so the  
precise relation between "consistent extensions" and the UD needs  
the use of the Gödel Löb provability logics.


So do they allow a definition of "consistent extensions" such that  
"persons" can be identified with sequences of consistent extensions  
and those "persons" will define one or more universes in terms of  
intersubjective agreement?


Yes. Like "Brent + "I am in Moscow"" and "Brent + "I am in Washington"  
can appear to be to consistent extension of "Brent + "I am in  
Brussels"". Consistency is a personal and relative notion. "Brent + "I  
am in Moscow"" makes "I am in Moscow"" consistent with "Brent".  
"Brent" here denotes your set of beliefs, not your body.




 That's where you lose me - I don't see how this is to be done.


It is counter-intuitive (or better, counter-Aristotelian-intuitive).
But *you* are losing me. I don't see how we can avoid this, once we
have said "yes" to the doctor. More in the post to Nick.


Bruno
http://iridia.ulb.ac.be/~marchal/






Re: UDA query

2010-01-07 Thread Bruno Marchal


On 06 Jan 2010, at 20:10, Brent Meeker wrote:


Bruno Marchal wrote:


On 05 Jan 2010, at 19:57, Brent Meeker wrote:




Yes but the UD will generate infinitely more often the in order  
S1/S2/S3
than out of order... with what you are saying I don't even  
understand
what is a computation if not a rules ordered sequential state  
order.


Quentin


It seems strange that we start with the hypothesis that  
consciousness is a kind of computation - a sequential processing  
of information - and then arrive at picture in which there is no  
processing and sequence is just inferred.  On the one hand  
consciousness is a process, on the other hand it is static state.   
I suspect there is something wrong with the slicing of the stream  
of consciousness into zero-duration, non-overlapping states.



But that problem occurs also with physics, as illustrated by the
debate on "time" and the "block universe". Also, we have to be careful:
nowhere has it been said that consciousness is a kind of
computation.


It's been said on this list several times (at least by me :-) ).


I would not brag about this. Physicalists make that kind of identification
mistake (mind = brain, for example).
Consciousness is a first person apprehension of itself, a belief/
knowledge in a reality, etc. It may be associated (with some probability)
with a particular computation, but it is better not to identify them.


It is subtle. A lot of identifications will be true (= provable by G*),
yet not provable by the machine (not provable by G).
Those identifications are "religious" (they belong to G* minus G). They are
true but not provable, and this plays a key role in the
understanding of what happens.
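The standard minimal example of a sentence in G* minus G (true of a
sound machine, but not provable by the machine itself) is the machine's
own consistency:

```latex
% With \Box read as the machine's own provability predicate (Goedel II / GL):
G^{*} \;\vdash\; \neg\Box\bot     % the consistency of a sound machine is true,
G \;\nvdash\; \neg\Box\bot        % but the machine cannot prove it of itself.
```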










Obviously "consciousness" is not a kind of computation.
It's not obvious to me.  If the doctor says to me, "This artificial- 
hypothalmus I'm going to substitute for yours, does exactly the same  
input-output computations that your original does.", then I'll be  
much more inclined to say "yes" than if he says it doesn't do any  
computation.


This shows only that some computations can bear consciousness, not that
consciousness is equal to a computation.
Being a first person notion, it is better (still slightly false) to
attach consciousness to all "similar" computations in the UD.
Then the relative proportions of the relative measure will distinguish
between the probable experiences and the rare (white rabbit) ones.






It is a property of (first) person, which, assuming mechanism, is  
invariant for a set of functional substitution.


What is invariant under the functional substitution if not the  
computations?


OK. And with comp, if the functional substitutions are done at the
right level, consciousness will be preserved too, but this does not
mean that the consciousness is the computation.
A functional substitution can preserve the fact that you are winning a
chess game, but the state of "winning a chess game" is not a
computation per se; it is something else, which can be locally attached
to some computation, but no more.


Bruno


http://iridia.ulb.ac.be/~marchal/







Re: UDA query

2010-01-07 Thread Bruno Marchal


On 06 Jan 2010, at 19:57, Brent Meeker wrote:


Stathis Papaioannou wrote:


2010/1/6 Brent Meeker :



I can understand that view, but in that case why consider them
computations?  Why not just suppose all states of your  
consciousness (and
even other parts of the world) exist.  If they can be glued  
together by
inherent features or simply experienced without even an implicit  
order,
then computation seems irrelevant.  Of course that leaves the  
apparent
lawfulness of physics even further from possible explanation than  
the UD

theory.



We start off with what we observe: apparently there is a physical
world, and some parts of this physical world, called brains, seem to
give rise to consciousness. There is reason to think that computers
running a program can also give rise to consciousness. Taking this
hypothesis of computationalism seriously then leads to interesting
questions, such as whether there is a reason to suppose that
consciousness happens only when the computations are physically
instantiated (and what exactly that means), or whether their status  
as

platonic objects is enough to generate the associated consciousness.
In other words, there is a series of rational steps starting from  
what

we observe, and if any step is faulted the whole edifice falls;
whereas imply assuming idealism from the start is ad hoc and
unfalsifiable.



I think what I asked about is different from simply assuming  
idealism.  It is carrying your thread of reasoning a few steps  
further. Suppose Platonic objects exist.  Suppose computations, as  
Platonic objects, are enough to instantiate consciousness.  Suppose  
consciousness consists of discrete states of this computation.


I will insist that consciousness cannot consist of discrete states of
computation. It may be associated to them, attached to them, etc. Consciousness
is a first person notion, and computational states are third person
notions. We cannot identify them. It is the same mistake as
identifying mind and brain. Brains are assemblies of molecules; minds are
memories, information, logical and pragmatic dispositions, etc.
In some threads this can be just an irrelevant detail, but as we are
getting to the crux of the reasoning, we will have to be very careful.
The devil is in the detail ...





Suppose the fact that the states are connected by the computation is  
irrelevant to their instantiation of consciousness.  The states are  
themselves Platonic objects.  So if we assume Platonic objects exist  
we will already have assumed these states to exist and consciousness  
to have been instantiated by them - with no reference to computation.


OK.




I think Bruno avoids this by saying consciousness consists of  
computationally connected sequences thru a given state - not the  
state itself - but I'm not sure why that should be.


Assuming digital mechanism, we can associate consciousness to a
computation. This computation makes sense only with respect to a
number or a machine which "does" (platonically) that computation. If
not, any number could be said to code a computational state, and any
sequence of states could define a computation, and the computations
would be non-enumerable; but the computations (without oracle),
considered in the third person way, are enumerable: each is
generated by a precise phi_i(j).


Now, to associate a consciousness to a computation is not enough. The
association has to be 1-person statistically stable. We have to take
into account the global first person indeterminacy, which involves all
computations.


I will come back on this in my comment to Nick's last post.

Bruno


http://iridia.ulb.ac.be/~marchal/






Re: UDA query

2010-01-06 Thread Nick Prince

Hi Bruno
OK so there is a good deal of the technical stuff that I've got to
catch up on yet before I can interpret what you are saying  (although
I think I can understand why the everettian imperative based on comp +
UDA is there).  However if I could for the moment get an intuitive
understanding of what you mean by a consistent extension then perhaps
that would help with what Brent brought up.  From what I gather you
are saying our next observer moment is based not on the laws of
physics but on what possibilities the UD brings up in UD*.  As an
analogy, in Conway's Game of Life, the next screen output display (= OM
for the little inhabitants) depends on the rules put into the cellular
automaton (I know this only accounts for a single little universe here
and there would be an infinity of universal numbers for the real
universe etc, but let's try to keep it simple for the sake of clarity).
So in this game any (little) laws of physics (regularities in the
game) are emergent and would become evident to a conscious entity that
arose in the game.  So here is a case where physics (regularities in
the little world) arise from "a program".  Is there any simple way
this analogy or example can be adapted to demonstrate how the
consistent extensions we experience come about?  Does it have
something to do with the prescription of the UD?  If not, then how does
my existence pick its next consistent extension?  It's all to do with
what makes extensions "consistent". If it's not physics then it must
be something, and is there a simple analogy that can help me to grasp
it?  I find I can always work out the technicalities better if I have
a "road map" or analogy to help.

Best wishes

Nick

On Jan 6, 5:12 pm, Bruno Marchal  wrote:
> On 06 Jan 2010, at 01:21, Nick Prince wrote:
>
>
>
>
>
> > Hi Brent
>
> > Perhaps Bruno could give some clarification  here. Just prior to his
> > conclusion on the sane paper I quoted from was this:
>
> > "So if we keep comp at this stage, we are forced to relate the inner
> > experience only to
> > the type of computation involved. The reason is that only those types
> > are univocally related to
> > all their possible counterfactuals. This entails that, from a first
> > person point of view, not only
> > the physical cannot be distinguished from the virtual, but the virtual
> > can no more be
> > distinguished from the arithmetical. Now DU is emulated
> > platonistically by the verifiable
> > propositions of arithmetic. They are equivalent to sentences of the
> > form ‘‘it exists n such that
> > P(n)’’ with P(n) decidable. Their truth entails their provability, and
> > they are known under the
> > name of Sigma1 sentence.
> > If comp is correct, the appearance of physics must be recovered from
> > some point of
> > views emerging from those propositions. Indeed, taking into account
> > the seven steps once
> > more, we arrive at the conclusion that the physical atomic (in the
> > Boolean logician sense)
> > invariant proposition must be given by a probability measure on those
> > propositions. A
> > physical certainty must be true in all maximal extensions, true in at
> > least one maximal extension (we will see later why the second
> > condition does not follow from the first) and
> > accessible by the UD, that is arithmetically verifiable. Figure 8
> > illustrates our main
> > conclusion, where the number 1 is put for the so called Sigma1
> > sentences of arithmetic."
>
> > It sounds as if Bruno thinks that the computations of the UD invoke
> > our inner experiences and also our understanding of  physics.  Both
> > come from arithmetical platonicism ( because thats what the UD is all
> > about).  So the pictures in the "film" are stiched together by the
> > arithmetical (computation necessity) rather than the laws of of
> > physics... Hmm not what I thought and said earlier!!
>
> > So according to Bruno the laws of physics come from something
> > intrinsic in the computation?  Not quite sure how.  I just can't
> > figure out any more at the moment and hope Bruno will give me a hint
> > here.
>
> But the quote you give is the conclusion of step 7 and 8. Except that  
> I use a bit the vocabulary which will help to understand the  
> "interview" of the Löbian machine.
>
> Normally at step seven you understand that COMP + concrete UD => "I am  
> already in UD*" and the physical laws have to result from a sum on my  
> first person (hopefully plural) indeterminacy in UD*. (step 1 ->6 + 7)
>
> Then step 8, MGA,  shows (is supposed to show) that COMP makes any  
> concrete running of the UD irrelevant.
> (but the MGA thread in this list is better, I may send a new version  
> of MGA). MGA = Movie Graph Argument.
>
> This is not *just* because UD* is represented, remarkably enough,  in  
> the elementary consequences of addition and multiplication, but mainly  
> because, by MGA, comp together with the physical supervenience thesis  
> makes it necessary to confuse a computation and a description of a computation.

Re: UDA query

2010-01-06 Thread Brent Meeker

Bruno Marchal wrote:


On 06 Jan 2010, at 01:21, Nick Prince wrote:


Hi Brent

Perhaps Bruno could give some clarification  here. Just prior to his
conclusion on the sane paper I quoted from was this:

"So if we keep comp at this stage, we are forced to relate the inner
experience only to
the type of computation involved. The reason is that only those types
are univocally related to
all their possible counterfactuals. This entails that, from a first
person point of view, not only
the physical cannot be distinguished from the virtual, but the virtual
can no more be
distinguished from the arithmetical. Now DU is emulated
platonistically by the verifiable
propositions of arithmetic. They are equivalent to sentences of the
form ‘‘it exists n such that
P(n)’’ with P(n) decidable. Their truth entails their provability, and
they are known under the
name of Sigma1 sentence.
If comp is correct, the appearance of physics must be recovered from
some point of
views emerging from those propositions. Indeed, taking into account
the seven steps once
more, we arrive at the conclusion that the physical atomic (in the
Boolean logician sense)
invariant proposition must be given by a probability measure on those
propositions. A
physical certainty must be true in all maximal extensions, true in at
least one maximal extension (we will see later why the second
condition does not follow from the first) and
accessible by the UD, that is arithmetically verifiable. Figure 8
illustrates our main
conclusion, where the number 1 is put for the so called Sigma1
sentences of arithmetic."

It sounds as if Bruno thinks that the computations of the UD invoke
our inner experiences and also our understanding of physics.  Both
come from arithmetical platonism (because that's what the UD is all
about).  So the pictures in the "film" are stitched together by
arithmetical (computational) necessity rather than by the laws of
physics... Hmm, not what I thought and said earlier!!

So according to Bruno the laws of physics come from something
intrinsic in the computation?  Not quite sure how.  I just can't
figure out any more at the moment and hope Bruno will give me a hint
here.


But the quote you give is the conclusion of steps 7 and 8.  Except that
I use a bit of the vocabulary which will help in understanding the
"interview" of the Löbian machine.


Normally at step seven you understand that COMP + concrete UD => "I am 
already in UD*" and the physical laws have to result from a sum on my 
first person (hopefully plural) indeterminacy in UD*. (step 1 ->6 + 7)


Then step 8, MGA,  shows (is supposed to show) that COMP makes any 
concrete running of the UD irrelevant.
(but the MGA thread in this list is better, I may send a new version 
of MGA). MGA = Movie Graph Argument.


This is not *just* because UD* is represented, remarkably enough,  in 
the elementary consequences of addition and multiplication, but mainly 
because, by MGA, comp together with the physical supervenience thesis 
makes it necessary to confuse a computation and a description of a 
computation. 


I think you need to carefully explicate your terminology here.  Logicians
and mathematicians tend to use "description", like "model", to mean
exactly the opposite of what engineers and physicists mean by those
terms.  The physicist thinks of the physical computer running as the
computation and the program as a description of what it is (supposed to
be) doing.  But I don't think that's what you mean.


Brent

The computation has to consist in the logical relations, not in this 
or that implementation, (which, btw, can only be a reduction to a 
particular universal machine).


Do you see that COMP + concrete UD leads to an Everett-DeWitt shock?
We are multiplied by 10^100+ at each instant.  COMP leads, naively, to
an aleph_zero multiplication, or even 2^aleph_zero (in a sense).


Then MGA is the next and last difficulty.  (before the machine 
interview, if interested).


Bruno

http://iridia.ulb.ac.be/~marchal/









Re: UDA query

2010-01-06 Thread Brent Meeker

Bruno Marchal wrote:


On 05 Jan 2010, at 19:59, Brent Meeker wrote:


Nick Prince wrote:

Is this because you think of your stream of consciousness as somehow
like a reel of film?  All the individual pictures could be cut from
the reel and laid out any which way but the implicit order is always
there.  I can understand this because all the spatio temporal
relationships for the actors in the film remain "normal" i.e obey the
laws of physics.


But there's the rub.  Why the laws of physics?  That's what somehow
needs to be explained.  Is there something about the UD that necessarily
generates law like sequences of states with high probability?



By definition, the UD "generates" all and only the (computable) law-like
sequences.


But only "law like" in the sense of being computable.  Not necessarily 
"law like" in conserving momentum in a 4-space with Lorentzian signature.


The problem is that the physical, law-like sequences have to be
justified, indeed.
This is what is interesting in comp. It gives a solid theory of mind 
(computer science, mathematical logic, machine self-reference, etc.), 
and it transforms the mind body problem into a body problem.

The laws of physics have a reason, an origin.





Doesn't
it generate just those laws we seem to find - that would be a great
discovery.


The UD generates all the laws.
It may or may not generate the laws we seem to find.
In any case, those laws have to be a sum on all the (computable) laws. 
(ud argument).





Or does it generate all possible non-self-contradictory
multiverses - in which case nothing has been explained.


The UD executes all programs. It generates all the possible 
computations, those which terminate and those which don't terminate.
It is well defined mathematically, with respect to many equivalence 
results, closure results, Church thesis, etc.


Yes, I understand that.


A notion like "consistent extension" makes sense only for the 
"persons" relatively appearing "in" deeper computations, so the 
precise relation between "consistent extensions" and the UD needs the 
use of the Gödel Löb provability logics.


So do they allow a definition of "consistent extensions" such that 
"persons" can be identified with sequences of consistent extensions and 
those "persons" will define one or more universes in terms of 
intersubjective agreement?  That's where you lose me - I don't see how 
this is to be done.


Brent




Bruno





Deutsch argues similarly in the Fabric of reality.
In my work I often come across the idea of a foliation of
hypersurfaces which is really a set of 3D pictures "stuck together and
stacked in the direction of the time coordinate of the world at a
given instant of time.


But that's starting with the physics given, so the hypersurfaces and
their relation is already defined.

Brent


In MW interpretation though I guess that the
stacking is less certain as in the block universe idea but that's
another issue.  Is this analogy similar to how you feel  the "obvious"
experience of time being normal?

Best

Nick






http://iridia.ulb.ac.be/~marchal/









Re: UDA query

2010-01-06 Thread Brent Meeker

Bruno Marchal wrote:


On 05 Jan 2010, at 19:57, Brent Meeker wrote:




Yes, but the UD will generate the in-order S1/S2/S3 infinitely more often
than out of order... With what you are saying I don't even understand
what a computation is, if not a rule-ordered sequence of states.

Quentin


It seems strange that we start with the hypothesis that consciousness
is a kind of computation - a sequential processing of information -
and then arrive at a picture in which there is no processing and
sequence is just inferred.  On the one hand consciousness is a
process, on the other hand it is a static state.  I suspect there is
something wrong with the slicing of the stream of consciousness into
zero-duration, non-overlapping states.



But that problem occurs also with physics, as illustrated by the
debate on "time" and the "block universe".
Also, we have to be careful: nowhere has it been said that
consciousness is a kind of computation.


It's been said on this list several times (at least by me :-) ).


Obviously "consciousness" is not a kind of computation.
It's not obvious to me.  If the doctor says to me, "This 
artificial-hypothalmus I'm going to substitute for yours, does exactly 
the same input-output computations that your original does.", then I'll 
be much more inclined to say "yes" than if he says it doesn't do any 
computation.


It is a property of (first) person, which, assuming mechanism, is 
invariant for a set of functional substitution.


What is invariant under the functional substitution if not the computations?

Brent

Then a reasoning shows that we cannot distinguish a "physical 
computation" from a mathematical one, and that we have to take this 
into account for justifying the (conscious) appearance of the physical 
laws.


Slicing the stream of consciousness, or just the stream of time as
the physicists often do, into zero-length intervals is a criticism of
the use of real numbers, and somehow comp escapes it, given that real
numbers do not (necessarily) exist at the ontological level.  They
exist necessarily at the epistemological level, though.





I can see that states can encode information that, when coarse 
grained, define a sequence of increasing entropy, but is it 
legitimate to identify having the information "in memory" with 
"remembering"?


In my opinion, time is far less problematic in comp than in physics,
given that we assume a form of primitive time, first by the number
order, then by the length of computations or of proofs.
Arithmetic and provability logic are so "antisymmetrical" that I was
afraid the comp physics would contradict the very symmetry of nature
(laws of physics are reversible, most computations are not).
But the "intelligible and sensible" comp "matter" (the probability one,
defined by Bp & Dt (& p)) luckily enough seems able to restore the
symmetry, or at least some symmetry.  Enough?  Open problem.



Bruno


http://iridia.ulb.ac.be/~marchal/ 












Re: UDA query

2010-01-06 Thread Brent Meeker




Stathis Papaioannou wrote:

  2010/1/6 Brent Meeker :

  
  
I can understand that view, but in that case why consider them
computations?  Why not just suppose all states of your consciousness (and
even other parts of the world) exist.  If they can be glued together by
inherent features or simply experienced without even an implicit order,
then computation seems irrelevant.  Of course that leaves the apparent
lawfulness of physics even further from possible explanation than the UD
theory.

  
  
We start off with what we observe: apparently there is a physical
world, and some parts of this physical world, called brains, seem to
give rise to consciousness. There is reason to think that computers
running a program can also give rise to consciousness. Taking this
hypothesis of computationalism seriously then leads to interesting
questions, such as whether there is a reason to suppose that
consciousness happens only when the computations are physically
instantiated (and what exactly that means), or whether their status as
platonic objects is enough to generate the associated consciousness.
In other words, there is a series of rational steps starting from what
we observe, and if any step is faulted the whole edifice falls;
whereas simply assuming idealism from the start is ad hoc and
unfalsifiable.


  

I think what I asked about is different from simply assuming idealism. 
It is carrying your thread of reasoning a few steps further. Suppose
Platonic objects exist.  Suppose computations, as Platonic objects, are
enough to instantiate consciousness.  Suppose consciousness consists of
discrete states of this computation.  Suppose the fact that the states
are connected by the computation is irrelevant to their instantiation
of consciousness.  The states are themselves Platonic objects.  So if
we assume Platonic objects exist we will already have assumed these
states to exist and consciousness to have been instantiated by them -
with no reference to computation.

I think Bruno avoids this by saying consciousness consists of
computationally connected sequences thru a given state - not the state
itself - but I'm not sure why that should be.

Brent





Re: UDA query

2010-01-06 Thread Brent Meeker

Stathis Papaioannou wrote:

2010/1/6 Nick Prince :
  

As I understand it the UD generates all possible programs and as it
generates each one it runs one step of it before generating the next.
Does that not mean that eventually it will generate the program which
is generating what we understand to be some observer moments for us at
this particular time. This is where I was thinking of the foliation
bit - each hypersurface is a snapshot in time of the universe as
experienced by me.  

But of course relativity tells us there is no canonical way to foliate 
the universe; your experience is local and is determined by your past 
light cone, not by the "now" hypersurface.



This being said would that not mean they would
necessarily be in order or are you thinking that some other program.
could generate by chance a perfectly good observer moment that was out
of sync?



A program that generates S2 as it were out of nowhere, with false
memories of an S1 that has not yet happened or may never happen, is a
perfectly legitimate program and the UD will generate it along with
all the others. If the UD is allowed to run forever, this program will
be a lower measure contributor to S2 than the program that generates
it sequentially; 


How do you know this?


and similarly in any physicalist theory. But although
S2 may guess from such considerations that he is more likely to have
been generated sequentially, the point remains that there is nothing
in the nature of his experience to indicate this. That is, the fact
that S2 remembers S1 as being in the past and remembers a smooth
transition from S1 to S2 is no guarantee that S1 really did happen in
the past, or even at all.


We're assuming that thought is a kind of computation, a processing of 
information.  And we're also assuming that this processing can consist 
of static states placed in order.  So given two static states, what is 
the relation  that makes their ordering into a computational process?  
One answer would be that they are successive states generated by some 
program. But you seem to reject that.  To say that S2 remembers S1 
doesn't seem to answer the question because "remembering" is itself a 
process, not a static state.  I tried to phrase it in terms of the 
entropy, or information content, of S1 and S2 which would be a static 
property - as for example, if S2 simply contained S1.  But that hardly 
seems a proper representation of states of consciousness - I'm certainly 
not conscious of my memories most of the time.  Even as I type this I 
obviously remember how to type (though maybe not how to spell :-) ) but 
I'm not conscious of it.


Brent




Re: UDA query

2010-01-06 Thread Bruno Marchal


On 06 Jan 2010, at 03:34, Brent Meeker wrote:


Nick Prince wrote:


Hi Brent

Perhaps Bruno could give some clarification  here. Just prior to his
conclusion on the sane paper I quoted from was this:

"So if we keep comp at this stage, we are forced to relate the inner
experience only to
the type of computation involved. The reason is that only those types
are univocally related to
all their possible counterfactuals. This entails that, from a first
person point of view, not only
the physical cannot be distinguished from the virtual, but the  
virtual

can no more be
distinguished from the arithmetical. Now DU is emulated
platonistically by the verifiable
propositions of arithmetic. They are equivalent to sentences of the
form ‘‘it exists n such that
P(n)’’ with P(n) decidable. Their truth entails their provability,  
and

they are known under the
name of Sigma1 sentence.
If comp is correct, the appearance of physics must be recovered from
some point of
views emerging from those propositions.


Why only the atomic sentences?  Why not all true sentences?  How is  
"appearance" recovered?


The atomic propositions p, q, r of the modal logic (G) are interpreted
by the Sigma_1 sentences of arithmetic (of the shape ExP(x), with P
decidable).  Dovetailing on their (infinitely many) proofs can be shown
equivalent to a universal dovetailing (and thus truly universal, by
Church's thesis).  Limiting the arithmetical interpretation to that tiny
Sigma_1-complete part is the way to interview the *computationalist*
machine.
The formula "p -> Bp" characterizes such Sigma_1 arithmetical formulas,
provably so, by the Löbian machine.


So G + (p -> Bp) is the logic used in the final step.
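For readers less at home with the modal notation, here is a compact
summary of that logic in standard provability-logic form.  This is only
a reading aid (one conventional presentation, not Bruno's own notation);
B is the provability box and f the constant false.

\[
\begin{aligned}
&\text{(K)}     && B(p \to q) \to (Bp \to Bq)\\
&\text{(L\"ob)} && B(Bp \to p) \to Bp\\
&\text{(Nec)}   && \text{from } p \text{ infer } Bp\\
&(\Sigma_1)     && p \to Bp \quad \text{(added when the atoms are read as } \Sigma_1
                   \text{ sentences } \exists x\, P(x),\ P \text{ decidable)}
\end{aligned}
\]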







Indeed, taking into account
the seven steps once
more, we arrive at the conclusion that the physical atomic (in the
Boolean logician sense)
invariant proposition must be given by a probability measure on those
propositions.
But what gives the probability measure?  Is it just the relative  
frequency of occurrence of the atomic sentences in the UD output up  
to a given step?


The 'measure one' will have a logic related to the logic of Bp & Dt  
(and variants).  The measure itself may follow, or not.
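To unpack the abbreviations (a gloss only, not a new claim): Dt is the
dual of B applied to the constant true, i.e. consistency, and the
"variants" come from conjoining the box with Dt and/or with p itself:

\[
Dt \;\equiv\; \neg B \neg t \;\equiv\; \neg Bf \qquad \text{(consistency)}
\]
\[
Bp, \qquad Bp \wedge p, \qquad Bp \wedge Dt, \qquad Bp \wedge Dt \wedge p
\]

For the Sigma_1 atoms these readings coincide in truth value for a
correct machine, but the machine cannot prove that they do, which is why
they obey different logics; the last two are the "probability one"
modalities written Bp & Dt (& p) elsewhere in this thread.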


Incompleteness, being self-discoverable, provides a geometry on the
common ignorance of all universal machines, from which the "physical
laws" should emerge.


Hmm... I guess you miss something in the MGA, or with computational  
supervenience. Computational supervenience is really "step 7", but in  
front of arithmetical realism, which contains UD* in some way.


Not easy stuff ...

Bruno




http://iridia.ulb.ac.be/~marchal/







Re: UDA query

2010-01-06 Thread Bruno Marchal


On 06 Jan 2010, at 01:21, Nick Prince wrote:


Hi Brent

Perhaps Bruno could give some clarification  here. Just prior to his
conclusion on the sane paper I quoted from was this:

"So if we keep comp at this stage, we are forced to relate the inner
experience only to
the type of computation involved. The reason is that only those types
are univocally related to
all their possible counterfactuals. This entails that, from a first
person point of view, not only
the physical cannot be distinguished from the virtual, but the virtual
can no more be
distinguished from the arithmetical. Now DU is emulated
platonistically by the verifiable
propositions of arithmetic. They are equivalent to sentences of the
form ‘‘it exists n such that
P(n)’’ with P(n) decidable. Their truth entails their provability, and
they are known under the
name of Sigma1 sentence.
If comp is correct, the appearance of physics must be recovered from
some point of
views emerging from those propositions. Indeed, taking into account
the seven steps once
more, we arrive at the conclusion that the physical atomic (in the
Boolean logician sense)
invariant proposition must be given by a probability measure on those
propositions. A
physical certainty must be true in all maximal extensions, true in at
least one maximal extension (we will see later why the second
condition does not follow from the first) and
accessible by the UD, that is arithmetically verifiable. Figure 8
illustrates our main
conclusion, where the number 1 is put for the so called Sigma1
sentences of arithmetic."

It sounds as if Bruno thinks that the computations of the UD invoke
our inner experiences and also our understanding of physics.  Both
come from arithmetical platonism (because that's what the UD is all
about).  So the pictures in the "film" are stitched together by
arithmetical (computational) necessity rather than by the laws of
physics... Hmm, not what I thought and said earlier!!

So according to Bruno the laws of physics come from something
intrinsic in the computation?  Not quite sure how.  I just can't
figure out any more at the moment and hope Bruno will give me a hint
here.


But the quote you give is the conclusion of steps 7 and 8.  Except that
I use a bit of the vocabulary which will help in understanding the
"interview" of the Löbian machine.


Normally at step seven you understand that COMP + concrete UD => "I am  
already in UD*" and the physical laws have to result from a sum on my  
first person (hopefully plural) indeterminacy in UD*. (step 1 ->6 + 7)


Then step 8, MGA,  shows (is supposed to show) that COMP makes any  
concrete running of the UD irrelevant.
(but the MGA thread in this list is better, I may send a new version  
of MGA). MGA = Movie Graph Argument.


This is not *just* because UD* is represented, remarkably enough,  in  
the elementary consequences of addition and multiplication, but mainly  
because, by MGA, comp together with the physical supervenience thesis  
makes it necessary to confuse a computation and a description of a  
computation. The computation has to consist in the logical relations,  
not in this or that implementation, (which, btw, can only be a  
reduction to a particular universal machine).


Do you see that COMP + concrete UD leads to an Everett-DeWitt shock?
We are multiplied by 10^100+ at each instant.  COMP leads, naively, to
an aleph_zero multiplication, or even 2^aleph_zero (in a sense).


Then MGA is the next and last difficulty.  (before the machine  
interview, if interested).


Bruno

http://iridia.ulb.ac.be/~marchal/







Re: UDA query

2010-01-06 Thread Bruno Marchal


On 05 Jan 2010, at 23:44, Brent Meeker wrote:


Nick Prince wrote:

OOps sorry I sent an empty post by accident.

I agree with you here.  But I am new to this field so I am uncertain
about so many things.  However, I don't understand why it is that  a
UD would know how to generate these law like sequences of states. It
may well generate all possible programs that generate all possible
universes (with different values for the physical constants say -
maybe even different laws) but I wonder why our consciousness defines
itself by "selecting" only those "consistent" extension among all the
states available that obey a certain set of  laws of physics.

I thought that a TOE should explain the laws of physics and Bruno
states in his SANE paper

" Conclusion: Physics is given by a measure on the consistent
computational histories, or
maximal consistent extensions as seen from some first person point of
view.


But consistent in what sense?  We can't say "consistent with the  
laws of physics" because that's what we're trying to explain.



Laws of physics,
in particular, should be inferable from the true verifiable ‘‘atomic
sentences’’. Those are the
verifiable arithmetical sentences.


I understand true arithmetical sentences, but I'm not sure what  
'verifiable' means?  Does it mean computable, or provable?  What's  
an atomic sentence?  Is it just a finite statement, like "17 is  
prime"; so it excludes infinite statements like Goldbach's conjecture?



p is verifiable means that if p is true then p is provable.

 "p -> Bp" is true for those sentences p.

All statements of the shape "there exists a machine x which will access
state y" are of that nature.  We may run all the machines and never
access state y, so that we remain ignorant in case the statement is
false; but if the statement is true we will know it, sooner or later
(in principle, or in Platonia).


Typically, the Sigma_1 sentences. Those can be put in the shape  
ExP(x), with P decidable. If "ExP(x)' is true, we can find it by  
testing P(0), P(1), P(2) ... up to the P(k) witnessing the truth of  
"ExP(x)". If "ExP(x)" is false, we may never know, and this procedure  
will not decide the sentence.
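A minimal sketch of that search procedure in Python, assuming the
decidable predicate P is given as an ordinary function (the names are
illustrative only): it halts with a witness exactly when ExP(x) is true,
and loops forever otherwise.

import itertools

def verify_sigma1(P):
    """Semi-decide the Sigma_1 sentence 'there exists x such that P(x)'.

    P must be a total, decidable test on the natural numbers.  If the
    sentence is true this returns the least witness; if it is false the
    loop never ends, which is exactly the point made above.
    """
    for x in itertools.count():        # test P(0), P(1), P(2), ...
        if P(x):
            return x                   # witness found: the sentence is verified

# Example: "there exists an even number greater than 100" (true, so it halts).
print(verify_sigma1(lambda x: x > 100 and x % 2 == 0))   # prints 102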


The DU, implemented in arithmetic, flows through all the true Sigma_1
sentences, but also through all the attempted proofs of the false ones;
this changes the internal measure of the true ones.  Enough for a
successful arithmetical renormalization?  Open problem.


Bruno


http://iridia.ulb.ac.be/~marchal/







Re: UDA query

2010-01-06 Thread Bruno Marchal


On 05 Jan 2010, at 21:18, Nick Prince wrote:


It feels a bit like a chicken-and-egg situation - do we pick out the
laws or do they pick us?  But I am still working my way through this
and loads of other stuff, so I don't understand it yet.



The computable laws (definable in elementary arithmetic) pick "us",  
and "we" pick the physical law.


"Number => consciousness => matter."

But this makes sense only if you mean by "us", "us, the universal
machines".

It is pretty ridiculous if by "us" you mean "us, the humans".

It is tricky to understand.  Comp *is* counterintuitive.  It is related
to a gap between the first and third person points of view, which comes
from the gap between 'true' and 'provable' (and 'true and provable',
etc.).


The possibility of this "reversal" comes from "programming", or "Gödel
numbering".  It comes from the fact that a part of mathematical
reasoning can be translated into arithmetic, and so can the
computations.
Auda comes from the fact, already well seen by Gödel in 1931, that
machines, or axiomatizable sets of beliefs (theories), can prove their
own Gödel incompleteness result (the so-called "formalized" second
incompleteness theorem): ~Bf -> ~B ~Bf.
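For the record, the modal derivation behind that formula is short and
standard (plain provability logic, nothing specific to comp): ~Bf -> ~B~Bf
is the contraposed form of B~Bf -> Bf, which falls out of Löb's axiom.

\[
\neg Bf \to (Bf \to f) \qquad \text{(propositional tautology)}
\]
\[
B\neg Bf \to B(Bf \to f) \qquad \text{(necessitation and K)}
\]
\[
B(Bf \to f) \to Bf \qquad \text{(L\"ob's axiom with } p := f)
\]
\[
\text{hence } B\neg Bf \to Bf, \ \text{ i.e. } \ \neg Bf \to \neg B\neg Bf.
\]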


Good book: Boolos 1979. (assume Mendelson's book or alike). No need  
for uda, although it helps to "de-trivialize" uda, it makes the mind  
body problem a problem in pure math/computer science.


Bruno







http://iridia.ulb.ac.be/~marchal/







Re: UDA query

2010-01-06 Thread Bruno Marchal


On 05 Jan 2010, at 19:59, Brent Meeker wrote:


Nick Prince wrote:

Is this because you think of your stream of consciousness as somehow
like a reel of film?  All the individual pictures could be cut from
the reel and laid out any which way but the implicit order is always
there.  I can understand this because all the spatio temporal
relationships for the actors in the film remain "normal" i.e obey the
laws of physics.


But there's the rub.  Why the laws of physics?  That's what somehow
needs to be explained.  Is there something about the UD that  
necessarily

generates law like sequences of states with high probability?



By definition, the UD "generates" all and only the (computable) law-like
sequences.
The problem is that the physical, law-like sequences have to be
justified, indeed.
This is what is interesting in comp. It gives a solid theory of mind  
(computer science, mathematical logic, machine self-reference, etc.),  
and it transforms the mind body problem into a body problem.

The laws of physics have a reason, an origin.





Doesn't
it generate just those laws we seem to find - that would be a great
discovery.


The UD generates all the laws.
It may or may not generate the laws we seem to find.
In any case, those laws have to be a sum on all the (computable) laws.  
(ud argument).





Or does it generate all possible non-self-contradictory
multiverses - in which case nothing has been explained.


The UD executes all programs. It generates all the possible  
computations, those which terminate and those which don't terminate.
It is well defined mathematically, with respect to many equivalence  
results, closure results, Church thesis, etc.
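Since the dovetailing scheme keeps coming up in this thread, here is a
toy sketch in Python of the scheduling idea only.  The "programs" below
are placeholder generators of my own invention; a real UD enumerates all
the programs of some universal machine, which no short snippet can do.

def program(i):
    """Placeholder for the i-th program: an endless stream of states."""
    state = 0
    while True:
        state = state + i + 1          # any rule at all would do here
        yield (i, state)

def dovetail(rounds):
    """Generate program 0 and run it one step; generate program 1 and run
    programs 0 and 1 one step each; and so on, so that no single
    (possibly non-terminating) program ever blocks the others."""
    running = []                       # the programs generated so far
    trace = []                         # the interleaved (program, state) steps
    for n in range(rounds):
        running.append(program(n))     # generate the next program
        for p in running:              # advance every program by one step
            trace.append(next(p))
    return trace

print(dovetail(4))
# [(0, 1), (0, 2), (1, 2), (0, 3), (1, 4), (2, 3), (0, 4), (1, 6), (2, 6), (3, 4)]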


A notion like "consistent extension" makes sense only for the  
"persons" relatively appearing "in" deeper computations, so the  
precise relation between "consistent extensions" and the UD needs the  
use of the Gödel Löb provability logics.


Bruno





Deutsch argues similarly in the Fabric of reality.
In my work I often come across the idea of a foliation of
hypersurfaces which is really a set of 3D pictures "stuck together  
and

stacked in the direction of the time coordinate of the world at a
given instant of time.


But that's starting with the physics given, so the hypersurfaces and
their relation is already defined.

Brent


In MW interpretation though I guess that the
stacking is less certain as in the block universe idea but that's
another issue.  Is this analogy similar to how you feel  the  
"obvious"

experience of time being normal?

Best

Nick






http://iridia.ulb.ac.be/~marchal/







Re: UDA query

2010-01-06 Thread Bruno Marchal


On 05 Jan 2010, at 19:57, Brent Meeker wrote:




Yes, but the UD will generate the in-order S1/S2/S3 infinitely more
often than out of order... With what you are saying I don't even
understand what a computation is, if not a rule-ordered sequence of
states.

Quentin


It seems strange that we start with the hypothesis that
consciousness is a kind of computation - a sequential processing of
information - and then arrive at a picture in which there is no
processing and sequence is just inferred.  On the one hand
consciousness is a process, on the other hand it is a static state.  I
suspect there is something wrong with the slicing of the stream of
consciousness into zero-duration, non-overlapping states.



But that problem occurs also with physics, as illustrated by the
debate on "time" and the "block universe".
Also, we have to be careful: nowhere has it been said that
consciousness is a kind of computation.  Obviously "consciousness" is
not a kind of computation.  It is a property of a (first) person which,
assuming mechanism, is invariant for a set of functional substitutions.
Then a reasoning shows that we cannot distinguish a "physical  
computation" from a mathematical one, and that we have to take this  
into account for justifying the (conscious) appearance of the physical  
laws.


Slicing the stream of consciousness, or just the stream of time as
the physicists often do, into zero-length intervals is a criticism of
the use of real numbers, and somehow comp escapes it, given that real
numbers do not (necessarily) exist at the ontological level.  They
exist necessarily at the epistemological level, though.





I can see that states can encode information that, when coarse  
grained, define a sequence of increasing entropy, but is it  
legitimate to identify having the information "in memory" with  
"remembering"?


In my opinion, time is far less problematic in comp than in physics,
given that we assume a form of primitive time, first by the number
order, then by the length of computations or of proofs.
Arithmetic and provability logic are so "antisymmetrical" that I was
afraid the comp physics would contradict the very symmetry of nature
(laws of physics are reversible, most computations are not).
But the "intelligible and sensible" comp "matter" (the probability one,
defined by Bp & Dt (& p)) luckily enough seems able to restore the
symmetry, or at least some symmetry.  Enough?  Open problem.



Bruno


http://iridia.ulb.ac.be/~marchal/






Re: UDA query

2010-01-06 Thread Stathis Papaioannou
2010/1/6 Brent Meeker :

> I can understand that view, but in that case why consider them
> computations?  Why not just suppose all states of your consciousness (and
> even other parts of the world) exist.  If they can be glued together by
> inherent features or simply experienced without even an implicit order,
> then computation seems irrelevant.  Of course that leaves the apparent
> lawfulness of physics even further from possible explanation than the UD
> theory.

We start off with what we observe: apparently there is a physical
world, and some parts of this physical world, called brains, seem to
give rise to consciousness. There is reason to think that computers
running a program can also give rise to consciousness. Taking this
hypothesis of computationalism seriously then leads to interesting
questions, such as whether there is a reason to suppose that
consciousness happens only when the computations are physically
instantiated (and what exactly that means), or whether their status as
platonic objects is enough to generate the associated consciousness.
In other words, there is a series of rational steps starting from what
we observe, and if any step is faulted the whole edifice falls;
whereas simply assuming idealism from the start is ad hoc and
unfalsifiable.


-- 
Stathis Papaioannou




Re: UDA query

2010-01-06 Thread Stathis Papaioannou
2010/1/6 Nick Prince :
> As I understand it the UD generates all possible programs and as it
> generates each one it runs one step of it before generating the next.
> Does that not mean that eventually it will generate the program which
> is generating what we understand to be some observer moments for us at
> this particular time. This is where I was thinking of the foliation
> bit - each hypersurface is a snapshot in time of the universe as
> experienced by me.  This being said would that not mean they would
> necessarily be in order or are you thinking that some other program.
> could generate by chance a perfectly good observer moment that was out
> of sync?

A program that generates S2 as it were out of nowhere, with false
memories of an S1 that has not yet happened or may never happen, is a
perfectly legitimate program and the UD will generate it along with
all the others. If the UD is allowed to run forever, this program will
be a lower measure contributor to S2 than the program that generates
it sequentially; and similarly in any physicalist theory. But although
S2 may guess from such considerations that he is more likely to have
been generated sequentially, the point remains that there is nothing
in the nature of his experience to indicate this. That is, the fact
that S2 remembers S1 as being in the past and remembers a smooth
transition from S1 to S2 is no guarantee that S1 really did happen in
the past, or even at all.


-- 
Stathis Papaioannou




Re: UDA query

2010-01-05 Thread Brent Meeker




Nick Prince wrote:

  Hi Brent

Perhaps Bruno could give some clarification  here. Just prior to his
conclusion on the sane paper I quoted from was this:

"So if we keep comp at this stage, we are forced to relate the inner
experience only to
the type of computation involved. The reason is that only those types
are univocally related to
all their possible counterfactuals. This entails that, from a first
person point of view, not only
the physical cannot be distinguished from the virtual, but the virtual
can no more be
distinguished from the arithmetical. Now DU is emulated
platonistically by the verifiable
propositions of arithmetic. They are equivalent to sentences of the
form ‘‘it exists n such that
P(n)’’ with P(n) decidable. Their truth entails their provability, and
they are known under the
name of Sigma1 sentence.
If comp is correct, the appearance of physics must be recovered from
some point of
views emerging from those propositions. 


Why only the atomic sentences?  Why not all true sentences?  How is
"appearance" recovered?


  Indeed, taking into account
the seven steps once
more, we arrive at the conclusion that the physical atomic (in the
Boolean logician sense)
invariant proposition must be given by a probability measure on those
propositions. 

But what gives the probability measure?  Is it just the relative
frequency of occurrence of the atomic sentences in the UD output up to
a given step?

Brent


  A
physical certainty must be true in all maximal extensions, true in at
least one maximal extension (we will see later why the second
condition does not follow from the first) and
accessible by the UD, that is arithmetically verifiable. Figure 8
illustrates our main
conclusion, where the number 1 is put for the so called Sigma1
sentences of arithmetic."

It sounds as if Bruno thinks that the computations of the UD invoke
our inner experiences and also our understanding of physics.  Both
come from arithmetical platonism (because that's what the UD is all
about).  So the pictures in the "film" are stitched together by
arithmetical (computational) necessity rather than by the laws of
physics... Hmm, not what I thought and said earlier!!

So according to Bruno the laws of physics come from something
intrinsic in the computation?  Not quite sure how.  I just can't
figure out any more at the moment and hope Bruno will give me a hint
here.

Enjoying the dialogue!

Nick



On Jan 5, 10:44 pm, Brent Meeker  wrote:
  
  
Nick Prince wrote:


  OOps sorry I sent an empty post by accident.
  


  I agree with you here.  But I am new to this field so I am uncertain
about so many things.  However, I don't understand why it is that  a
UD would know how to generate these law like sequences of states. It
may well generate all possible programs that generate all possible
universes (with different values for the physical constants say -
maybe even different laws) but I wonder why our consciousness defines
itself by "selecting" only those "consistent" extension among all the
states available that obey a certain set of  laws of physics.
  


  I thought that a TOE should explain the laws of physics and Bruno
states in his SANE paper
  


  " Conclusion: Physics is given by a measure on the consistent
computational histories, or
maximal consistent extensions as seen from some first person point of
view.
  

But consistent in what sense?  We can't say "consistent with the laws of
physics" because that's what we're trying to explain.



  Laws of physics,
in particular, should be inferable from the true verifiable atomic
sentences . Those are the
verifiable arithmetical sentences.
  

I understand true arithmetical sentences, but I'm not sure what
'verifiable' means?  Does it mean computable, or provable?  What's an
atomic sentence?  Is it just a finite statement, like "17 is prime"; so
it excludes infinite statements like Goldbach's conjecture?

Brent





  They should be true everywhere (=
in all comp histories),
true somewhere (= true in at least one comp history), and inferred
from the DU-accessible
atomic states".
It feels a bit like a chicken-and-egg situation - do we pick out the
laws or do they pick us?  But I am still working my way through this
and loads of other stuff, so I don't understand it yet.
  


  Best
  


  Nick
  


  On Jan 5, 6:59 pm, Brent Meeker  wrote:
  


  
Nick Prince wrote:

  


  

  Is this because you think of your stream of consciousness as somehow
like a reel of film?  All the individual pictures could be cut from
the reel and laid out any which way but the implicit order is always
there.  I can understand this because all the spatio temporal
relationships for the actors in the film remain "normal" i.e obey the
laws of physics.  
  

Re: UDA query

2010-01-05 Thread Nick Prince
Hi Brent

Perhaps Bruno could give some clarification  here. Just prior to his
conclusion on the sane paper I quoted from was this:

"So if we keep comp at this stage, we are forced to relate the inner
experience only to
the type of computation involved. The reason is that only those types
are univocally related to
all their possible counterfactuals. This entails that, from a first
person point of view, not only
the physical cannot be distinguished from the virtual, but the virtual
can no more be
distinguished from the arithmetical. Now DU is emulated
platonistically by the verifiable
propositions of arithmetic. They are equivalent to sentences of the
form ‘‘it exists n such that
P(n)’’ with P(n) decidable. Their truth entails their provability, and
they are known under the
name of Sigma1 sentence.
If comp is correct, the appearance of physics must be recovered from
some point of
views emerging from those propositions. Indeed, taking into account
the seven steps once
more, we arrive at the conclusion that the physical atomic (in the
Boolean logician sense)
invariant proposition must be given by a probability measure on those
propositions. A
physical certainty must be true in all maximal extensions, true in at
least one maximal extension (we will see later why the second
condition does not follow from the first) and
accessible by the UD, that is arithmetically verifiable. Figure 8
illustrates our main
conclusion, where the number 1 is put for the so called Sigma1
sentences of arithmetic."

It sounds as if Bruno thinks that the computations of the UD invoke
our inner experiences and also our understanding of physics.  Both
come from arithmetical platonism (because that's what the UD is all
about).  So the pictures in the "film" are stitched together by
arithmetical (computational) necessity rather than by the laws of
physics... Hmm, not what I thought and said earlier!!

So according to Bruno the laws of physics come from something
intrinsic in the computation?  Not quite sure how.  I just can't
figure out any more at the moment and hope Bruno will give me a hint
here.

Enjoying the dialogue!

Nick



On Jan 5, 10:44 pm, Brent Meeker  wrote:
> Nick Prince wrote:
> > OOps sorry I sent an empty post by accident.
>
> > I agree with you here.  But I am new to this field so I am uncertain
> > about so many things.  However, I don't understand why it is that  a
> > UD would know how to generate these law like sequences of states. It
> > may well generate all possible programs that generate all possible
> > universes (with different values for the physical constants say -
> > maybe even different laws) but I wonder why our consciousness defines
> > itself by "selecting" only those "consistent" extension among all the
> > states available that obey a certain set of  laws of physics.
>
> > I thought that a TOE should explain the laws of physics and Bruno
> > states in his SANE paper
>
> > " Conclusion: Physics is given by a measure on the consistent
> > computational histories, or
> > maximal consistent extensions as seen from some first person point of
> > view.
>
> But consistent in what sense?  We can't say "consistent with the laws of
> physics" because that's what we're trying to explain.
>
> > Laws of physics,
> > in particular, should be inferable from the true verifiable atomic
> > sentences . Those are the
> > verifiable arithmetical sentences.
>
> I understand true arithmetical sentences, but I'm not sure what
> 'verifiable' means?  Does it mean computable, or provable?  What's an
> atomic sentence?  Is it just a finite statement, like "17 is prime"; so
> it excludes infinite statements like Goldbach's conjecture?
>
> Brent
>
>
>
> > They should be true everywhere (=
> > in all comp histories),
> > true somewhere (= true in at least one comp history), and inferred
> > from the DU-accessible
> > atomic states".
> > It feels a bit like a chicken-and-egg situation - do we pick out the
> > laws or do they pick us?  But I am still working my way through this
> > and loads of other stuff, so I don't understand it yet.
>
> > Best
>
> > Nick
>
> > On Jan 5, 6:59 pm, Brent Meeker  wrote:
>
> >> Nick Prince wrote:
>
> >>> Is this because you think of your stream of consciousness as somehow
> >>> like a reel of film?  All the individual pictures could be cut from
> >>> the reel and laid out any which way but the implicit order is always
> >>> there.  I can understand this because all the spatio temporal
> >>> relationships for the actors in the film remain "normal" i.e obey the
> >>> laws of physics.  
>
> >> But there's the rub.  Why the laws of physics?  That's what somehow
> >> needs to be explained.  Is there something about the UD that necessarily
> >> generates law like sequences of states with high probability?  Doesn't
> >> it generate just those laws we seem to find - that would be a great
> >> discovery.  Or does it generate all possible non-self-contradictory
> >> multiverses - in which case nothing has been explained.

Re: UDA query

2010-01-05 Thread Brent Meeker

Nick Prince wrote:

OOps sorry I sent an empty post by accident.

I agree with you here.  But I am new to this field so I am uncertain
about so many things.  However, I don't understand why it is that  a
UD would know how to generate these law like sequences of states. It
may well generate all possible programs that generate all possible
universes (with different values for the physical constants say -
maybe even different laws) but I wonder why our consciousness defines
itself by "selecting" only those "consistent" extension among all the
states available that obey a certain set of  laws of physics.

I thought that a TOE should explain the laws of physics and Bruno
states in his SANE paper

" Conclusion: Physics is given by a measure on the consistent
computational histories, or
maximal consistent extensions as seen from some first person point of
view. 


But consistent in what sense?  We can't say "consistent with the laws of 
physics" because that's what we're trying to explain.



Laws of physics,
in particular, should be inferable from the true verifiable ‘‘atomic
sentences’’. Those are the
verifiable arithmetical sentences. 


I understand true arithmetical sentences, but I'm not sure what 
'verifiable' means?  Does it mean computable, or provable?  What's an 
atomic sentence?  Is it just a finite statement, like "17 is prime"; so 
it excludes infinite statements like Goldbach's conjecture?



Brent


They should be true everywhere (=
in all comp histories),
true somewhere (= true in at least one comp history), and inferred
from the DU-accessible
‘‘atomic’’ states".
It feels a bit like a chicken-and-egg situation - do we pick out the
laws or do they pick us?  But I am still working my way through this
and loads of other stuff, so I don't understand it yet.

Best

Nick


On Jan 5, 6:59 pm, Brent Meeker  wrote:
  

Nick Prince wrote:


Is this because you think of your stream of consciousness as somehow
like a reel of film?  All the individual pictures could be cut from
the reel and laid out any which way but the implicit order is always
there.  I can understand this because all the spatio temporal
relationships for the actors in the film remain "normal" i.e obey the
laws of physics.  
  

But there's the rub.  Why the laws of physics?  That's what somehow
needs to be explained.  Is there something about the UD that necessarily
generates law like sequences of states with high probability?  Doesn't
it generate just those laws we seem to find - that would be a great
discovery.  Or does it generate all possible non-self-contradictory
multiverses - in which case nothing has been explained.



Deutsch argues similarly in the Fabric of reality.
In my work I often come across the idea of a foliation of
hypersurfaces which is really a set of 3D pictures "stuck together and
stacked in the direction of the time coordinate of the world at a
given instant of time.  
  

But that's starting with the physics given, so the hypersurfaces and
their relation is already defined.

Brent





In MW interpretation though I guess that the
stacking is less certain as in the block universe idea but that's
another issue.  Is this analogy similar to how you feel  the "obvious"
experience of time being normal?
  
Best
  
Nick- Hide quoted text -
  

- Show quoted text -







Re: UDA query

2010-01-05 Thread Nick Prince
OOps sorry I sent an empty post by accident.

I agree with you here.  But I am new to this field so I am uncertain
about so many things.  However, I don't understand why it is that  a
UD would know how to generate these law like sequences of states. It
may well generate all possible programs that generate all possible
universes (with different values for the physical constants say -
maybe even different laws) but I wonder why our consciousness defines
itself by "selecting" only those "consistent" extensions among all the
states available that obey a certain set of  laws of physics.

I thought that a TOE should explain the laws of physics and Bruno
states in his SANE paper

" Conclusion: Physics is given by a measure on the consistent
computational histories, or
maximal consistent extensions as seen from some first person point of
view. Laws of physics,
in particular, should be inferable from the true verifiable ‘‘atomic
sentences’’. Those are the
verifiable arithmetical sentences. They should be true everywhere (=
in all comp histories),
true somewhere (= true in at least one comp history), and inferred
from the DU-accessible
‘‘atomic’’ states".
It feels a bit like a chicken-and-egg situation - do we pick out the
laws or do they pick us?  But I am still working my way through this
and loads of other stuff, so I don't understand it yet.

Best

Nick


On Jan 5, 6:59 pm, Brent Meeker  wrote:
> Nick Prince wrote:
> > Is this because you think of your stream of consciousness as somehow
> > like a reel of film?  All the individual pictures could be cut from
> > the reel and laid out any which way but the implicit order is always
> > there.  I can understand this because all the spatio temporal
> > relationships for the actors in the film remain "normal" i.e obey the
> > laws of physics.  
>
> But there's the rub.  Why the laws of physics?  That's what somehow
> needs to be explained.  Is there something about the UD that necessarily
> generates law like sequences of states with high probability?  Doesn't
> it generate just those laws we seem to find - that would be a great
> discovery.  Or does it generate all possible non-self-contradictory
> multiverses - in which case nothing has been explained.
>
> > Deutsch argues similarly in the Fabric of reality.
> > In my work I often come across the idea of a foliation of
> > hypersurfaces which is really a set of 3D pictures "stuck together and
> > stacked in the direction of the time coordinate of the world at a
> > given instant of time.  
>
> But that's starting with the physics given, so the hypersurfaces and
> their relation is already defined.
>
> Brent
>
>
>
> > In MW interpretation though I guess that the
> > stacking is less certain as in the block universe idea but that's
> > another issue.  Is this analogy similar to how you feel  the "obvious"
> > experience of time being normal?
>
> > Best
>
> > Nick- Hide quoted text -
>
> - Show quoted text -




Re: UDA query

2010-01-05 Thread Brent Meeker

Nick Prince wrote:

Is this because you think of your stream of consciousness as somehow
like a reel of film?  All the individual pictures could be cut from
the reel and laid out any which way but the implicit order is always
there.  I can understand this because all the spatio temporal
relationships for the actors in the film remain "normal" i.e obey the
laws of physics.  


But there's the rub.  Why the laws of physics?  That's what somehow
needs to be explained.  Is there something about the UD that necessarily
generates law like sequences of states with high probability?  Doesn't
it generate just those laws we seem to find - that would be a great
discovery.  Or does it generate all possible non-self-contradictory
multiverses - in which case nothing has been explained.


Deutsch argues similarly in the Fabric of reality.
In my work I often come across the idea of a foliation of
hypersurfaces which is really a set of 3D pictures "stuck together and
stacked in the direction of the time coordinate of the world at a
given instant of time.  


But that's starting with the physics given, so the hypersurfaces and
their relation is already defined.

Brent


In MW interpretation though I guess that the
stacking is less certain as in the block universe idea but that's
another issue.  Is this analogy similar to how you feel  the "obvious"
experience of time being normal?

Best

Nick




Re: UDA query

2010-01-05 Thread Brent Meeker

Stathis Papaioannou wrote:

2010/1/4 Brent Meeker :

  

I think you give an excellent explication of the problem, Stathis.  However,
one thing about it that still worries me is the role of time. You say the
mapping need not be consistent even moment to moment, and yet the mapping is
a timeless Platonic object.  To be a timeless object the the moments need
some timeless representation.  In Bruno's theory time arises from the
computational sequence.  But in the mapping, time is just a relation of
similarity (closest continuation) of states.  So three states which when
ordered by closest continuation are XYZ may have been computed in the order
XZY.  So I find myself seeing the hardwareless computer as a reductio
against consciousness=computation thesis and support for Peter's view that
ur-stuff and contingency are fundamental.



It always seemed to me obvious that I would experience time normally
if the computations or other physical processes generating my stream
of consciousness were chopped up and played out of sequence,
backwards, simultaneously or whatever. It could be happening right
now: I have no way to know if the seconds of my life are running
sequentially or all in parallel during a single second of real time.
The two problems that many seem to have with this idea is a feeling
that there needs to be some sort of mechanism for singling out the
time slice that is the "now", and a feeling that the time slices lack
a causal glue to connect them together. But maybe I'm missing
something, because these objections never seemed to me to be problems.


  

I can understand that view, but in that case why consider them
computations?  Why not just suppose all states of your consciousness (and
even other parts of the world) exist?  If they can be glued together by
inherent features or simply experienced without even an implicit order,
then computation seems irrelevant.  Of course that leaves the apparent
lawfulness of physics even further from possible explanation than the UD
theory.

Brent





Re: UDA query

2010-01-05 Thread Brent Meeker

Quentin Anciaux wrote:

On Wednesday 06 January 2010 at 00:29 +1100, Stathis Papaioannou wrote:
  

2010/1/5 Quentin Anciaux :



Consider a set of three one minute intervals of experience, {S1, S2,
S3}, which belong to a person S. S2 remembers S1 and remembers no gap
or intervening experiences between S2 and S1; S3 remembers S1 and S2
and remembers that S1 preceded S2; and S3 also remembers no gap or
intervening experiences between S2 and S1 or between S3 and S2. In
other words, they are subjectively three consecutive minutes in the
life of S. S is aware that his experiences are generated on a
computer, and he is also aware that they are being generated in one of
two ways: in sequence as S1, S2, S3 or out of sequence as S2, S1, S3.
Does S have any basis for deciding that it is more likely that his
experiences are being generated in sequence?



It seems to me that it depends if the computation is iterative or not... in
other words, to compute step N you must have computed step N-1 before that.

If you can directly compute step N without computing prior step, S2/S1/S3 is
possible. If not you had necessarily computed step S1 before S2, only by
doing a replay of a previously done computation you could do it :

- first generate S1/S2/S3 in order and save each intermediate result, then
you can do
- S2 (taking the previously intermediate result of S1), S1 then S3 (taking
S2 result).

But running the same thing more times add a priori nothing. If the process
is iterative then "in order" computation win the measure battle (because any
out of order one require a genuine in order computation before).
  

Another way to compute S2 without using S1 would be to run the UD.




Yes but the UD will generate infinitely more often the in order S1/S2/S3
than out of order... with what you are saying I don't even understand
what is a computation if not a rules ordered sequential state order.

Quentin


It seems strange that we start with the hypothesis that consciousness is 
a kind of computation - a sequential processing of information - and 
then arrive at a picture in which there is no processing and sequence is 
just inferred.  On the one hand consciousness is a process, on the other 
hand it is a static state.  I suspect there is something wrong with the 
slicing of the stream of consciousness into zero-duration, 
non-overlapping states.  I can see that states can encode information 
that, when coarse grained, defines a sequence of increasing entropy, but 
is it legitimate to identify having the information "in memory" with 
"remembering"?


Brent




Re: UDA query

2010-01-05 Thread Bruno Marchal


On 05 Jan 2010, at 15:09, Stathis Papaioannou wrote:


2010/1/6 Quentin Anciaux :

It seems to me that it depends if the computation is iterative or  
not... in
other words, to compute step N you must have computed step N-1  
before that.


If you can directly compute step N without computing prior step,  
S2/S1/S3 is
possible. If not you had necessarily computed step S1 before S2,  
only by

doing a replay of a previously done computation you could do it :

- first generate S1/S2/S3 in order and save each intermediate  
result, then

you can do
- S2 (taking the previously intermediate result of S1), S1 then  
S3 (taking

S2 result).

But running the same thing more times add a priori nothing. If  
the process
is iterative then "in order" computation win the measure battle  
(because any

out of order one require a genuine in order computation before).


Another way to compute S2 without using S1 would be to run the UD.



Yes but the UD will generate infinitely more often the in order S1/ 
S2/S3

than out of order... with what you are saying I don't even understand
what is a computation if not a rules ordered sequential state order.


A UD running on an actual computer for a finite time *could* generate
S2 before S1.


The UD will generate all the computations going through S1 and S2.

From the first point of view, if S1 corresponds to a possible comp
state of mind, the next probable states depend on the infinitely many
computations going through S1.






There is nothing in the experience of S to indicate
which was generated first, even though if he had to guess with no
other information he is more likely to be right if he guesses he is
being generated sequentially.



Note that universal computation can be made reversible. Quantum
computers are reversible, up to the measurement, which is an internal
event (in the MWI) happening. A priori, the average UD will be
non-reversible, and most computations evolve into more and more complex
types of events (like a zoom on the Mandelbrot set: it is not just
self-similar, it is more and more locally complex).


If the sequence S1 S2 S3 belongs to a computation, it means there is a
universal number U such that U computes S1 into S2 and then S3.
Automatically the UD will generate "later" (in the UD "time") another
universal number W which will compute U:  (U S1) => (U S2) => (U S3)
(this is a different, probably longer computation, generating again
the computation S1, S2, S3), and then another universal number J, etc.
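As a toy illustration only (a minimal sketch in Python; the step rule and the names u and w are assumptions of the sketch, not part of the UD), an emulation of U regenerates the same history S1, S2, S3 one level deeper:

    def u(s):                    # stand-in for the universal number U: one step
        return s + 1

    def w(state):                # W emulates U: W's state is the pair (U, U's state)
        machine, s = state
        return (machine, machine(s))

    s1 = 1
    s2, s3 = u(s1), u(u(s1))     # U computes S1 -> S2 -> S3 directly

    _, t2 = w((u, s1))           # W computes (U S1) => (U S2) => (U S3):
    _, t3 = w(w((u, s1)))        # the same history, one emulation level deeper
    assert (s2, s3) == (t2, t3)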


So if the order S1, S2, S3 has some logic, it will reoccur an infinity
of times in deeper and deeper computations, some leading to rare
objects (objects requiring necessarily long computations), which may
explain some "cosmic aspect".


The UD will also generate infinitely many descriptions of S1, S2, and
S3, in many orders, but without relating them to "logical" histories
(computations). This is due to the fact that the UD also dovetails on
bigger and bigger inputs, using bigger and bigger parts of oracles
(real numbers), which may describe computations. But such descriptions
of computations are NOT computations. They are not linked through a
universal machine.


If you take arbitrary sequences of states S1, S2, S3, S4, ..., you will
have 2^aleph_zero sequences. The computations are (third person)
enumerable, because they are defined by universal numbers, which are
enumerable.


So, of course, we have to choose an initial universal machine. It
defines the base of the phi_i. The UDA shows that ANY choice will do.
In particular we can choose elementary arithmetic, or the combinators,
or the universal wave function.


But choosing "the universal wave function" is a bad choice if we want  
to progress on the mind-body (consciousness/reality) problem, given  
that comp makes the physics defined by a measure on all computations,  
it is preferable to verify this from elementary arithmetic, or the  
combinators, than the universal wave function (where this is trivial),  
so that we can test the comp physics and better understand the comp  
hypothesis.


The logic of self-reference then makes it possible to distinguish the
quanta (physical communication) from the qualia (physical sensations).
It does not give the "measure" on the computational histories
explicitly, but it gives the logics obeyed by the measure-one case,
from each "person's" point of view (hypostases). (That's auda.)


Bruno

http://iridia.ulb.ac.be/~marchal/







Re: UDA query

2010-01-05 Thread Nick Prince
As I understand it the UD generates all possible programs and as it
generates each one it runs one step of it before generating the next.
Does that not mean that eventually it will generate the program which
is generating what we understand to be some observer moments for us at
this particular time. This is where I was thinking of the foliation
bit - each hypersurface is a snapshot in time of the universe as
experienced by me.  This being said would that not mean they would
necessarily be in order or are you thinking that some other program.
could generate by chance a perfectly good observer moment that was out
of sync?
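A minimal sketch of that scheduling, assuming Python generators as stand-in "programs" (the toy programs and the names dovetail and program are only illustrative, not the UD itself):

    def program(i):
        # toy program number i: a never-halting computation counting up from i
        s = i
        while True:
            yield s
            s += 1

    def dovetail(stages):
        # generate program 0, 1, 2, ... and, at each stage, run one more step
        # of every program generated so far, so no single program blocks the rest
        running = []                    # (index, generator) pairs started so far
        trace = []                      # (program index, state), in execution order
        for n in range(stages):
            running.append((n, program(n)))
            for idx, gen in running:
                trace.append((idx, next(gen)))
        return trace

    print(dovetail(3))   # [(0, 0), (0, 1), (1, 1), (0, 2), (1, 2), (2, 2)]

In a sketch like this, any program whose steps happen to generate our observer moments is eventually started, but its steps are interleaved with every other program's, which is why the question of whether they come out "in order" arises at all.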


Best

Nick

On Jan 5, 2:09 pm, Stathis Papaioannou  wrote:
> 2010/1/6 Quentin Anciaux :
>
>
>
>
>
> >> > It seems to me that it depends if the computation is iterative or not... 
> >> > in
> >> > other words, to compute step N you must have computed step N-1 before 
> >> > that.
>
> >> > If you can directly compute step N without computing prior step, 
> >> > S2/S1/S3 is
> >> > possible. If not you had necessarily computed step S1 before S2, only by
> >> > doing a replay of a previously done computation you could do it :
>
> >> > - first generate S1/S2/S3 in order and save each intermediate result, 
> >> > then
> >> > you can do
> >> > - S2 (taking the previously intermediate result of S1), S1 then S3 
> >> > (taking
> >> > S2 result).
>
> >> > But running the same thing more times add a priori nothing. If the 
> >> > process
> >> > is iterative then "in order" computation win the measure battle (because 
> >> > any
> >> > out of order one require a genuine in order computation before).
>
> >> Another way to compute S2 without using S1 would be to run the UD.
>
> > Yes but the UD will generate infinitely more often the in order S1/S2/S3
> > than out of order... with what you are saying I don't even understand
> > what is a computation if not a rules ordered sequential state order.
>
> A UD running on an actual computer for a finite time *could* generate
> S2 before S1. There is nothing in the experience of S to indicate
> which was generated first, even though if he had to guess with no
> other information he is more likely to be right if he guesses he is
> being generated sequentially.
>
> --
> Stathis Papaioannou
>





Re: UDA query

2010-01-05 Thread Nick Prince
Thank you Stathis,  That does make sense to me.

On Jan 5, 12:22 pm, Stathis Papaioannou  wrote:
> 2010/1/5 Nick Prince :
>
> > Is this because you think of your stream of consciousness as somehow
> > like a reel of film?  All the individual pictures could be cut from
> > the reel and laid out any which way but the implicit order is always
> > there.  I can understand this because all the spatio temporal
> > relationships for the actors in the film remain "normal" i.e obey the
> > laws of physics.  Deutsch argues similarly in the Fabric of reality.
> > In my work I often come across the idea of a foliation of
> > hypersurfaces which is really a set of 3D pictures "stuck together and
> > stacked in the direction of the time coordinate of the world at a
> > given instant of time.  In MW interpretation though I guess that the
> > stacking is less certain as in the block universe idea but that's
> > another issue.  Is this analogy similar to how you feel  the "obvious"
> > experience of time being normal?
>
> (I'm afraid the idea of a foliation of hypersurfaces is wasted on me
> as an explanatory aid!)
>
> It's like a reel of film in which the characters are conscious. For an
> outside observer rearranging the frames out of sequence and playing
> the film would be totally confusing, but for the characters in the
> film it would make no difference. because the ordering is implicit in
> the information contained in each frame.
>
> Consider a set of three one minute intervals of experience, {S1, S2,
> S3}, which belong to a person S. S2 remembers S1 and remembers no gap
> or intervening experiences between S2 and S1; S3 remembers S1 and S2
> and remembers that S1 preceded S2; and S3 also remembers no gap or
> intervening experiences between S2 and S1 or between S3 and S2. In
> other words, they are subjectively three consecutive minutes in the
> life of S. S is aware that his experiences are generated on a
> computer, and he is also aware that they are being generated in one of
> two ways: in sequence as S1, S2, S3 or out of sequence as S2, S1, S3.
> Does S have any basis for deciding that it is more likely that his
> experiences are being generated in sequence?
>
> --
> Stathis Papaioannou





Re: UDA query

2010-01-05 Thread Stathis Papaioannou
2010/1/6 Quentin Anciaux :

>> > It seems to me that it depends if the computation is iterative or not... in
>> > other words, to compute step N you must have computed step N-1 before that.
>> >
>> > If you can directly compute step N without computing prior step, S2/S1/S3 
>> > is
>> > possible. If not you had necessarily computed step S1 before S2, only by
>> > doing a replay of a previously done computation you could do it :
>> >
>> > - first generate S1/S2/S3 in order and save each intermediate result, then
>> > you can do
>> > - S2 (taking the previously intermediate result of S1), S1 then S3 (taking
>> > S2 result).
>> >
>> > But running the same thing more times add a priori nothing. If the process
>> > is iterative then "in order" computation win the measure battle (because 
>> > any
>> > out of order one require a genuine in order computation before).
>>
>> Another way to compute S2 without using S1 would be to run the UD.
>>
>
> Yes but the UD will generate infinitely more often the in order S1/S2/S3
> than out of order... with what you are saying I don't even understand
> what is a computation if not a rules ordered sequential state order.

A UD running on an actual computer for a finite time *could* generate
S2 before S1. There is nothing in the experience of S to indicate
which was generated first, even though if he had to guess with no
other information he is more likely to be right if he guesses he is
being generated sequentially.


-- 
Stathis Papaioannou





Re: UDA query

2010-01-05 Thread Quentin Anciaux
On Wednesday 06 January 2010 at 00:29 +1100, Stathis Papaioannou wrote:
> 2010/1/5 Quentin Anciaux :
> 
> >> Consider a set of three one minute intervals of experience, {S1, S2,
> >> S3}, which belong to a person S. S2 remembers S1 and remembers no gap
> >> or intervening experiences between S2 and S1; S3 remembers S1 and S2
> >> and remembers that S1 preceded S2; and S3 also remembers no gap or
> >> intervening experiences between S2 and S1 or between S3 and S2. In
> >> other words, they are subjectively three consecutive minutes in the
> >> life of S. S is aware that his experiences are generated on a
> >> computer, and he is also aware that they are being generated in one of
> >> two ways: in sequence as S1, S2, S3 or out of sequence as S2, S1, S3.
> >> Does S have any basis for deciding that it is more likely that his
> >> experiences are being generated in sequence?
> >>
> >
> > It seems to me that it depends if the computation is iterative or not... in
> > other words, to compute step N you must have computed step N-1 before that.
> >
> > If you can directly compute step N without computing prior step, S2/S1/S3 is
> > possible. If not you had necessarily computed step S1 before S2, only by
> > doing a replay of a previously done computation you could do it :
> >
> > - first generate S1/S2/S3 in order and save each intermediate result, then
> > you can do
> > - S2 (taking the previously intermediate result of S1), S1 then S3 (taking
> > S2 result).
> >
> > But running the same thing more times add a priori nothing. If the process
> > is iterative then "in order" computation win the measure battle (because any
> > out of order one require a genuine in order computation before).
> 
> Another way to compute S2 without using S1 would be to run the UD.
> 

Yes, but the UD will generate the in-order S1/S2/S3 infinitely more
often than the out-of-order one... with what you are saying I don't
even understand what a computation is, if not a rule-ordered sequence
of states.

Quentin


-- 
All those moments will be lost in time, like tears in rain.





Re: UDA query

2010-01-05 Thread Stathis Papaioannou
2010/1/5 Quentin Anciaux :

>> Consider a set of three one minute intervals of experience, {S1, S2,
>> S3}, which belong to a person S. S2 remembers S1 and remembers no gap
>> or intervening experiences between S2 and S1; S3 remembers S1 and S2
>> and remembers that S1 preceded S2; and S3 also remembers no gap or
>> intervening experiences between S2 and S1 or between S3 and S2. In
>> other words, they are subjectively three consecutive minutes in the
>> life of S. S is aware that his experiences are generated on a
>> computer, and he is also aware that they are being generated in one of
>> two ways: in sequence as S1, S2, S3 or out of sequence as S2, S1, S3.
>> Does S have any basis for deciding that it is more likely that his
>> experiences are being generated in sequence?
>>
>
> It seems to me that it depends if the computation is iterative or not... in
> other words, to compute step N you must have computed step N-1 before that.
>
> If you can directly compute step N without computing prior step, S2/S1/S3 is
> possible. If not you had necessarily computed step S1 before S2, only by
> doing a replay of a previously done computation you could do it :
>
> - first generate S1/S2/S3 in order and save each intermediate result, then
> you can do
> - S2 (taking the previously intermediate result of S1), S1 then S3 (taking
> S2 result).
>
> But running the same thing more times add a priori nothing. If the process
> is iterative then "in order" computation win the measure battle (because any
> out of order one require a genuine in order computation before).

Another way to compute S2 without using S1 would be to run the UD.

-- 
Stathis Papaioannou





Re: UDA query

2010-01-05 Thread Quentin Anciaux
2010/1/5 Stathis Papaioannou 

> 2010/1/5 Nick Prince :
> > Is this because you think of your stream of consciousness as somehow
> > like a reel of film?  All the individual pictures could be cut from
> > the reel and laid out any which way but the implicit order is always
> > there.  I can understand this because all the spatio temporal
> > relationships for the actors in the film remain "normal" i.e obey the
> > laws of physics.  Deutsch argues similarly in the Fabric of reality.
> > In my work I often come across the idea of a foliation of
> > hypersurfaces which is really a set of 3D pictures "stuck together and
> > stacked in the direction of the time coordinate of the world at a
> > given instant of time.  In MW interpretation though I guess that the
> > stacking is less certain as in the block universe idea but that's
> > another issue.  Is this analogy similar to how you feel  the "obvious"
> > experience of time being normal?
>
> (I'm afraid the idea of a foliation of hypersurfaces is wasted on me
> as an explanatory aid!)
>
> It's like a reel of film in which the characters are conscious. For an
> outside observer rearranging the frames out of sequence and playing
> the film would be totally confusing, but for the characters in the
> film it would make no difference. because the ordering is implicit in
> the information contained in each frame.
>
> Consider a set of three one minute intervals of experience, {S1, S2,
> S3}, which belong to a person S. S2 remembers S1 and remembers no gap
> or intervening experiences between S2 and S1; S3 remembers S1 and S2
> and remembers that S1 preceded S2; and S3 also remembers no gap or
> intervening experiences between S2 and S1 or between S3 and S2. In
> other words, they are subjectively three consecutive minutes in the
> life of S. S is aware that his experiences are generated on a
> computer, and he is also aware that they are being generated in one of
> two ways: in sequence as S1, S2, S3 or out of sequence as S2, S1, S3.
> Does S have any basis for deciding that it is more likely that his
> experiences are being generated in sequence?
>
>
It seems to me that it depends on whether the computation is iterative or
not... in other words, whether to compute step N you must have computed
step N-1 before that.

If you can directly compute step N without computing the prior steps, then
S2/S1/S3 is possible. If not, you have necessarily computed step S1 before
S2; only by replaying a previously done computation could you do it:

- first generate S1/S2/S3 in order and save each intermediate result, then
you can do
- S2 (taking the previously saved intermediate result of S1), then S1, then
S3 (taking S2's result).

But running the same thing more times adds a priori nothing. If the process
is iterative then the "in order" computation wins the measure battle
(because any out-of-order one requires a genuine in-order computation
before it).
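A minimal sketch of the two schedules, assuming a toy deterministic step rule in Python (step and the saved checkpoint are illustrative assumptions, not anything from the thread):

    def step(s):
        # stand-in for computing the next one-minute state from the previous one
        return s + 1

    # in order: each state genuinely requires the previous one
    s1 = step(0)
    s2 = step(s1)
    s3 = step(s2)

    # "out of order": S2 comes first in wall-clock time, but only because a
    # checkpoint of S1, saved from a previous in-order run, is fed to step()
    checkpoint = s1
    s2_replay = step(checkpoint)
    s1_replay = step(0)
    s3_replay = step(s2_replay)
    assert (s1, s2, s3) == (s1_replay, s2_replay, s3_replay)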

Regards,
Quentin


>
> --
> Stathis Papaioannou
>


-- 
All those moments will be lost in time, like tears in rain.





Re: UDA query

2010-01-05 Thread Stathis Papaioannou
2010/1/5 Nick Prince :
> Is this because you think of your stream of consciousness as somehow
> like a reel of film?  All the individual pictures could be cut from
> the reel and laid out any which way but the implicit order is always
> there.  I can understand this because all the spatio temporal
> relationships for the actors in the film remain "normal" i.e obey the
> laws of physics.  Deutsch argues similarly in the Fabric of reality.
> In my work I often come across the idea of a foliation of
> hypersurfaces which is really a set of 3D pictures "stuck together and
> stacked in the direction of the time coordinate of the world at a
> given instant of time.  In MW interpretation though I guess that the
> stacking is less certain as in the block universe idea but that's
> another issue.  Is this analogy similar to how you feel  the "obvious"
> experience of time being normal?

(I'm afraid the idea of a foliation of hypersurfaces is wasted on me
as an explanatory aid!)

It's like a reel of film in which the characters are conscious. For an
outside observer, rearranging the frames out of sequence and playing
the film would be totally confusing, but for the characters in the
film it would make no difference, because the ordering is implicit in
the information contained in each frame.

Consider a set of three one minute intervals of experience, {S1, S2,
S3}, which belong to a person S. S2 remembers S1 and remembers no gap
or intervening experiences between S2 and S1; S3 remembers S1 and S2
and remembers that S1 preceded S2; and S3 also remembers no gap or
intervening experiences between S2 and S1 or between S3 and S2. In
other words, they are subjectively three consecutive minutes in the
life of S. S is aware that his experiences are generated on a
computer, and he is also aware that they are being generated in one of
two ways: in sequence as S1, S2, S3 or out of sequence as S2, S1, S3.
Does S have any basis for deciding that it is more likely that his
experiences are being generated in sequence?
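A minimal sketch of the point that the subjective order is fixed by the content of the states themselves, assuming each frame simply records which earlier frames it remembers (Python; the frame contents are illustrative only):

    # frames stored, or computed, in an arbitrary order
    frames = [
        {"name": "S2", "remembers": ("S1",)},
        {"name": "S1", "remembers": ()},
        {"name": "S3", "remembers": ("S1", "S2")},
    ]

    # the subjective order is implicit in each frame's own memories,
    # whatever order the frames were generated or laid out in
    subjective = sorted(frames, key=lambda f: len(f["remembers"]))
    print([f["name"] for f in subjective])    # ['S1', 'S2', 'S3']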


-- 
Stathis Papaioannou





Re: UDA query

2010-01-04 Thread Nick Prince
Thanks Bruno. I'll look this up and also I want to scan through your
seven steps series for November.  The later posts in these I think
will help me make contact with the concepts. I want to be able to
understand your SANE paper - especially the later parts.  Is there any
English translation of your thesis still underway, as it says in the
"pages" part of the list?

On Jan 4, 1:15 pm, Bruno Marchal  wrote:
> Hi Nick,
>
> Oops, soory. I sent an empty answer.
>
> Actually I agree with all you say here, so an empty comment was a good  
> comment!
>
> I think all this becomes simpler once you grasp that a computation, in  
> the math sense, is a very well defined object.
> If a computation exists, it can be proved to exist in elementary  
> arithmetic.
>
> And it exists there with a relative measure. This can not necessarily  
> prove in arithmetic (but init can be proved for arithmetic in set  
> theory). But here Stathis' intuition is correct, we don't have to  
> prove in arithmetic the existence of the measure to be able to "live"  
> it, and develop a first person perspective.
>
> An hardwareless computer is well defined mathematical notion.  
> Conceptually, it is even difficult and not yet solved problem to  
> define an hardware computer (despite its common use could give you the  
> contrary feeling).
> Without the rize of quantum computation, I am not sure I would have  
> ever believed in a notion of physical computation.
> Cf also, the Mallah implementation problem.
>
> Bruno
>
> On 03 Jan 2010, at 14:55, Nick Prince wrote:
>
>
>
>





Re: UDA query

2010-01-04 Thread Nick Prince
Is this because you think of your stream of consciousness as somehow
like a reel of film?  All the individual pictures could be cut from
the reel and laid out any which way but the implicit order is always
there.  I can understand this because all the spatio-temporal
relationships for the actors in the film remain "normal", i.e. obey the
laws of physics.  Deutsch argues similarly in The Fabric of Reality.
In my work I often come across the idea of a foliation of
hypersurfaces, which is really a set of 3D pictures "stuck together and
stacked" in the direction of the time coordinate of the world at a
given instant of time.  In the MW interpretation though I guess that the
stacking is less certain than in the block universe idea, but that's
another issue.  Is this analogy similar to how you feel about the
"obvious" experience of time being normal?

Best

Nick

On Jan 4, 2:51 pm, Stathis Papaioannou  wrote:
> 2010/1/4 Brent Meeker :
>
> > I think you give an excellent explication of the problem, Stathis.  However,
> > one thing about it that still worries me is the role of time. You say the
> > mapping need not be consistent even moment to moment, and yet the mapping is
> > a timeless Platonic object.  To be a timeless object the the moments need
> > some timeless representation.  In Bruno's theory time arises from the
> > computational sequence.  But in the mapping, time is just a relation of
> > similarity (closest continuation) of states.  So three states which when
> > ordered by closest continuation are XYZ may have been computed in the order
> > XZY.  So I find myself seeing the hardwareless computer as a reductio
> > against consciousness=computation thesis and support for Peter's view that
> > ur-stuff and contingency are fundamental.
>
> It always seemed to me obvious that I would experience time normally
> if the computations or other physical processes generating my stream
> of consciousness were chopped up and played out of sequence,
> backwards, simultaneously or whatever. It could be happening right
> now: I have no way to know if the seconds of my life are running
> sequentially or all in parallel during a single second of real time.
> The two problems that many seem to have with this idea is a feeling
> that there needs to be some sort of mechanism for singling out the
> time slice that is the "now", and a feeling that the time slices lack
> a causal glue to connect them together. But maybe I'm missing
> something, because these objections never seemed to me to be problems.
>
> --
> Stathis Papaioannou





Re: UDA query

2010-01-04 Thread Stathis Papaioannou
2010/1/4 Brent Meeker :

> I think you give an excellent explication of the problem, Stathis.  However,
> one thing about it that still worries me is the role of time. You say the
> mapping need not be consistent even moment to moment, and yet the mapping is
> a timeless Platonic object.  To be a timeless object the the moments need
> some timeless representation.  In Bruno's theory time arises from the
> computational sequence.  But in the mapping, time is just a relation of
> similarity (closest continuation) of states.  So three states which when
> ordered by closest continuation are XYZ may have been computed in the order
> XZY.  So I find myself seeing the hardwareless computer as a reductio
> against consciousness=computation thesis and support for Peter's view that
> ur-stuff and contingency are fundamental.

It always seemed to me obvious that I would experience time normally
if the computations or other physical processes generating my stream
of consciousness were chopped up and played out of sequence,
backwards, simultaneously or whatever. It could be happening right
now: I have no way to know if the seconds of my life are running
sequentially or all in parallel during a single second of real time.
The two problems that many seem to have with this idea is a feeling
that there needs to be some sort of mechanism for singling out the
time slice that is the "now", and a feeling that the time slices lack
a causal glue to connect them together. But maybe I'm missing
something, because these objections never seemed to me to be problems.


-- 
Stathis Papaioannou





Re: UDA query

2010-01-04 Thread Bruno Marchal
Hi Nick,

Oops, sorry. I sent an empty answer.

Actually I agree with all you say here, so an empty comment was a good  
comment!

I think all this becomes simpler once you grasp that a computation, in  
the math sense, is a very well defined object.
If a computation exists, it can be proved to exist in elementary  
arithmetic.

And it exists there with a relative measure. This cannot necessarily
be proved in arithmetic (but it can be proved for arithmetic in set
theory). But here Stathis' intuition is correct: we don't have to
prove in arithmetic the existence of the measure to be able to "live"
it, and develop a first person perspective.

A hardwareless computer is a well-defined mathematical notion.
Conceptually, it is even a difficult and not yet solved problem to
define a hardware computer (although its common use could give you the
contrary feeling).
Without the rise of quantum computation, I am not sure I would have
ever believed in a notion of physical computation.
Cf. also the Mallah implementation problem.

Bruno


On 03 Jan 2010, at 14:55, Nick Prince wrote:

> Thank you Stathis
> This has helped move me on a bit. “The hardwareless computer” has been
> giving me some real problems.  Let me replay my understanding of what
> you said back just to check it is on the right lines.
> As a possible example of one of these “lurking computations” we could
> consider the one which begins with no-thing and think of the null set
> as made of it phi ={ } and then associating it with the number 0. Then
> imagine the set { phi} associating it with 1, then{ phi,{phi }}
> associating this with 2, then { phi, { phi} , { ,{phi }} },
> associating it with 3 etc. Hence we get an infinite sequence of
> abstract (platonic) entities which can conjure up (compute) the
> natural numbers and the implied successor function simply from the
> abstract (platonic) notion of a set and an association rule (also a
> platonic relation). More and more structure can be built up until - as
> you say - the entire structure of the computation contained in the
> mapping can be envisioned. Now although no external observers might be
> able to access these computations, the computations might just create
> conscious observers – bootstrapped into existence by the special class
> of computations which these (internal) observers (if they believed in
> comp) would naturally consider as non trivial.  As you say the entire
> structure of the mapping which describes the computation is a platonic
> object too – hence the world comes from nothing and computation.
> Have I got this roughly right? I would be grateful for any critical
> comments from you, Bruno (or anyone).
> Many thanks
> Nick
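The construction sketched just above can be written out directly; a minimal sketch in Python, with frozensets standing in for the platonic sets (the names are illustrative only):

    phi = frozenset()                     # the null set, associated with 0

    def successor(n):
        return n | frozenset([n])         # n+1 is the set n together with {n}

    one   = successor(phi)                # { phi }
    two   = successor(one)                # { phi, {phi} }
    three = successor(two)                # { phi, {phi}, {phi, {phi}} }
    assert [len(n) for n in (phi, one, two, three)] == [0, 1, 2, 3]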
>
>
> On Jan 3, 11:05 am, Stathis Papaioannou  wrote:
>> 2010/1/3 Nick Prince :
>>
>>
>>
>>
>>
>>> HI Bruno
>>> Thank you so much for your answers to my queries so far.  I really
>>> need to do some more thinking about all that you have said so far  
>>> and
>>> to understand why I am having difficulty replacing a real physical
>>> universal machine existing in the future (like Tipler suggests) or a
>>> great programmer existing now (like schmidhuber suggests) with your
>>> arithmetical realism.  I also need to search some previous posts to
>>> make use of past discussion topics that are relevant. Perhaps my
>>> background makes me a physicalist who can currently accept a milder
>>> form of comp.  However, I want to explore your position because I
>>> think it makes sense in so far as I think it is less vulnerable to  
>>> the
>>> threat of infinite regressions like in  Schmidhuber’s great  
>>> programmer
>>> (or even the greater programmer that programmed him).  Your  
>>> version of
>>> computationalism would still be valid if either or both of the two
>>> options above were true. Herein lies its appeal to me (both
>>> fundamental and universal).
>>> I would like to read up on logic and computation as you suggest. I
>>> have read about all the books you recommend . However, can you  
>>> suggest
>>> topic areas within these texts which I can  focus on to help me  
>>> get up
>>> to speed with the problems I have regarding arithmetical realism  
>>> with
>>> the UDA?  There is much that could perhaps be left out on a first
>>> reading and to my untrained eyes, it’s difficult to know what to  
>>> omit
>>> (for example what would godels arithmetisation technique come under?
>>> (Googling it brings not much up).  Sorry but I haven’t ordered any
>>> books yet so I can’t look into them.
>>> Is there an English translation of your Ph.D. thesis yet?  Sorry  
>>> but I
>>> can’t do French. My thanks and best wishes.
>>
>> My justification for the hardwareless computer is the fact that any
>> computation can be mapped onto any physical process, in the same way
>> that any English sentence can be mapped onto any string of symbols.
>> Such a post hoc mapping would be useless to an observer trying to
>> extract meaning from the symbols or the result of 


Re: UDA query

2010-01-04 Thread Bruno Marchal

On 03 Jan 2010, at 12:05, Stathis Papaioannou wrote:

> 2010/1/3 Nick Prince :
>> HI Bruno
>> Thank you so much for your answers to my queries so far.  I really
>> need to do some more thinking about all that you have said so far and
>> to understand why I am having difficulty replacing a real physical
>> universal machine existing in the future (like Tipler suggests) or a
>> great programmer existing now (like schmidhuber suggests) with your
>> arithmetical realism.  I also need to search some previous posts to
>> make use of past discussion topics that are relevant. Perhaps my
>> background makes me a physicalist who can currently accept a milder
>> form of comp.  However, I want to explore your position because I
>> think it makes sense in so far as I think it is less vulnerable to  
>> the
>> threat of infinite regressions like in  Schmidhuber’s great  
>> programmer
>> (or even the greater programmer that programmed him).  Your version  
>> of
>> computationalism would still be valid if either or both of the two
>> options above were true. Herein lies its appeal to me (both
>> fundamental and universal).
>> I would like to read up on logic and computation as you suggest. I
>> have read about all the books you recommend . However, can you  
>> suggest
>> topic areas within these texts which I can  focus on to help me get  
>> up
>> to speed with the problems I have regarding arithmetical realism with
>> the UDA?  There is much that could perhaps be left out on a first
>> reading and to my untrained eyes, it’s difficult to know what to omit
>> (for example what would godels arithmetisation technique come under?
>> (Googling it brings not much up).  Sorry but I haven’t ordered any
>> books yet so I can’t look into them.
>> Is there an English translation of your Ph.D. thesis yet?  Sorry  
>> but I
>> can’t do French. My thanks and best wishes.
>
> My justification for the hardwareless computer is the fact that any
> computation can be mapped onto any physical process, in the same way
> that any English sentence can be mapped onto any string of symbols.

You should elaborate and be more precise. How do you map the English
sentence "Life, what is it but a dream" onto the string "xxx"?
How do you map the computation of the decimals of PI onto *any*
physical process?

It is true that once we understand that any piece of matter is somehow
the result of an infinity of competing universal machines, then you are
right: the quantum or comp vacuum can be said to compute anything,
but not in the "doctor" sense of saying "yes" to a relative
substitution.



> Such a post hoc mapping would be useless to an observer trying to
> extract meaning from the symbols or the result of a calculation from
> the computer, since he would have to figure out the mapping himself
> and he would have to know the answer he wants before doing this.

I mainly agree. The computation defines the interpreter/observer, and
there is no need to interpret the interpreter. It does it by itself
very well (assuming comp).




> With
> the right key Bruno's PhD thesis contains an account of next week's
> news, but so what?

Such a key, to make sense, has to be universal. If you change the key
often, you encode the computation in an arbitrary string together with
a non-arbitrary sequence of keys. Computation makes sense only because
we choose once and for all some arbitrary key, and don't change it
anymore. The basic key is the given of the elementary arithmetical
axioms.



> If you look at it the right way the dust swept up
> by a storm is implementing a Turing machine calculating the digits of
> pi,

I don't believe this. To interpret the dust's movement as a computation
of PI, you will have to generate a complex sequence of keys.
The computation of PI will be hidden in the generation of the sequence
of keys, not in the dust's movement.
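A minimal sketch of that point, assuming an arbitrary sequence of "dust" states and a per-step key (Python; the random stand-in for the dust and the names are illustrative only):

    import random

    dust = [random.randrange(10**6) for _ in range(8)]   # arbitrary "physical" states
    pi_digits = [3, 1, 4, 1, 5, 9, 2, 6]                 # the computation we claim to find there

    # the post hoc key: one entry per step, built from the answer we already wanted
    key = {(i, d): p for i, (d, p) in enumerate(zip(dust, pi_digits))}
    decoded = [key[(i, d)] for i, d in enumerate(dust)]
    assert decoded == pi_digits   # all the PI-computing work lives in the key, not in the dust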





> but what good does that do anyone? The claim that codes and
> computations lurk hidden all around us could be taken as true but
> trivial, or perhaps defined away as untrue on account of its
> triviality. However, there is a special class of computations to
> consider: computations that give rise to conscious observers in
> virtual universes that do not interact with the environment at the
> level of the substrate of implementation. If such computations are
> possible (i.e. if comp is true) then it doesn't matter that no
> external observers have access to the mapping that would allow them to
> recognise them, for these computations create their own observers,
> bootstrapping themselves into non-triviality.

That is right. Once number relations describe a computation, we can
ascribe consciousness to it, if the computation describes some
interpreters. This includes the key. But the dust doesn't do that,
except in the comp sense that dust itself is a sum on an infinity of
computations. The dust doesn't do the computations without the keys,
except in the lucky "white rabbit" case where it does, fo

Re: UDA query

2010-01-04 Thread Bruno Marchal

On 02 Jan 2010, at 17:06, Nick Prince wrote:

> HI Bruno
> Thank you so much for your answers to my queries so far.  I really
> need to do some more thinking about all that you have said so far and
> to understand why I am having difficulty replacing a real physical
> universal machine existing in the future (like Tipler suggests) or a
> great programmer existing now (like schmidhuber suggests) with your
> arithmetical realism.  I also need to search some previous posts to
> make use of past discussion topics that are relevant. Perhaps my
> background makes me a physicalist who can currently accept a milder
> form of comp.  However, I want to explore your position because I
> think it makes sense in so far as I think it is less vulnerable to the
> threat of infinite regressions like in  Schmidhuber’s great programmer
> (or even the greater programmer that programmed him).  Your version of
> computationalism would still be valid if either or both of the two
> options above were true. Herein lies its appeal to me (both
> fundamental and universal).

My point is that we have no choice in the matter (no pun).
Mechanism and materialism are just "epistemologically" incompatible.
Primitive Matter appears to be a mythic product.
What Schmidhuber and Tegmark are still a bit naive about is the
mind-body problem. They do not take the person's view into account,
and their explanation of physics still relies on some identity thesis,
which is shown not capable of working when we assume comp (mainly by
the movie-graph argument).




> I would like to read up on logic and computation as you suggest. I
> have read about all the books you recommend . However, can you suggest
> topic areas within these texts which I can  focus on to help me get up
> to speed with the problems I have regarding arithmetical realism with
> the UDA?

I am still not sure I understand what your difficulty is.
Arithmetical realism is the belief that the truth of elementary
arithmetic does not depend on "my consciousness". The fact that all
positive integers can be written as the sum of four squares (Lagrange's
theorem) is true independently of Diophantus and Lagrange (who found
and proved the result), even if the big bang had not occurred. All
mathematicians are arithmetical realists, except a very small
(ultrafinitist) minority.



>  There is much that could perhaps be left out on a first
> reading and to my untrained eyes, it’s difficult to know what to omit
> (for example what would godels arithmetisation technique come under?
> (Googling it brings not much up).  Sorry but I haven’t ordered any
> books yet so I can’t look into them.
> Is there an English translation of your Ph.D. thesis yet?  Sorry but I
> can’t do French. My thanks and best wishes.

I feel guilty for not writing a long English text, nor submitting papers,
but there are some personal reasons for that.
Up to now, I have found that physicists have no understanding of logic at
all, and logicians have no interest in physics, still less
in the philosophy of mind. It is hard to find the right way to
introduce all this.

The subject is transdisciplinary, and it also touches very "hot" (taboo)
notions. I got all this in the sixties/seventies, and at that
time the work was considered far too simple and obvious (!). I
was misled. Now I know it is not simple, and that for a
physicist the very introductory part of logic is just impenetrable. I
have witnessed many dialogues of the deaf between logicians and physicists.
Big mathematicians like Penrose have shown that it is easy to be
rigorous yet wrong on Gödel's theorem, and now many just don't dare to
study the subject.

But the few who have taken the time to really study the work have
understood it, and that is why I eventually defended it as a
thesis in computer science in France. In Belgium the thesis was
rejected by literary philosophers who confuse materialism with
Marxism, and it is just a sort of blasphemy for them to even harbor
the shadow of a doubt toward "materialism". Of course my PhD thesis
says nothing about Marxism, nor anything political. It is just
logic applied to ontological questions at the intersection of physics
and cognitive science. But literary continental philosophers have a
very long tradition of disliking the scientific attitude in their
field. They feel invaded by science, and, be they atheists
or Christians, they know that such an attitude could make ridiculous
the kind of crap they are teaching, and that they would lose power
(and they actually defend the idea that scientific truth does not
exist, and that all is a question of political power, and they offered
me a demonstration of this). At least most Christians are aware of
this, and can react in a scientific way, unlike most atheist
philosophers who have become more dogmatic than the pope on
Aristotelian theology.

Freedom of thought just don't ex

Re: UDA query

2010-01-03 Thread Brent Meeker




We're not circling around it.  Bruno asserts it.  But then we need to
explain the things that were formerly explained by physical existence -
e.g. intersubjective agreement about a physical world, the dependence
of thought on brains, etc.

Brent

Stephen Paul King wrote:

  Hi Folks,

I would like to append a question that we all seem to circle around: Why 
do we even need to have a physical existence at all? Why isn't Platonic 
existence sufficient?

Onward!

Stephen


- Original Message - 
From: "Nick Prince" 
To: "Everything List" 
Sent: Sunday, January 03, 2010 4:30 PM
Subject: Re: UDA query


Stathis wrote

>Yes, but a critic could still say that no conscious observer could be
>conjured up by a computation unless the computation is physically
>implemented. At least at first glance that seems to be the case: the
>brain is required for consciousness, since if the brain is destroyed
>consciousness is destroyed. And if the mind is generated by a computer
>program, it would be normal to think that if the computer is
>destroyed, so is the mind, although the program in Platonia remains
>unaffected even if the entire universe blows up. These are the common
>sense objections. So the question is, is physical implementation
>necessary for consciousness, and what does it actually mean to
>physically implement a program?

From what I surmised and what Bruno wrote earlier in the discussion, I
thought that consciousness might supervene over all computations that
were essentially equivalent (whatever that might mean— i.e. some sort
of equivalence class?).  Anyway, this would imply that if the brain
was destroyed, then consciousness would simply be continued on by the
rest of the (competing) and remaining equivalent computations.
These would presumably be consistent extensions of the consciousness
in other worlds (MW interpretation) or in a platonic UD.

SP
>(and of course, this hardware may itself be part
>of the virtual world generated in Platonia).

I thought that this would be a consequence of comp since the
probability of consciousness staying in any “concrete” universe would
seem to be essentially zero. see below from earlier in the discussion:

NP> In other words every observer
> moment of his life (not just the one just before being blown up - but
> any  of them) could just as easily be followed by a suitable one in
> the virtual UD rather than one in the initial run of the universe.
BM
>Absolutely. Would a real *singular* concrete material universe exist,
>the probability to stay in that universe is zero.

Brent

 >I think you give an excellent explication of the problem, Stathis.
However, one thing about it that still worries me is the role of time.
You >say the mapping need not be consistent even moment to moment, and
yet the mapping is a timeless Platonic object. To be a timeless
>object the moments need some timeless representation. In Bruno's
theory time arises from the computational sequence. But in the
>mapping, time is just a relation of similarity (closest continuation)
of states. So three states which when ordered by closest continuation
>are XYZ may have been computed in the order XZY. So I find myself
seeing the hardwareless computer as a reductio against
>consciousness=computation thesis and support for Peter's view that ur-
stuff and contingency are fundamental.

The time bit confuses me too but if the UD is recursive (as I thought
it would have to be) and a successor function was implicit in the
algorithm then the timeless algorithm would give a perception of time
to the internal observers that Stathis spoke of earlier generated by
the computation.

However I am still not convinced about this myself and get this
feeling that there is a dynamic element missing from the static or
timeless representations which I am assuming to be existent in the
platonic realm

Nick


On Jan 3, 6:57 pm, Brent Meeker  wrote:
  
  
Stathis Papaioannou wrote:2010/1/4 Nick Prince:Thank 
you Stathis This has helped move me on a bit. The hardwareless computer 
has been giving me some real problems. Let me replay my understanding of 
what you said back just to check it is on the right lines. As a possible 
example of one of these lurking computations we could consider the one 
which begins with no-thing and think of the null set as made of it phi 
={ } and then associating it with the number 0. Then imagine the set { 
phi} associating it with 1, then { phi,{phi }} associating this with 2, 
then { phi, { phi} , { ,{phi }} }, associating it with 3 etc. Hence we get 
an infinite sequence of abstract (platonic) entities which can conjure up 
(compute) the natural numbers and the implied successor function simply 
from the abstract (platonic) notion of a set and an association rule (also 
a platonic relation). Mor

Re: UDA query

2010-01-03 Thread Stephen Paul King
Hi Folks,

I would like to append a question that we all seem to circle around: Why 
do we even need to have a physical existence at all? Why isn't Platonic 
existence sufficient?

Onward!

Stephen


- Original Message - 
From: "Nick Prince" 
To: "Everything List" 
Sent: Sunday, January 03, 2010 4:30 PM
Subject: Re: UDA query


Stathis wrote

>Yes, but a critic could still say that no conscious observer could be
>conjured up by a computation unless the computation is physically
>implemented. At least at first glance that seems to be the case: the
>brain is required for consciousness, since if the brain is destroyed
>consciousness is destroyed. And if the mind is generated by a computer
>program, it would be normal to think that if the computer is
>destroyed, so is the mind, although the program in Platonia remains
>unaffected even if the entire universe blows up. These are the common
>sense objections. So the question is, is physical implementation
>necessary for consciousness, and what does it actually mean to
>physically implement a program?

From what I surmised and what Bruno wrote earlier in the discussion, I
thought that consciousness might supervene over all computations that
were essentially equivalent (whatever that might mean— i.e. some sort
of equivalence class?).  Anyway, this would imply that if the brain
was destroyed, then consciousness would simply be continued on by the
rest of the (competing) and remaining equivalent computations.
These would presumably be consistent extensions of the consciousness
in other worlds (MW interpretation) or in a platonic UD.

SP
>(and of course, this hardware may itself be part
>of the virtual world generated in Platonia).

I thought that this would be a consequence of comp since the
probability of consciousness staying in any “concrete” universe would
seem to be essentially zero. see below from earlier in the discussion:

NP> In other words every observer
> moment of his life (not just the one just before being blown up - but
> any  of them) could just as easily be followed by a suitable one in
> the virtual UD rather than one in the initial run of the universe.
BM
>Absolutely. Would a real *singular* concrete material universe exist,
>the probability to stay in that universe is zero.

Brent

 >I think you give an excellent explication of the problem, Stathis.
However, one thing about it that still worries me is the role of time.
You >say the mapping need not be consistent even moment to moment, and
yet the mapping is a timeless Platonic object. To be a timeless
>object the moments need some timeless representation. In Bruno's
theory time arises from the computational sequence. But in the
>mapping, time is just a relation of similarity (closest continuation)
of states. So three states which when ordered by closest continuation
>are XYZ may have been computed in the order XZY. So I find myself
seeing the hardwareless computer as a reductio against
>consciousness=computation thesis and support for Peter's view that ur-
stuff and contingency are fundamental.

The time bit confuses me too but if the UD is recursive (as I thought
it would have to be) and a successor function was implicit in the
algorithm then the timeless algorithm would give a perception of time
to the internal observers that Stathis spoke of earlier generated by
the computation.

However I am still not convinced about this myself and get this
feeling that there is a dynamic element missing from the static or
timeless representations which I am assuming to be existent in the
platonic realm

Nick


On Jan 3, 6:57 pm, Brent Meeker  wrote:
> Stathis Papaioannou wrote:2010/1/4 Nick Prince:Thank 
> you Stathis This has helped move me on a bit. The hardwareless computer 
> has been giving me some real problems. Let me replay my understanding of 
> what you said back just to check it is on the right lines. As a possible 
> example of one of these lurking computations we could consider the one 
> which begins with no-thing and think of the null set as made of it phi 
> ={ } and then associating it with the number 0. Then imagine the set { 
> phi} associating it with 1, then { phi,{phi }} associating this with 2, 
> then { phi, { phi} , { ,{phi }} }, associating it with 3 etc. Hence we get 
> an infinite sequence of abstract (platonic) entities which can conjure up 
> (compute) the natural numbers and the implied successor function simply 
> from the abstract (platonic) notion of a set and an association rule (also 
> a platonic relation). More and more structure can be built up until - as 
> you say - the entire structure of the computation contained in the mapping 
> can be envisioned. Now although no external observers might be able to 
> access these computations, the computations might just create conscious 
> observer

Re: UDA query

2010-01-03 Thread Nick Prince
Stathis wrote

>Yes, but a critic could still say that no conscious observer could be
>conjured up by a computation unless the computation is physically
>implemented. At least at first glance that seems to be the case: the
>brain is required for consciousness, since if the brain is destroyed
>consciousness is destroyed. And if the mind is generated by a computer
>program, it would be normal to think that if the computer is
>destroyed, so is the mind, although the program in Platonia remains
>unaffected even if the entire universe blows up. These are the common
>sense objections. So the question is, is physical implementation
>necessary for consciousness, and what does it actually mean to
>physically implement a program?

From what I surmised and what Bruno wrote earlier in the discussion, I
thought that consciousness might supervene over all computations that
were essentially equivalent (whatever that might mean— i.e. some sort
of equivalence class?).  Anyway, this would imply that if the brain
was destroyed, then consciousness would simply be continued on by the
rest of the (competing) and remaining equivalent computations.
These would presumably be consistent extensions of the consciousness
in other worlds (MW interpretation) or in a platonic UD.

SP
>(and of course, this hardware may itself be part
>of the virtual world generated in Platonia).

I thought that this would be a consequence of comp since the
probability of consciousness staying in any “concrete” universe would
seem to be essentially zero. see below from earlier in the discussion:

NP> In other words every observer
> moment of his life (not just the one just before being blown up - but
> any  of them) could just as easily be followed by a suitable one in
> the virtual UD rather than one in the initial run of the universe.
BM
>Absolutely. Would a real *singular* concrete material universe exist,
>the probability to stay in that universe is zero.

Brent

 >I think you give an excellent explication of the problem, Stathis.
However, one thing about it that still worries me is the role of time.
You >say the mapping need not be consistent even moment to moment, and
yet the mapping is a timeless Platonic object. To be a timeless
>object the moments need some timeless representation. In Bruno's
theory time arises from the computational sequence. But in the
>mapping, time is just a relation of similarity (closest continuation)
of states. So three states which when ordered by closest continuation
>are XYZ may have been computed in the order XZY. So I find myself
seeing the hardwareless computer as a reductio against
>consciousness=computation thesis and support for Peter's view that ur-
stuff and contingency are fundamental.

The time bit confuses me too but if the UD is recursive (as I thought
it would have to be) and a successor function was implicit in the
algorithm then the timeless algorithm would give a perception of time
to the internal observers that Stathis spoke of earlier generated by
the computation.

However I am still not convinced about this myself and get this
feeling that there is a dynamic element missing from the static or
timeless representations which I am assuming to be existent in the
platonic realm

Nick


On Jan 3, 6:57 pm, Brent Meeker  wrote:
> Stathis Papaioannou wrote:2010/1/4 Nick Prince:Thank 
> you Stathis This has helped move me on a bit. The hardwareless computer has 
> been giving me some real problems. Let me replay my understanding of what you 
> said back just to check it is on the right lines. As a possible example of 
> one of these lurking computations we could consider the one which begins with 
> no-thing and think of the null set as made of it phi ={ } and then 
> associating it with the number 0. Then imagine the set { phi} associating it 
> with 1, then { phi,{phi }} associating this with 2, then { phi, { phi} , { 
> ,{phi }} }, associating it with 3 etc. Hence we get an infinite sequence of 
> abstract (platonic) entities which can conjure up (compute) the natural 
> numbers and the implied successor function simply from the abstract 
> (platonic) notion of a set and an association rule (also a platonic 
> relation). More and more structure can be built up until - as you say - the 
> entire structure of the computation contained in the mapping can be 
> envisioned. Now although no external observers might be able to access these 
> computations, the computations might just create conscious observers 
> bootstrapped into existence by the special class of computations which these 
> (internal) observers (if they believed in comp) would naturally consider as 
> non trivial. As you say the entire structure of the mapping which describes 
> the computation is a platonic object too hence the world comes from nothing 
> and computation. Have I got this roughly right? I would be grateful for any 
> critical comments from you, Bruno (or anyone).Yes, but a critic could still 
> say that no conscious observer c

Re: UDA query

2010-01-03 Thread Brent Meeker




Stathis Papaioannou wrote:

  2010/1/4 Nick Prince :
  
  
Thank you Stathis
This has helped move me on a bit. “The hardwareless computer” has been
giving me some real problems.  Let me replay my understanding of what
you said back just to check it is on the right lines.
As a possible example of one of these “lurking computations” we could
consider the one which begins with no-thing and think of the null set
as made of it phi ={ } and then associating it with the number 0. Then
imagine the set { phi} associating it with 1, then    { phi,{phi }}
associating this with 2, then { phi, { phi} , { phi,{phi }} },
associating it with 3 etc. Hence we get an infinite sequence of
abstract (platonic) entities which can conjure up (compute) the
natural numbers and the implied successor function simply from the
abstract (platonic) notion of a set and an association rule (also a
platonic relation). More and more structure can be built up until - as
you say - the entire structure of the computation contained in the
mapping can be envisioned. Now although no external observers might be
able to access these computations, the computations might just create
conscious observers – bootstrapped into existence by the special class
of computations which these (internal) observers (if they believed in
comp) would naturally consider as non trivial.  As you say the entire
structure of the mapping which describes the computation is a platonic
object too – hence the world comes from nothing and computation.
Have I got this roughly right? I would be grateful for any critical
comments from you, Bruno (or anyone).

  
  
Yes, but a critic could still say that no conscious observer could be
conjured up by a computation unless the computation is physically
implemented. At least at first glance that seems to be the case: the
brain is required for consciousness, since if the brain is destroyed
consciousness is destroyed. And if the mind is generated by a computer
program, it would be normal to think that if the computer is
destroyed, so is the mind, although the program in Platonia remains
unaffected even if the entire universe blows up. These are the common
sense objections. So the question is, is physical implementation
necessary for consciousness, and what does it actually mean to
physically implement a program?

Suppose we agree that it is necessary to physically implement a
program in order to get the consciousness. Physical implementation
then involves, essentially, causing a machine to go through a sequence
of causally connected configurations such that the configurations and
the state transition rules match up with the abstract program. There
is a mapping from the abstract program to the machine so that the
engineer, programmer and end user know what's going on. But "write 1
and then move the head to the left" could be represented in an
infinite number of ways. If a man walks down the street chewing gum,
that could represent "write 1 then move the head to the left", while
if he stood still humming "Jingle Bells" that would have represented
"write 0 then move the head to the right". Moreover the mapping does
not have to be consistent from moment to moment: chewing gum could
mean "0" on Fridays and "1" on other days. There is no reason why a
computer could not be designed to function in such an inconsistent
way, other than the practical necessity of keeping track of what's
going on, which is necessary if the computer is to be of any use to
anyone. But if we don't care about its usefulness to an outside
observer we could say that any abstract computation maps to any
physical process: a random physical process, a repetitive physical
process, or a single physical state. The man walking down the street
chewing gum over the course of a second could be seen as representing
the one thousand steps of a Turing machine adding two numbers
together, although of course it wouldn't be of any use to anyone
interested in the result of the calculation. You can see no doubt that
if you accept the argument so far the physical process is irrelevant,
and all of the computation, such as it is, consists in the abstract
machine and the mapping, which are timeless platonic objects. Arguably
the mapping is also irrelevant, since there are an infinite number of
possible mappings for an infinite number of possible physical
processes. The only thing that seems to make a difference is the
abstract machine or program itself. The program "runs" necessarily,
even in the absence of a physical universe, and it only need run on
physical hardware in order to interact with the environment at the
level of the hardware (and of course, this hardware may itself be part
of the virtual world generated in Platonia).


  

I think you give an excellent explication of the problem, Stathis. 
However, one thing about it that still worries me is the role of time.
You say the mapping need not be consistent even moment to moment, and
yet the mapping is a timeless Platonic object

Re: UDA query

2010-01-03 Thread Stathis Papaioannou
2010/1/4 Nick Prince :
> Thank you Stathis
> This has helped move me on a bit. “The hardwareless computer” has been
> giving me some real problems.  Let me replay my understanding of what
> you said back just to check it is on the right lines.
> As a possible example of one of these “lurking computations” we could
> consider the one which begins with no-thing and think of the null set
> as made of it phi ={ } and then associating it with the number 0. Then
> imagine the set { phi} associating it with 1, then    { phi,{phi }}
> associating this with 2, then { phi, { phi} , { phi,{phi }} },
> associating it with 3 etc. Hence we get an infinite sequence of
> abstract (platonic) entities which can conjure up (compute) the
> natural numbers and the implied successor function simply from the
> abstract (platonic) notion of a set and an association rule (also a
> platonic relation). More and more structure can be built up until - as
> you say - the entire structure of the computation contained in the
> mapping can be envisioned. Now although no external observers might be
> able to access these computations, the computations might just create
> conscious observers – bootstrapped into existence by the special class
> of computations which these (internal) observers (if they believed in
> comp) would naturally consider as non trivial.  As you say the entire
> structure of the mapping which describes the computation is a platonic
> object too – hence the world comes from nothing and computation.
> Have I got this roughly right? I would be grateful for any critical
> comments from you, Bruno (or anyone).

Yes, but a critic could still say that no conscious observer could be
conjured up by a computation unless the computation is physically
implemented. At least at first glance that seems to be the case: the
brain is required for consciousness, since if the brain is destroyed
consciousness is destroyed. And if the mind is generated by a computer
program, it would be normal to think that if the computer is
destroyed, so is the mind, although the program in Platonia remains
unaffected even if the entire universe blows up. These are the common
sense objections. So the question is, is physical implementation
necessary for consciousness, and what does it actually mean to
physically implement a program?

Suppose we agree that it is necessary to physically implement a
program in order to get the consciousness. Physical implementation
then involves, essentially, causing a machine to go through a sequence
of causally connected configurations such that the configurations and
the state transition rules match up with the abstract program. There
is a mapping from the abstract program to the machine so that the
engineer, programmer and end user know what's going on. But "write 1
and then move the head to the left" could be represented in an
infinite number of ways. If a man walks down the street chewing gum,
that could represent "write 1 then move the head to the left", while
if he stood still humming "Jingle Bells" that would have represented
"write 0 then move the head to the right". Moreover the mapping does
not have to be consistent from moment to moment: chewing gum could
mean "0" on Fridays and "1" on other days. There is no reason why a
computer could not be designed to function in such an inconsistent
way, other than the practical necessity of keeping track of what's
going on, which is necessary if the computer is to be of any use to
anyone. But if we don't care about its usefulness to an outside
observer we could say that any abstract computation maps to any
physical process: a random physical process, a repetitive physical
process, or a single physical state. The man walking down the street
chewing gum over the course of a second could be seen as representing
the one thousand steps of a Turing machine adding two numbers
together, although of course it wouldn't be of any use to anyone
interested in the result of the calculation. You can see no doubt that
if you accept the argument so far the physical process is irrelevant,
and all of the computation, such as it is, consists in the abstract
machine and the mapping, which are timeless platonic objects. Arguably
the mapping is also irrelevant, since there are an infinite number of
possible mappings for an infinite number of possible physical
processes. The only thing that seems to make a difference is the
abstract machine or program itself. The program "runs" necessarily,
even in the absence of a physical universe, and it only need run on
physical hardware in order to interact with the environment at the
level of the hardware (and of course, this hardware may itself be part
of the virtual world generated in Platonia).


-- 
Stathis Papaioannou


Re: UDA query

2010-01-03 Thread Nick Prince
Thank you Stathis
This has helped move me on a bit. “The hardwareless computer” has been
giving me some real problems.  Let me replay my understanding of what
you said back just to check it is on the right lines.
As a possible example of one of these “lurking computations” we could
consider the one which begins with no-thing and think of the null set
as made of it phi ={ } and then associating it with the number 0. Then
imagine the set { phi} associating it with 1, then { phi,{phi }}
associating this with 2, then { phi, { phi} , { phi,{phi }} },
associating it with 3 etc. Hence we get an infinite sequence of
abstract (platonic) entities which can conjure up (compute) the
natural numbers and the implied successor function simply from the
abstract (platonic) notion of a set and an association rule (also a
platonic relation). More and more structure can be built up until - as
you say - the entire structure of the computation contained in the
mapping can be envisioned. Now although no external observers might be
able to access these computations, the computations might just create
conscious observers – bootstrapped into existence by the special class
of computations which these (internal) observers (if they believed in
comp) would naturally consider as non trivial.  As you say the entire
structure of the mapping which describes the computation is a platonic
object too – hence the world comes from nothing and computation.
Have I got this roughly right? I would be grateful for any critical
comments from you, Bruno (or anyone).
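
For concreteness, that construction can be sketched in a few lines of
Python (an illustrative aside only; frozensets stand in for the platonic
sets and a dict for the association rule):

def von_neumann(n):
    # 0 is associated with phi = { }; the set for n+1 is the set for n
    # together with that set itself as an extra member
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])
    return s

# the association rule: pair each number with the set that encodes it
association = {n: von_neumann(n) for n in range(4)}
print(len(association[3]))   # the set standing for 3 has exactly 3 members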
Many thanks
Nick


On Jan 3, 11:05 am, Stathis Papaioannou  wrote:
> 2010/1/3 Nick Prince :
>
>
>
>
>
> > HI Bruno
> > Thank you so much for your answers to my queries so far.  I really
> > need to do some more thinking about all that you have said so far and
> > to understand why I am having difficulty replacing a real physical
> > universal machine existing in the future (like Tipler suggests) or a
> > great programmer existing now (like schmidhuber suggests) with your
> > arithmetical realism.  I also need to search some previous posts to
> > make use of past discussion topics that are relevant. Perhaps my
> > background makes me a physicalist who can currently accept a milder
> > form of comp.  However, I want to explore your position because I
> > think it makes sense in so far as I think it is less vulnerable to the
> > threat of infinite regressions like in  Schmidhuber’s great programmer
> > (or even the greater programmer that programmed him).  Your version of
> > computationalism would still be valid if either or both of the two
> > options above were true. Herein lies its appeal to me (both
> > fundamental and universal).
> > I would like to read up on logic and computation as you suggest. I
> > have read about all the books you recommend . However, can you suggest
> > topic areas within these texts which I can  focus on to help me get up
> > to speed with the problems I have regarding arithmetical realism with
> > the UDA?  There is much that could perhaps be left out on a first
> > reading and to my untrained eyes, it’s difficult to know what to omit
> > (for example what would godels arithmetisation technique come under?
> > (Googling it brings not much up).  Sorry but I haven’t ordered any
> > books yet so I can’t look into them.
> > Is there an English translation of your Ph.D. thesis yet?  Sorry but I
> > can’t do French. My thanks and best wishes.
>
> My justification for the hardwareless computer is the fact that any
> computation can be mapped onto any physical process, in the same way
> that any English sentence can be mapped onto any string of symbols.
> Such a post hoc mapping would be useless to an observer trying to
> extract meaning from the symbols or the result of a calculation from
> the computer, since he would have to figure out the mapping himself
> and he would have to know the answer he wants before doing this. With
> the right key Bruno's PhD thesis contains an account of next week's
> news, but so what? If you look at it the right way the dust swept up
> by a storm is implementing a Turing machine calculating the digits of
> pi, but what good does that do anyone? The claim that codes and
> computations lurk hidden all around us could be taken as true but
> trivial, or perhaps defined away as untrue on account of its
> triviality. However, there is a special class of computations to
> consider: computations that give rise to conscious observers in
> virtual universes that do not interact with the environment at the
> level of the substrate of implementation. If such computations are
> possible (i.e. if comp is true) then it doesn't matter that no
> external observers have access to the mapping that would allow them to
> recognise them, for these computations create their own observers,
> bootstrapping themselves into non-triviality. The physical process
> "sustaining" the computation need not even be as complex in structure
> as the computation: the compu

Re: UDA query

2010-01-03 Thread Stathis Papaioannou
2010/1/3 Nick Prince :
> HI Bruno
> Thank you so much for your answers to my queries so far.  I really
> need to do some more thinking about all that you have said so far and
> to understand why I am having difficulty replacing a real physical
> universal machine existing in the future (like Tipler suggests) or a
> great programmer existing now (like schmidhuber suggests) with your
> arithmetical realism.  I also need to search some previous posts to
> make use of past discussion topics that are relevant. Perhaps my
> background makes me a physicalist who can currently accept a milder
> form of comp.  However, I want to explore your position because I
> think it makes sense in so far as I think it is less vulnerable to the
> threat of infinite regressions like in  Schmidhuber’s great programmer
> (or even the greater programmer that programmed him).  Your version of
> computationalism would still be valid if either or both of the two
> options above were true. Herein lies its appeal to me (both
> fundamental and universal).
> I would like to read up on logic and computation as you suggest. I
> have read about all the books you recommend . However, can you suggest
> topic areas within these texts which I can  focus on to help me get up
> to speed with the problems I have regarding arithmetical realism with
> the UDA?  There is much that could perhaps be left out on a first
> reading and to my untrained eyes, it’s difficult to know what to omit
> (for example what would godels arithmetisation technique come under?
> (Googling it brings not much up).  Sorry but I haven’t ordered any
> books yet so I can’t look into them.
> Is there an English translation of your Ph.D. thesis yet?  Sorry but I
> can’t do French. My thanks and best wishes.

My justification for the hardwareless computer is the fact that any
computation can be mapped onto any physical process, in the same way
that any English sentence can be mapped onto any string of symbols.
Such a post hoc mapping would be useless to an observer trying to
extract meaning from the symbols or the result of a calculation from
the computer, since he would have to figure out the mapping himself
and he would have to know the answer he wants before doing this. With
the right key Bruno's PhD thesis contains an account of next week's
news, but so what? If you look at it the right way the dust swept up
by a storm is implementing a Turing machine calculating the digits of
pi, but what good does that do anyone? The claim that codes and
computations lurk hidden all around us could be taken as true but
trivial, or perhaps defined away as untrue on account of its
triviality. However, there is a special class of computations to
consider: computations that give rise to conscious observers in
virtual universes that do not interact with the environment at the
level of the substrate of implementation. If such computations are
possible (i.e. if comp is true) then it doesn't matter that no
external observers have access to the mapping that would allow them to
recognise them, for these computations create their own observers,
bootstrapping themselves into non-triviality. The physical process
"sustaining" the computation need not even be as complex in structure
as the computation: the computation could be mapped for example onto a
repetitive process, the idle passage of time, even a single instant of
time implementing the parts of the computation in parallel. And if we
get that far, it's obvious that the physical process does nothing, and
we may as well map the computation onto the null set. It is obvious
that the entire structure of the computation is contained in the
mapping, and the mapping is a platonic object, not dependent on being
written down or even understood in the mind of an external observer.
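
As a toy picture of such a post hoc mapping (a sketch only; both the
"physical process" and the computation here are stand-ins):

import random

# the abstract computation: the first few steps of a trivial Turing-style machine
computation = [("write 1", "move left"), ("write 0", "move right"), ("write 1", "move left")]

# an arbitrary "physical process": whatever events happen to occur
process = [random.choice(["chewing gum", "humming", "standing still"])
           for _ in computation]

# the post hoc mapping: pair the i-th physical event with the i-th computational
# step; nothing about the events constrains it, and it need not even be consistent
mapping = list(zip(process, computation))

for event, step in mapping:
    print(event, "->", step)
# without the mapping in hand, an observer learns nothing about the computation
# from watching the process, which is the point of the argument above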


-- 
Stathis Papaioannou





Re: UDA query

2010-01-02 Thread Nick Prince
HI Bruno
Thank you so much for your answers to my queries so far.  I really
need to do some more thinking about all that you have said so far and
to understand why I am having difficulty replacing a real physical
universal machine existing in the future (like Tipler suggests) or a
great programmer existing now (like Schmidhuber suggests) with your
arithmetical realism.  I also need to search some previous posts to
make use of past discussion topics that are relevant. Perhaps my
background makes me a physicalist who can currently accept a milder
form of comp.  However, I want to explore your position because I
think it makes sense in so far as I think it is less vulnerable to the
threat of infinite regressions like in  Schmidhuber’s great programmer
(or even the greater programmer that programmed him).  Your version of
computationalism would still be valid if either or both of the two
options above were true. Herein lies its appeal to me (both
fundamental and universal).
I would like to read up on logic and computation as you suggest. I
have read about all the books you recommend . However, can you suggest
topic areas within these texts which I can  focus on to help me get up
to speed with the problems I have regarding arithmetical realism with
the UDA?  There is much that could perhaps be left out on a first
reading and to my untrained eyes, it’s difficult to know what to omit
(for example what would Gödel's arithmetisation technique come under?
(Googling it brings not much up).  Sorry but I haven’t ordered any
books yet so I can’t look into them.
Is there an English translation of your Ph.D. thesis yet?  Sorry but I
can’t do French. My thanks and best wishes.

Nick


On Dec 31 2009, 6:10 pm, Bruno Marchal  wrote:
> On 30 Dec 2009, at 17:51, Nick Prince wrote:
>
>
>
>
>
> > Hi Bruno
>
> >>> If the UD was a concrete one like you ran then it would start to
> >>> generate all programs and execute them all by one step etc.  But are
> >>> you saying that because the UD exists platonically all these  
> >>> programs
> >>> and  each of their steps exist also and hence, by the existence of a
> >>> successor law they have an implicit  time order?
> >> Yes. The UD exist, and is even representable by a number. UD*, the
> >> complete running of the UD does not exist in that sense, because it  
> >> is
> >> an infinite object, and such object does not exist in simple
> >> arithmetical theories. But all finite parts of the UD* exist, and  
> >> this
> >> will be enough for "first person" being able to glue the  
> >> computations.
> >> For example, you could, for theoretical purpose, represent all the
> >> running of the UD by a specific total computable function. For  
> >> example
> >> by the function F which on n gives the (number representing the) nth
> >> first steps of the UD*. Then you can use the theorem which asserts
> >> that all total computable functions are representable in Robinson
> >> Arithmetic (a tiny fragment of Peano Arithmetic). That theorem is
> >> proved in detail, for Robinson-like arithmetic, in Boolos and Jeffrey,
> >> or in Epstein and Carnielli. In Mendelson book it is done directly in
> >> Peano Arithmetic.
>
> >> It is because our "3-we", our bodies, or our bodies descriptions, are
> >> constructed within these steps. But our first person are not, and no
> >> finite pieces of the UD can give the "real experience". This is a
> >> consequence of the first six steps: our next personal experience is
> >> determined by the whole actual infinity of all the infinitely many
> >> computations arrive at our current state. (+ step 8, where we abandon
> >> explicitly the physical supervenience thesis for the computational  
> >> one).
> > This “glueing” idea reminds me of David Deutsch’s attempt to explain
> > how time is an illusion in “The Fabric of Reality”. I never have got
> > this one!
> > I can follow your argument but it seems to put a very special status
> > on the ist person experience.  You say that our “3-person”/ bodily
> > descriptions are contained as subprograms in the (infinite) programs
> > which collectively provide Observer Moments for them.
>
> OK.
> I rephrase for myself. If you meant things differently, just tell me.
> By comp assumption, I survive if some "machine" goes through a  
> computation, that is, a sequence of computational states related by  
> some universal machine: s0, s1, s2, s3, s4, s5, s6, s7, ...
> The bodily description are, strictly speaking defined by the doctor  
> choice of level of my description. They are third person sharable, you  
> can send them by mail attachment, in principle (a lot of giga!).
> But the computation itself is defined by the logical relation between  
> those steps, and by digitality those steps, and their sequencing (made  
> by a universal machine) are definable in arithmetic, and the existence  
> of the steps, the states, the finite piece of computations, and (in a  
> slightly different sense for technical reason) the infinite  
> 

Re: UDA query

2009-12-31 Thread Bruno Marchal

On 30 Dec 2009, at 17:51, Nick Prince wrote:

> Hi Bruno
>
>>> If the UD was a concrete one like you ran then it would start to
>>> generate all programs and execute them all by one step etc.  But are
>>> you saying that because the UD exists platonically all these  
>>> programs
>>> and  each of their steps exist also and hence, by the existence of a
>>> successor law they have an implicit  time order?
>> Yes. The UD exist, and is even representable by a number. UD*, the
>> complete running of the UD does not exist in that sense, because it  
>> is
>> an infinite object, and such object does not exist in simple
>> arithmetical theories. But all finite parts of the UD* exist, and  
>> this
>> will be enough for "first person" being able to glue the  
>> computations.
>> For example, you could, for theoretical purpose, represent all the
>> running of the UD by a specific total computable function. For  
>> example
>> by the function F which on n gives the (number representing the) nth
>> first steps of the UD*. Then you can use the theorem which asserts
>> that all total computable functions are representable in Robinson
>> Arithmetic (a tiny fragment of Peano Arithmetic). That theorem is
>> proved in detail, for Robinson-like arithmetic, in Boolos and Jeffrey,
>> or in Epstein and Carnielli. In Mendelson book it is done directly in
>> Peano Arithmetic.
>
>
>> It is because our "3-we", our bodies, or our bodies descriptions, are
>> constructed within these steps. But our first person are not, and no
>> finite pieces of the UD can give the "real experience". This is a
>> consequence of the first six steps: our next personal experience is
>> determined by the whole actual infinity of all the infinitely many
>> computations arrive at our current state. (+ step 8, where we abandon
>> explicitly the physical supervenience thesis for the computational  
>> one).
> This “glueing” idea reminds me of David Deutsch’s attempt to explain
> how time is an illusion in “The Fabric of Reality”. I never have got
> this one!
> I can follow your argument but it seems to put a very special status
> on the ist person experience.  You say that our “3-person”/ bodily
> descriptions are contained as subprograms in the (infinite) programs
> which collectively provide Observer Moments for them.

OK.
I rephrase for myself. If you meant things differently, just tell me.
By comp assumption, I survive if some "machine" goes through a  
computation, that is, a sequence of computational states related by  
some universal machine: s0, s1, s2, s3, s4, s5, s6, s7, ...
The bodily descriptions are, strictly speaking, defined by the doctor's
choice of the level of my description. They are third person sharable: you
can send them by mail attachment, in principle (a lot of gigabytes!).
But the computation itself is defined by the logical relations between
those steps, and by digitality those steps and their sequencing (made
by a universal machine) are definable in arithmetic; the existence
of the steps, the states, the finite pieces of computations, and (in a
slightly different sense, for technical reasons) the infinite
computations are all described completely by the elementary relations
between numbers (or between combinators, or whatever your favorite
universal inductive structure is, say). I take the numbers because they
are taught in school (I think).

So all the statements asserting that there is a machine x accessing
state i and (maybe) 'outputting' j are true arithmetical statements
(when true), and actually, with Church's thesis, they are theorems of
any Sigma_1-complete theory.

When true, they are true independently of you and me, and when they
are proved in a theory, that fact is true independently of me and you.
Theories and machines are mathematical objects, and the fact that a
theory or a machine proves a theorem is a mathematical truth. That is
independent of you and me, but also of time and space.

Up to this point, we have not mentioned first person experiences. Just all
the machines' histories, described by number relations.

The "problem" of the first person view of the machine, is that a  
machine cannot know which machines "it" is, nor which computations  
emulate it. He can bet for a continuum (with the rule Y = II,  
bifurcation of "futur" retrospect on the "path").




> But I think you
> saying that our 1-person experience (frog view) is emergent from the
> collective (infinite) computations which are consistent with this
> emergent experience which is elaborated in your steps 1-7.  It seems
> to make this ist person experience somewhat mystical as to why it is
> “experienced” at all.

I think you are right. But here the amount of mysticism needed is the
amount needed to say "yes" to the doctor: the belief in the
possibility (in principle) of technological reincarnation.
And then the math explains why this, which is our consciousness, has
to seem completely mysterious at first sight.
But that mystery is no mo

Re: UDA query

2009-12-31 Thread ronaldheld
Bruno:
yes that is unfortunately true.
 Ronald

On Dec 30, 10:25 am, Bruno Marchal  wrote:
> On 30 Dec 2009, at 03:29, ronaldheld wrote:
>
> > Bruno:
> >   Is there a UD that is implemented in Fortran?
>
> I don't know. If you know Fortran, it should be a relatively easy task  
> to implement one.
> Note that you have still the choice between a fortran program  
> dovetailing on all computations by combinators, or on all computations  
> by LISP programs, or on all proofs of Sigma_1 complete arithmetical  
> sentences, or on all running of game of life patterns, etc.
> Or you can write a Fortran program executing all Fortran programs. All
> this will be equivalent. Every UD executes all UDs, and this an infinity
> of times.
>
> Good exercise. A bit tedious though.
>
> Bruno
>
>
>
>
>
>
>
> > On Dec 29, 4:55 am, Bruno Marchal  wrote:
> >> On 28 Dec 2009, at 21:24, Nick Prince wrote:
>
>  Well, it is better to assume just the axiom of, say, Robinson
>  arithmetic. You assume 0, the successors, s(0), s(s(0)), etc.
>  You assume some laws, like s(x) = s(y) -> x = y, 0 ≠ s(x), the  
>  laws
>  of addition, and multiplication. Then the existence of the  
>  universal
>  machine and the UD follows as consequences.
>
> >>> Ok so the UD exists (platonically?)
>
> >> Yes. The UD exists, and its existence can be proved in or by very  
> >> weak
> >> (not yet Löbian) arithmetical theories, like Robinson Arithmetic.
> >> The UD exists like the number 733 exists. The proof of its existence
> >> is even constructive, so it exists even for an intuitionist (non
> >> platonist). No need of the excluded middle principle.
>
>  Better not to conceive them as living in some place. "where" and
>  "when" are not arithmetical predicate. The UD exists like PI or the
>  square root of 2.
>  (Assuming CT of course, to pretend the "U" in the UD is really
>  universal, with respect to computability).
>
> >>> Fine so the UD has an objective existence in spite of whatever else
> >>> exists.
>
> >> It exists in the sense that we can prove it to exist once we accept
> >> the statement that 0 is different from all successor (0 ≠ s(x) for
> >> all x), etc.
> >> If you accept high school elementary arithmetic, then the UD exists  
> >> in
> >> the same sense that prime numbers exists.
> >> "exist" is used in sense of first order logic. This leads to the  
> >> usual
> >> philosophical problems in math, no new one, and the UDA reasoning  
> >> does
> >> not depend on the alternative way to solve those philsophical  
> >> problem,
> >> unless you propose a ultra-finitist solution (which I exclude in comp
> >> by arithmetical realism).
>
>  There is a "time order". The most basic one, after the successor  
>  law,
>
>  is the computational steps of a Universal Dovetailer.
>  Then you have a (different) time order for each individual
>  computations generated by the UD, like
>
>  phi_24 (7)^1,   phi_24 (7)^2,   phi_24 (7)^3,   phi_24 (7)^4, ...
>  where    "phi_i (j)^s" denotes the sth steps of the computation (by
>  the UD) of the ith programs on input j.
>
> >>> If the UD was a concrete one like you ran then it would start to
> >>> generate all programs and execute them all by one step etc.  But are
> >>> you saying that because the UD exists platonically all these  
> >>> programs
> >>> and  each of their steps exist also and hence, by the existence of a
> >>> successor law they have an implicit  time order?
>
> >> Yes. The UD exist, and is even representable by a number. UD*, the
> >> complete running of the UD does not exist in that sense, because it  
> >> is
> >> an infinite object, and such object does not exist in simple
> >> arithmetical theories. But all finite parts of the UD* exist, and  
> >> this
> >> will be enough for "first person" being able to glue the  
> >> computations.
> >> For example, you could, for theoretical purpose, represent all the
> >> running of the UD by a specific total computable function. For  
> >> example
> >> by the function F which on n gives the (number representing the) nth
> >> first steps of the UD*. Then you can use the theorem which asserts
> >> that all total computable functions are representable in Robinson
> >> Arithmetic (a tiny fragment of Peano Arithmetic). That theorem is
> >> proved in detail, for Robinson-like arithmetic, in Boolos and Jeffrey,
> >> or in Epstein and Carnielli. In Mendelson book it is done directly in
> >> Peano Arithmetic.
>
>  Then there will be the time generated by first person learning and
>  which relies eventually on a statistical view on infinities of
>  computations.
>
> >>> Is this because we are essentially constructs within these steps?
>
> >> It is because our "3-we", our bodies, or our bodies descriptions, are
> >> constructed within these steps. But our first person are not, and no
> >> finite pieces of the UD can give the "real experience

Re: UDA query

2009-12-30 Thread Nick Prince
Hi Bruno

>> If the UD was a concrete one like you ran then it would start to
>> generate all programs and execute them all by one step etc.  But are
>> you saying that because the UD exists platonically all these programs
>> and  each of their steps exist also and hence, by the existence of a
>> successor law they have an implicit  time order?
>Yes. The UD exist, and is even representable by a number. UD*, the
>complete running of the UD does not exist in that sense, because it is
>an infinite object, and such object does not exist in simple
>arithmetical theories. But all finite parts of the UD* exist, and this
>will be enough for "first person" being able to glue the computations.
>For example, you could, for theoretical purpose, represent all the
>running of the UD by a specific total computable function. For example
>by the function F which on n gives the (number representing the) nth
>first steps of the UD*. Then you can use the theorem which asserts
>that all total computable functions are representable in Robinson
>Arithmetic (a tiny fragment of Peano Arithmetic). That theorem is
>proved in detail, for Robinson-like arithmetic, in Boolos and Jeffrey,
>or in Epstein and Carnielli. In Mendelson book it is done directly in
>Peano Arithmetic.


>It is because our "3-we", our bodies, or our bodies descriptions, are
>constructed within these steps. But our first person are not, and no
>finite pieces of the UD can give the "real experience". This is a
>consequence of the first six steps: our next personal experience is
>determined by the whole actual infinity of all the infinitely many
>computations arrive at our current state. (+ step 8, where we abandon
>explicitly the physical supervenience thesis for the computational one).
This “glueing” idea reminds me of David Deutsch’s attempt to explain
how time is an illusion in “The Fabric of Reality”. I have never got
this one!
I can follow your argument but it seems to put a very special status
on the 1st person experience.  You say that our “3-person”/bodily
descriptions are contained as subprograms in the (infinite) programs
which collectively provide Observer Moments for them. But I think you are
saying that our 1-person experience (frog view) is emergent from the
collective (infinite) computations which are consistent with this
emergent experience, as elaborated in your steps 1-7.  It seems
to make this 1st person experience somewhat mystical as to why it is
“experienced” at all.  Some people wonder why we cannot see the other
worlds in QM but I am often amazed that we experience one at all!
Anyway all of what you say seems consistent with the many worlds
picture (which it should be).

>> Time is not difficult. It is right in the successor axioms of
>> arithmetic.
I’ll come back to this
>> Here again you confirm the invocation of the successor axioms.
>Yes. It is fundamental. I cannot extract those from logic alone. No
>more than I can define addition or multiplication without using the
>successor terms s(-) :
>for all x:          x + 0 = x
>for all x and y:    x + s(y) = s(x + y)
>You have to understand that all the talk on the phi_i and w_i,
>including the existence of universal number
>(EuAxAy phi_u(<x,y>) = phi_x(y)) can be translated in pure first order
>arithmetic, using only s, + and *.
>I could add some nuances. "To be prime" is an intrinsic property of a
>number. To be a universal number is not intrinsic. To define a
>universal number I have to "arithmetize" the theory. The theory uses
>variables x, y, z, ..., so I will have to represent "to be a variable"
>in the theory. The theory "understands" only numbers. I can decide to
>represent the variables by even numbers (for example). "Even(x)" can
>be represented by "Ey(x = s(s(0)) * y)". So "variable(x)" will be
>represented by the same expression. Then I will represent "to be a
>formula", "to be an axiom", to be a proof", "to be a computation",
>using Gödel's arithmetization technic (which is just a form of
>programming in arithmetic). This will lead to a representation of
>being a universal number.
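
As a toy rendering of those successor-based definitions of addition and
multiplication (a sketch only; numerals are built from a zero object and
a successor constructor, nothing else):

ZERO = ()                      # stands for 0
def s(x):                      # successor: s(x)
    return (x,)

def add(x, y):                 # x + 0 = x ;  x + s(y) = s(x + y)
    return x if y == ZERO else s(add(x, y[0]))

def mul(x, y):                 # x * 0 = 0 ;  x * s(y) = (x * y) + x
    return ZERO if y == ZERO else add(mul(x, y[0]), x)

def to_int(x):                 # read a numeral back as an ordinary integer
    return 0 if x == ZERO else 1 + to_int(x[0])

two, three = s(s(ZERO)), s(s(s(ZERO)))
print(to_int(add(two, three)), to_int(mul(two, three)))   # prints: 5 6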



Where can I find out about this arithmetization technique and what do
you mean by a “universal number”?
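
(A rough picture of the "universal number" idea, not Bruno's arithmetical
construction but a toy analogue in Python: one program u that, given a
coded pair <x,y>, behaves like program x on input y. The three-entry
program list is of course only a stand-in for a genuine enumeration of
all programs:)

# a toy "programming system": program i is the i-th function in this list
programs = [
    lambda y: 0,          # phi_0: constant zero
    lambda y: y + 1,      # phi_1: successor
    lambda y: 2 * y,      # phi_2: doubling
]

def pair(x, y):           # Cantor coding of two numbers into one
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):            # ...and back again
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    y = z - w * (w + 1) // 2
    return w - y, y

def phi_u(z):             # the "universal" program: phi_u(<x,y>) = phi_x(y)
    x, y = unpair(z)
    return programs[x](y)

assert phi_u(pair(2, 5)) == programs[2](5)   # running program 2 on input 5 via u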



>Now, would I decide to represent the variable in some other way (by
>the odd numbers, for example), the preceding universal number will
>still be in a universal number (intrinsically), but I will not been
>able to see it, or to mention it explicitly. But here, you have to
>just realize (cf the first six step of uda) that the first person
>experience depends on all universal numbers, in all possible sense/
>arithmetical-implementations.
>In particular "you here and now" are indeed implemented in arithmetic
>in both the universal numbers based on (variable(x) = even(x), and
>variable(x) = odd(x)). *ALL* universal numbers will compete below your
>substitution level.
>The fact that elementary (Robinson) arithmetic is already (Turing)
>universal is an impressive not obvious fact. But it is no more
>astonishing t

Re: UDA query

2009-12-30 Thread Bruno Marchal

On 30 Dec 2009, at 03:29, ronaldheld wrote:

> Bruno:
>   Is there a UD that is implemented in Fortran?

I don't know. If you know Fortran, it should be a relatively easy task  
to implement one.
Note that you still have the choice between a Fortran program
dovetailing on all computations by combinators, or on all computations
by LISP programs, or on all proofs of Sigma_1-complete arithmetical
sentences, or on all runnings of Game of Life patterns, etc.
Or you can write a Fortran program executing all Fortran programs. All
this will be equivalent. Every UD executes all UDs, and this an infinity
of times.

Good exercise. A bit tedious though.
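
For concreteness, a minimal sketch of one such dovetailer, written in
Python rather than Fortran and dovetailing on combinator reductions;
the term encoding and the function names below are illustrative
choices, not an existing implementation:

# A minimal sketch of a universal dovetailer, dovetailing on combinator
# (S, K) reductions.  Terms are 'S', 'K', or a pair (f, x) meaning
# "f applied to x".  Illustrative only.

from itertools import count

def terms_of_size(n):
    """Enumerate every S/K term with exactly n leaves."""
    if n == 1:
        yield 'S'
        yield 'K'
        return
    for k in range(1, n):
        for f in terms_of_size(k):
            for x in terms_of_size(n - k):
                yield (f, x)

def step(t):
    """Perform one leftmost reduction step; return (new_term, reduced?)."""
    # K x y -> x
    if isinstance(t, tuple) and isinstance(t[0], tuple) and t[0][0] == 'K':
        return t[0][1], True
    # S x y z -> x z (y z)
    if (isinstance(t, tuple) and isinstance(t[0], tuple)
            and isinstance(t[0][0], tuple) and t[0][0][0] == 'S'):
        x, y, z = t[0][0][1], t[0][1], t[1]
        return ((x, z), (y, z)), True
    if isinstance(t, tuple):          # otherwise try to reduce inside
        f, r = step(t[0])
        if r:
            return (f, t[1]), True
        a, r = step(t[1])
        if r:
            return (t[0], a), True
    return t, False                   # normal form: nothing to do

def universal_dovetailer():
    """At stage n, add all terms of size n, then run every term seen so
    far one more reduction step, emitting each step (dovetailing)."""
    running = []
    for n in count(1):
        running.extend(terms_of_size(n))
        for i, t in enumerate(running):
            running[i], _ = step(t)
            yield (n, i, running[i])

if __name__ == '__main__':
    ud = universal_dovetailer()
    for _ in range(10):               # look at the first few emitted steps
        print(next(ud))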

Bruno




>
>
> On Dec 29, 4:55 am, Bruno Marchal  wrote:
>> On 28 Dec 2009, at 21:24, Nick Prince wrote:
>>
>>
>>
 Well, it is better to assume just the axioms of, say, Robinson
 arithmetic. You assume 0, the successors, s(0), s(s(0)), etc.
 You assume some laws, like s(x) = s(y) -> x = y, 0 ≠ s(x), the  
 laws
 of addition, and multiplication. Then the existence of the  
 universal
 machine and the UD follows as consequences.
>>
>>> Ok so the UD exists (platonically?)
>>
>> Yes. The UD exists, and its existence can be proved in or by very  
>> weak
>> (not yet Löbian) arithmetical theories, like Robinson Arithmetic.
>> The UD exists like the number 733 exists. The proof of its existence
>> is even constructive, so it exists even for an intuitionist (non
>> platonist). No need of the excluded middle principle.
>>
>>
>>
 Better not to conceive them as living in some place. "where" and
 "when" are not arithmetical predicate. The UD exists like PI or the
 square root of 2.
 (Assuming CT of course, to pretend the "U" in the UD is really
 universal, with respect to computability).
>>
>>> Fine so the UD has an objective existence in spite of whatever else
>>> exists.
>>
>> It exists in the sense that we can prove it to exist once we accept
>> the statement that 0 is different from all successors (0 ≠ s(x) for
>> all x), etc.
>> If you accept high school elementary arithmetic, then the UD exists  
>> in
>> the same sense that prime numbers exist.
>> "exist" is used in the sense of first order logic. This leads to the
>> usual philosophical problems in math, no new ones, and the UDA
>> reasoning does not depend on the alternative ways to solve those
>> philosophical problems, unless you propose an ultra-finitist solution
>> (which I exclude in comp
>> by arithmetical realism).
>>
>>
>>
>>
>>
>>
>>
 There is a "time order". The most basic one, after the successor  
 law,
>>
 is the computational steps of a Universal Dovetailer.
 Then you have a (different) time order for each individual
 computation generated by the UD, like
>>
 phi_24 (7)^1,   phi_24 (7)^2,   phi_24 (7)^3,   phi_24 (7)^4, ...
 where"phi_i (j)^s" denotes the sth steps of the computation (by
 the UD) of the ith programs on input j.
>>
>>> If the UD was a concrete one like the one you ran, then it would start
>>> to generate all programs and execute them all, one step at a time, etc.  But are
>>> you saying that because the UD exists platonically all these  
>>> programs
>>> and  each of their steps exist also and hence, by the existence of a
>>> successor law they have an implicit  time order?
>>
>> Yes. The UD exists, and is even representable by a number. UD*, the
>> complete running of the UD does not exist in that sense, because it  
>> is
>> an infinite object, and such an object does not exist in simple
>> arithmetical theories. But all finite parts of the UD* exist, and  
>> this
>> will be enough for the "first person" to be able to glue the
>> computations.
>> For example, you could, for theoretical purpose, represent all the
>> running of the UD by a specific total computable function. For  
>> example
>> by the function F which on n gives the (number representing the)
>> first n steps of the UD*. Then you can use the theorem which asserts
>> that all total computable functions are representable in Robinson
>> Arithmetic (a tiny fragment of Peano Arithmetic). That theorem is
>> proved in detail, for Robinson-like arithmetic, in Boolos and Jeffrey,
>> or in Epstein and Carnielli. In Mendelson's book it is done directly in
>> Peano Arithmetic.
>>
>>
>>
 Then there will be the time generated by first person learning and
 which relies eventually on a statistical view on infinities of
 computations.
>>
>>> Is this because we are essentially constructs within these steps?
>>
>> It is because our "3-we", our bodies, or our bodies' descriptions, are
>> constructed within these steps. But our first persons are not, and no
>> finite pieces of the UD can give the "real experience". This is a
>> consequence of the first six steps: our next personal experience is
>> determined by the whole actual infinity of all the infinitely many
>> computations arriving at our current state. (+ step 8, where we abandon
>> explicitly the physical supervenience thesis for the computational one).

Re: UDA query

2009-12-29 Thread ronaldheld
Bruno:
   Is there a UD that is implemented in Fortran?
   Ronald

On Dec 29, 4:55 am, Bruno Marchal  wrote:
> On 28 Dec 2009, at 21:24, Nick Prince wrote:
>
>
>
> >> Well, it is better to assume just the axioms of, say, Robinson
> >> arithmetic. You assume 0, the successors, s(0), s(s(0)), etc.
> >> You assume some laws, like s(x) = s(y) -> x = y, 0 ≠ s(x), the laws
> >> of addition, and multiplication. Then the existence of the universal
> >> machine and the UD follows as consequences.
>
> > Ok so the UD exists (platonically?)
>
> Yes. The UD exists, and its existence can be proved in or by very weak  
> (not yet Löbian) arithmetical theories, like Robinson Arithmetic.
> The UD exists like the number 733 exists. The proof of its existence  
> is even constructive, so it exists even for an intuitionist (non  
> platonist). No need of the excluded middle principle.
>
>
>
> >> Better not to conceive them as living in some place. "where" and
> >> "when" are not arithmetical predicate. The UD exists like PI or the
> >> square root of 2.
> >> (Assuming CT of course, to pretend the "U" in the UD is really
> >> universal, with respect to computability).
>
> > Fine so the UD has an objective existence in spite of whatever else
> > exists.
>
> It exists in the sense that we can prove it to exist once we accept  
> the statement that 0 is different from all successors (0 ≠ s(x) for
> all x), etc.
> If you accept high school elementary arithmetic, then the UD exists in  
> the same sense that prime numbers exist.
> "exist" is used in the sense of first order logic. This leads to the usual
> philosophical problems in math, no new ones, and the UDA reasoning does
> not depend on the alternative ways to solve those philosophical problems,
> unless you propose an ultra-finitist solution (which I exclude in comp
> by arithmetical realism).
>
>
>
>
>
>
>
> >> There is a "time order". The most basic one, after the successor law,
>
> >> is the computational steps of a Universal Dovetailer.
> >> Then you have a (different) time order for each individual
> >> computation generated by the UD, like
>
> >> phi_24 (7)^1,   phi_24 (7)^2,   phi_24 (7)^3,   phi_24 (7)^4, ...
> >> where    "phi_i (j)^s" denotes the sth step of the computation (by
> >> the UD) of the ith program on input j.
>
> > If the UD was a concrete one like the one you ran, then it would start
> > to generate all programs and execute them all, one step at a time, etc.  But are
> > you saying that because the UD exists platonically all these programs
> > and  each of their steps exist also and hence, by the existence of a
> > successor law they have an implicit  time order?
>
> Yes. The UD exists, and is even representable by a number. UD*, the
> complete running of the UD does not exist in that sense, because it is  
> an infinite object, and such an object does not exist in simple
> arithmetical theories. But all finite parts of the UD* exist, and this  
> will be enough for the "first person" to be able to glue the computations.
> For example, you could, for theoretical purpose, represent all the  
> running of the UD by a specific total computable function. For example  
> by the function F which on n gives the (number representing the)
> first n steps of the UD*. Then you can use the theorem which asserts
> that all total computable functions are representable in Robinson  
> Arithmetic (a tiny fragment of Peano Arithmetic). That theorem is
> proved in detail, for Robinson-like arithmetic, in Boolos and Jeffrey,
> or in Epstein and Carnielli. In Mendelson's book it is done directly in
> Peano Arithmetic.
>
>
>
> >> Then there will be the time generated by first person learning and
> >> which relies eventually on a statistical view on infinities of
> >> computations.
>
> > Is this because we are essentially constructs within these steps?
>
> It is because our "3-we", our bodies, or our bodies' descriptions, are
> constructed within these steps. But our first persons are not, and no
> finite pieces of the UD can give the "real experience". This is a  
> consequence of the first six steps: our next personal experience is  
> determined by the whole actual infinity of all the infinitely many  
> computations arriving at our current state. (+ step 8, where we abandon
> explicitly the physical supervenience thesis for the computational one).
>
>
>
> >> Time is not difficult. It is right in the successor axioms of
> >> arithmetic.
>
> > Here again you confirm the invocation of the successor axioms.
>
> Yes. It is fundamental. I cannot extract those from logic alone. No  
> more than I can define addition or multiplication without using the  
> successor terms s(-) :
>
> for all x  x + 0 = x
> for all x and y    x + s(y) = s(x + y)
>
> You have to understand that all the talk on the phi_i and w_i,  
> including the existence of a universal number
> (EuAxAy phi_u(<x,y>) = phi_x(y)) can be translated in pure first order
> arithmetic, using only s, + and *

Re: UDA query

2009-12-29 Thread Bruno Marchal

On 28 Dec 2009, at 21:24, Nick Prince wrote:

>
>
>> Well, it is better to assume just the axioms of, say, Robinson
>> arithmetic. You assume 0, the successors, s(0), s(s(0)), etc.
>> You assume some laws, like s(x) = s(y) -> x = y, 0 ≠ s(x), the laws
>> of addition, and multiplication. Then the existence of the universal
>> machine and the UD follows as consequences.
>
> Ok so the UD exists (platonically?)

Yes. The UD exists, and its existence can be proved in or by very weak  
(not yet Löbian) arithmetical theories, like Robinson Arithmetic.
The UD exists like the number 733 exists. The proof of its existence  
is even constructive, so it exists even for an intuitionist (non  
platonist). No need of the excluded middle principle.


>
>> Better not to conceive them as living in some place. "where" and
>> "when" are not arithmetical predicate. The UD exists like PI or the
>> square root of 2.
>> (Assuming CT of course, to pretend the "U" in the UD is really
>> universal, with respect to computability).
>
> Fine so the UD has an objective existence in spite of whatever else
> exists.

It exists in the sense that we can prove it to exist once we accept  
the statement that 0 is different from all successors (0 ≠ s(x) for
all x), etc.
If you accept high school elementary arithmetic, then the UD exists in  
the same sense that prime numbers exist.
"exist" is used in the sense of first order logic. This leads to the usual
philosophical problems in math, no new ones, and the UDA reasoning does
not depend on the alternative ways to solve those philosophical problems,
unless you propose an ultra-finitist solution (which I exclude in comp
by arithmetical realism).


>
>
>> There is a "time order". The most basic one, after the successor law,
>
>> is the computational steps of a Universal Dovetailer.
>> Then you have a (different) time order for each individual
>> computation generated by the UD, like
>
>> phi_24 (7)^1,   phi_24 (7)^2,   phi_24 (7)^3,   phi_24 (7)^4, ...
>> where"phi_i (j)^s" denotes the sth steps of the computation (by
>> the UD) of the ith programs on input j.
>
> If the UD was a concrete one like the one you ran, then it would start
> to generate all programs and execute them all, one step at a time, etc.  But are
> you saying that because the UD exists platonically all these programs
> and  each of their steps exist also and hence, by the existence of a
> successor law they have an implicit  time order?

Yes. The UD exists, and is even representable by a number. UD*, the
complete running of the UD does not exist in that sense, because it is  
an infinite object, and such an object does not exist in simple
arithmetical theories. But all finite parts of the UD* exist, and this  
will be enough for the "first person" to be able to glue the computations.
For example, you could, for theoretical purpose, represent all the  
running of the UD by a specific total computable function. For example  
by the function F which on n gives the (number representing the)
first n steps of the UD*. Then you can use the theorem which asserts
that all total computable functions are representable in Robinson  
Arithmetic (a tiny fragment of Peano Arithmetic). That theorem is
proved in detail, for Robinson-like arithmetic, in Boolos and Jeffrey,
or in Epstein and Carnielli. In Mendelson's book it is done directly in
Peano Arithmetic.
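
Roughly, that theorem says: for every total computable function f there
is an arithmetical formula A_f(x, y), written with s, + and * only,
such that whenever f(n) = m, Robinson Arithmetic proves A_f(n, m) and
proves Ay (A_f(n, y) -> y = m), where n and m here stand for the
corresponding numerals s(s(...s(0)...)). (This is only a rough
paraphrase; see Boolos and Jeffrey for the exact statement.)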




>
>
>
>> Then there will be the time generated by first person learning and
>> which relies eventually on a statistical view on infinities of
>> computations.
>
> Is this because we are essentially constructs within these steps?

It is because our "3-we", our bodies, or our bodies' descriptions, are
constructed within these steps. But our first persons are not, and no
finite pieces of the UD can give the "real experience". This is a  
consequence of the first six steps: our next personal experience is  
determined by the whole actual infinity of all the infinitely many  
computations arriving at our current state. (+ step 8, where we abandon
explicitly the physical supervenience thesis for the computational one).



>
>> Time is not difficult. It is right in the successor axioms of
>> arithmetic.
>
> Here again you confirm the invocation of the successor axioms.

Yes. It is fundamental. I cannot extract those from logic alone. No  
more than I can define addition or multiplication without using the  
successor terms s(-) :

for all x  x + 0 = x
for all x and y    x + s(y) = s(x + y)
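
These two equations (together with the analogous pair for
multiplication, x * 0 = 0 and x * s(y) = (x * y) + x) already determine
+ and * as computable operations. A toy Python rendering of that
recursion, purely for illustration (the function names are arbitrary):

# Toy illustration only: addition and multiplication computed purely by
# the successor-style recursion quoted above.

def s(x):                 # successor
    return x + 1

def add(x, y):
    # x + 0 = x ;  x + s(y) = s(x + y)
    return x if y == 0 else s(add(x, y - 1))

def mul(x, y):
    # x * 0 = 0 ;  x * s(y) = (x * y) + x
    return 0 if y == 0 else add(mul(x, y - 1), x)

assert add(2, 3) == 5 and mul(2, 3) == 6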

You have to understand that all the talk on the phi_i and w_i,  
including the existence of a universal number
(EuAxAy phi_u(<x,y>) = phi_x(y)) can be translated in pure first order
arithmetic, using only s, + and *.
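
In programming terms, a universal number u is just (a code for) an
interpreter, i.e. phi_u(<x,y>) = phi_x(y). A toy Python sketch of that
equation, taking "programs" to be source strings rather than numbers
(purely illustrative; the names phi, pair and u are arbitrary):

# Toy sketch: u is "universal" because running u on the pair <x, y>
# gives the same result as running program x on input y.

def phi(x, y):
    """Run program x (the source of a one-argument function f) on y."""
    env = {}
    exec(x, env)          # define f in a fresh namespace
    return env['f'](y)

def pair(x, y):
    return (x, y)         # any computable pairing would do

def u(xy):                # the "universal program"
    x, y = xy
    return phi(x, y)      # so phi_u(<x,y>) = phi_x(y)

square = "def f(n):\n    return n * n"
assert u(pair(square, 7)) == phi(square, 7) == 49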

I could add some nuances. "To be prime" is an intrinsic property of a  
number. To be a universal number is not intrinsic. To define a  
universal number I have to "arithmetize" the theory. The theory uses  
variables x, y, z, ..., so I will have to represent "to be a variable"  
in the theory. The 
