RE: Immortality

2001-10-11 Thread Marchal

Charles Goodwin wrote:

 -Original Message-
 From: Brent Meeker [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, 10 October 2001 2:23 a.m.

But then why do you say that a duplicate of your brain processes in a
computer would not be conscious? You seem to be discriminating between
a biological duplicate and a silicon duplicate.

The use of the word 'duplicate' seems contentious to me. The question is 
whether you *can* duplicate the processes in the brain at a
suitable level of abstraction, and whether (if you can) such a duplicate 
would be conscious. I don't think anyone knows the answer
to this (yet)!


To answer yes to that question is exactly what I mean by the 
comp *hypothesis*.

The comp hypothesis is the hypothesis that there exists a level of
digital functional description of myself such that I can survive through
a substitution made at that level.
The practitioners of comp are those who say yes to their 
digital-specialist doctor.

It is a hypothesis that has the curious property of entailing its
own necessarily hypothetical character. Comp entails that no one will *ever*
know it to be necessarily true.

If comp is true, no consistent machine will ever prove it.

The honest doctor never guarantees success, and confesses to betting
on a level (plus, surely, other more technical bets).

If you meet someone who claims to know that he/she, or you, is a machine,
you had better run!

The real question is: will the consistent machines remain consistent
by betting on it?

Suggestion: derive the physics from comp, compare with
empirical physics. Judge. 

If comp is true there is a danger for the practitioners: having 
survived so many times, having said and re-said yes to their digital 
brain specialist surgeon, and having gone literally through so many
digital nets, they begin to believe they know comp is true. (Then they 
become inconsistent.)

Comp entails some trips *near* inconsistency. Actually I guess
that's life. The miracle occurs at each instant (not only at
the mechanist hospital). Well, with comp the fall into inconsistency
*can* occur at each instant too.

Bruno


PS The reasoning I propose does not depend on the level of
substitution, only on the fact that it exists. You can choose the universal
wave function as a description of your brain (low level), or an
approximation of the concentrations of chemicals (high level), 
or the disposition of neural dynamics (higher level)
...
...
... or the bible (very very high level). By the bible I mean
Boolos 1993, of course :-). My thesis in a nutshell is that FOR
(and the other everything efforts) is a missing chapter of that book.

The book bears on what self-referentially correct machines can 
say, and cannot say, about themselves and about their (necessarily 
hypothetical) consistent extensions. It's the manual of machine
psychology (my term, sure). If you don't know logic, here is
a shortcut: 
Jeffrey: Formal Logic: Its Scope and Limits.
Boolos and Jeffrey: Computability and Logic.
Boolos 1993.

Or Boolos 1979, which is lighter and easier to digest.

And recall Smullyan's Forever Undecided.
You told me Smullyan is your favorite philosopher, 
or was I dreaming?






Re: Predictions & duplications

2001-10-11 Thread Juho Pennanen



I tried to understand the problem that doctors Schmidhuber 
and Standish are discussing by describing it in the most 
concrete terms I could, below. (I admit beforehand that I couldn't 
follow all the details and do not know all the papers and 
theorems referred to, so this could be irrelevant.)

So, say you are going to drop a pencil from your hand and are 
trying to predict whether it's going to fall down or up this 
time. Using what I understand of the comp TOE, I would take 
the set of all programs that at some state implement a 
certain conscious state, namely the state in which you 
remember starting your experiment of dropping the pencil 
and have already recorded the end result. (I abbreviate this 
conscious state as CS. To be exact it is a set of states,
but that shouldn't make a difference.) 

The space of all programs would be the set of all programs 
in some language, coded as infinite sequences of 
0's and 1's. (I do not know how much the chosen language + 
coding affects the whole thing.) 

Now for your prediction you need to divide the 
implementations of CS into two sets: those in which the 
pencil fell down and those in which it fell up. Then you 
compare the measures of those sets. (You would need to 
assume that each program is run just once, or something of 
the sort. Some programs obviously implement CS several 
times when they run. So you would maybe include only those 
programs that implement CS infinitely many times, and 
weight them with the density of CS occurrences during 
their run.) 
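
To make this concrete, here is a toy sketch in Python of the comparison
I mean. Every number in it is invented purely for illustration (nothing
is derived from an actual model): each entry stands for one program that
implements CS, with an a priori weight, the density of CS occurrences
during its run, and the outcome that run recorded.

# Toy bookkeeping for the comparison described above (illustrative
# numbers only).
programs = [
    {"weight": 2.0**-10, "density": 0.8, "outcome": "down"},
    {"weight": 2.0**-12, "density": 0.5, "outcome": "down"},
    {"weight": 2.0**-30, "density": 0.9, "outcome": "up"},
]

def relative_measure(progs, outcome):
    # Weighted measure of the programs recording 'outcome', relative to all.
    total = sum(p["weight"] * p["density"] for p in progs)
    part = sum(p["weight"] * p["density"]
               for p in progs if p["outcome"] == outcome)
    return part / total

print("P(down) ~", relative_measure(programs, "down"))
print("P(up)   ~", relative_measure(programs, "up"))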

One way to derive the measure you need is to assume a 
measure on the set of all infinite sequences (i.e. on
all programs). For this we have the natural measure, 
i.e. the product measure of the uniform measure on 
the set containing 0 and 1. And as far as my intuition 
goes, this measure would lead to the empirically correct 
prediction about the direction in which the pencil falls. And 
if I understood it right, this is not too far from what 
Dr. Standish was claiming? And we wouldn't need any 
speed priors. 
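
To make the natural measure concrete: under the product of the fair-coin
measure on {0, 1}, the set of all infinite sequences beginning with a
given finite prefix has measure 2^-(length of the prefix). A small
Python check of this, with an arbitrary example prefix:

import random

def cylinder_measure(prefix):
    # Measure, under the fair-coin product measure, of the set of all
    # infinite 0/1 sequences that begin with 'prefix'.
    return 2.0 ** -len(prefix)

def monte_carlo(prefix, trials=100_000):
    # Estimate the same number by sampling the first len(prefix) bits
    # of random sequences.
    hits = 0
    for _ in range(trials):
        if [random.randint(0, 1) for _ in prefix] == list(prefix):
            hits += 1
    return hits / trials

prefix = (1, 0, 1, 1)
print(cylinder_measure(prefix))   # 0.0625 exactly
print(monte_carlo(prefix))        # about 0.0625, up to sampling noise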

But maybe the need for the speed prior would come into play if I 
thought more carefully about the detailed assumptions
involved? E.g. that each program would be run just once, 
with the same speed, etc.? I am not sure.

Juho 




/
Juho Pennanen
Department of Forest Ecology, P.O.Box 24
FIN-00014 University of Helsinki
tel. (09)191 58144 (+358-9-191 58144)
GSM 040 5455 845 (+358-40-5455 845)
http://www.helsinki.fi/people/juho.pennanen
*/




Re: Predictions & duplications

2001-10-11 Thread juergen



 From [EMAIL PROTECTED] :
 [EMAIL PROTECTED] wrote:
  
  So you NEED something additional to explain the ongoing regularity.
  You need something like the Speed Prior, which greatly favors regular 
  futures over others.
 
 I take issue with this statement. In Occam's Razor I show how any
 observer will expect to see regularities even with the uniform prior
 (comes about because all observers have resource problems,
 incidentally). The speed prior is not necessary for Occam's Razor. It is
 obviously consistent with it though.

First of all: there is _no_ uniform prior on infinitely many things.
Try to build a uniform prior on the integers. (Tegmark also wrote that
"... all mathematical structures are a priori given equal statistical
weight", but of course this does not make much sense because there is
_no_ way of assigning equal nonvanishing probability to all - infinitely
many - mathematical structures.)

There is at best a uniform measure on _beginnings_ of strings. Then
strings of equal size have equal measure.

But then regular futures (represented as strings) are just as likely
as irregular ones. Therefore I cannot understand the comment "(comes
about because all observers have resource problems, incidentally)".
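
To spell the point out: under the uniform measure on beginnings of
strings, any two futures of the same length get exactly the same weight,
however regular one of them looks. A trivial Python illustration (the
particular bit strings are arbitrary):

def uniform_measure(bits):
    # Under the uniform measure on beginnings of strings, every binary
    # string of length n has measure 2**-n.
    return 2.0 ** -len(bits)

regular   = "0" * 32                              # a very regular future
irregular = "01101100101011100010111101001010"    # an arbitrary-looking future
assert len(regular) == len(irregular)
print(uniform_measure(regular))    # 2**-32
print(uniform_measure(irregular))  # 2**-32, exactly the same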

Of course, alternative priors lead to alternative variants of Occam's
razor.  That has been known for a long time - formal versions of Occam's
razor go at least back to Solomonoff, 1964.  The big question really
is: which prior is plausible? The most general priors we can discuss are
those computable in the limit, as in the algorithmic TOE paper. They do not
allow for computable optimal prediction though. But the more restrictive
Speed Prior does, and seems plausible from any programmer's point of view.
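
(Roughly, and only to convey the flavor rather than the formal
definition in the paper: the Speed Prior concentrates probability on
outputs that short programs compute quickly, e.g. by giving a program of
length L a runtime share on the order of 2^-L. A toy Python sketch of
that allocation idea, with two invented "programs":)

def prog_fast_zeros():
    # A short, fast program: emits a bit on every step.
    while True:
        yield 0

def prog_slow_alternating():
    # A longer, slower program: burns 1000 steps between output bits.
    b = 0
    while True:
        for _ in range(1000):
            yield None
        yield b
        b ^= 1

# The lengths are invented; the point is only the 2**-L runtime share.
programs = [
    {"name": "fast_zeros", "length": 5,  "gen": prog_fast_zeros()},
    {"name": "slow_alt",   "length": 12, "gen": prog_slow_alternating()},
]

TOTAL_STEPS = 100_000
for p in programs:
    budget = int(TOTAL_STEPS * 2.0 ** -p["length"])   # runtime share ~ 2**-L
    bits = []
    for _ in range(budget):
        out = next(p["gen"])
        if out is not None:
            bits.append(out)
    print(p["name"], "weight", 2.0 ** -p["length"],
          "bits produced within budget:", len(bits))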

 The interesting thing is of course whether it is possible to
 experimentally distinguish between the speed prior and the uniform
 prior, and it is not at all clear to me that it is possible to
 distinguish between these cases.

I suggest looking at experimental data that seems to have Gaussian
randomness in it, such as interference patterns in slit experiments.
The Speed Prior suggests the data cannot be really random, but that a
fast pseudorandom generator (PRG) is responsible, e.g., one that divides
some seed by 7 and takes some of the resulting digits as the new seed, or
whatever. So it's verifiable - we just have to discover the PRG method.
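
(For instance, here is a toy Python rendering of that kind of generator;
the precision, the digit selection and the output rule are arbitrary
choices made only for illustration:)

def toy_prg(seed, n_bits, digits=8):
    # Toy pseudorandom generator of the kind sketched above: repeatedly
    # divide the current seed by 7 (in fixed point), keep some digits of
    # the result as the new seed, and emit one bit per iteration.
    bits = []
    for _ in range(n_bits):
        q = (seed * 10**digits) // 7    # divide the seed by 7
        seed = q % 10**digits or 1      # take some of the digits as new seed
        bits.append(seed % 2)           # emit the lowest digit's parity
    return bits

print(toy_prg(seed=123456789, n_bits=32))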

Juergen Schmidhuber

http://www.idsia.ch/~juergen/
http://www.idsia.ch/~juergen/everything/html.html
http://www.idsia.ch/~juergen/toesv2/






Re: Predictions & duplications

2001-10-11 Thread juergen



   From [EMAIL PROTECTED] :
   [EMAIL PROTECTED] wrote:

So you NEED something additional to explain the ongoing regularity.
You need something like the Speed Prior, which greatly favors regular 
futures over others.
   
   I take issue with this statement. In Occam's Razor I show how any
   observer will expect to see regularities even with the uniform prior
   (comes about because all observers have resource problems,
   incidentally). The speed prior is not necessary for Occam's Razor. It is
   obviously consistent with it though.
  
  First of all: there is _no_ uniform prior on infinitely many things.
  Try to build a uniform prior on the integers. (Tegmark also wrote that
  "... all mathematical structures are a priori given equal statistical
  weight", but of course this does not make much sense because there is
  _no_ way of assigning equal nonvanishing probability to all - infinitely
  many - mathematical structures.)
 
 I don't know why you insist on the prior being a PDF. It is not
 necessary. With the uniform prior, all finite sets have vanishing
 probability. However, all finite descriptions correspond to infinite
 sets, and these infinite sets have non-zero probability.

Huh? A PDF? You mean a probability density function? On a continuous set? 
No! I am talking about probability distributions on describable objects. 
On things you can program. 

Anyway, you write "... observer will expect to see regularities even with 
the uniform prior", but that clearly cannot be true.

  There is at best a uniform measure on _beginnings_ of strings. Then
  strings of equal size have equal measure.
  
  But then regular futures (represented as strings) are just as likely
  as irregular ones. Therefore I cannot understand the comment "(comes
  about because all observers have resource problems, incidentally)".
 
 Since you've obviously barked up the wrong tree here, it's a little
 hard to know where to start. Once you understand that each observer
 must equivalence an infinite number of descriptions due to the
 boundedness of its resources, it becomes fairly obvious that the
 smaller, simpler descriptions correspond to larger equivalence classes
 (hence higher probability).

Maybe you should write down formally what you mean? Which resource bounds?
On which machine? What exactly do you mean by simple? Are you just
referring to the traditional Solomonoff-Levin measure and the associated
old Occam's razor theorems, or do you mean something else?
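
(To fix notation: by the Solomonoff-Levin measure I mean the usual
thing. For a universal monotone machine U,

  M(x) = \sum_{p \,:\, U(p) \text{ starts with } x} 2^{-|p|},

where the sum runs over minimal such program prefixes p, so that shorter
programs contribute exponentially more weight.)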

  Of course, alternative priors lead to alternative variants of Occam's
  razor.  That has been known for a long time - formal versions of Occam's
  razor go at least back to Solomonoff, 1964.  The big question really
  is: which prior is plausible? The most general priors we can discuss are
  those computable in the limit, as in the algorithmic TOE paper. They do not
  allow for computable optimal prediction though. But the more restrictive
  Speed Prior does, and seems plausible from any programmer's point of view.
  
   The interesting thing is of course whether it is possible to
   experimentally distinguish between the speed prior and the uniform
   prior, and it is not at all clear to me that it is possible to
   distinguish between these cases.
  
  I suggest looking at experimental data that seems to have Gaussian
  randomness in it, such as interference patterns in slit experiments.
  The Speed Prior suggests the data cannot be really random, but that a
  fast pseudorandom generator (PRG) is responsible, e.g., one that divides
  some seed by 7 and takes some of the resulting digits as the new seed, or
  whatever. So it's verifiable - we just have to discover the PRG method.
  
 
 I can't remember which incompleteness result it is, but it is
 impossible to prove the randomness of any sequence. In order to
 falsify your theory one would need to prove a sequence to be
 random. However, of course if all known sequences are provably
 pseudo-random (i.e. compressible), then this would constitute pretty
 good evidence. However, this is a tall order, as there is no algorithm
 for generating the compression behind an arbitrary sequence.

You are talking about falsifiability. I am talking about verifiability. Sure, you
cannot prove randomness. But that's not the point of any inductive
science. The point is to find regularities if there are any. Occam's
razor encourages us to search for regularity, even when we do not know
whether there is any. Maybe some PhD student tomorrow will discover a
simple PRG of the kind I mentioned, and get famous.
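
(As a crude illustration of such a regularity hunt, in Python: even an
off-the-shelf compressor separates a sequence with a simple hidden law
from one that merely looks random; zlib here is just a stand-in for a
serious analysis.)

import os
import zlib

regular = bytes(i % 7 for i in range(10_000))   # simple hidden regularity
noisy = os.urandom(10_000)                      # OS randomness, for contrast

for name, data in [("regular", regular), ("noisy", noisy)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(name, "compresses to", round(ratio, 3), "of its original size")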

It is important to see that Popper's popular and frequently cited and
overrated concept of falsifiability does not really help much to explain
what inductive science such as physics is all about. E.g., physicists
accept Everett's ideas although most of his postulated parallel universes
will remain inaccessible forever, and therefore are _not_ falsifiable.
Clearly, what's convincing about the multiverse theory is its simplicity,
not its