Re: Belief Statements

2005-02-01 Thread Hal Ruhl
I would like to offer a resolution to my issue with my (2) by proposing
that choice is the essential variable that allows the dynamic of an
evolving Something over kernels within the All to be inconsistent with
its history.

This would make both time and choice genuine features rather than mere
appearances.

Hal



RE: Belief Statements

2005-02-01 Thread Bruno Marchal
At 12:51 29/01/05 -0800, Hal Finney wrote:
 On 28 Jan 2005 Hal Finney wrote:
 I suggest that the answer is that accidental instantiations only
 contribute an infinitesimal amount, compared to the contributions of
 universes like ours.
Stathis Papaioannou replied:
 I don't understand this conclusion. A lengthy piece of code (whether it
 represents a moment of consciousness or anything else) is certainly less
 likely to be accidentally implemented on some random computer than on the
 computer running the original software. But surely the opposite is the case
 if you allow that all possible computer programs run simply by virtue of
 their existence as mathematical objects. For every program running on a
 biological or electronic computer, there must be infinitely many exact
 analogues and every minor and major variation thereof running out there in
 Platonia.

I'm afraid I don't understand your argument here.  I am using the
Schmidhuber concept that the measure of a program is related to its size
and/or information complexity: that shorter (and simpler) programs have
greater measure than longer ones.  Do you agree with that, or are you
challenging that view?

My point was then that we can imagine a short program that can naturally
evolve consciousness, whereas to create consciousness artificially
or arbitrarily, without a course of natural evolution, requires a huge
number of bits to specify the conscious entity in its entirety.

You mention infinity; are you saying that there is no meaningful
difference between the measure of programs, because each one has an
infinite number of analogs?  Could you explain that concept in more
detail?

I am not sure that I understand what you do with that measure on programs.
I prefer to look at infinite coin generations (that is, infinitely
reiterated self-duplications) and to put a measure on infinite sets of
alternatives. Those infinite sets of relative alternatives *are* themselves
generated by simple programs (like the UD). Now we cannot know in which
computational history we belong; more exactly, we belong to an infinity of
computational histories (indistinguishable up to now). (They could all be
repetitions of your simple program.)

But to make an infinitely correct prediction we should average over all
computational histories going through our states.

Your measure could explain why simple and short subroutines persist
everywhere, but what we must do is extract the actual measure, the one
apparently given by QM, from an internal measure on all relatively
consistent continuations of our (unknown!) probable computational state.
This is independent of the fact that some short programs could play the
role of initiators of something persisting. Perhaps a quantum dovetailer?
But to proceed by taking comp seriously, this too should be justified from
within.

Searching for a measure on the computational histories instead of on the
programs can not only be justified by thought experiments, but can be
defined neatly mathematically. Also, a modern way of talking about the
Many Worlds is in terms of relatively consistent histories. But the
histories emerge from within. This too must be taken into account. It can
change the logic. (And it actually changes it, according to the Löbian
machine.)

Bruno
http://iridia.ulb.ac.be/~marchal/



RE: Belief Statements

2005-02-01 Thread Hal Finney
Bruno writes:
 I am not sure that I understand what you do with that measure on programs.
 I prefer to look at infinite coin generations (that is, infinitely
 reiterated self-duplications) and to put a measure on infinite sets of
 alternatives. Those infinite sets of relative alternatives *are*
 themselves generated by simple programs (like the UD).

Here is how I approach it, based on Schmidhuber.  Suppose we pick a model
of computation based on a particular Universal Turing Machine (UTM).
Imagine this model being given all possible input tapes.  There are an
uncountably infinite number of such tapes, but on any given tape only
a finite length will actually be used (i.e. the probability of using
an infinite number of bits of the tape is zero).  This means that any
program which runs is of only finite size, yet occurs on an infinite
number of tapes.  The fraction of the tapes which hold a given program
is proportional to 1 over 2^(program length), if they are binary tapes.
This is considered the measure of the given program.

An equivalent way to think of it is to imagine the UTM being fed with
a tape created by coin flips.  Now the probability that it will run a
given program is its measure, and again it will be proportional to 1
over 2^(program length).  I don't know whether this is what you mean
by infinite coin generations but it sounds similar.
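A quick simulation illustrates the coin-flip picture. The 5-bit string below is an arbitrary stand-in for a program's encoding, not a real UTM program; the point is only that a fair-coin tape begins with any fixed L-bit string with probability 2^-L:

```python
# Monte Carlo sketch: feed a UTM a tape of fair coin flips and ask how
# often the tape begins with a given program's bit string. The "program"
# here is a made-up 5-bit string, not a genuine UTM encoding.
import random

random.seed(0)

program = "10110"  # hypothetical 5-bit program prefix
trials = 200_000

def random_prefix(n):
    """First n bits of a tape generated by fair coin flips."""
    return "".join(random.choice("01") for _ in range(n))

hits = sum(random_prefix(len(program)) == program for _ in range(trials))
estimate = hits / trials
expected = 2 ** -len(program)  # 1/32 = 0.03125

print(f"estimated measure: {estimate:.4f}, expected: {expected:.4f}")
```

With 200,000 trials the estimate lands within a fraction of a percent of 2^-5, matching the claim that a program's measure equals the chance that random coin flips produce its bits.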

I believe you can get the same concept of measure by using the Universal
Dovetailer (UD) but I don't see it as necessary or particularly helpful
to invoke this step.  To me it seems simpler just to imagine all possible
programs being run, without having to also imagine the operating system
which runs them all on a time-sharing computer, which is what the UD
amounts to.
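For what it's worth, the time-sharing interleaving that the UD amounts to can be sketched in a few lines. Toy Python generators stand in for an actual enumeration of programs; a real UD would enumerate all Turing machine programs:

```python
# Toy dovetailer: interleave execution steps of a growing set of programs,
# so every program, even a non-halting one, receives unboundedly many steps.
def make_program(i):
    """Toy 'program' i: yields (program id, step counter) forever."""
    step = 0
    while True:
        yield (i, step)
        step += 1

def dovetail(rounds):
    """In round r, start program r, then run programs 0..r one step each."""
    programs = []
    trace = []
    for r in range(rounds):
        programs.append(make_program(r))
        for prog in programs:
            trace.append(next(prog))
    return trace

print(dovetail(3))
# → [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
# Round 0 runs program 0; round 1 runs programs 0,1; round 2 runs 0,1,2.
```

The triangular schedule is the whole trick: no single program can block the others, yet each one's step count grows without bound.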

 Now we cannot know in which computational history we belong,
 or more exactly we belong to an infinity of computational histories
 (undistinguishable up to now). (It could be all the repetition of your 
 simple program)

And by repetitions of your simple program I think you mean the fact
that there are an infinite number of tapes which have the same prefix
(the same starting bits) and which all therefore run the same program,
if it fits in that prefix.  This is the basic reason why shorter programs
have greater measure than longer ones: a larger fraction of the tapes
share a given short prefix than a long one.
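To make the prefix counting concrete, here is a minimal sketch; the bit lengths are invented for illustration, not real UTM encodings:

```python
# Prefix counting: a program occupying the first L bits of a tape leaves
# the remaining bits free, so it appears on a fraction 2^-L of all
# infinite binary tapes. The lengths below are hypothetical.
from fractions import Fraction

def measure(program_length_bits):
    """Fraction of binary tapes whose prefix is a given L-bit program."""
    return Fraction(1, 2 ** program_length_bits)

short, longer = 10, 20  # hypothetical program lengths in bits
print(measure(short))                 # → 1/1024
print(measure(longer))                # → 1/1048576
print(measure(short) / measure(longer))  # → 1024: the shorter one's advantage
```

Each extra bit of program length halves the measure, so a 10-bit difference already yields a factor of 2^10 = 1024.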

It's also possible, as you imply, that your consciousness is instantiated
in multiple completely different programs.  For example, we live in a
program which pretty straightforwardly implements the universe we see;
but we also live in a program which implements a very different universe,
in which aliens exist who run artificial life experiments, and we are
one of those experiments.  We also live in programs which just happen to
simulate moments of our consciousness, purely through random chance.

However, my guess is that the great majority of our measure will lie
in just one program.  I suspect that that program will be quite simple,
and that all the other programs (such as the one with the aliens running
alife experiments) will be considerably more complex.  The simplest case
is just what we see, and that is where most of our measure comes from.
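As back-of-the-envelope arithmetic (the bit counts here are entirely invented), even a modest gap in program length makes the simpler program's measure overwhelmingly dominant:

```python
# Illustrative arithmetic for why one simple program can dominate the
# measure. Both bit counts are hypothetical stand-ins.
import math

simple_bits = 1_000       # a "straightforward physics" program
complex_bits = 1_000_000  # an aliens-running-alife-experiments program

# Measure ratio = 2^-simple_bits / 2^-complex_bits = 2^(complex_bits - simple_bits)
log2_ratio = complex_bits - simple_bits
log10_ratio = log2_ratio * math.log10(2)

print(f"measure ratio = 2^{log2_ratio} ≈ 10^{log10_ratio:.0f}")
# → measure ratio = 2^999000 ≈ 10^300729
```

On these (made-up) numbers, the contribution of the more complex program is suppressed by hundreds of thousands of orders of magnitude, which is the sense in which "most of our measure" would come from the simplest program.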

 But to make an infinitely correct prediction we should average over all
 computational histories going through our states.

Yes, I agree, although as I say my guess is that we will be close enough
just by taking things as we see them, and in fact it may well be that
the corrections from considering bizarre computational histories will
be so tiny as to be unmeasurable in practice.

 Your measure could explain why simple and short subroutines persist
 everywhere, but what we must do is extract the actual measure, the one
 apparently given by QM, from an internal measure on all relatively
 consistent continuations of our (unknown!) probable computational state.
 This is independent of the fact that some short programs could play the
 role of initiators of something persisting. Perhaps a quantum dovetailer?
 But to proceed by taking comp seriously, this too should be justified
 from within. Searching for a measure on the computational histories
 instead of on the programs can not only be justified by thought
 experiments, but can be defined neatly mathematically. Also, a modern
 way of talking about the Many Worlds is in terms of relatively
 consistent histories. But the histories emerge from within. This too
 must be taken into account. It can change the logic. (And it actually
 changes it, according to the Löbian machine.)

I'm losing you here.

Hal Finney