At 10:19 01/02/05 -0800, Hal Finney wrote:
Bruno writes:
> I am not sure that I understand what you do with that measure on programs.
> I prefer to look at infinite coin generations (that is, infinitely
> reiterated self-duplications) and put a measure on infinite sets of
> alternatives. Those infinite sets of relative alternatives *are*
> themselves generated by simple programs (like the UD).
Here is how I approach it, based on Schmidhuber.
OK. But remember that Schmidhuber completely dismisses the distinction between first and third person points of view, so his approach cannot be used to explain the laws of mind, the laws of "matter", and the relation between them. I have explained this before.
Suppose we pick a model of computation based on a particular Universal Turing Machine (UTM).
All right. And with Church's thesis, Schmidhuber is right to invoke the compiler theorem when he argues that such a choice is arbitrary.
Imagine this model being given all possible input tapes.
At once? I cannot imagine that, except in the form of a UD. This follows from the two diagonalization posts to this list referred to in my URL. It is really the closure of the set of partial recursive functions which makes the whole of comp definable. I will come back to the diagonalisations. They are the pillar of the whole construction I try to describe (and also of a very large part of theoretical computer science, including Li and Vitanyi).
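A minimal sketch (my illustration, not part of the original exchange) of the classical diagonalization alluded to here: against any purported effective enumeration of total functions, the diagonal function g(n) = f_n(n) + 1 escapes the list. The finite list `fs` below is a hypothetical stand-in for such an enumeration.

```python
# Toy diagonalization: the finite list `fs` stands in for a purported
# effective enumeration f_0, f_1, ... of total computable functions.
fs = [lambda n: 0, lambda n: n, lambda n: n * n, lambda n: 2 ** n]

def g(n):
    # The diagonal function: it differs from f_n at input n,
    # so g cannot appear anywhere in the enumeration.
    return fs[n](n) + 1

# g escapes every listed function at the diagonal point.
for i in range(len(fs)):
    assert g(i) != fs[i](i)
```

This is why the set of *total* computable functions cannot be effectively enumerated, while the closure to *partial* recursive functions survives the diagonal.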
There are an uncountably infinite number of such tapes, but on any given tape only a finite length will actually be used (i.e. the probability of using an infinite number of bits of the tape is zero).
To *define* (only) the program. But during arbitrary runs of even very short programs, all the tape will be used, in the sense that the needed part of the tape will grow in an unbounded way, producing deep (in Bennett's sense) large strings. Those strings are "absolutely" (Kolmogorov-Chaitin) improbable, yet each is highly probable relative to its slight variants (like the works of Shakespeare, which are "coin"-improbable but reflect determined universal lobian anxieties, to be short). The only measure I can make sense of is the measure, relative to a state, of the histories (= computations, from some 1 or 3 point of view) going through that state.
This means that any program which runs is only a finite size, yet occurs on an infinite number of tapes.
This is ambiguous to me. Where on the tapes? In the company of other programs? Finite, recursive, or recursively enumerable sets of programs?
The fraction of the tapes which holds a given program is proportional to 1 over 2^(program length), if they are binary tapes.
Yes, but you postulate an infinite random structure at the start. What is that? Taking the inside view into account gives cheaply a strong form of randomness (the first-person comp indeterminacy).
This is considered the measure of the given program. An equivalent way to think of it is to imagine the UTM being fed with a tape created by coin flips. Now the probability that it will run a given program is its measure, and again it will be proportional to 1 over 2^(program length). I don't know whether this is what you mean by "infinite coin generations" but it sounds similar.
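To make the coin-flip picture concrete, here is a small Monte Carlo sketch (my illustration; the bit string `"101"` is an arbitrary stand-in for a program's code): the probability that a coin-flip tape begins with a given L-bit program is 2^-L.

```python
import random

def estimated_measure(program, trials=200_000, seed=1):
    # program: a bit string like "101"; a coin-flip tape "runs" it
    # exactly when the tape's first len(program) bits match it.
    rng = random.Random(seed)
    L = len(program)
    hits = 0
    for _ in range(trials):
        prefix = "".join(rng.choice("01") for _ in range(L))
        if prefix == program:
            hits += 1
    return hits / trials

# The estimate should be close to 2 ** -len(program) = 0.125 here.
print(estimated_measure("101"))
```

The content of the program does not matter, only its length: every 3-bit program claims the same 1/8 fraction of the random tapes.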
In some respects: I just show how comp gives sense to infinite coin generation by iterating self-duplications. In *that* situation (which is only partial relative to the UD) we have a "random noise".
And if we duplicate populations of observers we get a locally "third person" (in appearance) observable "random noise", like the one quantum physicists *seem* to observe (but Many-Worlders know better).
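A sketch of the iterated self-duplication (my illustration, with 0 and 1 standing for the two continuations of each duplication): after n duplications every n-bit diary occurs exactly once, so from the first-person view the recorded outcome is indistinguishable from n fair coin flips.

```python
from itertools import product

def first_person_diaries(n):
    # After n self-duplications there are 2**n continuations; each
    # copy's diary of outcomes is one n-bit string.
    return ["".join(bits) for bits in product("01", repeat=n)]

diaries = first_person_diaries(4)
assert len(diaries) == 2 ** 4        # every continuation occurs once
assert len(set(diaries)) == 2 ** 4   # and they are all pairwise distinct
# Almost all long diaries are incompressible, i.e. look like random noise,
# even though the program generating all of them is trivially short.
```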
I believe you can get the same concept of measure by using the Universal Dovetailer (UD) but I don't see it as necessary or particularly helpful to invoke this step. To me it seems simpler just to imagine all possible programs being run, without having to also imagine the operating system which runs them all on a time-sharing computer, which is what the UD amounts to.
The UD is just a program among others but it can be shown that it *is* the simplest program generating the most complex histories (actually all). But the complexity is judged from inside, by those "self-aware" programs generated in the many computations generated by the UD, and distributed in the whole execution of it (UD*).
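The "time-sharing" at issue can be sketched directly (my illustration; the toy "programs" just yield their own index forever, standing in for arbitrary computations): on each pass the dovetailer starts one new program and gives every already-started program one more step, so every program gets unboundedly many steps.

```python
from itertools import count, islice

def dovetail(program_factories):
    """Interleave countably many computations: on each pass, start one
    new program and give every already-started program one more step."""
    running = []
    for factory in program_factories:
        running.append(factory())
        for gen in running:
            try:
                yield next(gen)
            except StopIteration:
                pass  # finished programs simply stop contributing

def make_program(i):
    def program():
        while True:
            yield i   # toy "program" i: endlessly produce its index
    return program

# First 10 interleaved steps: 0 | 0 1 | 0 1 2 | 0 1 2 3 ...
steps = list(islice(dovetail(make_program(i) for i in count()), 10))
```

No step of any program is ever postponed forever, which is all the UD needs; the interleaving order itself is irrelevant from inside.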
> Now we cannot know in which computational history we belong,
> or more exactly we "belong" to an infinity of computational histories
> (indistinguishable up to now). (It could be all the repetitions of your
> simple program.)
And by "repetition of your simple program" I think you mean the fact that there are an infinite number of tapes which have the same prefix (the same starting bits) and which all therefore run the same program, if it fits in that prefix. This is the basic reason why shorter programs have greater measure than longer ones: a larger fraction of the tapes have a given short prefix than a long one.
That simple program *has* been discovered: it is the Universal Turing Machine, of which the UD is just a "splashed" version. The S K combinators are even simpler.
It's also possible, as you imply, that your consciousness is instantiated in multiple completely different programs. For example, we live in a program which pretty straightforwardly implements the universe we see;
I don't think we see a universe. I am not sure I understand what you mean by "pretty straightforwardly implements the universe we see".
but we also live in a program which implements a very different universe,
I would say we live in a continuum of incredibly similar histories, like all the "universes" where Pluto's weight varies by infinitesimals.
in which aliens exist who run artificial life experiments, and we are one of those experiments.
This does not make sense, given that before an act of observation (which would make it a history) we *are* in front of *all* the alternatives.
We also live in programs which just happen to simulate moments of our consciousness, purely through random chance.
Well, those are run by the UD. Either they give rise to finite computations (measure null), or they give rise to infinities carrying the genuine measure, and then what we are "most probably" living now is no longer attributable to random chance.
However, my guess is that the great majority of our measure will lie in just one program.
Then it is either the UD or the "unitary-preserving UD". But then my reasoning shows that the second must be justified by the first, or comp is false.
I suspect that that program will be quite simple, and that all the other programs (such as the one with the aliens running alife experiments) will be considerably more complex. The simplest case is just what we see, and that is where most of our measure comes from.
Just what we see?
> But to make an infinitely correct prediction we should average over all
> computational histories going through our states.
Yes, I agree, although as I say my guess is that we will be "close enough" just by taking things as we see them,
? (Seeing and measurement are among the most complex phenomena I can think about. They are related to "interaction" and relative (local) irreversibility. I don't take any of that for granted.)
and in fact it may well be that the corrections from considering "bizarre" computational histories will be so tiny as to be unmeasurable in practice.
It should be so! That is what we can study by making our theoretical assumptions clearer. Feynman's explanation of Newton's laws by the quantum sum describes why, in the quantum frame, bizarre computations are "absolutely" more "probable" but then destructively annihilate one another, so that the shorter "normal" paths get the higher probability amplitude. We need to justify something similar to get rid of "lambs eating wolves" once we take seriously the idea that there is a level at which we are Turing emulable. We must justify why quantum computations seem to win in first-person actual living experience.
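The destructive annihilation of bizarre paths can be illustrated numerically (my sketch, not Feynman's actual derivation): unit-amplitude contributions exp(iφ) reinforce when the phase is nearly stationary and cancel when it varies wildly.

```python
import cmath
import random

def path_sum(phases):
    # Coherent sum of unit-amplitude contributions exp(i * phase).
    return sum(cmath.exp(1j * p) for p in phases)

# "Classical" bundle: nearly stationary phase -> contributions reinforce.
classical = path_sum([0.01 * k for k in range(100)])

# "Bizarre" bundle: wildly varying phase -> contributions mostly cancel.
rng = random.Random(0)
bizarre = path_sum([rng.uniform(0, 2 * cmath.pi) for _ in range(100)])

print(abs(classical))  # close to 100: the 100 paths nearly align
print(abs(bizarre))    # far smaller: a 2-D random walk of unit steps
```

Although the "bizarre" phases are individually just as legitimate, their coherent sum is of order sqrt(N) rather than N, which is the toy analogue of normal paths dominating the amplitude.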
> Your measure could explain why simple and short subroutine persists
> everywhere but what we must do is to extract the "actual measure", the one
> apparently given by QM, from an internal measure on all relatively
> consistent continuations of our (unknown!) probable computational state.
> This is independent of the fact that some short programs could play the role
> of initiators of something persisting. Perhaps a quantum dovetailer? But to
> proceed by taking comp seriously, this too should be justified from within.
> Searching for a measure on the computational histories instead of the
> programs can not only be justified by thought experiments, but can
> be defined neatly mathematically. Also, a "modern" way of talking about the
> Many Worlds is in terms of relative consistent histories.
> But the histories emerge from within. This too must be taken into account.
> It can change the logic. (And actually changes it according to the lobian
I'm losing you here.
Sorry. I am just abruptly summing up years of work. Humans seem not yet quite able to listen to each other on this planet, so it is perhaps premature to draw attention to the fact that those *machines* which are able to prove enough theorems in elementary arithmetic are clever enough to infer, by self-introspection, the geometry of the set of computational continuations they can "normally" expect. That's the comp "physics".
This makes the comp hypothesis eventually testable/refutable, just by comparing the comp physics with the empirically-inspired physics.
But listening to machines is not much en vogue, is it?
Well, Palestinians and Iraqis have voted... There is some hope... in the long run.