I have been guilty of responding a little too quickly to your posts.
I want to focus just on the following exchange about the
Universal dovetailer, and put aside questions of ontology, measure,
induction, the anthropic principle, etc.
On Sun, Oct 16, 2011 at 04:51:20PM +0200, Bruno Marchal wrote:
I know there are only a countable number of programs. Are there
only a countable number of histories too? Or a continuum of
histories? I did think the latter (and you seemed to agree), but I am
influenced by the continuum of histories available in the "no
information" ensemble (aka "Nothing").
It is a priori a continuum, due to the UD's dovetailing on the
real inputs to programs.
IIUC, the programs dovetailed by the UD do not take inputs.
Why? By the SMN theorem this would not be important, but to avoid
its use I always describe the dovetailing as being done on one-input
programs:

For all i, j, k:
    compute the first k steps of phi_i(j)

(and thus all programs dovetailed by the UD have an input j).
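That triple loop can be made concrete. Here is a minimal Python sketch (an illustration under my own assumptions, not Bruno's exact construction): enumerate the diagonals i + j + k = n, so that every triple (i, j, k), and hence every "first k steps of phi_i(j)", is reached after finitely many steps.

```python
from itertools import count, islice

def dovetail_triples():
    """Yield every triple (i, j, k) exactly once by walking the
    diagonals i + j + k = n for n = 0, 1, 2, ...  A UD would run
    the first k steps of phi_i(j) for each yielded triple."""
    for n in count(0):
        for i in range(n + 1):
            for j in range(n + 1 - i):
                yield (i, j, n - i - j)

# Every triple appears after finitely many steps of the enumeration:
first_ten = list(islice(dovetail_triples(), 10))
```

Because each diagonal is finite, any fixed (i, j, k) occurs after finitely many yields, which is exactly what dovetailing requires.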
The UD has no input, but the programs executed by the UD have inputs.
OK - but this is equivalent to dovetailing over all zero-input programs
of the form \psi_k() = \phi_i(j), where k is given by the Cantor
pairing function of (i, j).
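For concreteness, the Cantor pairing function and its inverse can be sketched as follows (a standard construction, not tied to any particular UD implementation):

```python
def cantor_pair(i, j):
    """Bijection N x N -> N: k = (i + j)(i + j + 1)/2 + j."""
    return (i + j) * (i + j + 1) // 2 + j

def cantor_unpair(k):
    """Inverse: recover (i, j) from k."""
    # Find the diagonal s = i + j that contains k.
    s = 0
    while (s + 1) * (s + 2) // 2 <= k:
        s += 1
    j = k - s * (s + 1) // 2
    return s - j, j

# Round trip, as in psi_k() = phi_i(j) with k = cantor_pair(i, j):
assert cantor_unpair(cantor_pair(3, 5)) == (3, 5)
```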
No matter, but there's still only a countable number of machines.
You need to use the SMN theorem on phi_u(<i,j>). But your conclusion
holds: unless you take some no-comp notion of 'machines', machines are
always countable. Their histories, their semantics, and their
epistemologies are not.
I'm not sure what you mean by random inputs.
The exact definition of random does not matter. They all work in
this context. You can choose the algorithmic definition of Chaitin,
or my own favorite definition, where a random sequence is an
arbitrary sequence. With this last definition, my favorite example
of a random sequence is the sequence 111111111.... (an infinity of "1"s).
The UD dovetails on all inputs, but the dovetailing is on the non-random
answers given by the programs on those possible arbitrary inputs.
Sorry - I know what you mean by random - it's the inputs part that is
confusing me (see above).
By dovetailing on the reals, which is 3-equivalent with dovetailing
on larger and larger arbitrary finite inputs, there is a sense in which,
from their 1-views, the machines are confronted with the infinite
bitstrings (a continuum), but only as input to some machine, unless
our substitution level is infinitely low, as it would be if we needed
to be conscious of the exact real position of some particles, in
which case our bodies would be part of the oracle (an infinite
bitstring). This gives a UD* model of NOT being a machine. Comp is
consistent with us, or different creatures, not being machines, a bit
like PA is consistent with the provability of 0=1 (but not with 0=1
itself; for the machine, '0=1' is quite different from B'0=1').
Schmidhuber's description of the UD in his 1997 paper is clear. His
dovetailer runs all zero input programs. To be more precise, he
dovetails a universal machine on all finite strings (or equivalently
all strings with a finite number of '1' bits). In this state of
affairs, there can only ever be a countable number of universes.
In your Brussels thesis, on page 11, you describe the UD. You start off
by limiting your programs to no-input programs ("sans entrées"). Then
you argue that the UD must also be dovetailing all one-input programs,
n-input programs, etc., by virtue of, e.g., a LISP interpreter being
written in FORTRAN.
Fair enough, but whilst it is possible to convert a one-input program
into a zero-input program by concatenating the program and the input
(with the possible addition of a prefix that tells the UTM where the
program ends and the data starts), by dovetailing over all zero-input
programs one is not actually dovetailing over the reals. One cannot
say one is running all programs with random oracles - the "oracles"
can at best be simply the output of some zero-input machine.
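The concatenation-with-prefix idea can be illustrated with a toy self-delimiting encoding. This sketch uses a unary length prefix, an assumed scheme chosen for simplicity rather than any particular UTM's convention:

```python
def encode(program: bytes, data: bytes) -> bytes:
    """Concatenate program and input into a single zero-input string.
    A unary prefix (len(program) '1' characters, then a '0') tells the
    decoder where the program ends and the data begins."""
    prefix = b"1" * len(program) + b"0"
    return prefix + program + data

def decode(s: bytes):
    """Recover (program, data) from the concatenated string."""
    n = s.index(b"0")     # position of the prefix terminator = |program|
    body = s[n + 1:]
    return body[:n], body[n:]
```

The first '0' in the string is always the prefix terminator, since every character before it is a '1', so the split is unambiguous whatever bytes the program itself contains.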
However, just recently, you introduced a new dovetailer, which does
dovetail over the reals. When program i reads bit k of its input, you
split the program into two instances and execute both, one with the
bit set to '0' and the other with it set to '1'.
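A toy sketch of that splitting, where running a program on a finite bit-prefix either halts or asks for one more bit (the `run` callback is a hypothetical stand-in for a real interpreter):

```python
def split_on_bits(run, max_depth):
    """Toy sketch of dovetailing over the reals by forking on input bits.
    run(prefix) executes one program on the finite bit-prefix read so
    far and returns True if it asks for another input bit.  Each request
    splits the execution into a '0' branch and a '1' branch, so after
    max_depth levels every bit-prefix of that length has been explored."""
    frontier = [()]                    # live executions, keyed by bits read
    for _ in range(max_depth):
        next_frontier = []
        for prefix in frontier:
            if run(prefix):            # program asks for one more bit
                next_frontier.append(prefix + (0,))
                next_frontier.append(prefix + (1,))
        frontier = next_frontier
    return frontier

# A program that always asks for another bit explores all 2**3 prefixes:
leaves = split_on_bits(lambda prefix: True, 3)
```

The frontier doubles at each level for a program that keeps reading, which is the sense in which this dovetailer sweeps out all infinite bitstrings in the limit.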
This bit-splitting dovetailer, ISTM, is a completely different, and
more wonderful, beast than the UD described in your Brussels thesis or
Schmidhuber's '97 paper. This latter beast must truly give rise to a
continuum of histories, due to the random oracles you were talking about.
All UDs do that. It is always the same beast.
A computation of phi_i(x, y) can be emulated by a dovetailing on
infinitely many programs parametrized with k: phi_i(x, k), as you
know. Likewise, you can emulate a finite program having an infinite
real oracle by infinitely many programs run on finite approximations
of that oracle. If such a phi_i needs the 1000000th digit, it will
crash, or do whatever it can do according to its instructions, but
sooner or later the UD will emulate it on the "right" real portion of
its oracle.
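That "crash, then retry on a longer approximation" behaviour can be sketched as follows (a toy model under my own assumptions: `real_digits` is an assumed digit source standing in for the oracle):

```python
class OracleExhausted(Exception):
    """Raised when a program asks for a digit beyond its finite prefix."""

def make_oracle(prefix):
    """Wrap a finite digit-prefix as an oracle that 'crashes' past its end."""
    def oracle(n):
        if n >= len(prefix):
            raise OracleExhausted(n)   # the program "crashes" here
        return prefix[n]
    return oracle

def emulate_with_growing_prefixes(program, real_digits, max_len):
    """Instead of one program with an infinite oracle, try the same
    program against longer and longer finite prefixes of the real.
    Runs that need a missing digit crash; sooner or later a long
    enough prefix lets the run complete."""
    for n in range(max_len + 1):
        try:
            prefix = [real_digits(k) for k in range(n)]
            return n, program(make_oracle(prefix))
        except OracleExhausted:
            continue
    return None

# A program that needs digits 0..4 completes once the prefix has length 5:
result = emulate_with_growing_prefixes(
    lambda oracle: sum(oracle(k) for k in range(5)),
    lambda k: k % 2,                   # the "real" 0, 1, 0, 1, 0, ...
    10,
)
```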
From a third person point of view, everything is countable, but from
the points of view of the emulated machines, their "futures" might
depend on the "solution" provided by some oracle, in which case a
machine will survive on the emulation of the programs with the correct
oracle, that is, the correct sequence of 1s and 0s appearing as I
described above. Whatever is needed for the program, the UD does add
inputs, and does make those inputs grow without bound, and this, by
the first person invariance for its "position" in UD* (or in
arithmetic), will count as a real oracle and as distinguished
histories: so the UD does dovetail on machines with oracles.
I am wondering if this is the heart of the disagreement you had with
Schmidhuber 10 years ago, about (amongst other things) the cardinality
of the histories.
The main disagreement was on the first person/third person
distinction, and the existence of the first person indeterminacy.
My idea of the "no information" ensemble (aka Nothing in my book) was
very strongly influenced by that discussion you had with
Schmidhuber. Yet, until now, I would say I had the misconception of
the dovetailer running just the no-input programs.
Usually I explain the UD with the one-input programs, but explain that
the SMN theorem ensures it will dovetail on all programs with any
number of inputs, including infinite streams, oracles, etc.
Let's leave the discussion of the universal prior to another post. In a
nutshell, though, no matter what prior distribution you put on the "no
information" ensemble, an observer of that ensemble will always see
the Solomonoff-Levin distribution, or universal prior.
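For reference, the Solomonoff-Levin distribution mentioned here is standardly defined over a prefix-free universal machine U, each halting program p contributing weight 2^{-|p|} to the string it outputs:

```latex
m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|}
```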
I don't think it makes sense to use a universal prior. That would make
sense if we suppose there are computable universes, and if we try to
measure the probability we are in such structure. This is typical of
Schmidhuber's approach, which is still quite similar to physicalism,
where we conceive observers as belonging to computable universes. Put
in another way, this is typical of using some sort of identity thesis
between a mind and a program. But this does not work once we suppose
that "we" are computable (which is implied by the idea of computable
universe). The problem is that once "we" are computable, "we" become
dispersed on infinities of programs generating our state, and below
our substitution level there will be a "competition" between
infinities of histories/universal machines, so that the real
observable universe, from the point of view of the machine, cannot be
a computable universe, and the physics can only be a statistics on
computational extensions. This will not depend on any prior you put at
the start on the computations.
Schmidhuber, like many, seems to ignore the first person/third person
distinction. The mind-body problem is still under the rug, and he
seems to believe in third person computable universes. But the UDA
explains that this is just not possible: from the 1-view (the ultimate
judge of a physical theory), physics can only emerge as stable and
hopefully "multi-user" dreams, obeying the laws of the machine's
epistemologies (like the self-reference modalities).
Also, Schmidhuber does not seem to see that although "true random"
strings are not computable, and cannot individually be the output of
any program, true randomness exists in the UD, as the infinite
iteration of the self-duplication experience illustrates. Actually the
first person indeterminacy, even when not iterated, is an example of
"true randomness". We cannot generate a random sequence, but we can
generate them all, and from the 1-views corresponding to 3-states
dispersed in UD*, we just cannot avoid them.
Most big advances, even just in physics, like Galileo, Einstein,
Everett, can be seen as a beginning of a 1/3 distinction, but none
seems to realize that, assuming mechanism like almost everybody, a
1-person view becomes a subtle indexical machine view, which has to be
treated with the usual scientific rigor, using computer science.
Assuming mechanism, I don't see how to avoid a serious theory of mind,
like the machine's self-reference theory, which leads to some real
difficulty for non-logicians, as such theories need familiarity with
logic, which is also ignored by most scientists (except logicians).
Unfortunately mainstream scientists still ignore the first person
indeterminacy today, meaning that they just ignore the 1-person /
3-person distinction, not to mention the mind-body problem (and, to be
sure, I still don't know if this comes from a genuine lack of
understanding, or if it is still the problem of acknowledging my work,
which would be a notoriety problem for some).
As I said, I don't know if the problem is really genuine, for the
1-indeterminacy, which is rather a simple notion. Some researchers told
me that it is a problem to cite my name, but not so much to use my
work, if they change the vocabulary.
What a pity, what a waste of time. It is less tragic than the
illegality of cannabis and drugs, but it seems clear that human
corporatism leads to an accumulation of human catastrophes
everywhere. Corporatism perverts democracies and academies. This has
unavoidable costs, and money based on lies has no genuine value.
The rule "publish or perish" is also both a science-killing and
human-killing procedure: it creates a redundancy which hides the
interesting results, and it multiplies fake research.
Ranking by the number of citations creates circular loops of people
citing each other, and not much more.
It is nonsense. A researcher who does not find or solve something
should NOT publish, but should not perish either. He should still be
allowed to search. A researcher can be asked to write reports and to
justify the difficulty of his task, but in science, and especially in
fundamental science, findings cannot be ordered(*). OK, I digress :)
Ask any question if I am unclear,
(*) Note that there is a 'slow science' awakening: