On 28 Sep 2011, at 05:44, Pierz wrote:

OK, well I think this and the other responses (notably Jason's) have
brought me a lot closer to grasping the essence of this argument. I
can see that the set of integers is also the set of all possible
information states, and that the difference between that and the UD is
the element of sequential computation. I can also see that my
objection to infinite computational resources and state memory comes
from the 1-p perspective. For me, in the "physical" universe, any
computation is restricted by the laws of matter and must be embedded
in that matter. Now one of the fascinating revelations of the
computational approach to physics is the fact that a quantity such as
position can only be defined to a certain level of precision by the
universe itself because the universe has finite informational
resources at its disposal. This was my objection to the UD. But I can
see that this restriction need not necessarily apply at the 'higher' 3-
p level of the UD's computations. What interests me is the question:
does UDA predict that the 1-p observer will see a universe with such finite informational limits?

To be sure, this is an open problem.
To be sure, this is an open problem for physicists too.
Comp + "Theaetetus" will be refuted if the comp-physics is contradicted by some precise physical fact, not by any physical theory (unless that theory predicts such a precise physical fact).

If it explains why the 1-p observer seems to exist in a
world where there is only a finite number of bits available, despite
existing in a machine with an infinite level of bit resolution, then
that would be a most interesting result. Otherwise, it seems to me to
remain a problem for the theory, or at least a question in need of an
answer, like dark matter in cosmology.

I am going to have to meditate further on arithmetical realism. I
don't believe in objective matter either (it seems refuted by Bell's
Theorem anyway), but a chasm seems to lie between the statement "17 is prime" and "the UDA (Robinson arithmetic) executes all possible programs".

Don't confuse the UD (Universal Dovetailer, a finite program) and the UDA (the UD Argument: the argument that, assuming digital mechanism, physics is in principle a branch of number theory/computer science, in which the UD plays the role of the effective, definable comp ontological realm of *everything*).
Just a vocabulary remark, to avoid possible future confusion.

The problem is one of instantiation. I can conceive of a
universe - a singularity perhaps, with only one bit of information -
in which the statement "17 is prime" can never be made.

Don't confuse the sociological statement "some machine asserts '17 is prime'" with the true fact that 17 is prime, which does not rely on physical universes at all, a priori.

To formulate, i.e. instantiate, 17 requires a certain amount of information.

In some physical theory, yes; but this is not an assumption of the theory here. You cannot refute an argument by adding new assumptions to it.

To say that a program executes, as opposed to saying it merely is implied by a set of theoretical axioms, requires the instantiation of that program.

In Aristotelian metaphysics.
Also, even in platonia, a computation is described by a large (possibly infinite) number of implications.

I suppose this is a restatement of the problem above. Arithmetical realism then would be the postulate that everything implied in arithmetic is actually instantiated.

Not at all. That would be a physicalist revisionist definition of numbers. You need to "instantiate" 17, in some way, to talk about 17, but 17 itself does not need instantiation. With or without any physical universe, 17 remains a prime number.

Now, an instantiation, or emulation, can be defined from the numbers alone. Some numbers are universal (a relative arithmetical property), and we can say that a universal number instantiates 17 (say) if 17 appears in some of its purely arithmetical registers.

To understand the details of this, I can only refer you to some good textbook in computer science. The main theorem for this is the proof that all partial recursive functions can be represented in Robinson arithmetic (Boolos and Jeffrey's book does this very well; Epstein and Carnielli also. References are in my theses).
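To make the idea of a universal number "instantiating" 17 in a register a bit more concrete, here is a toy sketch only: a minimal register-machine interpreter standing in for a universal system. The interpreter, its instruction set, and the little program are my own hypothetical examples, not the arithmetical representation the textbooks construct; the point is just that 17 shows up in a register through nothing but abstract rule-following.

```python
def run(program, registers):
    """Interpret a list of instructions over a dict of registers.
    Instructions: ('inc', r), ('dec', r), ('jnz', r, target), ('halt',)."""
    pc = 0  # program counter
    while True:
        op = program[pc]
        if op[0] == 'inc':
            registers[op[1]] += 1
            pc += 1
        elif op[0] == 'dec':
            registers[op[1]] -= 1
            pc += 1
        elif op[0] == 'jnz':
            # jump to target if the register is non-zero, else fall through
            pc = op[2] if registers[op[1]] else pc + 1
        elif op[0] == 'halt':
            return registers

# A program that computes 17 = 2 + 3 * 5: start r0 at 2,
# then add 3 on each pass of a loop that runs 5 times (counted in r1).
program = [
    ('inc', 0), ('inc', 0),               # r0 = 2
    ('inc', 0), ('inc', 0), ('inc', 0),   # r0 += 3
    ('dec', 1),                           # r1 -= 1
    ('jnz', 1, 2),                        # loop back while r1 != 0
    ('halt',),
]
result = run(program, {0: 0, 1: 5})
print(result[0])  # 17
```

Nothing physical is presupposed here beyond whatever prints the result; the relation "this program leaves 17 in register 0" holds as a purely combinatorial fact about the instruction list.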

It seems to me I can
grant 17 is prime, without granting this instantiation of everything.

Well, that spares you a very long and not so easy piece of work.

I'm also troubled by the statement that you have proved in the AUDA that any Löbian machine can apprehend the UDA. Are not a three-year-old child and a cat Löbian machines? Or indeed my senile father. How can you assert they could comprehend such an abstraction? Either they aren't Löbian machines, or there's a hole in the proof somewhere.

Recently I have extended my spectrum of Löbian machines to the octopus and the jumping spider. I can argue that they have the cognitive ability to get UDA. But they don't have a sufficiently big brain to exploit this, and they don't have the motivation to use diaries, books, and language to generate their infinite "Turing tape memory" like we do. Symptoms of Löbianity are believing in repetitions and noticing them (like believing in a notion of anniversary), or having empathy for another creature, etc. This needs some form of the induction axiom. Robinson arithmetic (and universal machines in general) is not Löbian. Peano Arithmetic is Löbian (it is really just Robinson arithmetic + the induction axiom for the first-order describable formulas).
But this is not important for the reasoning.
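For reference (standard material, not specific to this thread), the induction schema that separates Peano Arithmetic from Robinson arithmetic adds, for each first-order formula φ of the language of arithmetic, one axiom of the form:

```latex
% One induction axiom per first-order formula \varphi(x,\vec{y}),
% with s the successor function:
\forall \vec{y}\,\Bigl[\bigl(\varphi(0,\vec{y}) \;\land\;
  \forall x\,(\varphi(x,\vec{y}) \rightarrow \varphi(s(x),\vec{y}))\bigr)
  \;\rightarrow\; \forall x\,\varphi(x,\vec{y})\Bigr]
```

Robinson arithmetic has the same basic axioms for successor, addition and multiplication but omits this schema entirely, which is why it can be Turing universal without being Löbian.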

Jason mentions the anthropic principle (which of course I'm well
acquainted with) and the idea of the computations which contain
observers. I have read, without following, some of your propositions
involving the Beweisbar predicate and self-referential relations and
what have you. Is that the formalism that is supposed to define which
computations are conscious?

Not really. This is a subtle point. Notions like truth and consciousness are not definable by any machine. But, as with God (or Plotinus' One), we, the machines, can talk about them in an indirect way, by taking some precautions.

Is there a summary somewhere?

It is explained in the second part of the sane04 paper. AUDA is "the interview of the Löbian universal machine".

I am
wondering how consciousness can possibly be an attribute of some
computations and not others,

Let me be precise: consciousness is not an attribute of a computation, but an attribute of a person. Now a person can manifest itself relative to another person once "enough" similar computations are going through the states of the two persons, in some sufficiently cohesive way. The self-reference logics are used to single out the conditions of cohesion (unlike linear logic, or Girard's geometry of interaction, which extract such conditions from symmetry intuitions and proof theory).

and why, if it's a matter of some certain
mathematical properties of the computations, we could not fairly
easily write a conscious algorithm?

It is easy. I have tended to think, since recently, that all universal algorithms are conscious. But their consciousness is disconnected, a bit as if they were born ... in salvialand! And, yes, before doing salvia I would have imposed Löbianity for consciousness, but I am much less sure about that now. Löbianity is more than consciousness: it is self-consciousness. Peano Arithmetic is self-conscious, I think. That is why we can discuss Plotinian theology with them, even without making their soul fall to earth, that is, without implementing them and sharing our long story. Current computers do not yet have long-term memory, nor long-term goals. But I think that PA, the octopus, and the jumping spider (but not worms, and most usual spiders) are as conscious as you and me.

For fun, here is a video illustrating that a jumping spider can do some inductive inference requiring some implicit belief in arithmetical induction (look how she reacts when she looks behind the mirror).


By contrast, here is a typical non-Löbian behavior, from a non-jumping spider (yet jumping, note):


If she does not find food on top of a plant, she is programmed to jump randomly onto another plant and, in case she reaches the ground, to climb the nearest plant. Here there is only a pen, installed perpendicularly on a flat table. She seems to repeat that behavior in a cycle, except for taking some rest.

But the bigger reason why I think jumping spiders are Löbian is that, like cats and dogs, they can bond with you, stare at you, and perhaps even recognize you. But this can be judged only by real interaction with real spiders, not by looking at videos, of course. Still, here is a very cute one:


Surely complexity can't be the
defining feature (at what arbitrary level of complexity does the light
come on?), so it should be a simple matter.

I agree. You don't need more than 10 lines of instruction code for them, at least in a high-level logical language like Prolog, for example.

(Though the proof of
having created consciousness in the program would be a problem!)

It is not a problem. It is an impossibility. You cannot prove that *I* am conscious, can you?

So we have to define consciousness (not necessarily self-awareness, or the awareness of being aware) as a property of numbers per se?

A quasi-definition is the ability of some universal numbers to discover some non-communicable truths by introspection. Consciousness is not much more than the state of believing in some reality.

Sadly when you start to talk about the difficulty of proving that our
histories in the UD are more random than the actual histories we
observe, I can't follow you any more - too much theory I'm unfamiliar
with. I can see however that many (nearly all) of the infinite
computations passing through our aware states will destroy us,

Gosh! I don't see that. ... Ah, you mean in a third person way. OK.

as it
were, so we can never exist in those computations (sort of anthropic
principle). This also suggests a kind of immortality,

OK. This has been a recurrent theme on this list.

the same kind as I propose in a blog post I wrote called the 'cryogenic paradox', in which I argue that there can only be a single observer, a single locus of consciousness underlying all apparently separate consciousnesses, which would really be just different perspectives of this one consciousness.

Nice. I agree with this, although it is not part of the reasoning. But it makes the reasoning and comp fit quite well with some aspects of the salvia experience. Many mystics, including the Greeks, thought in that way. Ramana Maharshi too.

It seems irresistible as a conclusion (from philosophical
arguments quite different to the UDA), and yet also kind of horrific.
Only a sort of state-bound recall barrier prevents us from being aware
that we suffer every fate possible.

Yes. It is a bit frightening. It heals the fear of death, but can expand the possible fear of life.

I agree re academia. From all I can observe, it is a viper's pit. The
ground of accepted truth is fought over as hard as any piece of the
Holy Land, and in this as in all struggles, power matters. It is
hardly the free and unbiased exchange between equal and curious minds!
We are not so different today from the cardinals who refused to look
down Galileo's telescope.

To be sure, Galileo made a big mistake too, in claiming that the Church was wrong and that he was right. He should have simply claimed that his theory was more plausible, and more economical, as the Church asked. But I see and follow your point.

Finally, I despise all theory that makes obscurity a virtue. Compare
Lacan's tedious impenetrability

Lacan was a great "humorist", except that his disciples did not understand the joke, and Lacan fell into the idolatry trap. In some seminars he succeeded in being rather clear, and he said quite genuine things about Gödel's theorem, which is rare. Usually non-logicians say a lot of crap about Gödel's results. Lacan and Hofstadter are rather exceptions here. But I think you are right: some texts of Lacan were voluntarily obscure, and I think the purpose was a real mockery of his audience.

with Einstein's almost childish
simplicity and profundity.


Obscurity is the darkness which merely
clever minds use to cover their nakedness (to invoke the emperor
again). No insult to you, Bruno, intended, this time.

We have to be a little cautious here. Even Einstein said that God was simple, but not that simple (I forget the exact quote). And the unknown is obscure, quasi by definition, and with mechanism we can explain why some parts have to remain obscure. But then this motivates the honest researcher to be even more simple and clear. Obscurity should not be a tool to hide ignorance (or more sinister intentions). Yet obscurity, in some fields, cannot be brushed away by pure will either. That would be a sort of wishful thinking.


On Sep 27, 2:08 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
On 26 Sep 2011, at 04:42, Pierz wrote:

OK, well first of all let me retract any ad hominem remarks that may
have offended you. Call it a rhetorical flourish! I apologise. There
are clearly some theories which require a profound amount of dedicated
learning to understand - such as QFT. I majored in History and
Philosophy of Science and work as a programmer and a writer. I am not
a mathematician - the furthest I took it was first year uni, and I
couldn't integrate to save myself any more. Therefore if the truth of
an argument lies deep within a difficult mathematical proof, chances
are I won't be able to reach it.

That is the reason why I separate UDA from AUDA. Normally UDA can be understood without much math, which does not mean that it is simple, especially step 8 (but the first seven steps already show most of the key results).

AUDA needs a familiarity with logic, which unfortunately is rather rare (only professional logicians seem to have it).

Then my ignorance would hardly constitute a criticism, and so it may be with UDA and my complaint of obscurity.

When I teach UDA orally, the first seven steps are easily understood. They contain most of the key results (indeterminacy, non-locality, the non-cloning theorem, and the reversal physics/theology (say), in case the universe is robust).

Step 8 is intrinsically difficult, and can be done first. A long time ago, I always presented "step 8" (the movie-graph argument) first, and then UDA1-7.

I am still not entirely satisfied myself by the step 8 pedagogy.

On the other hand, it seems to me that ideas about the core
nature of reality can and should be presented in the clearest, most
intelligible language possible.

I have a 700-page version, a 300-page version, a 120-page version, up to sane04, which is about a 20-page version. The long versions were requested by French people, and are written in French.
The interdisciplinary nature of the subject makes it difficult to satisfy everybody. What is simple for a logician is terribly difficult for a physicist. What is obvious for philosophers of mind can make no sense for a logician or a physicist; what is taken for granted by physicists is a total enigma for logicians, etc.

I can't solve QFT equations, but I can
grasp the fundamental ideas of the uncertainty principle, non-
locality, wave-particle duality, decoherence and so on. I'm not
arguing for dumbed-down philosophy, but maximal clarity.

OK. Note that my work has been peer reviewed, and is considered by many as being too clear, which is a problem in a field (theology) which is still taboo (for some Christians, and especially for the atheist version of Christianity). I can appear clear only to people capable of acknowledging that science has not yet decided between Aristotle's and Plato's views of reality. So when I am clear, I can look too provocative to some.

Having said
that, I'm prepared to put effort in to learn something new if I have
misunderstood something.

OK. Nice attitude.

You have misread my tone if you think it indicates bias against your
theory. I have read your paper (at least the UDA part, not the machine interview) several times, carefully, and presented it to my (informal)
philosophy group, because I certainly find it intriguing.

OK. Nice.

I'll admit
that step 8 is where I struggle

Hmm, from your post, it seemed to me that there remains some problem
in UDA1-7.

- it's not well explained in the paper, yet contains all the really sweeping and startling assertions.

When I presented UDA at the ASSC meeting of 1995 (I think), a "famous" philosopher of mind left the room at step 3 (the duplication step). He claimed that we feel we are at both places at once after a self-duplication experience. It was the first time someone told me this. I don't know if he was sincere. It looks as though some people want to believe UDA wrong, and are able to dismiss any step.

The argument about passive devices activated by counterfactual changes in the environment is opaque to me and seems devious - probably defeated in the details of implementation like Maxwell's demon - but that is obviously not a rebuttal. I will take a look at the additional information you've linked to.

OK. Maudlin has found a very similar argument. Mine is simpler.

I can see that you are actually right in asserting that the UDA's
computations are not random,


but I'm not sure that negates the core of
my objection. Actually what the UDA does is produce a bit field
containing every possible arrangement of bits. Is this not correct?

It generates all inputs of all programs, including infinite streams. Those can be considered as random. But what a program does with such input is not random.

I am open to contradiction on this. If it doesn't, then it means it has to be incapable of producing certain patterns of bits, but in principle every possible pattern of bits must be able to be generated.

As inputs, yes. As computation? No.

Now a machine with infinite processing power and infinite state memory
that merely generates random bit sequences would eventually also
generate every possible arrangement of bits. So the UDA and the
ultimate random generator are indistinguishable AFAICS.

Not really. In fact the random inputs might play a role in making it possible to have a measure on the computational histories. It can also entail that the "winning computations" (= those being normal in the Gaussian sense) inherit a random background, which would make other features of the usual (quantum) physics confirm comp. Everett QM makes such a random background unavoidable in any normal branch of the universe, as when we send a beam of electrons prepared in the state (1/sqrt(2))(up + down) to a device measuring them in the {up, down} basis. This should not be a problem, and if it proved to be an insuperable problem, then comp is refuted. I have no problem with that, given that my goal consists in showing that comp is "scientific" in the Popperian sense (refutable).

I think what you are saying is that somehow this computation produces
more pattern and order than a program which simply generates all
possible arrangements of bits. Why? If I were to select at random some algorithm from the set of all possible algorithms, it would be pretty
much noise almost all the time. *Proving* it is noise is of course
impossible, because meaning is a function of context. You've selected
out "the program emulating the Heisenberg matrix of the Milky Way",
but among all the other possible procedures will be a zillion more
that perform this operation, but also add in various other quantities
and computations that render the results useless from a physicist's
point of view. There are certainly all kinds of amazing procedures and unfound discoveries lying deep in the UDA's repertoire of algorithms, but only when we intelligently derive an equation by some other means
(measurements, theory, revision, testing etc) can we find out which
ones are signal and which ones noise.

Suppose that you are currently in state S (which exists by the comp assumption). The UD generates an infinity of computations going through that state. All I say is that your future is determined by all those computations, and by your self-referential abilities. If from this you can prove that your future is more random than the one observed, then you are beginning to refute comp rigorously. But the math part shows that this is not easy to do. In fact the random inputs confer stability on the programs which exploit that randomness, and again, this is the case for some formulations (à la Feynman) of QM.

Fine. But then we can simply dispense with the UD altogether and
gather up its final results,

This does not make any sense. A non-stopping program does not output anything.
OK. I realised after I posted that this was wrong, actually hasty
shorthand for what I was trying to say - didn't have time for an
amendment. By 'results' I mean the machine's state. It seems that for the UDA to work, we have to assume that the simulation has 'finished',
even though from a 3p perspective it never can.

I don't think so. The terminating computations are, on the contrary, rare compared to the non-terminating ones, and so might have a null measure. To "appear" in the UD*, all we need is that some program go through your state, not that a program stop on that state, or output that state.

What I mean is, if the
UDA had just started running, it wouldn't have any complex
representations in its trace yet. And since the UDA exists purely
mathematically, platonically, how can it be subject to time at all?

The UD generates all "times" in relation with its own internal time, which can be defined by the steps of its own computation.
This gives a block mindscape, no more threatening to subjective time or physical time than any physicalist block-universe conception of reality, which in physics is already necessary with special relativity.

The UD has no processing limitations, so any notion of time as a factor can be disregarded. Otherwise you'd have to say that processing an instruction takes t amount of time, and where would such a constant come from?

Just imagine the trace of the UD.
You have many notions of time.
The most basic one is given, as I said, by the number of steps of the UD itself.
Then, for each program generated, you can take the number of steps of that particular program. Those are sub-steps of the preceding ones. If a self-aware creature appears in that particular computation, it will not be aware of the UD steps, but might be aware of the steps of "its own" program.
There are many other notions of time. The subjective time (à la Bergson) is recovered by the logic of knowledge of the self-aware entities themselves, and handled by the logic of self-reference.
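The two notions of time above (global UD steps versus the local sub-steps of each generated program) can be sketched with a toy dovetailer. This is an illustrative sketch only, not the real UD: the two never-halting "programs" are hypothetical stand-ins, and the family of programs is fixed here just to keep the trace finite.

```python
from itertools import count

def dovetail(programs, ud_steps):
    """Run each program-generator one local step per round, round-robin.
    Returns a trace of (ud_step, program_name, local_step, value)."""
    trace = []
    gens = [(name, iter(make())) for name, make in programs]
    local = {name: 0 for name, _ in programs}  # per-program "time"
    ud_step = 0                                # global UD "time"
    while ud_step < ud_steps:
        for name, g in gens:
            if ud_step >= ud_steps:
                break
            value = next(g)          # one local step of this program
            local[name] += 1
            trace.append((ud_step, name, local[name], value))
            ud_step += 1             # one global step of the dovetailer
    return trace

# Two never-halting toy programs: one counts 0,1,2,..., one yields powers of 2.
programs = [
    ("counter", lambda: count(0)),
    ("doubler", lambda: (2 ** n for n in count(0))),
]
for row in dovetail(programs, 6):
    print(row)
```

Each program advances through its own local steps, yet both are interleaved along the single global step-counter: a creature "inside" the counter program would only notice its local steps. The real UD also dovetails on the programs themselves, since there are infinitely many of them.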

The time taken to compute something in the physical world
is a function of the fact that all computation we know of is bound to
the manipulation of physical substrates that are embedded in the
constraints of time, space and energy. Sequentiality in the UDA is
purely conceptual.

Perhaps, but it is better to remain neutral about the primary or non-primary nature of physical time. No physical theory is assumed, beyond the fact that we need some physical reality (but not necessarily a primitive one). If not, you beg the question.

 And because my 1-p moments could be anywhere in
the UD's record of histories, I can't speak about where the UD is up
to in its work 'now', but just have to take it as all somehow 'done',

Right. And your next 1p moment results from the statistical indeterminacy in UD*.

even though it can 'never' be done. I'm granting this, even though it is itself problematic. 'Results' was my clumsy shorthand for the UD's
infinite record of states.


If this is a misunderstanding, I'm sure you'll point it out!

It is correct, but the states are connected. From the 3p description of each computation, they are connected by the program leading to such a computation. From the 1-p view, it is quite different: they are connected by all the programs leading to such states. It is a bit as if there were a competition among infinities of (universal) programs for defining your private 1p history.

Actually I'm not sure why you have to resort to the dovetailing in the
first place. Since you grant your machine infinite computational
resources, why not grant it parallelism? Just to make it a Turing
machine? The Turing machine is just an idea; there's no reason to think the universe (whatever the hell that is) has to be serial in its computations.

The UD is not the universe. To be sure, there is no primary physical universe at all (unless some number conspiracy is at play, which cannot be entirely excluded, but this would mean my brain is the physical universe, which I doubt). Physical reality is defined by the way infinitely many computations define normal and lawful shared realities.
Dovetailing assures that the set of all computations is a well-defined effective set. Parallelism is defined from this. If I postulated parallelism, this would be difficult, and ambiguous. The work relies on Church's thesis, for making "universal" mathematically precise.

The existence of the UD is already a theorem of Peano Arithmetic. Robinson arithmetic *is* a UD.

Huh? You've inverted ontological priority completely. Any form of
arithmetic is a product of human intelligence.

For a logician, a theory is just a number, relative to another number. They exist independently of us, like the number 17 exists independently of us. Humans use richer alphabets, but axiomatizable theories are really machines, or programs, or recursively enumerable sets (this can be made precise by a theorem of Craig).
In AUDA I use Robinson arithmetic as defining the basic ontology. It is just a logician's rendering of a sigma_1 complete theory/machine, that is, a Turing universal machine. Then the richer theories (like the infinitely richer Löbian observers) are simulated by Robinson arithmetic. That is a particularity of comp: the ontology is much less rich than the epistemology of the internal observers, just as the UD is dumber than an infinity of the programs that it will run.

Just because someone has mentally constructed a mathematics with the structure of the UD does not instantiate a UD that actually 'runs' and creates the whole universe.

The expression "whole universe" is ambiguous, and far more complex to define than the elementary arithmetical truth needed.
Also, we had better be agnostic about the primary existence of that universe. Its primary existence is not a scientific fact.
All you need to "believe", to give sense to the comp hyp., is that elementary arithmetical truths are not dependent on humans.
In case you believe that "17 is prime" does depend on humans, then I will ask you to define "human", and to explain the dependence to me in a theory which does not assume its independence. Actually, logicians have proved that this is not possible. Elementary arithmetic, or something equivalent, has to be postulated.

That is a vast mathematical hubris - akin to the way any
person tends to over-apply their dominant metaphors. As a writer it's
very easy to see the universe as a vast story.

Comp implies that physical reality will appear to be deep (very long, perhaps infinitely long) from the internal observers' point of view. To stabilize sharable computations, we need deep computations (in Bennett's sense of depth), and linearity at the bottom, which has already been isolated from the self-reference logics (I skip the nuances so as not to be too long and technical).

As a programmer, I see
algorithms everywhere. But I'm not so inflated as to think it's more
than a metaphor.

The key point here is that if you say "yes to a doctor", he will put a computer in your skull, and this, in case you survive (the comp case), is not a metaphor.
If you want: no digital machine can distinguish a mathematical reality from a primary physical one. And the mathematical definitions of reality by physicists are also given by particular universal machines. Who runs those machines? Comp gives an answer: they are run by the laws of addition and multiplication of numbers, or by the laws of abstraction and application of lambda terms. Eventually, physics is shown not to depend on the choice of the initial universal system. In a sense, physics cheats: it postulates the simplest universal machine that we observe. But comp explains that the physical universe cannot be such a machine, and that if we want to extract both qualia and quanta, we have to derive the physical laws from any universal machine.
I can invent my own logically consistent set of
axioms right here and now, but I wouldn't presume it was anything more
than a set of mental relations.

Don't take the mental granted. Don't take the physical granted.

Oh, and: a proof is only something presented as a proof. You can only say "here is the flaw" (in case you have found one). I guess that is what you did, or thought you did.

That's kind of pedantic. You know what I'm doing.

Unfortunately I don't have time to continue my response/questions now
- I'm amazed and impressed you can find the time for such detailed
responses to random ignorants such as me!

If ever you understand AUDA, you will understand that UDA is understandable by any Löbian universal machine.
The only problem with the "old" humans is that they are not always aware of their millennia-old assumptions/prejudices, especially when they are experts, curiously enough. I like to share my questioning with people having a personal, sincere interest.

I'm more than prepared to
concede my naivete and have my eyes opened to the revelation of UDA.

Lol. You can follow UDA on the entheogen forum. Ah, but I see you just sent a post there too. Good. Ask there, because I don't want to bore the people of the everything list too much with an nth explanation of UDA. Unless others insist, I prefer to link people to the UDA threads of the entheogen forum.

On the other hand, the intelligent naive person has some advantages
(hence the emperor's clothes reference).

Some universities (not all, and not all departments, of course) are often as rotten as some political governments. The diploma sometimes measures only the ability to lick the shoes of the bosses, and in the right order, please. Humans are still driven by the gene "the boss is right". Useful in war, and in hard life competition, but fatal to free exploration.

Laymen often have a more genuine interest, and they are less blinded by their expertise and narrow specialities. We live in a sad period for knowledge, education, science, and even art. The "publish or perish" dictum has transformed some researchers into cut-and-paste machines, seeking only funding and nothing else.

Whether I'm the child in the story or merely ignorant is the question. I remain open to discovering the latter.

It is up to you.


On Sep 26, 3:20 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
On 25 Sep 2011, at 04:20, Pierz wrote:

OK, so I've read the UDA and I 'get' it,

Wow. Nice!

but at the moment I simply
can't accept that it is anything like a 'proof'.

Hmm... (Then you should not say "I get it", but "I don't get it".) A proof is only something presented as a proof. You can only say "here is the flaw" (in case you have found one). I guess that is what you did, or thought you did.

I keep reading Bruno
making statements like "If we are machine-emulable, then physics is necessarily reducible to number psychology", but to me there remain
serious flaws, not in the logic per se, but in the assumptions.

Bruno says that "no science fiction devices are necessary, other than the robust physical universe".

To get step 7. But that robust-universe assumption is discharged in step 8, which I have explained in more detail (than in sane04) on this very list:

He also claims that to argue that the
universe may not be large or robust enough (by robust I assume he
means stable over time) to support his Universal Dovetailer is "ad
hoc and disgraceful". I think it is anything but.

By robust I mean expanding enough to run the UD.

It is disgraceful with respect to the reasoning. But if, for some reason, you believe that there is evidence that the physical universe does develop the infinite running of a UD, then you can skip the last (and most difficult) step 8. Physics is already a branch of computer science/number theory, in that case.

This is funny: if we have evidence that the physical universe has a
never-ending running UD, then we can from step 7 alone conclude that
physics is a branch of number theory. And by Occam, we don't need to
assume a primitive physical universe.
But we don't, and I doubt we can, have such evidence. The UD
running is very demanding. Not only must the universe expand
infinitely, but in a way which connects solidly all its parts. Better
to grasp step 8 (the movie graph argument).

To describe such an
argument as "disgraceful" is to dismiss with a wave of the hand the
entirety of modern cosmology and physics, disciplines which after all
have managed to produce a great deal more results in the way of
prediction, explanation and tangible benefits than Bruno's theory (I
insist it is a theory and not a 'result').

Yes, it is the theory known as "mechanism". The theory that the brain
is a natural machine.  The result is that physics emerges from
numbers, or combinators, or from any first order specification of a
universal machine, in the sense of theoretical computer science
(branch of math).

As a computer science
expert, I assume Bruno is aware of modern computational approaches to
physics. Such approaches explicitly forbid any kind of 'infinite
informational resolution' such as is required by Bruno's theory.

Where is this required?

Note that as a corollary of UDA we can show that the physical
universe is not a computable object, a priori.
The computational approach to physics can have many interesting
applications, but it can't tackle the mind-body problem. But to get
this, it is better to grasp UDA first.

In such approaches, the information content of the universe is seen
as a fundamental quantity, much like energy: constantly transforming
but conserved in the whole system in the same way energy is.

There is no assumption about the universe in the theory. We assume
only that the brain (or the generalized brain, that is the portion of
observable things needed to be emulated for my consciousness to be
preserved) is Turing emulable.

UDA assumes the existence of brains and doctors, and thus of some
physical reality, but not of a primitive physical reality. At the
start of the UDA, we are neutral on the nature of both mind and
matter.

This computational
approach indeed seems to be the *basis* for much of what Bruno talks
about (computability, emulability and so on are all fundamental
ideas), yet he flies in the face of it by proposing some kind of
automated, Platonic computation devoid of any constraints in terms of
state memory or time.

Computation is a mathematical notion, discovered by Post, Turing, and
others. It is based on the notions of state memory, time steps, etc.
It is not based on physical implementations of those notions (unlike
engineering).

Let's take a look at the UD. Obviously this is not an 'intelligent'
program,

You are right. It is very dumb. It is not even Turing universal, and
it computes in the most complex possible way the empty function (it
has no input, it has no output).

beyond the intelligence implicit in the very simple base
algorithm. It just runs every possible computer program.


Computer programs are made of, and produce, *static*; they are an
arrangement of bits.

There is no randomness in the work of the UD.

Now clearly, we know that if you look at a large
enough field of static, you will find pictures in it, assemblies of
dots that happen to form structured, intelligible images.

OK. But they are not related by computations. Neither in the first
person views, nor in the third person views.

Likewise in
the field of random computed algorithms, very very occasionally one
will make some kind of 'sense', although the sense will naturally be
entirely accidental and in the vast, vast majority of cases will give
way a moment later to nonsense again.

The only randomness which might appear comes from the first person
indeterminacy, and the fact that we cannot know in which computation
we are. This leads to the "white rabbit" problem, but the computations
themselves are not random at all, and the WR problem is basically the
problem to which physics is reduced, at the conclusion of the UDA.
So when the UD runs through its
current sequence of programs, what it is really doing is just
generating a vast random field of bits.

I have not the slightest clue why you say that. It is provably false.
No program can generate randomness in this third person way. The
randomness *possible* can only appear from the first person (emulated
in the UD) perspective.
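Bruno's claim that no program can generate third-person randomness is just the determinism of computation, which can be illustrated in miniature (a toy deterministic generator of my own, not the UD itself):

```python
# Any deterministic generator yields the same trace on every run:
# a fixed program contains no third-person randomness.
def enumerate_bits(n):
    # stand-in for any deterministic process (the UD's trace is one)
    return [(3 * i + 1) % 2 for i in range(n)]

run1 = enumerate_bits(1000)
run2 = enumerate_bits(1000)
print(run1 == run2)  # True: the "field of bits" is fixed, not random
```

Whatever structure such a trace contains is there lawfully, on every run; any apparent randomness must come from somewhere else, which is Bruno's point about the first person perspective.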

The UD generates, to give an example, the program emulating the
Heisenberg matrix of the Milky Way, at the level of string theory, and
this with 10^(10^(10^(10^(10^9999999)))) digits, notably. Actually it
does it also with 10^(10^(10^(10^(10^9999999)))) + 1 digits, and with
10^(10^(10^(10^(10^9999999)))) + 2 digits, etc.
The point here is that all those runnings are not random structures.
In fact, there is no randomness at all.

Nonetheless, each of these
individual programs needs to have potentially infinite state memory
available to it (the Turing machine tape). Now the list of programs
run by the machine continues to grow with each iteration as it adds
new algorithms, so it takes longer and longer to return to program 0
to run the next operation.
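The growing round-robin described here can be sketched directly. This is a toy model of my own (the "programs" are just step counters, not real machine codes), but it shows the scheduling: each phase adds one program to the pool and then steps every program in the pool, returning to program 0:

```python
def dovetail(phases):
    """Toy dovetailer. Phase k adds program k to the pool, then gives
    every program in the pool one more step (back to program 0 each
    phase). Returns the number of steps executed per program."""
    steps_done = {}
    for phase in range(phases):
        steps_done[phase] = 0            # a new program joins the pool
        for prog in range(phase + 1):    # round-robin over the pool
            steps_done[prog] += 1        # one more step for this program
    return steps_done

print(dovetail(5))  # {0: 5, 1: 4, 2: 3, 3: 2, 4: 1}
```

Phase n costs n+1 steps, so reaching the k-th step of any fixed program takes on the order of k^2 dovetailer steps: the delays grow without bound, even though every program keeps making progress.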

Right. Note that such delays are not perceptible for the emulated
persons.

As it needs to run *all* programs, a
necessarily infinite number, it requires infinite time, but for some
reason Bruno thinks this is not important.

It is utterly important.

This is why the first person indeterminacy bears on a continuum,
despite the digitalness of all present factors.

You attribute to me things which I never said, here. On the contrary,
the fact that the UD never stops is crucial.

Either it has infinite
processing speed as well as memory, or it has infinite time on its
hands.

The UD* (the infinite trace or running of the UD) is part of a tiny
part of arithmetical truth (the sigma_1 arithmetical truth).
Step 8 makes the physical running of the UD irrelevant.
UD and UD* are mathematical notions (indeed arithmetical relations).
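For readers following along, the recursion-theory fact behind "UD* is part of the sigma_1 arithmetical truth" is Kleene's normal form theorem, a standard result (sketched here from memory, not quoted from this thread):

```latex
% A Sigma_1 sentence has one existential quantifier over a decidable
% (primitive recursive) matrix:
\exists x \, P(x), \qquad P \text{ decidable}

% Kleene's T predicate T(e, n, s) -- "s codes a halting computation of
% program e on input n" -- is primitive recursive, so halting is Sigma_1:
\text{program } e \text{ halts on } n
\;\Longleftrightarrow\;
\exists s \, T(e, n, s)
```

In this sense every step of the UD's running is a decidable arithmetical relation, and the whole of UD* sits inside the true Sigma_1 sentences, so no physical implementation is needed for it to be mathematically well defined.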

Fine. But then we can simply dispense with the UD altogether and
gather up its final results,

This does not make any sense. A non-stopping program does not output
a final result.

which is an infinite field of static, a
giant digital manuscript typed by infinite monkeys. Everything capable
of being represented by information will exist in this field, which
means it is capable of "explaining" everything. And nothing.

I think you miss step 3: the first person indeterminacy. I think you
also miss the arithmetical, non-random dynamic of the UD. You are
confusing an infinite set of information with an infinite, non-random
and well-defined particular computation.

We have to deconstruct the notion of "computation" here. Computation
is the orderly transformation of information.

I can agree, although information is more of an emergent notion. It
is not used in the definition of computation.

But the UD's orderliness
is the orderliness of the typing monkey.

Not at all. It is the orderliness of the computations. Or the
orderliness of the sigma_1 sentences and the logic of their
probability/consistency (as is made completely transparent in the
AUDA: the translation of the UDA into arithmetic, or into the language
of the Löbian machine).

If it is orderly at all, it
is by mistake.

It is 100% orderly.

By talking about the UD as performing computation,
more intelligence is implicitly imputed to it than this hypothetical
machine possesses.

Where? The existence of the UD is already a theorem of Peano
Arithmetic. Robinson arithmetic *is* a UD. You need only the
intelligence for grasping addition and multiplication. The UD has
no more than that.

And besides, the physical and psychological (theological,
biological,..) order are brought by the machines from inside the
running of the UD. The UD's intelligence is not needed.

Yes, it would generate every possible information state,


You received this message because you are subscribed to the Google
Groups "Everything List" group.
To post to this group, send email to everything-list@googlegroups.com .
To unsubscribe from this group, send email to 
For more options, visit this group 



