Juergen Schmidhuber wrote:
>Bruno, there are so many misleading or unclear statements
>in your post - I do not even know where to start.
Rhetorical tricks, if not insults, as usual :(
Have you read http://www.escribe.com/science/theory/m3241.html ?
Well, G* tells me to remain mute here, but then I don't care.
>You can be a GP. Write a program that computes all universes
>computable in the limit. Add storage in case of storage overflow.
>Still, you as a GP have a resource problem. At some point it will
>get harder and harder for you to keep computing all the things
>you'd like to compute.
Are you using the GP as an analogy, or what? What exactly is
your ontology? You have not answered my question: how could the
GP have resource problems? Is not "resource" a notion internal to
some universe, and are not universes generated by the GP?
You are presupposing physics.
>Just like I cannot say much about the environment of the particular
>GP who programmed us.
Are you really serious? Do you believe we have a GP in a universe?
Where does the universe of our GP come from?
This is tortoises built on tortoises. Even an infinite sequence of
tortoises embedded in a physical universe explains nothing.
I knew you had not yet understood
the comp physico/psycho reversal, but have you not even seen
the physico/computer-science reversal?
>Again: forget all the esoteric undecidability stuff, which does not
>really matter here, and be pragmatic and try to look at things from the
>perspective of some observer being computed as part of a virtual reality
>that you programmed. The observer may get smart and correctly suspect
>you have a resource problem, and can make nontrivial predictions from
>there, using the Speed Prior.
You are still postulating what I am searching an explanation for.
I am not (necessarily) pragmatic in advance. A concrete virtual reality
on earth inherits the "normality" of the world in which
it is embedded. I claim that comp forces us to justify that
normality. It is not obvious that this is possible.
But the UDA forces us to take all computations into account, and
the undecidability stuff is unavoidable, if only because we can only
*bet* on our consistent extensions (in the translation of the UDA into
arithmetic, consistent extensions have the unprovable type "<>p").
Incompleteness phenomena put fundamental constraints on a TOE.
You could call them logical and arithmetical priors.
But you don't seem to be searching for a TOE, just for a recipe for
predicting some taken-for-granted neighborhood.
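Since the "<>p" notation carries the weight in that paragraph, here is a minimal sketch of the standard provability-logic reading I am assuming (textbook Goedel/Loeb material, nothing specific to this thread):

```latex
% Box p  :  "p is provable in the machine's theory"
% <>p    :  not-Box-not-p, "p is consistent with the theory"
\Box p \;\equiv\; \mathrm{Bew}(\ulcorner p \urcorner),
\qquad
\Diamond p \;\equiv\; \neg \Box \neg p .
% Goedel's second incompleteness theorem: a consistent, recursively
% axiomatizable theory T cannot prove its own consistency,
T \;\not\vdash\; \Diamond \top ,
% so statements of type <>p (consistent extensions) can be true yet
% unprovable from inside the machine.
```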
The arithmetical UD is just the set of \Sigma_1 sentences.
Resources must be explained from it, not the other way around.
I am not sure a TOE can be pragmatic, just as
quantum mechanics is not useful for cooking.
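To make the "resources are internal" point concrete, here is a toy dovetailer sketch (my own illustration, not code from any of the papers discussed; `program` is just a stand-in counter for the k-th program): a single short loop gives every program in an infinite enumeration unboundedly many execution steps, so no computation is ever starved, whatever "resources" look like from inside.

```python
def program(k):
    """Stand-in for the k-th program: here, a trivial counter."""
    state = 0
    while True:
        state += 1
        yield (k, state)

def dovetail(phases):
    """Run phases of a dovetailer: in phase n, start program n,
    then step every started program once."""
    running = []
    trace = []
    for n in range(phases):
        running.append(program(n))   # one new program per phase
        for g in running:
            trace.append(next(g))    # one step for every started program
    return trace

trace = dovetail(4)
# After 4 phases, program 0 has taken 4 steps and program 3 just 1:
# the delays differ wildly, but every program keeps advancing.
```

The schedule is unbounded: run it longer and any fixed program reaches any fixed step count, which is why a "resource problem" for the dovetailer itself needs a separate, physical assumption.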
>No, that's exactly where the Speed Prior comes in. Accoring to this prior,
>the slow programs computing the same thing contribute less to its
>probability mass. It is explained in the chapter on
>Speed Prior-Based Inductive Inference
I don't need to read it, because it is in contradiction with the
"invariance lemma", in particular with the fact that, from a first
person point of view, the delays (between the many accesses by the UD
of nearby computational states) do not matter.
(BTW I have read it, and we discussed it before.)
You know the GP will generate itself, but it will also generate less
and less efficient versions of itself. From the first person point of
view these are indistinguishable. Are you telling me that the Juergens
and Brunos appearing in those very slow GPs are zombies?
>> It is indeed hard to write a little program which generate a long
>> Kolmogorov-Chaitin incompressible 01 string.
>> But, as I told you earlier, it is quite easy to write a little
>> program which generates them all. (It is the essence of the
>> everything idea imo).
>As you told me earlier? Huh?
>That's one of the points of the 1997 book chapter.
I meant "as I reminded you in some recent post"!
(But look at my 1988 Toulouse paper ;-)
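The "easy to write a little program which generates them all" remark can be made concrete with a toy sketch (my illustration; `all_binary_strings` is a name I made up): no short program can output one specific Kolmogorov-incompressible string, yet a tiny program enumerates every finite binary string, incompressible ones included.

```python
from itertools import count, product

def all_binary_strings():
    """Yield every finite 0/1 string in length-then-lexicographic order.

    The program is a few lines, yet every incompressible string
    eventually appears in its output.
    """
    yield ""                              # the empty string first
    for n in count(1):                    # then length 1, 2, 3, ...
        for bits in product("01", repeat=n):
            yield "".join(bits)

gen = all_binary_strings()
first_seven = [next(gen) for _ in range(7)]
# -> ['', '0', '1', '00', '01', '10', '11']
```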
>The thing is: when you generate them all, and assume that all are
>equally likely, in the sense that all beginnings of all strings are
>uniformly distributed, then you cannot explain why the regular universes
>keep being regular. The random futures are then just as likely as the
>So you NEED something additional to explain the ongoing regularity.
>You need something like the Speed Prior, which greatly favors regular
>futures over others.
You speak as if I were proposing the iterated duplication as
a universal explanation. It is only an illustration of the comp
indeterminism. With comp, it is the whole work of the UD, UD*, which
constitutes the domain of indeterminism. The self-multiplication
just shows how a third-person deterministic process consistent with
comp explains a phenomenology of (in the limit) a sort of absolute
randomness. The "real" experience is the infinite running of the UD,
and that running is embedded in arithmetical truth, for example in the
form of the \Sigma_1 sentences.
UD* generalises both QM and the "iterated self-duplication" experiences.
The unawareness of delays forces us to take into account an infinite set
of infinite histories (as in Feynman's formulation of QM).
Some internal (modal) interpretations are given by the logic of
what consistent machines can prove. Have you followed my posts with
George Levy on modal logic?
Have you followed the recent version of the UDA in 11 steps, which I
proposed to Joel Dobrzelewski? See the links:
>But don't you see? Why does a particular agent, say, yourself, with a
>nonrandom past, have a nonrandom future? Why is your computer still there
>after one second, although in a truly random world it would immediately
>dissolve? Why do pencils keep falling down instead of up, when the
>futures where they fall up are just as likely?
We "practice" iterated duplications all the time, only below our
sharable level of substitution (as in Everett). This is plausible.
The structure of UD* is highly non-trivial, especially from the point
of view of consistent machines "surviving" on their
consistent extensions, which are *sparse* among UD*.
>To repeat, the concept of the machine that computes all universe
>histories is NOT by itself sufficient to explain why our future stays
But the concept of the machine that executes all programs is NOT
intended to explain why our future stays regular. It is the unavoidable
reality for machines from their provable and consistent points of view.
To add any prior is just a sort of treachery with respect to comp.
> The additional concept of the Speed Prior explains it though.
I would like to believe you, but you don't even address the problems
which interest me, like the link between first and third person
discourse, the extraction of the quantum from a measure on the whole
of UD*, etc.
I don't see how you attach a mind to a single computation.
You are using a naive theory of mind which is inconsistent with
both comp and QM at once.
I am agnostic about the little program, although the UD*-measure,
*if* computable (which I doubt), would define it.
My main question to you is: how does the Speed Prior filter out
the many possible slow similar histories generated
by the GP, given that we cannot distinguish them, and we cannot be
aware of which one we belong to here and now?
Do you know the logic of provability and quantum logic? Have you
followed http://www.escribe.com/science/theory/m2855.html and the
posts leading to it? (Some mathematicians prefer to consider the
UDA only as a motivation for that result.)
The most urgent thing, if you want to refute my approach, is to show me
where I am wrong in the UDA.
I repeat: I do not refute your approach. It surely lacks motivation
(especially your second paper), but with comp it is hardly complete.
You dismiss 100% of the mind-body problem.
I am not astonished; it is a widespread disease since Aristotle.
Another problem with your approach is that it is inconsistent
with quantum mechanics, as you know.
Mine is consistent with QM, and even entails some of it and
(normally) all of it (minus possible geographical peculiarities).
>I do not understand. But please do _not_ elaborate.
How am I supposed to interpret that? Well, I know you believe in
(and are using) a one-one mind-body/computation relation, but this
has been shown inconsistent with comp (Marchal 88, Maudlin 89;
references are in my thesis, URL below).
So you should still explain to us how you attach minds to single
computations, but this will contradict the invariance lemma,
so I'm skeptical. This does not mean the prior approach has no
intrinsic interest. Your criticisms are too poor to
help us see whether our approaches are compatible or not.
In an older post I conjectured that your approach is reducible to mine
in the case of comp with a sort of infinitely low level of
substitution. This could perhaps help singularise the relevant
computations. But given your unfair remarks I realise I am not
even sure you read my posts, and that I'm probably wasting my time.