On 19 Oct 2011, at 05:30, Russell Standish wrote:
On Mon, Oct 17, 2011 at 07:03:38PM +0200, Bruno Marchal wrote:
This, ISTM, is a completely different, and more wonderful beast,
the UD described in your Brussels thesis, or Schmidhuber's '97
paper. This latter beast must truly give rise to a continuum of
histories, due to the random oracles you were talking about.
All UDs do that. It is always the same beast.
On reflection, yes you're correct. The new algorithm you proposed is
more efficient than the previous one described in your thesis, as
machines are only executed once for each prefix, rather than over and
over again for each input having the same prefix. But in an environment
of unbounded resources, such as we're considering here, that has no
practical consequence.
Note that my programs are not prefixed. They are all generated and
executed. Prefixing them is useful when they are generated by a
random coin, which I do not need to do.
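For concreteness, the generate-and-execute scheme described here can be sketched as a dovetailing schedule. This is a hypothetical toy (programs stood in for by integers, no real emulation), just to show how every program is generated and then advanced step by step, with no prefix-free coding involved:

```python
def dovetail(n_phases):
    """Toy universal-dovetailer schedule: in phase k, program k is newly
    generated, and every program generated so far is advanced by one
    more step. Returns the sequence of (program, steps-run-so-far)
    events. All programs are generated and executed; none is prefixed."""
    trace = []
    for phase in range(1, n_phases + 1):
        for prog in range(1, phase + 1):   # programs 1..phase exist by now
            steps = phase - prog + 1       # earlier programs have run longer
            trace.append((prog, steps))
    return trace

# Every program eventually receives unboundedly many steps, even though
# only finitely many programs are touched in any single phase.
```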
So the histories, we're agreed, are uncountable in number, but OMs
(bundles of histories compatible with the "here and now") are surely
countable.
This is not obvious to me. For any two computational states which
follow one another when emulated by some universal machine, there are
infinitely many UMs, including one dovetailing on the reals, leading to
intermediate states. So I think that the "computational neighborhoods"
are a priori uncountable. That fits with the topological semantics of
the first person logics (S4Grz, S4Grz1, X, X*, X1, X1*). But many math
problems are unsolved there.
If we take the no information ensemble,
You might recall what you mean by this exactly.
and transform it by applying a
universal Turing machine, collecting just the countable set of output
strings where the machine halts, then apply another observer function
that also happens to be a UTM, the final result will still be a
Solomonoff-Levin distribution over the OMs.
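A minimal sketch of how such a distribution arises. The machine below is an invented toy prefix machine (it reads 1s until the first 0 and outputs the count), not a real UTM; each minimal halting program p of length |p| contributes 2^-|p| to the weight of its output, giving a Solomonoff-Levin-style semi-measure:

```python
from fractions import Fraction
from itertools import product

def toy_prefix_machine(bits):
    """Invented toy prefix machine: reads 1s until the first 0, then
    halts with output = number of 1s read. Returns (output, bits_used),
    or None if no 0 appears (the machine never halts on this input)."""
    for i, b in enumerate(bits):
        if b == '0':
            return i, i + 1
    return None

def semi_measure(max_len):
    """Tally 2**-len(p) for every minimal halting program p up to
    max_len bits, keyed by the program's output."""
    m = {}
    for n in range(1, max_len + 1):
        for bits in product('01', repeat=n):
            r = toy_prefix_machine(bits)
            if r is not None and r[1] == n:   # count each program once
                out = r[0]
                m[out] = m.get(out, Fraction(0)) + Fraction(1, 2 ** n)
    return m
```

Note the weights sum to less than 1 for any finite cutoff: it is a semi-measure, with the missing mass belonging to non-halting programs and programs longer than the cutoff.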
This is a bit unclear to me. Solomonoff-Levin distributions are very
nice; they are machine/theory independent, and that is quite in the
spirit of comp, but they seem usable only in ASSA type approaches.
I do not exclude that this can help in providing a role to little
programs, but I don't see at all how it could help for the computation
of the first person indeterminacy, aka the derivation of physics from
computer science needed when we assume comp in cognitive science. In
the work using Solomonoff-Levin, the mind-body problem is still under
the rug. They don't seem aware of the first/third person distinction.
This result follows from
the compiler theorem: the composition of a UTM with another one is
still a UTM.
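In symbols, the invariance claim behind this can be stated as follows (standard notation, not taken from the post itself): the universal semi-measure induced by a universal machine U is

```latex
m_U(x) \;=\; \sum_{p \,:\, U(p)=x} 2^{-|p|},
```

and for any two universal machines U and V there is a compiler c translating U-programs into V-programs, so that every p with U(p)=x yields V(cp)=x with |cp| = |c| + |p|, hence

```latex
m_U(x) \;\le\; 2^{|c|}\, m_V(x) .
```

So passing the output through a further UTM changes the distribution by at most a multiplicative constant, which is the sense in which the result "will still be" Solomonoff-Levin.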
So even if there is a rich structure to the OMs caused by them being
generated in a UD, that structure will be lost in the process of
observation. The net effect is that UD* is just as much a "veil" on
the ultimate ontology as is the no information ensemble.
UD*, or sigma_1 arithmetic, can be seen as an effective (mechanically
defined) definition of a zero information. It is the everything for
the computational approach, but it is tiny compared to the first
person view of it by internal observers accounted in the limit by the
Unless I'm missing something here.
Let's leave the discussion of the universal prior to another post. In a
nutshell, though, no matter what prior distribution you put on the "no
information" ensemble, an observer of that ensemble will always see
the Solomonoff-Levin distribution, or universal prior.
I don't think it makes sense to use a universal prior. That would
make sense if we suppose there are computable universes, and if we
try to measure the probability that we are in such a structure. This is
typical of Schmidhuber's approach, which is still quite similar to
physicalism, where we conceive observers as belonging to computable
universes. Put in another way, this is typical of using some sort of
identity thesis between a mind and a program.
I understand your point, but the concept of universal prior is of far
more general applicability than Schmidhuber's model. There need not be
any identity thesis invoked, as for example in applications such as
observers of Rorschach diagrams.
And as for identity thesis, you do have a type of identity thesis in
the statement that "brains make interaction with other observers
relatively more likely" (or something like that).
Yes, by the duplication (multiplication) of populations of observers,
like in comp, but also like in Everett.
There has to be some form of identity thesis between brain and mind
that prevents the Occam catastrophe, and also prevents the full retreat
into solipsism. I think it very much an open problem what that is.
This will depend on the degree of similarity between quantum
mechanics and the comp physics, which is given entirely by the
(quantified) material hypostases (mainly the Z1* and X1* logics). An
open but mathematically well-circumscribed problem.
Unfortunately the mainstream scientists still ignore the first
person indeterminacy today, meaning that they just ignore the
1-person / 3-person distinction---not to mention the mind-body
problem (and, to be sure, I still don't know if this comes from a
genuine non-understanding, or if it is still the problem of
acknowledging my work, which would be a notoriety problem for some).
As I said, I don't know if the problem is really genuine, for the
1-indeterminacy, which is rather a simple notion. Some researchers
told me that it is a problem to cite my name, but not so much my
work, if they change the vocabulary.
Wow, you must've really got some people's noses out of joint.
Incidently, New Scientist has a recent article about dastardly deeds
done in science, including some well-known ones like Newton's treatment
of Hooke and Watson & Crick's treatment of Franklin. Even Einstein
gets a serve about claiming the equation E=mc^2 for himself.
All this is relative. In some institutions nearby, a good teacher is a
teacher who does not rape the students of his colleagues.
What a pity, what a waste of time. It is less tragic than the
illegality of cannabis and drugs, but it seems clear that human
corporatism leads to an accumulation of human catastrophes,
everywhere. Corporatism perverts democracies and academies. This has
an unavoidable cost, and money based on lies has no genuine value.
The rule "publish or perish" is also both a killing-science and
killing-human procedure: it creates a redundancy which hides the
interesting results, and it multiplies fake research.
Ranking by the number of citations creates circular loops of people
citing each other and not much more.
It also creates the psychopathic reviewer, who does all to undermine
the credibility of a paper. I have experienced one or two like that -
not many, but it's still a nuisance.
It is nonsense. A researcher who does not find or solve something
should NOT publish, but should not perish either. He should still be
allowed to search.
Well, it's more about lack of funding. One can research anything one
desires if one is independently wealthy, or has an independent
income stream (like myself). Of course, getting the attention of other
scientists is a different matter! Nevertheless, current funding is
damaging to the integrity of science - gone are the days when
researchers would write papers and put them in their filing cabinet
for 3-6 months before submitting to a journal. I think David Deutsch
may have done something like that, but I recall Schroedinger did
something like that as a matter of course.
30 years ago, after a lot of work I got 5000 $ for a project in AI,
but AI was considered a very bad thing to do for a mathematician,
so they did not sign the paper. Even when the funding is there, some
people will prefer to reject the project for ideology-like reason.
Some mathematicians just hate computers, or the idea that "pure
mathematics" (like logic) can have any applications.
A researcher can be asked to write reports and to
justify the difficulty of his task, but in science, and especially
in fundamental science, findings cannot be ordered(*). OK, I
Ask any question if I am unclear,
(*) Note that there is a 'slow science' awakening:
Hmm - I'm not sure I agree completely with the slow science
manifesto. Email exchanges like this are really good for fomenting
science. I never thought attempts at online conferences worked that
well - e.g. MUDs and the like, as there was not enough time to think
about other people's comments. Twitter might be alright for sharing
reading lists - one could tweet about an excellent paper, perhaps,
although personally I haven't gotten into it.
Academies are all very well, but don't work well with people with
family commitments, or who need other sources of income (or simply
have other businesses) to live. My PhD was done in such an academy, an
Australian "Institute of Advanced Studies", modelled on the Princeton
one. I would have to say that the place was rather moribund,
unfortunately - a few bright stars, but a lot of dead wood.
Academies are like democracies: the worst, except for all the others.
To paraphrase Nietzsche, they are human, all too human.
Have a good day, Russell,
You received this message because you are subscribed to the Google Groups
"Everything List" group.