I make a comment on Wei Dai, as quoted by Hal Finney, and then I comment on Hal Finney's answer.


Le 14-juil.-05, à 23:03, Hal Finney a écrit :

Wei Dai writes:
One day, Earth is contacted by a highly advanced alien civilization, and
they tell us that contrary to what most of us think is likely, not all of
the fundamental physical laws of our universe are computable.


I recall that the comp hyp (I am a machine) entails that the apparent universe cannot be the result of a computation, but it can appear (first-personally) *through* the infinite execution of a non-terminating computation like UD* (the infinite trace of the Universal Dovetailer).
I have discussed this at length with Schmidhuber on this list, but he dismissed the 1-3 (first-person/third-person) difference, so we were not able to make progress.
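
For those who have not seen the UD spelled out, here is a minimal illustrative sketch in Python (my own toy names; the real UD enumerates all programs of a universal machine, which I replace here by trivial counters). The only point is that every started computation gets unboundedly many steps, even though none is ever waited for:

# Toy dovetailer. 'programs' should be an effective enumeration of all
# programs of a universal machine; here the i-th "program" merely counts
# upward forever, which is enough to show the interleaving.
from itertools import count

def programs():
    for i in count():
        yield i, count(i)          # program index i, and its (infinite) execution

def dovetail(rounds):
    started = []
    enumeration = programs()
    for r in range(rounds):
        started.append(next(enumeration))   # start one new program this round...
        for i, run in started:              # ...and give every started one one more step
            print("round", r, "program", i, "state", next(run))

dovetail(3)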



Furthermore,
they claim to be able to manufacture black boxes that work as oracles for
the Halting Problem of Turing machines (one query per hour). They give
us one free sample, and want to sell us more at a reasonable price. But
of course we won't be allowed to open up the boxes and look inside to
find out how they work.

So our best scientists test the sample black box in every way that they
can think of, but can't find any evidence that it isn't exactly what
the aliens claim it is. At this point many people are ready to believe
the claim and spend their hard earned money to buy these devices for
their families.

Hal Finney: So in terms of induction, the situation is that we do test after test
to see if this box acts as a halting-problem oracle (HPO, we always
need more 3 letter acronyms around here). And it passes all the tests.
So then we apply induction and say, since it's acted like an HPO in our
tests, we will go ahead and assume that it really is an HPO.

The problem is that HPOs are theoretically impossible. We have a proof
of it, in fact we have a lot of proofs. So how do we reconcile this?


I guess you mean that testing an HPO is theoretically impossible. I agree, assuming comp. But under some "hyper-Turing" thesis we could test an HPO (though the fact that nobody can give a protocol for testing an HPO in finite time suggests that such a "hyper-Turing thesis" could be implausible).


Well, one possibility is that the proof is invalid. It's a pretty
simple proof so we look at its assumptions. Fundamentally it assumes
that computation is a finitary process. We can only do a finite amount
of computation in a finite time.


This is equivalent to the comp hyp.
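
Indeed, the diagonal argument needs only that such a "halts" procedure would itself be a finitary computation. Here is a hypothetical Python sketch of the proof; "halts" is precisely the function that cannot exist, so this is a picture of the argument, not something to run:

def halts(program, argument):
    ...                        # suppose this returned True/False correctly in every case

def diagonal(program):
    if halts(program, program):
        while True:            # if 'program' would halt on itself, loop forever
            pass
    return                     # otherwise, halt at once

# diagonal(diagonal) halts if and only if it does not halt: a contradiction,
# so no finitary 'halts' exists. Nothing in the argument rules out an
# infinitary oracle; that is exactly where the HPO discussion starts.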


To act as a real HPO it would seem to be necessary for the box in some
way to deal with infinities. There would have to be an actual infinity
in there somewhere.


... and in "our head" to understand the HPO. Note that a test exists in the limit, by computing Chaitin's OMEGA number (through an infinite, always self-correcting procedure which stabilizes in a non-constructive way).
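
To make that limiting procedure concrete, here is a rough Python sketch (with a toy "machine" standing in for a genuine universal prefix-free machine, so the number produced is not the real Omega; only the shape of the approximation is right): dovetail over programs, add 2^-length whenever one is seen to halt, and the lower bound creeps upward; each digit eventually stops changing, but no algorithm can tell you when, which is where the non-constructivity lies.

from fractions import Fraction

def toy_halts_within(code, length, steps):
    # Stand-in for running program 'code' of the given length for 'steps'
    # steps on a universal prefix-free machine (the toy is NOT prefix-free,
    # so the bound below can exceed 1; a genuine Omega stays below 1).
    return code % (length + 1) < steps

def omega_lower_bound(max_len, steps):
    bound = Fraction(0)
    for length in range(1, max_len + 1):
        for code in range(2 ** length):
            if toy_halts_within(code, length, steps):
                bound += Fraction(1, 2 ** length)   # each halting program adds 2^-length
    return bound

# Increasing 'steps' never decreases the bound: it converges from below,
# and any fixed binary digit of the limit stabilizes at some unknowable stage.
print(omega_lower_bound(3, 2))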


But again, we generally assume that there are no actual infinities.
In physics they are called singularities, which means places where the
laws of physics break down. In fact even the prospect of an actual
infinity in a theory is seen as a sign that the theory is wrong or
incomplete. Relativity puts singularities at the center of black holes,
but it is assumed that quantum gravity will fix these, in fact this is
one of the main motivations for work on quantum gravity.

OK.

Disbelief in actual infinities, like disbelief in HPOs, is not really
rooted in observation and induction. We don't disbelieve in them simply
because our universe seems to be devoid of them. In fact, on the face
of things there is evidence that actual infinities may really exist in
our universe. Relativity theory has its infinities; there are quantum
models which hint at the possibility as well, and even old fashioned
Newtonian gravitation on point particles, like is studied in high school
physics, implicitly embeds infinities and can construct an HPO.


That is right. Mathematically, HPOs and actual infinities are not obviously inconsistent. They are obviously consistent for those who believe in the consistency of ZF (Zermelo-Fraenkel set theory).


But still,
nobody believes that this stuff will work. It's always assumed to be
a mistake which future work will correct. (Google hypercomputers or
super-Turing computers for some theoretical models of HPOs.)


(Note, in passing, that G and G* remain sound and complete for those hyper-Turing machines. Actually, what I call comp could be called, in that context, omega-comp. I think (but I don't have a fully detailed proof of it) that for every constructive (Church-Kleene) ordinal alpha, the logic of self-reference (G and G*) remains stable for alpha-comp. Solovay's proof makes it possible to get proper extensions of G and G* for strong, non-constructive generalizations of provability, like "being true in all *transitive* models of ZF".
Of course, "being true in just all models of ZF" is equivalent, by Gödel's COMPLETENESS (not incompleteness) theorem, to being provable in ZF, and this leads to the soundness and completeness of G and G*.)
Let me quote footnote 28 of my SANE paper:

[G and G* are sound and complete for larger systems, and can be enriched to provide non-comp notions of belief. For example, Solovay got that G together with the formulas B(BX->BY) v B(BY->(BX&X)) gives a system which is sound and complete for the (set-theoretic) propositions which are true in all transitive models of ZF (Zermelo-Fraenkel set theory). For a proof see Boolos 1993. Solovay got also that G together with the formulas B(BX->Y) v B((BY&Y)->X) captures in the same way the propositions true in all models V_kappa with kappa an inaccessible (rather big) cardinal. In case we find, as a measure on the consistent histories, a consistent subset of physics, but don't find all of physics, making comp false, similar Solovay extensions of G and G* could provide psychologies of some "non-machine" notions. See R. M. Solovay (1976): "Provability Interpretations of Modal Logic," Israel Journal of Mathematics, 25:287-304.]
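
For readers who do not have G and G* in mind, here is the standard presentation (textbook material, see Boolos 1993), in the same ASCII notation as the footnote:

G (Löb's logic, often written GL): all propositional tautologies, plus the schemes
    B(X -> Y) -> (BX -> BY)
    B(BX -> X) -> BX          (Löb's axiom)
with the rules modus ponens and necessitation (from X, infer BX).

G*: all theorems of G, plus the reflection scheme BX -> X, closed under modus
ponens only (no necessitation: the machine cannot prove its own soundness).

Arithmetically, with B read as "provable by the (consistent, sufficiently rich)
machine": G axiomatizes what the machine can prove about its own provability,
and G* what is true about it (Solovay 1976).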



So where does our disbelief come from? Why are we so skeptical? We don't
have very good grounds from observation. The mere fact that we've never
seen X isn't strong evidence that X doesn't exist. We discover new things
all the time, amazing things. Why should HPOs be any different?

It seems that our disbelief in HPOs and in other manifestations of
infinity is somehow rooted in mathematics itself. We have an instinctive
aversion to the possibility of an actual infinity existing as something
we can interact with. We believe that mathematics is essentially a
finite endeavor, at least in terms of how it manifests in the real world.


Note that you talk a little bit as if you were (again?) postulating the existence of a real world, when in other posts you have acknowledged the interest of deriving it from the observer-moments.


And yet there are plenty of mathematical models of infinities.
The study of transfinites is a very active part of set theory, in
fact it is entirely what makes set theory non-trivial.
Likewise, with
the super-Turing work people have constructed hierarchies of ever more
powerful infinity machines. So there is no dearth of mathematical models
to deal with infinities.

In response to early work on transfinites the mathematical school called
constructivism arose. Constructivists reject most of mathematics that
deals with infinities. I am not too familiar with the history but I
believe that they were concerned about the many potential paradoxes
and the possibility that our intuition was a poor guide to truth in the
treacherous waters of the transfinites.

Constructivism has not gained much ground among mathematicians;


The problem for the constructivists is that the work they do finds terribly interesting applications in non-constructive mathematics! There is a fertile back-and-forth between constructive and non-constructive mathematics.



I get the
impression that it's just not that much fun to do math that way. But it
seems to accord well with our instincts about the world. Perhaps we
could say that it is all very well for the mathematicians to construct
their transfinite castles in the air, but when it comes to reality,
we are constructivists.


This is *very* debatable.
1) A down-to-earth problem about braid groups has been solved by using large cardinals.
2) Girard succeeds in blurring the constructive/non-constructive distinction. In some sense classical linear or quantum logics are more constructive than constructive linear logics.
3) George Boolos, commenting on a solution of a puzzle by Smullyan, shows that decision theory needs classical (non-constructive) reasoning, like the excluded-middle principle.



Wei Dai: Fortunately, the Artificial Intelligence in charge of
protecting Earth from interstellar fraud refuses to allow this. Having
been programmed with UD+ASSA (see Hal Finney's 7/10/2005 post for a
good explanation of what this means), it proclaims that there is zero
probability that Halting Problem oracles can exist, so it must be pure
chance that the sample black box has correctly answered all the queries
submitted to it so far.

The moral of this story is that our intuitive notion of induction is not
fully captured by the formalization of UD+ASSA. Contrary to UD+ASSA, we
will not actually refuse to believe in the existence of uncomputable
phenomena no matter what evidence we see.

Our theories say that it can't happen. Yet in this case we have an
observation where it seems like it does happen. So what do we do?

It helps in the analysis to distinguish what people actually do from
what they "should" ideally do. People are imperfect reasoning machines
and there is no fundamental or theoretical interest in explicating every
detail of what they believe and what they don't. No doubt it would be
possible to build up a complete formal theory and model of a given human
brain that would fully explain how it does induction, full of complicated
rules and contradictions. That's not the interesting question.

The harder question is, what *should* we do, in this situation? I see
two possibilities.

One is to hold to our theories. No matter how many times we see
this machine work, we disbelieve it. We assume it is a trick.



But concerning the HPO, I don't think we can do the test at all. We cannot know whether the machine is doing a trick. If the machine solves, and gives proofs for, the first 2^64 math problems corresponding to the first 64 digits of Chaitin's Omega, all we can conclude is that the aliens are very advanced in math. Testing an HPO box just measures how advanced extraterrestrial mathematics is. Even if they open the box and show us that it is not just a summary (à la Chaitin's Omega) of their "annals of mathematics", we will not be able to understand it; or if we do, then the extraterrestrials will indeed have refuted Church's thesis and comp.
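
To spell out the link between Omega digits and those 2^64 problems (a hypothetical sketch of mine, with 'run_for' standing in for a genuine universal prefix-free machine): given the true first k bits of Omega, dovetailing until the enumerated lower bound reaches that prefix settles the halting problem for every program of length at most k, which is why 64 digits would already encode about 2^64 answers.

from fractions import Fraction

def halting_from_omega_prefix(omega_prefix_bits, run_for):
    # 'run_for(code, length, steps)' is assumed to report whether program
    # 'code' of the given length has halted within 'steps' steps on the
    # reference universal prefix-free machine.
    k = len(omega_prefix_bits)
    target = sum(Fraction(bit, 2 ** (i + 1)) for i, bit in enumerate(omega_prefix_bits))
    halted, bound, steps = set(), Fraction(0), 0
    while bound < target:                  # dovetail until the bound reaches the prefix
        steps += 1
        for length in range(1, steps + 1):
            for code in range(2 ** length):
                if (code, length) not in halted and run_for(code, length, steps):
                    halted.add((code, length))
                    bound += Fraction(1, 2 ** length)
    # Any program of length <= k not in 'halted' now will never halt:
    # if it did, Omega would exceed the given k-bit prefix plus 2^-k.
    return halted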




Others have suggested ways the trickery might be done. It would be
extremely difficult and technologically challenging, but not impossible.
These tricks require effort that would be characterized by extremely
large numbers, but not infinities. If we are skeptical that they could
put such large numbers together, we should be infinitely more skeptical
that they can manage infinities.

The other possibility is to change our theories, and apply induction.
Faced with the evidence of a machine that seems to work, we accept that
maybe it really does work. And if so, then our theories are wrong and
we need new ones.

This is a hard course because of the peculiar grounding of the theories
involved. As described above, they aren't based on observation. They are
much more fundamental. They have to do with our deepest beliefs about the
nature of reality and perhaps even the nature of logic and mathematics.
It's questionable to me whether any finite set of observations in the real
world has the standing, the philosophical strength, to change our beliefs
about the nature of something as ethereal and unphysical as mathematics.


I agree, at least for a large part of math.


But maybe it's the right thing to do. After all, our imperfections,
our existence as creatures of a mundane reality, make us prone to error.
We might be wrong about anything. Arguably, an optimal induction engine
stands ready to change any and every one of its pre-existing beliefs,
when they are strongly enough contradicted by the evidence. So perhaps
we really should change our minds about mathematics, forget about that
constructivism nonsense, and accept that infinities exist and are real,
and here's one that we can touch and poke at.


What can we do to repair this flaw? Using a variant of UD, based on a
more powerful type of computer (say an oracle TM instead of a plain TM),
won't help because that just moves the problem up to a higher level of
the computational hierarchy. No matter what type of computer (call it C)
we base UD' on, it will always assign zero probability to the existence of
even more powerful types of computer (e.g., ones that can solve the halting
problem for C). Intuitively, this doesn't seem like a good feature.

Earlier on this mailing list, I had proposed that we skip past the
entire computational hierarchy and jump to the top of the set theoretic
hierarchy, by using a measure that is based on a set theoretic notion of
complexity instead of a computational one. In this notion, instead of
defining the complexity of an object by the length of its shortest
algorithmic description, we define its complexity by the length of
its shortest description in the language of a formal set theory. The
measure would be constructed in a manner analogous to UD, with each
set theoretic description of an object contributing n^-l to the measure
of the object, where n is the size of the alphabet of the set theory,
and l is the length of the description. Let's call this STUM, for set
theoretic universal measure.

Are you confident that this is well defined? I understand Schmidhuber's
approach: pick an arbitrary UTM, run a random program through it, and take
the bit pattern that comes out as the information object. The fraction
of programs that produce a given information object is the measure.

Is there a similarly mechanical way of understanding the concept of
object description in the language of set theory? Can you sketch how
that would work?


Well, Tegmark tried but did not succeed. Of course you can list the definitions of set-theoretic objects, or of set-theoretic proofs. But then, computationally, it is like working in a very weak theory; and concerning proofs, this is not "closable" under the diagonalization procedure, so it remains forever a relative concept. Formally, if the proofs are checkable (in finite time), the invariant you can capture is already given by G and G*.
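
By contrast, Schmidhuber's recipe as you describe it really is mechanical, and can even be caricatured in a few lines of Python (toy machine and made-up output convention, of course; a real definition uses a fixed universal prefix machine and no sampling):

import random
from collections import Counter

def toy_utm(program_bits):
    # Stand-in for a universal machine: the "output" here is just the number
    # of 1-bits in the program, so that many programs map to the same output.
    return sum(program_bits)

def universal_measure_estimate(samples=10000, program_length=8):
    counts = Counter()
    for _ in range(samples):
        program = [random.randint(0, 1) for _ in range(program_length)]
        counts[toy_utm(program)] += 1
    # The estimated measure of an output is the fraction of random programs
    # producing it; outputs with many (or short) descriptions get high measure.
    return {output: n / samples for output, n in counts.items()}

print(universal_measure_estimate())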

STUM along with ASSA does a much better job of formalizing induction, but
I recently realized that it still isn't perfect. The problem is that it
still assigns zero probability to some objects that we intuitively think
are very unlikely, but not completely impossible. An example would be a
device that can decide the truth value of any set theoretic statement. A
universe that contains such a device would exist in the set theoretic
hierarchy, but would have no finite description in formal set theory,
and would be assigned a measure of 0 by STUM.

I'm not sure where this line of thought leads. Is induction
unformalizable? Have we just not found the right formalism yet? Or is
our intuition on the subject flawed?

The mainstream view, I gather, is that induction is indeed unformalizable.
The contrary claim, that induction can be formalized, would be considered
controversial.


Right, but this does not prevent mathematical analysis.

Another way to express the problem is to think of trying to build an
optimal induction machine. It could use Bayes theorem to update its
beliefs, but what about the priors? Same problem. We could use the
Universal Prior but it gives probability 0 to HPOs. Then there are all
those other priors that implicitly assume infinite computation, so where
does it stop? There is no end to infinities, and as Wei's example shows,
there is apparently no place to stand once you start down that road.

Mmh..., priors, Bayes, ... require ASSA. But comp needs RSSA (an old discussion).
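
And note the purely arithmetical point behind "it gives probability 0 to HPOs": under Bayes' theorem a prior of exactly zero can never move, whatever the evidence. A tiny illustration with made-up numbers:

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

p = 0.0                              # prior probability that genuine HPOs exist
for _ in range(1000):                # a thousand passed oracle tests...
    p = bayes_update(p, 0.99, 1e-6)  # ...each far more likely if HPOs are genuine
print(p)                             # still exactly 0.0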

It would be absurd to suggest, say, that everything up to Aleph-23
has Platonic existence, while infinities from Aleph-24 on up are mere
mysticism. Likewise, building a universe out of a UTM+HPO doesn't make
sense because as Wei says, there is a 2nd-order HPO, an HPO2, which is
beyond the scope of UTM+HPO, so what if the aliens show up with one
of those?


This rejoins my criticism of Tegmark.


For a multiverse model to make sense it has to be simple,
distinctive and (ideally) unique. We don't quite achieve uniqueness
with the UDist (due to the arbitrary choice of a UTM which creates a
multiplicative constant difference on measure), which is a major flaw.
But adding oracles makes the problem infinitely worse.

Here's what I conclude. If we really believe in the Universal
Distribution, then we ascribe probability zero to HPOs.


I agree, except that, methodologically, once we accept Church's thesis, this is not something we can simply decide: we must prove that there is no other choice.



That means
that in Wei's story, indeed the aliens are tricking Earth.


Most probably (unless comp is false).



If we try to
imagine a universe where the aliens are legitimate and have real HPOs,
that is impossible. We are just confusing ourselves if we think such a
universe could be real. There is no point in even considering thought
experiments based on it, any more than imagining what would happen if
aliens showed up with a logical formula which was obviously simultaneously
true and false. So given that we stand upon the UDist, there is no need
to pay much attention to these kinds of thought experiments.

Nor with the UD ;)
I would suggest that evidence for or against the UDist should come
more from the fields of mathematics and logic than from any empirical
experience.

Yes.


My hope is that further study will lead to a computational
model which is distinguished by its uniqueness and lack of ambiguity.
That seems necessary for this kind of explanation of our existence to
be successful.

ASSA + UDist: I doubt it can work. But who knows?
RSSA + UD: there is "concrete" evidence that it works. No?

Bruno


http://iridia.ulb.ac.be/~marchal/
