On 11 Jan 2012, at 18:07, acw wrote:
On 1/10/2012 17:48, Bruno Marchal wrote:
Yes, with the usual notion of Turing-computable, you don't really
need more than arithmetic.
On 10 Jan 2012, at 12:58, acw wrote:
On 1/10/2012 12:03, Bruno Marchal wrote:
On 09 Jan 2012, at 19:36, acw wrote:
To put it more simply: if Church Turing Thesis(CTT) is correct,
mathematics is the same for any system or being you can imagine.
I am not sure why. "Sigma_1 arithmetic" would be the same; but
mathematics (set theory, analysis) might still be different.
If it's wrong, maybe stuff like concrete infinities and infinite minds
could exist, and that would falsify COMP, but there is zero evidence
that any of that is possible.
Not sure. If CT is wrong, there would be finite machines, working in
finite time, with well defined instructions, which would NOT be
emulable. Hypercomputation and infinite (human) minds would falsify
comp, not CT. On the contrary, they need CT to claim that they do
more than any programmable machine. CT is part of comp, but comp is not
part of CT.
Beyond this, I agree with your reply to Craig.
In that response I was using CT in the more unrestricted form: all
effectively computable functions are Turing-computable.
I understand, but that is confusing. David Deutsch and many others
are a bit responsible for that confusion, by attempting to have a notion
of "effectivity" relying on physics. The original statement of Church,
Turing, Markov, Post, ... concerns only the intuitively human computable
functions, or the functions computable by finitary means. It asserts
that the class of such intuitively computable functions is the same as
the class of functions computable by some Turing machine (or by the
unique universal Turing machine). Such a notion is a priori
independent of the notion of computable by physical means.
For the 3-person view, assuming mechanism, not only do we not need more
than arithmetic, but we cannot use more than arithmetic. Anything
added to Robinson arithmetic is empty of explanative power, at the 3p level.
And for the 1-views, you need to add the induction axioms to get the
Löbianity of the observer, which is needed only for interviewing them,
and then you need something bigger than the "whole mathematics" to get
the full 1-picture.
That could be possible, but more evidence is needed for this (beyond
the random oracle). I also wonder two other things: 1) would we be
able to really know if we find ourselves in such a world (I'm
leaning toward unlikely, but I'm agnostic about this)? 2) would
someone performing my experiment (described in another message) lose
the ability to find himself in such a world (I'm leaning toward 'no,
if it's possible now, it should still be possible')?
It might be a bit stronger than the usual equivalency proofs for a
very wide range of models of computation (Turing machines, Abacus/PA
machines, (primitive) recursive functions (+ minimization), all kinds
of more "current" models of computation, languages and so on).
Yes. I even suspect that CT makes the class of functions computable by
physics greater than the class of Church.
1) is a difficult question, due to the inability to know our substitution level.
2) is difficult for me, due to the length of your sentences and
paragraphs (thanks for being patient).
If hypercomputation were actually possible, that would mean that this
variant of CT would be false, because there would be something
effectively computable that wasn't computable by a Turing machine.
In a way, that strong form of CT might already be false with comp, but
only in the 1p sense, as you get a free random oracle as well as
staying consistent (and 'alive'); but it's not false in the 3p sense.
Yes. Comp makes physics a first person plural reality, and a priori we
might be able to exploit the first person plural indeterminacy to compute
more functions, as we know already that we have more "processes", like the
free random oracle. The empirical fact that quantum computers do not
violate CT can make us doubt this.
In the third person, there's no need to consider more than the UD,
Yes. That is why RA is enough for the theory of everything, at the 3p level.
which seems to place some limits on what is possible, but in the
first person, the possibilities are more plentiful (if COMP).
Yes. That's what I just said above. Then, remember that the physical
realities *are* first person, subjective realities, yet most plausibly
first person *plural*. Everett's multiplication (entanglement) of
populations of observers confirms this.
Also, I do wonder if the same universality that is present in the
current CT would be present in hypercomputation (if one were to assume
it would be possible)
Yes, at least for many types of hypercomputation, notably of the type:
computability with some oracle.
- would it even retain CT's current "immunity" from diagonalization?
Yes. Actually the immunity of the class of computable functions extends
to the immunity of the class of computable functions with oracle. So the
consistency of CT entails the consistency of some super-CT for the larger
class. But I doubt that there is a super-CT for the class of functions
computable by physical means. I am a bit agnostic on that.
OK, although this doesn't seem trivial to me.
Most theorems in recursion theory, notably what you prove by
diagonalization, extend easily from machines to machines + oracles. You
might study the book by Rogers:

ROGERS H., 1967, Theory of Recursive Functions and Effective
Computability, McGraw-Hill, 1967. (2nd ed., MIT Press, Cambridge,
Massachusetts, 1987).
As for the mathematical truth part, I mostly meant it from the
perspective of a computable machine talking about axiomatic systems:
as it is computable, the same machine (theorem prover) would always
yield the same results in all possible worlds (or shared dreams).
I see here why you have some problem with AUDA (and logic). CT = the
notion of computability is absolute. But provability is not absolute at
all. Even with CT, different machines talking about or using different
axiomatic systems will obtain different theorems.
In fact this is even an easy (one diagonalization) consequence of CT,
although Gödel's original proof does not use CT. Neither provability nor
definability is immune to diagonalization. Different machines
prove different theorems.
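A minimal sketch of the diagonalization being invoked here (my illustration in Python, not code from the thread): any effective list of *total* computable functions must miss one, which is the engine behind "different machines prove different theorems".

```python
# A toy "effective enumeration" of total functions f_0, f_1, f_2, ...
# (any finite or computable list would do for the argument).
enumeration = [
    lambda n: 0,          # f_0: constantly zero
    lambda n: n,          # f_1: identity
    lambda n: n * n,      # f_2: squaring
]

def diagonal(n):
    """d(n) = f_n(n) + 1 differs from every f_n at the argument n,
    so d is computable yet absent from the enumeration."""
    return enumeration[n](n) + 1

# diagonal disagrees with each listed function on its own index:
for i, f in enumerate(enumeration):
    assert diagonal(i) != f(i)
```

The universal machine escapes this trap only because it enumerates *partial* functions: applied to its own index, the diagonal can yield non-termination instead of a contradiction, which is why computability, unlike provability and definability, is immune.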
If the questions (axiomatic systems being looked into) are different,
the results can be different too, but if the questions (and inference
steps) are the same, shouldn't the results always be the same? This
would follow from CTT, for a machine implementing a theorem prover.
Actually you don't need CT for this, but I see your intuition:
yes, if the axioms and inference rules are the same, or are
equivalent, different theories will yield the same theorems. But for
any subject matter extending arithmetic, there is no universal theory
proving all the truths, and for all theories, you can always find (even
algorithmically) a more powerful theory.
For computability, CT asserts that this is not possible. Nothing
computes more computable functions than a universal Turing machine, or
any of its equivalent (with respect to computability) systems.
In terms of provability, computability is Sigma_1 provability. Someone
capable of proving all true Sigma_1 propositions will be able to
emulate all (Turing) machines. RA is already like that. But PA, which
is RA + the induction axioms, has the ability to recognize its
universality, and to recognize its own incompleteness and ignorance.
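For readers following along, here is the standard textbook presentation of what "RA + the induction axioms" amounts to (these are the usual axioms of Robinson arithmetic, not quoted from this thread):

```latex
% Robinson arithmetic (RA, often called Q): finitely many axioms, no induction.
\begin{align*}
& s(x) \neq 0, \qquad s(x) = s(y) \rightarrow x = y, \qquad
  x \neq 0 \rightarrow \exists y\, (x = s(y)),\\
& x + 0 = x, \qquad x + s(y) = s(x + y), \qquad
  x \cdot 0 = 0, \qquad x \cdot s(y) = x \cdot y + x.
\end{align*}
% PA = RA + the induction schema, one instance per formula \varphi:
\[
\bigl(\varphi(0) \,\land\, \forall x\,(\varphi(x) \rightarrow \varphi(s(x)))\bigr)
\;\rightarrow\; \forall x\, \varphi(x).
\]
```

Both theories prove every true Sigma_1 sentence, hence both are Turing universal; only PA can, in addition, prove significant facts *about* its own universality.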
Although my understanding of AUDA is incomplete, and I may be
wrong about this, it appeared to me that it might be possible for a
machine to get more and more of the truth, given the consistency assumption.
That's right: both PA + con(PA) and PA + ~con(PA) prove more true
arithmetical theorems than PA.
And PA + con(PA + con(PA + con(PA + con(PA)))) will prove even more
theorems. The same with the negations of those consistency statements.
I do wonder what kind of interesting theorems would result from those.
I have studied this with Eric Vandenbush, and he proved some
interesting theorems, showing that interesting theorems exist indeed.
I don't really have the time to dig into this now.
Note that the theory PA* = PA* + con(PA*), which can be defined
by the use of the Kleene recursion fixed point, proves ALL the true
propositions of arithmetic!!! Unfortunately it proves also all the false
propositions of arithmetic. This follows easily from the second
incompleteness theorem of Gödel: such a theory can prove its own
consistency, given that con("itself") is an axiom, and so, by Gödel II,
it is inconsistent.
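The argument can be written in two lines, assuming only the standard statement of Gödel's second incompleteness theorem:

```latex
\text{By construction (Kleene fixed point):}\quad
  \mathrm{PA}^{*} \vdash \mathrm{con}(\mathrm{PA}^{*}).
\qquad
\text{Gödel II:}\quad
  \mathrm{PA}^{*} \text{ consistent } \Longrightarrow\;
  \mathrm{PA}^{*} \nvdash \mathrm{con}(\mathrm{PA}^{*}).
```

Together the two lines force PA* to be inconsistent, and an inconsistent theory proves every sentence (ex falso quodlibet), true and false alike.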
PA* is the one which claims its own consistency as a theorem?
Yes. (The fool!)
It would be inconsistent in that case, and unfortunately, even if it
shows some new truths, we lose the ability to tell true from
false, making it a bit useless once one starts making inferences
from false statements.
Yes. An inconsistent theory proves all formulas. It proves the Riemann
hypothesis, and it proves the negation of the Riemann hypothesis. It
proves that 0=1. Such inconsistent theories are what we try to avoid.
But by incompleteness, PA + (PA proves f) is consistent. ~Bf remains
true for that theory. PA + Bf can prove Bf, but cannot prove f from Bf.
How come? PA being able to encode a theorem prover for ZFC, wouldn't
that mean PA believes in the truth of ZFC (or some other set theory)?
But, yes, once you have a consistent machine, you can extend its
provability ability along the whole constructive transfinite.
Or did I misunderstand something here? Maybe you could recommend me
some book or paper to read that would explain this to me.
What I mean is that PA can find its own Gödel sentence, like con(PA),
and then transform itself into a new machine/theory, denoted by PA
+ con(PA), which is not PA (if it were, we would be back to an
inconsistent theory). It is PA enriched by a sound (true) statement
asserting that PA (not itself) is consistent. That new theory can be
shown to prove infinitely more truths than PA, and to have infinitely
many proofs that are arbitrarily shortened. In a theoretical sense,
PA + con(PA) is more efficacious than PA.
And all this is true for the theory PA+con(PA), which I will call PA2.
But then you have PA3 defined by PA2 + con(PA2), which is itself more
efficacious than PA2.
So you have PA4, PA5, PA6, etc.
Given that the process above is effective (algorithmic), you can build
the theory PA-omega (omega being the first infinite ordinal), and then
PA-omega+1, PA-omega+2, ... PA-omega+omega, ... PA-omega+omega
+omega, ..., PA-omega*omega, etc., all along the constructive
transfinite, that is, for all the ordinals up to the Church-Kleene
ordinal omega_1^CK (the least non-constructive ordinal).
You might take pleasure in reading Torkel Franzen's book
"Inexhaustibility", which runs through that theme.
The significance of this is not well understood.
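The finite stages of this progression are easy to render concretely. A sketch (my purely symbolic illustration; the names PA2, PA3, ... follow the text above):

```python
def extend(theory: str) -> str:
    """One effective step of the tower: T  |->  T + con(T)."""
    return f"({theory} + con({theory}))"

# PA2 = PA + con(PA), PA3 = PA2 + con(PA2), and so on.
tower = ["PA"]
for _ in range(4):
    tower.append(extend(tower[-1]))

for name, theory in zip(["PA", "PA2", "PA3", "PA4", "PA5"], tower):
    print(name, "=", theory)
```

Because `extend` is computable, it can be iterated along any constructive ordinal notation; that is exactly what licenses PA-omega, PA-omega+1, ..., up to (but not including) omega_1^CK, where the effectiveness of the notation system gives out.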
As for higher math, such as set theories: do they have a model and are
they consistent? (That's an open question.) If some forms close to
Cantor's paradise are accepted in the ontology, wouldn't that risk
potentially falsifying COMP?
Trivially. You can refute comp in the theory ZF + ~comp (accepting some
formalization of comp in set theory).
With that question I was more concerned about the number of entities
appearing in the theory, and about the possibility of concrete
infinities appearing in the physics, thus leading to possible
worlds where brain implementations involve concrete infinities, i.e.,
the lack of a substitution level.
This cannot work for us, given that our physical realities are
determined by our common substitution level. But what you say is
correct, and this means we might meet entities with a lower substitution
level than us. That might already, arguably, be the case for quantum
computers, which exploit the many computational branches.
Remember that, like consistency, (~ provable comp) is consistent with
comp. If comp is true, like consistency, it is not provable, and so you
can add (~ provable comp), or con(~comp), to the comp theory of
everything (arithmetic) without getting an inconsistency with comp. Of
course you should not add ~comp to comp at the same (meta)level. You
would get a contradiction.
I've always wondered about one thing: if there were solid evidence
for COMP being true (in the sense of no evidence of it being false
being found, its predictions confirmed by observation, and so on),
wouldn't that warrant a belief in it?
That's the unavoidable dangerous trap. We have to keep in mind that we
cannot impose mechanism on others, but we will fight about the case of
children. Do we have the right to transplant an artificial brain into
someone who did not ask? Is it a murder not doing so? Is it a murder
doing so?
It would be a religious belief in a way, like the belief in the
consistency of PA, or some others' belief in God or primitive matter
or reality or particular physical laws or whatever (some may be
true, others may be false, but one must have some assumptions if one
has to bet on something).
I can't agree more. It *is* a religious belief. Mechanism is from the
start a belief in a form of reincarnation, even if today you need to
introduce some magic to refuse the transplant. But to say that the
magic does not exist is itself not a scientific statement. So people
have the right to decide by themselves. Like health and religion, it
cannot be the government's business (in a working democracy).
There are three reasons to see a "theology" here:
- it concerns a form of reincarnation (even if technological)
- by UDA, it concerns an infinity of reincarnations (even if not ...)
- it concerns *truth* about the machine (in Tarski's sense) minus
*provability* by the machine (in Gödel's sense).
In another way, a conscious machine can always doubt that it's a
machine and this doubt would not make it inconsistent.
She had better doubt that. Her non-doubt would make her inconsistent,
unless she is sure to be ... inconsistent (in that case she would
just be unsound).
Likewise, as above, the theory PA + (PA is inconsistent) is consistent.
You can prove, in it, that the false is provable, but you cannot prove,
in it, the false. Bf -> f is not a theorem: (Bf -> f) = ~Bf = con(PA),
which is not provable.
I always found that a non-intuitive consequence of Gödel's theorem:
PA doesn't contain a proof of its own consistency if it's
consistent (it is inconsistent if it does contain it).
On the other hand, in a stronger theory which proves PA's consistency
(for example ZFC), wouldn't PA + (PA is inconsistent) be inconsistent?
Why? ZFC does not need to believe in the theorems of PA+Bf (B =
"provable by PA", Bf = "PA is inconsistent").
And PA + Bf has no reason to believe in the theorems of ZFC.
Or to put it differently, wouldn't it lack a model (because 'PA is
inconsistent' would be false, despite not being provable within PA)?
By the completeness theorem, a first order logical theory is consistent
if and only if it has a model. PA + Bf is consistent, and thus has a
model, which of course cannot be the standard model. A proof of f will
be a non-standard object. PA + Bf is consistent, but it is not sound:
despite proving more truths about the standard model, it proves some
falsities with respect to it too, indeed, like Bf.
Maybe it could even be generalized to adding a sentence which could
be false, but from which nothing false could be proven within some
theory.
Yes. Like the following non equivalent false sentences: Bf, BDt, BBf,
BBDt, BDDt, BDDBDDBBDt, BBBBBDDDDBBBf, ...
As long as B is followed by f, and D by t (or by f, but then in the
scope of some B), you will get false but irrefutable sentences.
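A toy syntactic reading of that recipe (my illustration and my simplification; it checks only the stated pattern over B = "provable", D = "consistent", f = false, t = true, and is in no way a decision procedure for provability logic):

```python
import re

def false_but_irrefutable(s: str) -> bool:
    """Check the pattern stated above: a string of modalities B and D
    applied to a literal f or t, where t must directly follow a D
    (...Bt would be true), f may follow a B (or a D, provided some B
    scopes over it), and the outermost modality is a B (otherwise,
    e.g. Dt = consistency, the sentence is true rather than false)."""
    if not re.fullmatch(r"[BD]+[ft]", s):
        return False
    if s[0] != "B":                  # Dt, DDt, ... assert consistency: true
        return False
    body, literal = s[:-1], s[-1]
    if literal == "f":
        return True                  # ...Bf, or ...Df inside the leading B
    return body[-1] == "D"           # ...Dt is fine, ...Bt would be true

examples = ["Bf", "BDt", "BBf", "BBDt", "BDDt", "BDDBDDBBDt", "BBBBBDDDDBBBf"]
assert all(false_but_irrefutable(s) for s in examples)
assert not false_but_irrefutable("Dt")   # consistency itself: true, unprovable
assert not false_but_irrefutable("Bt")   # a provable truth: simply true
```

The seven examples from the message all pass; whether every string the checker accepts really is irrefutable is a question for the modal logic GL, not for this regex.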
I hope that some of these ideas will be clearer to me after I'm done
reading those books on logic and provability.
I can see many reasons why a particular machine/system would want to
talk about such higher math, but I'm not sure how it could end up with
different discourses/truths if the machine('s body) is computable.
Here there is a difficulty, and many people get it wrong. There is a
frequent error in logic which mirrors very well Searle's error in his
Chinese room argument. With comp I can certainly simulate Einstein's
brain, but that fact does not transform me into Einstein. If someone
asks me a complex question about relativity, I might be able to answer
by simulating what Einstein would respond, but I might still not have a
clue about the meaning of Einstein's answer. In fact I would just
make it possible for Einstein to answer the question. Not me.
Likewise, a quasi-debilitated arithmetical system like
Robinson Arithmetic, which cannot prove x+y = y+x, for example, is
Turing universal, and as such can imitate PA perfectly through its
provability abilities. That is, RA is able to prove that PA can prove
x+y = y+x. But RA has not the power to be convinced by that PA proof,
just as I can simulate Einstein without having the gift to understand
his words.
RA can simulate PA and ZF, and even ZF+k (which can prove the
consistency of ZF), but this does not give RA the *provability*
abilities of PA, ZF or ZF+k.
PA can prove that ZF can prove the consistency of PA, but PA can
not prove the consistency of PA.
I don't think I said that PA can take other systems' truths as its
own; even if it can simulate other systems, it cannot believe they
are true, and if it could, it would no longer be PA, but some other
system.
What I was talking about was a bit different: formal systems can be
simulated by some UTM, thus while the UTM wouldn't "be" the system
(in the same way that, in COMP, a brain could allow a mind to
manifest relative to some other observers, but it wouldn't be that
mind), given the same questions asked of some particular Turing-emulable
system, the answers will always be the same. That of course doesn't
mean that PA can take ZFC truths as its own - they are not provable
by PA. In another way, the totality of possible discourses should be
present within PA, but that doesn't mean PA can take them as its own.
There's another problem here: not all formal systems will be
consistent or have models, but unless we have proofs of their
inconsistency, we will never know.
In fact we can dovetail on the proofs, so that if the theory proves
the false, we will know it, sooner or later.
What you say is exact for consistency. If our theories are consistent
(equivalently, by Gödel's completeness theorem: if there is a model
satisfying the theory), we will never know it.
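A sketch of that semi-decision procedure (my toy illustration, propositional and deliberately much simpler than dovetailing on arithmetic proofs):

```python
from collections import deque

BOT = "bottom"   # the false

def derives_false(axioms, max_steps=10_000):
    """Dovetail on the consequences of `axioms`: close them under
    modus ponens, breadth-first.  Formulas are atoms (strings) or
    implications ('imp', a, b).  Returns True as soon as the false is
    derived; the step bound only keeps this demo total -- a real
    dovetailer runs forever on a consistent theory, which is exactly
    why consistency itself is never confirmed."""
    known, queue = set(axioms), deque(axioms)
    steps = 0
    while queue and steps < max_steps:
        phi = queue.popleft()
        steps += 1
        if phi == BOT:
            return True              # a proof of the false was enumerated
        for psi in list(known):      # try modus ponens both ways
            for a, b in ((phi, psi), (psi, phi)):
                if isinstance(b, tuple) and b[0] == "imp" and b[1] == a:
                    if b[2] not in known:
                        known.add(b[2])
                        queue.append(b[2])
    return False

# An inconsistent toy theory: p, p -> q, q -> bottom.
assert derives_false(["p", ("imp", "p", "q"), ("imp", "q", BOT)])
# A consistent one: the search simply never reaches bottom.
assert not derives_false(["p", ("imp", "p", "q")])
```

Inconsistency is caught after finitely many steps (semi-decidable); the absence of a refutation, by contrast, is only ever provisional, matching Bruno's point.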
To believe in a Universe or in a God belongs already to the same type
of belief. The universal machines are born theological.
Some of this confusion might be because we, as humans, sometimes
take other people's or systems' beliefs as our own, even if doing so
is not always rational; but although this increases the risk of
being wrong, it also lets us get to a lot more truth.
Yes. The easy, fuzzy, not-completely-correct solution of today's problem
gives rise to the complexity of tomorrow.
Humans, and life, have non-monotonic layers; we can abandon beliefs and
update them. The conceptual root of this might be in consistency.
We are "variable machines" with respect to possible neighborhoods.
I think we're mostly in agreement here: beliefs (what's provable)
will differ per machine, and with more axioms added, more beliefs
appear.
I can see it discovering the independence of certain axioms (for
example the axiom of choice or the continuum hypothesis), but wouldn't
all the math that it can /talk/ about be the same? The machine would
have to assume some axioms and reason from there.
Yes. And with different axioms you get different provability abilities.
Once a machine can prove all true arithmetical Sigma_1 sentences (of
the shape ExP(x), with P decidable), she is universal with respect to
*computability*. You can add as many axioms as you want, the machine
will not *compute* more functions. But adding axioms will always lead
the machine into proving more *theorems*.
In AUDA, "belief" is modeled by provability (not computability), and
then knowledge is defined in the usual classical (Theaetetus) way. The
beliefs of the correct machines will obey the same self-reference
logic, but the belief contents will differ from machine to
machine. PA and ZF have the same self-referential logics, but they have
quite different beliefs, even restricted to the numbers.
For example, ZF proves more arithmetical truths than PA. ZFC and ZF+~C
prove exactly the same theorems in arithmetic, despite proving
different theorems about sets (so arithmetic is deeply independent of
the axiom of choice). ZF+k (= ZF + there exists an inaccessible
cardinal) proves *many more* arithmetical theorems than ZF.
To sum up:
Computability is an absolute notion.
Provability is a relative notion.
Yes. More axioms = more beliefs = fewer models (but always infinities
of models once the beliefs effectively extend RA).
There can be many 'believers' (axiomatic systems), but all of them
can be implemented on the same base (their "body", some theorem
prover implemented by some UTM).
OK. Logicians formalize them in first order logic, or in some other
system (not always Turing complete).
Amazingly, pure addition and pure multiplication are already rather
mathematically rich and non-trivial (and isomorphic, also), but the
universal mess comes when you mix addition and multiplication. This
leads to intelligence, which can only contemplate the mess, and try to
survive, relative to its most probable histories. But the whole mind
reality is more complex than we thought; there are layers of realities,
intermediate dreams, etc.
However, I would like to know what the many more arithmetical
theorems that ZF (and some of its extensions) proves are. The only
ones I'm familiar with are of the type of PA's consistency and
Goodstein's theorem, as well as similar results about the fact that
some sequences terminate/halt.
This might interest the conventional mathematician, but the computer
scientist gives some merit to the intensional meaning. ~Bf, that is
con(PA), say, has no intrinsic meaning in arithmetic, yet it has a
metamathematical meaning, and that meaning is preserved by its
arithmetical translation, even if this one will depend on the choice
of some coding: in nature, that coding is what makes us exist relative
to some universal local machine, even in terms of body: we are
implemented by lower layers of universality, and probably a lot of them
in our nervous network. That is why I use the term 'relative number'. It
is always relative to a universal number, and we have to fix a
universal base to talk about them, the easiest one being elementary
arithmetic.
BTW, acw, you might try to write a shorter and clearer version of your
joining post argument. It is hard to follow. If not, I might take more
time to comment on it.
I think I talked about too many different things in that post, not
directly relevant to the argument (although relevant when trying to
consider as many consequences as possible of that experiment). If
parts are unclear, feel free to ask in that thread. The general
outline of what I talked about in the part you have yet to comment on:
a generalized form of the experiment the main character from the
"Permutation City" novel performed is described in detail (assuming
COMP); a possible explanation for why it might not actually be possible
to perform such an experiment, and why it might be a good practical
test for verifying the consequences described in the UDA; various
variations/factors/practicalities of that experiment are discussed
(with goals such as reducing white rabbits, among a few others), not
all directly relevant to the argument; and at the end I tried to see how
the notion of observer can be better defined, and tried to show that
the notion of "generalized brain" might not always be an appropriate
way to talk about an observer. That post was mostly meant to be
exploratory, and I hoped the ensuing discussion would lead to two things:
1) assessing the viability of that experiment if COMP is assumed AND
2) reaching a better definition of the notion of observer.
OK. I think I understood this, but your style is not easy, and it
might be useful, even within your goal, to work on a clearer and shorter
version, with shorter sentences, without any digression, with clear
sections and subsections, so as to invite most people (including me) to
grasp it, or to find a flaw, and this in reasonable time. In particular,
I fail to see the point of discussing the use of different universal
systems, like you did with the Cellular Automata (CA).
I will consider rewriting it if the time allows.
OK. Take your time; I will myself be rather busy these next weeks.
The original idea as presented in that book had a CA as their physics.
We agree, I think, that this is already incompatible with comp, unless
they take a quantum CA, but in that case they cheat, and will
miss the quanta/qualia available 'naturally' in the self-reference
logics,
so I tried to show the difference between my experiment and the
experiment in the book.
If one assumes COMP, there is no longer a need for choosing any
particular physical system, such as a CA. I then went on to say that
if a 'physical CA' is chosen, there are many practical problems that
could occur: both a decrease in stability (white rabbits, jumpiness)
and many problems for those living within the system (the speed-of-
light limit and most social problems present in our physical world).
Here the thought experiment assumes a lot, and in a rather vague way.
I think it could only be 100% stable if the (mind's) substitution
level is exactly at the CA level, which was not the case in the book.
You lost me a little bit.
Instead of such a "primary physics", I just choose an easy-to-design/
program Turing-equivalent machine (such as something based on a PA
machine), which implements a message-passing operating system/
scheduler on top of it. I argue that the careful use of a random
oracle should reduce the chance of white rabbits and increase the
overall world's stability (1p experiences not being too jumpy), but I
still don't know to what degree this would be sufficient (the oracle
implemented by dovetailing or by leaving it "undefined"). I tried to
consider how the choice of OS and the observer's implementation could
affect the world's stability (1p).
This seems equivalent to doing virtual realities. If they dovetail
universally, they should not change your measure, but if they simulate
your environment, including yourself, then they will change your
measure.
As a thought experiment, I could have stopped there, but I decided
to consider more practical details as well, because if one day there
is a computationalist doctor to say yes to, and one wants to
perform such an experiment, they had better have all their details
right; otherwise, after performing the experiment, their future 1p
experiences might not be very pleasant.
One day, 99.9999% of computer science will be in privacy/security
coding, because Platonia, with comp, is a real jungle. Don't leave your
Gödel number in the hands of anybody else, still less an organization
or a government.
I have to say I don't sleep well since Obama signed that NDAA bill,
and I'm afraid the bandits betrayed themselves. It looks like a
confirmation that the war on terrorism is nearly as fake as the war on
drugs. Pure criminal fear selling business.
If you are concerned with the future 1p experiences being pleasant in
our neighborhoods, you might have to convince first the americans to
vote for a candidate having not the slightest air of complacency with
prohibitionists. It is more urgent, I'm afraid.
You received this message because you are subscribed to the Google Groups
"Everything List" group.