On 29 Feb 2012, at 20:26, Stephen P. King wrote:
On 2/29/2012 4:28 AM, Bruno Marchal wrote:
On 28 Feb 2012, at 20:17, Stephen P. King wrote:
On 2/28/2012 10:43 AM, Quentin Anciaux wrote:
Digital physics says that the whole universe can be substituted
with a program, which obviously implies comp (that we can substitute
your brain with a digital one), but comp shows that to be
inconsistent, because comp implies that any piece of matter is
non-computable... it is the limit of the infinities of
computations that go through your consciousness' current state.
Can you see how this would be a problem for the entire digital
uploading argument if functional substitution cannot occur in a
strictly classical way, for example by a strictly classical-level
measurement of brain structure? Any dependence of consciousness on
quantum entanglement would prevent any form of digital substitution.
This is not correct. It would only make the comp substitution level
lower, for we would need to Turing-emulate the entire quantum
system. What you say would be true if a quantum computer were not
Turing emulable, but it is. Sure, there is an exponential slow-down,
but the UD does not care, nor do the 'first persons', who cannot
be aware of the delays.
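Bruno's claim that a quantum computer is Turing emulable, at an exponential cost, can be illustrated with a minimal classical state-vector simulation. The gate and register below are my own illustrative choices, not anything from the thread; the point is only that the 2**n amplitudes can be tracked exactly by an ordinary program:

```python
import math

# Classical (Turing) emulation of an n-qubit register: store all 2**n
# complex amplitudes explicitly. Memory and time grow exponentially in n
# (the "exponential slow-down"), but the emulation is exact.

def apply_hadamard(state, target, n):
    """Apply a Hadamard gate to qubit `target` of an n-qubit state vector."""
    h = 1 / math.sqrt(2)
    new = [0j] * (1 << n)
    for i, amp in enumerate(state):
        flipped = i ^ (1 << target)
        if (i >> target) & 1 == 0:
            new[i] += h * amp          # H|0> = (|0> + |1>)/sqrt(2)
            new[flipped] += h * amp
        else:
            new[flipped] += h * amp    # H|1> = (|0> - |1>)/sqrt(2)
            new[i] -= h * amp
    return new

n = 3
state = [0j] * (1 << n)
state[0] = 1 + 0j                      # start in |000>
for q in range(n):                     # put every qubit in superposition
    state = apply_hadamard(state, q, n)

# All 8 basis states now carry probability 1/8: a uniform superposition.
probs = [abs(a) ** 2 for a in state]
```

Doubling n doubles nothing physical in a real quantum register, but doubles the length of `state` here, which is exactly the slow-down the UD (and the first person) is indifferent to.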
This might not be a bad thing for Bruno's ontological argument -
as it would show that 1p indeterminacy is a function or
endomorphism of entire "universes" in the many-worlds sense - but
it would doom any chance of immortality via digital uploading.
Did you not see this last comment [SPK2] that I wrote? We need to
distinguish between the actions on and by physical systems, such as
human brains, and the "platonic" level systems.
We certainly have to do that locally, when we say 'yes' to the doctor,
or when the doctor builds the artificial brain. But the reasoning
leads to a conceptual distinction between the physical systems and the
objects of Platonia.
Roughly speaking, the objects in Platonia are specific numbers and
number relations, while physics is a relative sum over all computations
going through my actual computational state. This follows from step
Your remark seemed to be considering my comment [SPK1] as if it
were discussing the Platonic-level aspect. This is probably just a
confusion caused by our use of the same words for the two
completely different levels. For example, a physical system is a UTM
if it can implement any recursive (computable) algorithm, i.e. is
"programmable" in the Turing-thesis sense, but its actual behavior is
limited by its resources, transition speeds, etc.
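Stephen's sense of "programmable" can be made concrete with a minimal sketch: one fixed interpreter that runs any machine table it is handed, with an explicit resource bound standing in for the physical limits he mentions. The interpreter shape and the unary-incrementer program are hypothetical illustrations, not anything from the thread:

```python
# Minimal single-tape Turing machine interpreter. The same fixed
# mechanism executes any machine table: that is "programmability" in
# the Turing-thesis sense. The max_steps bound is what separates a
# physical UTM from an abstract one.

def run_tm(table, tape, state="start", blank="_", max_steps=10_000):
    """`table` maps (state, symbol) -> (new_state, new_symbol, move),
    with move in {-1, 0, +1}. Returns the final tape contents."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(head, blank)
        state, tape[head], move = table[(state, sym)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example program: a unary incrementer (walk right, append one '1').
inc = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", 0),
}
run_tm(inc, "111")   # -> "1111"
```

Remove the `max_steps` bound and the tape-size limit and you have the abstract Platonic machine of the next paragraph; keep them and you have a resource-limited physical implementation.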
It is the difference between a UM, and a UM implemented in some other
UM. When we implement a UM physically, we implement a UM in some local
subparts of the physical reality, which is itself emerging from the
sum over all UMs' computations going through my current state.
Note that the physical reality is not in Platonia. It is how the
border of Platonia looks to "me", taking into account the infinity of
UMs and computations to which I "belong".
An abstract Platonic Machine, such as what you consider in SANE04,
does not have any such limits.
I am not sure which one you are talking about.
I think that we should consider a formal way to describe these
relations. Perhaps someone who is fluent in Category theory will
come to help us in these discussions.
I have used category theory in "Conscience et mécanisme", but it helps
only for the semantics of the 1-person (S4Grz, S4Grz1, X1*). It is
also very distracting. It is better to understand the problem well
before musing on the tools which can solve it. The problem *is* a
problem in computer science, which already has good tools.
We need a way to define the idea of "the limit of the infinities of
computations that go through a given consciousness state" more
clearly, given that "a given consciousness state" is still a very
ambiguous notion.
We can bet that some equivalence relation is at play, like all similar
1p in non-diverging computations, yes. But this is necessarily a non-
constructive notion, and that is why it is simpler to start with the
logic of measure 'one' extracted directly from the modalities of self-
reference.
Is Löbianity required for bare consciousness, e.g. consciousness
without self-awareness? It seems to me that our entire discussion
assumes that consciousness is just the "inside aspect" of computation.
I have come to be open to the idea that bare consciousness needs only
one UM, or even less. Löbianity is required for self-consciousness,
and for a machine able to reason about all this, making the interview
rich enough to extract physics.
But Löbianity is basically given once the machine believes in the
(arithmetical) induction axioms. All chatting UMs obey Gödel's
second incompleteness theorem, but only the Löbian ones "know that",
that is, they can prove their own incompleteness theorem.
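The Löbian situation Bruno describes has a standard rendering in the modal provability logic (GL) he uses elsewhere; reading \Box p as "the machine proves p", this is the textbook formulation, not anything specific to this thread:

```latex
% Löb's theorem, the signature axiom of Löbianity:
\Box(\Box p \to p) \to \Box p

% Taking p = \bot gives the formalized second incompleteness theorem:
\Box(\Box\bot \to \bot) \to \Box\bot
\quad\text{i.e.}\quad
\neg\Box\bot \to \neg\Box\neg\Box\bot
```

Read: if the machine is consistent, it cannot prove its own consistency; a Löbian machine proves that very implication about itself, which is what "knowing" its incompleteness amounts to here.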
The entire discussion uses only the invariance of consciousness under
a set of transformations, in UDA, and the classical theory of
knowledge and observation in AUDA. You can approximate consciousness
by an unconscious bet on self-consistency. To be conscious is only to
be in a state of believing in some reality.
So White Rabbits would be the abstract equivalent of a Boltzmann brain?
White rabbits are perceptions by people of aberrant computations
executed by the (concrete in step 7, abstract after step 8) UD.
Boltzmann brains are physical UMs appearing in a physical universe.
The UD can be said to generalize them through the arbitrary
computations.
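The UD's way of executing all computations, aberrant ones included, can be sketched by dovetailing: interleave every program in an enumeration so each gets unboundedly many steps whether it halts or not. The toy generator "programs" below are hypothetical stand-ins for a real enumeration of machines:

```python
# Dovetailing sketch: in phase k, start program k and run each
# already-started program for one more step. No program is starved,
# even if some (like looper) never halt.

def dovetail(programs, rounds):
    """Return a trace of (program_index, step_value) pairs."""
    trace, gens = [], []
    for k in range(1, rounds + 1):
        if k <= len(programs):
            gens.append(programs[k - 1]())   # start program k
        for i, g in enumerate(gens):
            step = next(g, None)             # one step; None once halted
            if step is not None:
                trace.append((i, step))
    return trace

def looper():          # a program that never halts
    n = 0
    while True:
        yield n
        n += 1

def halter():          # a program that halts after two steps
    yield "a"
    yield "b"

trace = dovetail([looper, halter], rounds=4)
# -> [(0, 0), (0, 1), (1, 'a'), (0, 2), (1, 'b'), (0, 3)]
```

A Boltzmann brain needs a physical universe to fluctuate in; the trace above shows why the UD needs no such luck: every computation, including the aberrant ones, is reached sooner or later by brute interleaving.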
You received this message because you are subscribed to the Google Groups
"Everything List" group.