On 2/6/2012 06:25, Stephen P. King wrote:
Hi ACW,
On 2/4/2012 1:53 PM, acw wrote:
One can wonder what is the most "general" theory that we can postulate
to explain our existence. Tegmark postulates all of consistent
mathematics, whatever that is, but is 'all of consistent mathematics'
consistent in itself?
I have read several papers that argue strongly that it cannot be! For
instance see: http://arxiv.org/abs/0904.0342 The fact that there are set
theories built on axioms that are completely opposite to each other is
another strong indication of this.
It's what I was suspecting as well. I'll have to read that paper when
time allows.
Schmidhuber postulates something much less, just the UD, but strangely
forgets the first-person, or what the implementation substrate of
that UD would be (and resorts to a Great Programmer to hand-wave it
away).
I wonder why Schmidhuber held back? Did he fear ridicule?
I have no idea why, although it might indeed be a touchy topic as we can
see in the long discussions on this mailing list.
Before reading the UDA, I used to think that something like Tegmark's
solution would be general enough and sufficient, but now I think 'just
arithmetic' (or combinators, or lambda calculus, or ...) is sufficient.
Why? By the Church-Turing Thesis, these systems possess the same
computational power; that is, they can all run the UD.
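The dovetailing idea can be sketched in a few lines of Python. This is an illustrative toy, not the actual UD: `dovetail` and `make_counter` are hypothetical names, and a real dovetailer enumerates all programs of a universal machine rather than a fixed list.

```python
# Toy dovetailer: interleave execution of every program in an enumeration,
# giving each program one more step on every pass, so each program gets
# unbounded run time even though only finitely many steps occur at any stage.

def dovetail(programs, passes):
    """Run each program (a generator factory) round-robin for `passes` passes.
    `programs` stands in for an enumeration of all programs."""
    running = [p() for p in programs]
    trace = []
    for _ in range(passes):
        for i, gen in enumerate(running):
            try:
                trace.append((i, next(gen)))  # one step of program i
            except StopIteration:
                pass                          # program i has halted
    return trace

def make_counter(k):
    """Toy 'program': an endless counter standing in for some computation."""
    def gen():
        n = 0
        while True:
            yield (k, n)
            n += 1
    return gen

trace = dovetail([make_counter(0), make_counter(1), make_counter(2)], 4)
```

Each entry of `trace` records which program took the step and what it produced, so every program's history grows without bound as the passes accumulate.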
I agree with this line of reasoning, but I see no upper bound on
mathematics, since I take Cantor's results as "real": there is no upper
bound on the cardinality of mathematical structures. I see this as an
implication of the old dictum "Nature explores all possibilities."
The question is if transfinite extensions are considered as part of the
foundation, what different consequences will follow for COMP or the new
theory?
Now, if we do admit a digital substitution, all that we can experience
is already contained within the UD, including the worlds where we find
a physical world with us having a physical body/brain (which exist
computationally, but let us not forget that random oracle that comes
with 1p indeterminacy).
Not quite: admitting digital substitution does not necessarily admit
pre-specifiability, as is assumed in the definition of the algorithms of
universal Turing machines <http://en.wikipedia.org/wiki/Algorithm>; it
just assumes that we can substitute functionally equivalent components.
What do you mean by ``pre-specifiability''? Care to elaborate?
Functional equivalence does not free us from the prison of the flesh, it
merely frees us from the prison of just one particular body. ;-)
I'm not so sure the term ``body'' is as meaningful if we consider the
extremes which seem possible in COMP. After a digital substitution, a
body could very well be some software running somewhere, on any kind of
substrate, with an arbitrary time-frame/ordering (as long as 1p
coherent), it could even run directly on some abstract machine which is
not part of our universe (such as some machine emulating another machine
which is contained in the UD) - the only thing that the mind would have
in common is that some program is being instantiated somewhere, somehow.
In this more extreme form, I'm not sure I can see any difference between
a substrate that has the label 'physical' and some UD running in
abstract Platonia. Can you show why the 'physical' version would be
required, or how someone could even tell the difference between living
in a 'physical' world and living in a purely mathematical (Platonic)
world, seeing that world from within said structure in Platonia and
calling it 'physical'? It seems that 'physical' is very much what we
call the structure in which we exist, but that's indexical; and if you
claim that only one such structure exists (such as this universe), then
either you think COMP is false (that is, no digital substitution exists)
or that arithmetic is inconsistent (which we cannot really know, but we
can hope).
If there's any difference between a physical and non-physical
implementation in the context of COMP, I'd like to know what it is and
what effect it has.
This idea goes back to my claim that the "Pre-established harmony
<http://en.wikipedia.org/wiki/Pre-established_harmony>" idea of Leibniz
is false because it requires the computation of an infinite NP-Complete
problem to occur in zero steps. As we know, given even infinite
resources, a UTM must take at least one computational step to solve such
an NP-Complete problem. My solution to this dilemma is to have an
eternally running process at some primitive level. Bruno seems to
identify this with the UD, but I claim that he goes too far and
eliminates the "becoming" nature of the process.
I think the idea of Platonia is closer to the fact that if a sentence
has a truth-value, it will have that truth value, regardless if you know
it or not. In essence, Platonia might very well contain Chaitin's
constant of some machine, even if we cannot know it (although we can
make guesses at it by making stronger and stronger theories). Your
objections seem intuitionist/constructivist at their core, that is, that
something does not have a truth value if we can't prove it. Some
sentences may require infinite proofs ("this machine will never halt"),
thus we cannot say that they are true, even if they are (such as the
absence of proof of a contradiction in arithmetic). In another way, this
seems like a problem with the provably unprovable (or a form of
"religion"), although COMP is itself a bet of this sort (existence of a
1p continuation). Yet, we all make the bet that we will be subjectively
conscious in our probable future, the bet that there will be a future
observer moment, that the sun will still exist and so on. It also seems
to me that given the time/space/structure indeterminacy that is shown in
the UDA, the bet on a continuation is justified (if one admits a digital
subst.), and almost magical.
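The point about guessing at Chaitin's constant by stronger and stronger theories can be illustrated with a toy Python sketch. Everything here is hypothetical: the real Omega sums 2^-|p| over the halting programs of a prefix-free universal machine, and the halting times are exactly what no computable procedure can supply; `omega_lower_bound` and the toy program list are stand-ins.

```python
from fractions import Fraction

# Toy illustration (not the real Omega): given a hypothetical prefix-free set
# of programs with known lengths, sum 2^-len(p) over the programs observed to
# halt within `steps` steps. The partial sums increase toward Omega from
# below, but no computable bound tells us how close we are.

def omega_lower_bound(programs, steps):
    """programs: list of (bitstring, halts_after) pairs, where halts_after is
    the (normally unknowable) halting time, or None for a non-halting one."""
    total = Fraction(0)
    for bits, halts_after in programs:
        if halts_after is not None and halts_after <= steps:
            total += Fraction(1, 2 ** len(bits))
    return total

toy = [("0", 3), ("10", None), ("110", 7)]
low = omega_lower_bound(toy, 5)   # only "0" has halted within 5 steps -> 1/2
```

Running the bound with more steps can only increase it, which mirrors how adding axioms can pin down more bits of Omega without ever certifying the remainder.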
As for your solution, again, I'm not entirely sure there would be any
difference in the experienced reality (your objection seems to be the
transition from UDA step 7 to UDA step 8/MGA?), although you now need a
more complex theory to get the initial substrate (which we cannot even
know anything about). Such an idea seems to lose the elegant solution to
the "why something instead of nothing" question, which was solved rather
nicely by assuming a Platonia (that some mathematical sentences have
truth values, such as arithmetical ones). Such an approach also makes
consciousness more mysterious again, and by MGA (or UDA step 8), we do
know of the conflict between mechanism and materialism. All in all, it
seems to make the theory more complex, at great cost and with many added
problems, the only benefit being to make it friendlier to the
intuitionist/constructivist.
If we are machines, then we can only experience a finite amount of
information in any finite interval of time; some of this
information may be incompressible, due to 1p indeterminacy, thus we
could experience "reals" in the limit, despite there only being finite
computations at any given time. This essentially means that any
mathematical object which can be described in Tegmark's "Ultimate
Ensemble" and that can contain us, is already part of the 1p
experiences of those existing within the UD and we can look at 1p
experiences, as well as the UD* trace as being part of the greater
"arithmetical" truth (or any other theory with equivalent
computational power, by the Church-Turing Thesis).
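The "reals in the limit" point can be sketched in Python. This is a toy under stated assumptions: a seeded pseudo-random generator stands in for the 1p random oracle, and `observe` and `oracle` are hypothetical names.

```python
import random

# Illustrative toy: at any finite time a machine has seen only a finite
# (possibly incompressible) prefix of its 1p history; a "real" exists only
# as the limit of these finite dyadic approximations.

def observe(bits_source, steps):
    """Accumulate `steps` bits and read the prefix as a dyadic rational."""
    prefix = [next(bits_source) for _ in range(steps)]
    return sum(b / 2 ** (i + 1) for i, b in enumerate(prefix))

def oracle(seed=42):
    """Seeded stand-in for the 1p random oracle (hypothetical)."""
    rng = random.Random(seed)
    while True:
        yield rng.randint(0, 1)

x10 = observe(oracle(), 10)       # a 10-bit approximation of some real
x3 = observe(iter([1, 0, 1]), 3)  # deterministic example: 1/2 + 1/8 = 0.625
```

At every step the observer holds only a finite prefix, yet the sequence of prefixes converges to a real number that no finite stage ever contains.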
Umm, we have to show that the finiteness of machines is necessary from
first principles; we cannot just assume that it is so.
Are you using a more general definition of machine? A machine always has
a finite body (an integer); even though its growth may be unbounded, the
growth at each step is finite, and given finite time-steps there is no
way for the machine to become infinite (only in the "limit").
I agree that the
"arithmetical truth" of the UD may be enough to "force" the 1p to have
content, but we still need to account for the appearance of interactions
or histories of interactions (à la Julian Barbour's Time Capsule
<http://en.wikipedia.org/wiki/Julian_Barbour> idea). There comes a
point, even if it is in the limit of infinitely many, at which we cannot
put off the concurrency problem; we have to deal with interactions. An
option is to take the "running of the UD" as a primitive kind of dynamic
that at our local 1p emerges as time and notions of forces, fields, etc.
emerge from the algebras of interactions between the many distinct 1p.
So your beef is with the appearance of continuity in our 1p experience
and our inferred 3p world? The local 3p world may indeed be considered
like a Block Universe (or similar extensions to MWI), although by COMP,
that's just one valid model that we could be using, a matter of
epistemology. This is indeed a tricky problem, and I'm not sure I'm
satisfied with the tentative answer I'm currently considering. From
the 1p, we can only be certain of the existence of the observer moment,
this can lead someone to consider the ASSA (disconnected OMs, observer
moments). From the 3p, or the 1p's memories/knowledge, that is, at a higher
level than just experience, we bet on the existence of the past and
future, as a matter of self-consciousness and self-reference. We tend to
identify with the (abstract) structure making this bet. This leads one
to RSSA - OM's being relative to each other - that we will make our bets
based only on expected continuations and past/journal/history. If
consciousness is how some truths associated with a self-referential
universal number feel from the inside, and given the bets that number is
making, it wouldn't seem that strange that we will experience apparent
continuity (even though we cannot prove to anyone that we actually
experience such continuity - we cannot even show that to ourselves - if
we just consider a few moments in the past).
I don't think the continuity problem gets solved by dismissing a
Platonia and using something more "physical" (what is that, though?).
See the MGA for why.
This is why I think "arithmetic" is as good as any for a neutral
foundation, and we cannot really distinguish (from the inside) between
these foundations by the CTT.
This does not address the neutrality problem though. How can the
foundation be neutral if it is biased toward a particular structure,
even if it is as elegant as arithmetic? My point is that whatever
foundation we take, within our ontological theories, it must be neutral
with respect to a basis, reference frame, grammar or any other structure
that would break its perfect symmetry. Nature does not respect any
privileged framing whatsoever, and thus there cannot be a privileged
observational stance. This stance toward neutrality may seem unusually
strong, but I don't see how it can be any other way, even allowing
arithmetic to be a primitive is to allow a bias against non-arithmetical
structures and any bias, however weak, is still a rupture of neutrality.
But it doesn't have to be "arithmetic", it can be any system capable of
universal computation. Take something less and nothing truly intelligent
can exist (going less than computation). Take something more (concrete
infinities) and I'm not sure that those structures would be conscious
like you and me. I'm not that against the "more" possibility, just that
I don't think we can ever know too much about them except by our
mathematical theories, this being a consequence of COMP (if one is
Turing-emulable). In a way, while more "general" foundations can exist,
it's unlikely we'll ever be able to truly know more about them than we
can compute about them (that is, any theory we come up with will be
limited to what a theorem prover can prove about it; we cannot know
more, though we could bet on more by adding axioms whose correctness we
cannot know), and it's also
unlikely that they can affect arithmetical/computational matters (if you
think otherwise, you'll have to explain why or show a proof; I'm aware
of Goodstein's theorem, but to prove it we need stronger axioms, whose
correctness we cannot know! Similar
stronger theories are needed for solving some other specific halting
problem-related questions).
However, there might be other possible foundations, if you wish to
postulate concrete infinities; but even if they existed, how could we
tell them apart? It doesn't seem possible for someone admitting
a digital substitution, who has a finite mind (at any finite point
in time). If you can show that those other foundations are necessary
and they affect our measure/continuations, or that concrete infinities
are involved in the implementation of our brain, it could prove COMP
wrong.
The Dualism that follows the analogy of the Stone duality covers this
question. Boolean algebras have a specific kind of topological space as
their dual. It is forced and as such there is a direct and predictable
link between the behavior of the logic and the behavior of the dual
space. Is it a complete accident that the topological space that is the
dual to Boolean algebras looks like a collection of primitive atoms
<http://en.wikipedia.org/wiki/Atomism> in a void? I don't think so! So
if the logic that observers are limited to is required to be
representable (up to isomorphism) in terms of Boolean algebras, then
the physical world that those logical entities have as 1p must look like
"atoms in a void". No wonder our particle physics works so well!
There is more to add to this, such as the Pontryagin duality that
expands the class of dual spaces to range from the discrete spaces to
the compact spaces, but that is for another conversation. :-)
That's interesting, although my Category theory knowledge is rather
incomplete, so I can't really comment on the specifics. In a way though,
it seems that your idea is even more restricted than the UD*, in which
case, it would fall to your generality objection, would it not?
There is another problem with taking a set theory as foundational
rather than arithmetic - some set theories have independent axioms and
they can be extended by adding either an axiom or its negation, and
they result in different set theoretical truths.
I didn't mean to take set theory per se as fundamental, I was thinking
of set theory as just a mereology - a schema of sorts - of how we
define relations between parts and wholes. But as to your point about
set theory, does not the proven existence of non-standard Arithmetic
<http://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic> argue
the other way? While the Tennenbaum Theorem
<http://en.wikipedia.org/wiki/Tennenbaum%27s_theorem> seems to make
standard (à la Peano) arithmetic "special" and "unique", I strongly
suspect that this is just an invariance property, similar to the
invariance of the speed of light in physics: any logical entity will see
its own Arithmetic model as countable and recursive, it cannot see the
"constant" that would make it non-standard as such is its fixed point,
its "identity" if you will. I do not have any formal description of this
latter idea nor even a proof of it, so please just take this as a
conjecture. ;-)
Non-standard models no longer have computable addition and
multiplication, thus they're not considered in COMP - where the
observer's body is assumed to be computable (as an axiom). Your latter
idea seems interesting, although for me to better understand what you
mean, you'd have to elaborate on the details.
This doesn't really happen with computation - if there's anything
absolute in math, it's computation (although different theories about
what arithmetic is will result in different things the theory can talk
about, but it won't make computation any less absolute).
I strongly suspect that your argument here about the "absoluteness" of
computation is a bit too strong, or even misplaced. Restricting
information to being only binary bits on mappings in the Integers is a
harsh regime; no wonder computation is so "well behaved": any deviation
of the bits from the tyranny of the integers is terminated with extreme
prejudice! I see computation, in general, as "the transformation of
representations" and thus do not see the by-fiat confinement to the
integers as beneficial.
By absoluteness I mostly meant the very wide consequences that follow
from the Church-Turing Thesis. In another way, the behavior of finite
things to which we apply finite processes is always well-defined. Things
are never as clear when we have infinitely-sized things or potentially
infinite processes. At least 'we' cannot know how they behave without
adding some axioms and when we do add those axioms, we can also consider
alternate theories where the negation of the axiom is considered and
that results in different consequences. The "absoluteness" of
computation is of this nature. Whether we can truly *know* more than
arithmetic while still remaining correct, I do not know (assuming COMP).
As a side-note, I don't see why the primitive physical world is
necessary, from the 1p, we can only know that we have senses and from
the senses we can infer the existence of the external world.
We have the problem of other minds to deal with! That is why, among
other things, we need the physical world, albeit NOT primitive: the
physical world allows for an "external" differentiation of 1p's that
would otherwise be identical by Leibniz' identity of indiscernibles. I
am just claiming that the abstract
<http://en.wikipedia.org/wiki/Abstract_object> and the concrete
<http://en.wikipedia.org/wiki/Concrete_object> are always co-present at
any level until we go to the limit of bare neutral existence. At that
point any differences that might make a difference vanish, thus logic
and spaces would cease being different yet isomorphic. Vaughan Pratt
shows how this works in terms of the directions of the Arrows of the
categorical representations of LOGIC and SPACE, they point in opposite
directions thus if we add them up their directions and scalars would
vanish. -> + <- = (see
http://upload.wikimedia.org/wikipedia/commons/f/ff/Laws_of_Form_-_double_cross.gif)
Maybe we mean different things by the physical world. I think of it as
an implementation substrate and thus I have no problem with it being a
direct or indirect consequence of some abstract computations.
It's also not obvious at all to me why the 1p would be the same for any
structure in Platonia (such as some computation running in the UD), but
different for magical-physical-land (non-Platonic, but still running a
UD). 1p differences should exist if the contents of the mind's
body/brain are different, regardless of the substrate that it's
implemented on. I'd venture to guess that given 2 identical
structures/universes/..., the observers in them will have identical
experiences, or even identical 1p (in COMP it doesn't matter how many
copies you make of a computation, there's only one 1p associated with it
- the body just lets it manifest relatively to you).
Additionally, I see this conjecture as similar to Tegmark's Mathematical
Universe Hypothesis
<http://en.wikipedia.org/wiki/Mathematical_universe_hypothesis> except
that I do not see how the postulate "/All structures that exist
mathematically also exist physically."/ implies a mathematical monism as
the wiki article states. If for any structure that exists mathematically
there must exist a physical structure, there is the implication of a
duality between the mathematical and the physical. This is a different
sort of duality than that of Descartes as it does not assume distinct
"substances", it is a form of dual aspect theory
<http://en.wikipedia.org/wiki/Double-aspect_theory> where the dynamics
of each aspect run in opposite directions. Vaughan Pratt explains the
idea here: http://boole.stanford.edu/pub/dti.pdf
I'm not going to comment on the paper as my category theory knowledge is
insufficient, instead I'll save reading it until I'm more familiar.
As for the MUH, I'm not even sure what the word 'physical' means anymore
for it. If all the consistent mathematical structures do exist and you
happen to find yourself in one, you call it 'physics', while you call
the rest 'abstract', however that just makes the term 'physical' an
indexical - "The time now is xx:xx", "I'm in structure y".
The UDA also shows that you can find yourself in a different structure
at a different subjective time, and the only global "inescapable" one is
the UD, but it's so limitless in its possibilities that it shouldn't
particularly matter.
If consciousness is how some (possibly self-referential) arithmetical
(or computational) truth feels from the inside, it does not seem
impossible that there would be computations representing some physical
(just not primitive) world, that that world would contain us and our
bodies/brains, and that the existence of such computations would be a
theorem in arithmetic.
I agree, but the representation of a thing is not the thing except in
very special cases, such as what we have when we say that the "best"
simulation of an object is the object itself.
<http://www.stephenwolfram.com/publications/articles/physics/85-undecidability/2/text.html>
This takes us into a discussion of questions like "when might the map =>
the territory, or, by duality, the territory => the map
<http://chorasimilarity.wordpress.com/2011/06/21/entering-chora-the-infinitesimal-place/>?"
This is a subtle and important question! ;-)
We can never know for sure what structure we're part of, but we can
always make more and more accurate maps of the local one we're in.
One can call the global one the UD* if we happen to admit a digital
substitution (by the UDA/MGA), or more generally some arithmetical
Platonia. Just knowing how the UD works does not mean we have complete
access to it. We just have a tool for generating the territory (or the
"perfect" map), but we'll never be able to actually generate the full
UD* (if we could, COMP would be false, and thus UD* itself wouldn't be
our "global" territory), although we can generate any parts we have the
resources/memory to compute.
Onward!
Stephen
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.