Hi Russell,

Please forgive me; the jump to religion was, in fact, somewhat of an
overreach.  But I came up with a better way to frame the argument, one
which ignores the specifics of either religion or science.  The fact
that the model I've given may retrodict features of one or both
independently is just a happy coincidence that has honed my intuition
on the subject.  The models may or may not be correct, but my
intuition is that, in some sense, it may be meaningless to ask whether
they are true or not.  Maybe they're not "true" until you prove them
to "yourself" one way or the other.

Basically, the fundamental philosophical argument is that the full MUH
leads inexorably to the most general possible relativity principle,
the "relativity of reality" (cf.
http://en.wikipedia.org/wiki/Simulated_reality).
This is the principle that you can never prove whether you are in a
single arbitrary Matrix-type simulation of the world, with "new rules"
being added over time, or in one completely predetermined reality
chosen anthropically from a space of equally ontologically real
universes.  In fact, the argument is that you can *never* have enough
knowledge to prove either picture fully, and this is a completely
necessary consequence of the MUH.  Because all mathematical structures
are real, you have no idea "where" you should begin in your
ontological views and, in principle, can always make a convincing
argument *not* to update those views no matter what new "evidence" you
find from some classical point of view (where the definition of
"classical" is, in some sense, infinitely regressing away from you,
just like "observable universes" are in our current picture).

Case in point: consider the very extreme "Matrix" argument, which is
basically the argument that you may be a non-interacting "point
consciousness" or "homunculus" that is essentially deterministically
watching a movie unfold.  From this point of view, your thinking is
not really "deciding" what to do next (since you are causally
disconnected from the deterministic simulation) but "deducing" what
the "person" you are "watching" is doing in real time (how could you
tell the difference, if this were happening in continuous time?).  Of
course, you can argue that "according to your definition of the laws
of physics" this ontology is unlikely, but if perfect Matrix-style
simulation is possible (and, really, why wouldn't it be?), then the
number of Matrix-style simulated people should, in some sense,
strictly outweigh the number of non-simulated people, right?
Infinitely so, really.  And if you're in a simulation where the
ultimate mathematical rules are potentially malleable, and information
may potentially be added or subtracted arbitrarily from "your"
observed "universe", then in some sense the "laws of the universe" are
not fixed until you need them to be, consistent with "your"
observations.
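
Just to make that counting intuition concrete, here is a toy sketch in
Python (every number is a hypothetical placeholder, and the function is
only an illustration of the ratio argument, nothing more):

# Toy Bostrom-style counting sketch; all inputs are made-up numbers.
# If perfect simulation is possible and even a small fraction of
# "base-level" civilizations run many simulations, then simulated
# observers vastly outnumber non-simulated ones.

def simulated_fraction(base_civilizations, fraction_that_simulate,
                       sims_per_simulating_civ, observers_per_world):
    """Fraction of all observers who live inside a simulation."""
    real = base_civilizations * observers_per_world
    simulated = (base_civilizations * fraction_that_simulate
                 * sims_per_simulating_civ * observers_per_world)
    return simulated / (real + simulated)

# Hypothetical illustrative values only:
print(simulated_fraction(base_civilizations=1000,
                         fraction_that_simulate=0.01,
                         sims_per_simulating_civ=10000,
                         observers_per_world=10**9))
# -> ~0.99: under these assumptions, almost every observer is simulated.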

Take FTL, for instance.  We all "know" that "FTL" is impossible on
causality grounds.  However, I will argue that "FTL" travel and
communication are not that unreasonable after all, as long as your
"FTL" travel is not to your neighbors in flat 4-D Minkowski
space-time, but to neighbors embedded in a higher-dimensional (closed,
but countably infinite) topology.  Because if Max Tegmark's full Level
I multiverse exists, then you really should assume the topology is
like this: in the limit of infinite starting information, the universe
looks locally flat on every length scale, but, just as it seems to
close back on itself, it curves the other direction until it is
seemingly closed again (in a MUCH bigger universe), and then, if you
trace it further, it appears to start reversing and closing itself in
the opposite direction, ad infinitum (countably).  This is, in a
sense, isotropic on every length scale, but in fact contains all
possible patterns consistent with its starting conditions "somewhere".
In this topology, is it really unreasonable to suggest that
"wormholes" may provide what looks, effectively, from "your" point of
view, like FTL travel? But really, you don't have "local" FTL travel
at all; your FTL is really just sublight travel in a different
topological direction which transports you VERY far away from the
point of view of flat, unrolled, 4-D spacetime.  You could easily
imagine a way out of all the causality violations this way, because
the accessible points in flat 4-D spacetime would just so happen to be
chosen in a particular way that they are, in effect, different
"versions" which are classically causally disconnected.  (So, you can
travel to "Vega", for instance, but it's not your "local" Vega but a
Vega which diverged from it a long time ago, in the MWI-verse view.)

This seems, in some sense, "hard" to do...at least requiring the
universe to be "very" well arranged, macroscopically.  But wait: if
you were "really" in a Matrix-style universe, then it would be "easy"
for a purported simulator to do.  None of these topologically separate
Vegas will exist until the simulators flip a switch, at which point a
Vega-centered Hubble volume and an Earth-centered Hubble volume will
be copied and separated from each other.  Next, humanity gets a
message from space with instructions on building this "wormhole"
machine, and this wormhole allows instantaneous travel between the two
separately centered Hubble-volume universes (which in no way affects
the "locally" adjacent Earths and Vegas).  This is, in a sense, very
"easy" for a Matrix-style simulator to do--it would, in fact, be
"flipping" a bit, almost literally.

Now, what would happen, from your point of view, if this happened
tomorrow? Would you update your ontology immediately to assume a
Matrix-type simulation (which, in fact, should be "more" likely than a
non-Matrix-type one...)? I would guess that most scientists on this
planet would definitely say "no" and would quickly go about updating
their models to try to explain "how" FTL is happening.  This may, in
principle, be a VERY hard problem, because even though what is
happening is deterministic, it may obey no logical rules that "any"
conscious being decided upfront other than simply "what goes in here
goes out there, and vice versa".  What if, in the process of trying to
come up with an explanation, you come up with a god-awfully complex
mathematical structure (let's say, "string theory") which allows this
to occur, but makes it very unlikely? The masters of the universe,
seeing that you've come up with this neat solution, decide to help you
out by adjusting some bits here and there to make it look "exactly"
correct, which is great, so you suddenly decide that all these other
"possible universes" implied by this solution must necessarily exist
in the "physical existence == mathematical existence" MUH according to
your new "physical" TOE.  Your masters of the universe, though, have
limited computational space and time in their own frame, and quickly
decide that most of the other implied universes "couldn't contain
consciousness" anyway, so they decline to "physically" simulate those
realities and conspire to make it practically impossible for you to do
anything practical with this new "string theory" except explain what
you've already seen.  How unlikely is this scenario? How many times
does something like this happen in the MUH? Isn't the answer
necessarily "an infinite number of times", if you take the MUH
seriously?

So now, what if friendly aliens discovered some form of very large
hypercomputation and decided, for the sake of argument, to simulate
the full multiplicity of "string theory" universes that, in a sense,
"you" discovered? Or maybe they just decided to simulate the ones with
intelligent life.  (Or maybe not--how would anyone know the
difference?) Does the decision to simulate or not simulate the full
multiplicity of "string theory"-implied universes change their
relative measure in the multiverse? I think most on this forum would
intuitively say "no", but I would argue that maybe the answer is, in
fact, "yes", and that simulating particular universes would cause
their measure to increase in some manner *VERY* vaguely like a "high
measure" individual having children, transitively increasing the
measure of the existence of those "children".  Why the hell not? In
both cases, you are giving "birth" to consciousness within the
deterministic rules of the system you are in: in one case, you're
letting "biology" do the work, and in the other case you're doing it
via "computer science", but you're really effectively accomplishing
the same thing transitively through "math" either way.  Further,
what's, in principle, the difference between "talking" to children and
"inserting non-random information" into the simulation to "guide" your
new children? Nothing at all, really.  And what's the difference
between that and "deciding to try to live a longer, healthier life"?
Isn't that the same as increasing the measure of your "child" observer
moments? This is why I suspect that the measure function of the Level
IV multiverse may, in fact, be the ultimate democracy of the combined
subjective free will of all conscious beings, which may or may not
converge stably (it depends on the nature of aleph^infinity, in a
sense)...if it does, then that stable convergence really "would" be
God.
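
As a *very* loose analogy for that "democracy of measure" converging
(or not), here is a toy sketch in Python.  Each "observer" repeatedly
redistributes measure according to a fixed, made-up voting matrix;
nothing here is physical, it only shows what stable convergence of
such a process could look like:

# Toy analogy only: iterate a "democratic" redistribution of measure.
import numpy as np

rng = np.random.default_rng(0)
n = 5                                      # hypothetical number of observers
votes = rng.random((n, n))
votes /= votes.sum(axis=1, keepdims=True)  # row i: how observer i spreads measure

measure = np.full(n, 1.0 / n)              # start from uniform measure
for _ in range(100):
    measure = measure @ votes              # everyone "votes" with its current weight

print(measure)
# With a fixed, strictly positive voting matrix this settles to a
# stationary distribution; if the votes themselves keep changing, it
# need not converge at all, which is exactly the open question above.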

Finally, this model retrodicts what is probably the most stunning and
baffling problem in physics: the unreasonable effectiveness of math in
explaining the world.  It suggests that all conscious beings are
essentially probabilistic computers, trying to form their own "rules"
of reality as quickly as possible from what could, in principle, be
random data from any source.  Of course, the universe(s) you live in
are *always* mathematical, by the MUH, in some respect, but may
contain fewer or more "rules" and more or less "state" information.
Who is to say which is which? Fine-tuning is, equally, an argument for
intelligent design or for anthropic selection of universes from a
multiverse, and *in principle* you're never going to know the
difference.  (Just like, *in principle*, you can never *know* the
difference between two exactly mathematically equivalent formulations
of QM with different "ontologies".)  That is, you never know, until
you discover a rule which suggests that you "do", until, possibly, you
discover a rule that suggests that you "might not", and so on forever
(or until your consciousness regresses back to that of a rock).  In
effect, the laws of the universe will then be an infinite regression
of more and more complicated rules, all the way to aleph^infinity.
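
To illustrate that "probabilistic computer" picture (my toy framing,
with made-up candidate rules), here is a minimal sketch in Python: an
observer doing Bayesian updating over a handful of hypothesized
"rules" for a bit stream, where no finite amount of data ever drives a
single rule's posterior to exactly 1:

# Toy Bayesian rule-inference sketch; the rules and data are invented.
from itertools import cycle, islice

def likelihood(rule, data):
    """Probability the rule assigns to the observed bit string."""
    p = 1.0
    for bit, predicted in zip(data, rule(len(data))):
        p *= 0.99 if bit == predicted else 0.01  # rules are nearly deterministic
    return p

# Hypothetical candidate rules; each returns the first n bits it predicts.
rules = {
    "all zeros":       lambda n: [0] * n,
    "alternating":     lambda n: list(islice(cycle([0, 1]), n)),
    "zeros then ones": lambda n: [0] * min(n, 4) + [1] * max(0, n - 4),
}

data = [0, 1, 0, 1, 0, 1]                  # what the observer has seen so far
prior = {name: 1.0 / len(rules) for name in rules}
post = {name: prior[name] * likelihood(rule, data)
        for name, rule in rules.items()}
total = sum(post.values())
for name in post:
    print(name, post[name] / total)
# "alternating" dominates, but no posterior ever reaches exactly 1, so
# the observer can always keep entertaining ever more baroque rules.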

Anyway, so I argue that this picture is basically the one necessitated
by the MUH, and is, in fact, completely consistent with reality and
possibly able to postdict qualitative features of physical laws.
Basically, it seems that (if you assume unitary QM) all the laws of
the universe that we "ever" discover are always of the form "the
universe is completely deterministic, in principle, if you know
everything you need to know to predict it", where "everything you need
to know to predict it" just so happens to recede from you as fast as
you can know it.  This, in a nutshell, retrodicts the uncertainty
principle, relativity, and the separation of Hubble volumes in the
universe, which, I suggest, are all *very* general versions of the
same phenomenon.

Finally, I posit the following: a collection of observers
communicating by any mechanism is, in fact, conscious to some degree,
depending on their individual consciousnesses and degree of coupling.
This is very strongly suggested by the "China brain" thought
experiment (i.e., what if you took the population of a large country,
encoded neural firing patterns into their telephone messages, and got
them all to start talking in a very well-designed way?) and, in
reverse, by the split-brain phenomenon (i.e., individuals with split
corpora callosa appear to have two "separate" consciousnesses, in some
sense).  And, well, this seems like a necessary consequence of
computationalism too.  So I actually hold that the "relativity of
reality" principle holds for "all" conscious observers at "all"
levels.  Basically, on a "civilization" level, the relativity of
reality suggests that our "civilization" will (probabilistically)
discover physical laws consistent with our "civilization"'s existence
at some rate based on our "need" to discover them, but that those laws
may not, in fact, have been preordained to be in the structure that
they are.  So in a real sense, our civilization has some probabilistic
certainty that various physical models of the universe are true, and
the "civilization"-level consciousness will always evolve according to
them.  Scaling upward, if a galactic civilization forms (perhaps based
on the "FTL" strategy developed above), then that civilization-level
consciousness will always evolve according to the mathematical
requirements to support "its" subjective experiences and memories.

However, there's no reason why a member of a civilization's personal
consciousness could not diverge from that of its civilization, based
on "its own" subjective certainty of being in one place in the
multiverse versus another.  For instance, we can "in principle"
observe the Hubble volume, but, really, how much of it have "you" seen
personally, and how much have you learned transitively through other
individuals to whom you have some coupling (i.e., correlated
knowledge)? Furthermore, how many "civilization"-level consciousnesses
do you personally belong to? I assert that the answer is definitely an
infinite number of them, since you have very limited knowledge of what
other people are doing and thinking at any given time.  So, really,
each of those "civilization"-level consciousnesses can, and will,
diverge from you, and from each other, in the future as well.
Basically, this is the statement that the relativity of reality
implies an "uncertainty principle" of reality at the level of every
possible conscious entity.  Paradoxically, this implies that there
may, in fact, *be* true laws, because all "true" laws will be of a
particular general form given by these principles.  So, again,
indeterminism has converged to determinism, and this suggests that
"computation" (and all hyper-computation) is, in the broadest sense,
exactly this convergence process, and that God, if he exists, is the
"consciousness" of aleph^infinity, the ultimate mathematician, who
does NOT play dice with the MUH-verse...because, in fact, he
*cannot*.  So again, this suggests that the measure function over the
MUH-verse does "converge" to something, because God decided it would.

Thanks,
F.H.

On Jun 3, 12:10 am, Russell Standish <li...@hpcoders.com.au> wrote:
> Hi "Felix",
>
> You have obviously put a lot of thought into this. It'll take some
> time to fully digest what you're saying, but I'll post a few comments
> now to get the conversation going.
>
> On Thu, Jun 02, 2011 at 07:39:03PM -0700, Felix Hoenikker wrote:
>
> > So, here goes:
>
> > ****Computability implies conservation of algorithmic information****
>
> > This follows from the definition of algorithmic (i.e. Kolmogorov)
> > complexity.  Let us assume the universe is computed over time, so we
> > can say that the number of bits required to specify a state of the
> > universe has constant cardinality over time.  (This must be true even
> > if you allow hypercomputation over an infinite number of bits...)
>
> This is only true if your computable process is reversible. In general,
> computations actually lose information - for example the "and" operation
> takes 2 bits and produces 1 bit.
>
> > ****Many worlds is uncomputable****
>
> > All many worlds theories imply the following form: some predecessor
> > state S_0 can lead to the successor states T_1 through T_X (where X
>
> I think you mean S_1 ... S_X
>
> > could be any natural or transfinite number), with some probability
> > distribution that preserves the information content in S_0.  This
>
> I'm not sure that all many worlds theories do. But certainly ones
> satisfying unitary evolution do.
>
>
>
> > From an information theoretic point of view, this means the following:
> > every time a microscopic classical "bit" of information is apparently
> > added to the physical state of the universe, an opposite "bit" must in
> > fact be subtracted from the rest of the physical universe, essentially
> > collapsing two macroscopic states (other than the microscopic portion
> > changed) in some necessarily symmetric (and possibly instantaneous)
> > fashion.
>
> Careful - information is not additive in the sense that mass, energy,
> etc usually are.
>
> >  The most obvious way to do this is the following: pull
> > everything toward you by a tiny bit, until, on average, things are one
> > "bit" less differentiated.  Furthermore, the amount you "pull"
> > everything toward you should be in some (possibly non-linear) way
> > proportional to the number of unique states you could possibly have
> > moved toward, classically.  Let us, for the sake of argument, call
> > that number "mass-energy" and say that it is conserved.  This
> > retrodicts the universal theory of gravitation and mass-energy
> > equivalence, more or less.
>
> > Now, since gravity is proportional to mass-energy, then mass-energy
> > must be, in fact, some finite amount, in order to be compared between
> > gravitational bodies.  And in fact, if the universe is Turing
> > computable, then mass-energy should be discretely finite.  This
> > implies the quantization of energy, the original "quanta" providing
> > the impetus for QM (at least, as far as I understand my history...).
> > So essentially, this retrodicts a major part of QM without actually
> > assuming any QM upfront.
>
> If you have done this right, you may have (re-)discovered the entropic
> formulation of gravity, a la Verlinde.
>
>
>
> > Next, since the information in the universe is really constant, this
> > means that the black-hole information paradox is REALLY a problem.
> > And, if you think about it, it means that every "bit" of classical
> > information lost into a black hole must be counterbalanced by an
> > opposite "bit" radiating from the black hole in some symmetric
> > fashion.  This basically retrodicts Hawking radiation and the thermal
> > dissipation of black holes.
>
> Yes - such a theory would imply that no physical nonunitary process
> could exist, including black holes, so indeed the resolution of this
> "paradox" is important.
>
> > Next, consider cosmology.  First, if we assume Turing computability,
> > then the universe must in fact contain finite information and thus is
> > necessarily topologically closed on itself.  This agrees with current
> > models of the topology of space-time.  Furthermore, let us consider if
> > the universe, as presently thought, began in a small, energetic, but
> > basically isotropic region of space (which, necessarily, must also be
> > closed, if the previous argument is correct.) In such an initial
> > state, every single point of space-time will, in fact, be pulled in an
> > almost radially symmetric fashion outward in all directions (think of
> > all points being points on a sphere, being pulled outward to all other
> > points of the sphere).  This is happening for all points on the sphere
> > symmetrically, so the net result is expansion.
>
> > Furthermore, since the universe is deterministic, all the information
> > that is present in it must, in fact, be present in the beginning.  So
> > whatever initial pattern is present in the universe must evolve and
> > start to encode itself in the physical structure of universe.  Since
> > the universe is still closed, energetic, and very (classically) causally
> > connected, this means that the universe must inflate VERY VERY quickly
> > (i.e. much much faster than the speed of causality/light), in order to
> > continue embedding the same basic set of bits via some physical
> > encoding at larger and larger length scales, until the universe is big/
> > cool enough that most Hubble volumes are reasonably causally
> > disconnected (both classically, and through hidden variables.)  
>
> I don't follow this argument. Could you explain more please?
>
> > At
> > this point, the universe should cool, and, since the universe began
> > with a fixed amount of energy, the average energy density should
> > decrease in any volume over time.  This retrodicts much of our
> > current model of the Big Bang, as well as the second law of
> > thermodynamics and the "arrow of time" problem.
>
> > We can also conclude that, since inflation speed is related to the
> > energy density of the universe, and the size of the universe is
> > finite, then the universe should expand at an ever slowing rate until
> > possibly stopping.  This may or may not be true, given our current
> > cosmological picture (the main hole in our understanding lies in the
> > dark matter / dark energy).  But if you consider something else, you
> > get a very interesting postdiction: basically, the EPR paradox and
> > quantum entanglement implies that the number of distinct accessible
> > states for two (classically) closed systems together may be less than
> > the number each has individually, because the two could be entangled
> > (and, in the limit of complete entanglement, have only 1 accessible
> > successor state).  This implies that mass-energy is *not* classically
> > additive unless you take into account all quantum entanglement within a
> > system.  So, this suggests that, there may be "negative pressure
> > energy" present in the universe in proportion to the amount of net
> > entanglement present between states.  This, in a nutshell, is a
> > postdiction of "dark energy" (and possibly dark matter?).
>
> Interesting...
>
> > Topologically, the uneven distribution of "matter" and "dark
> > matter"/"dark energy" may in fact mean that the universe, given enough
> > "initial" information, could, while closed, be in fact locally MUCH
> > more topologically interesting than simply flat everywhere.  This
> > implies, in fact, that the universe (if it started with "a lot" of
> > information) could be unimaginably large, and that appearances of
> > closure on some local scales could be deceiving.  In fact, in the
> > limit of countably "infinite" bits of initial information (or,
> > equivalently, no information, for those of you who "get" that), the
> > universe would, in fact, contain "all" patterns consistent
> > with "all" initial states "somewhere" in the "physical" universe as
> > long as they were consistent with the mathematical physical laws
> > (whatever they are). For those of you that read Tegmark, that is
> > basically the Level I multiverse hypothesis, in which your
> > consciousness basically bounces between different physical
> > manifestations of you "somewhere else" in the metaverse.  Now, in
> > principle, you could be moving "through" time or "through" space, or
> > any linear combination thereof, and you shouldn't be able to know the
> > difference.  This retrodicts *relativity*, the constant *speed of
> > causality/light*, suggests that all space-time axes are symmetric, and
> > that, every space-time axis should be computable (using hidden
> > variables, in the same way) regardless of space-time rotation.
>
> Symmetry under Lorentz transformation is very natural if your starting
> point is (3+1) Minkowski space-time. The real question is why (3+1)
> Minkowski space-time? I'm not sure you have answered that question.
>
> > **** Going even further! ****
>
> > So really, in the limit, you really cannot be sure, in principle, if
> > *you* are computing the universe, or the *universe* is computing
> > *you*.  Even if the MWI-verse is deterministic, there is
> > indeterminacy from any individual point of view.  "Someone" is adding
> > new bits all the time; is it you, or someone else? If, in fact,
> > someone you trusted very much in your youth suggested that unicorns
> > existed to you, what effect would that have on your future life? In
> > fact, by symmetry, maybe they're the "exact" same thing, and in
> > principle you can never know the difference.  So, in fact, "realism"
> > and "idealism" may actually be exactly equivalent viewpoints.  Taken
> > to the absolute limit, this is, in fact, a somewhat reasonable
> > argument for solipsism...but *only* if you can come up with a VERY good
> > reason to think you are the "only" one conscious being: essentially,
> > creating a measure function in which the only "conscious" being is you
> > (and, possibly, all the past/future/indetermined/etc yous, as well).
> > And well, I'll tell you right now, *I* for one, do think I am
> > conscious.  So really, unless you disbelieve me and everyone else, you
> > can't justify solipsism...however, I actually argue from this that you
> > must take *all* living beings as, in some sense, *conscious* as well,
>
> How does this follow?
>
> > and if you extend the concept of consciousness down to the
>
> ...
