On 9/27/2011 10:40 PM, Jason Resch wrote:
On Tue, Sep 27, 2011 at 11:52 PM, meekerdb <[email protected]> wrote:
On 9/27/2011 9:13 PM, Jason Resch wrote:
I don't think that. I just noted it's logically possible, contrary to
assertions that our universe must be duplicated infinitely many times.
If our universe is not duplicated a huge number of times, then quantum computers would not work. They rely on huge numbers of universes that differ from ours only in a few entangled particles. Even ordinary interference patterns are explained by the existence of a huge number of very similar universes.
Or by Feynman paths that zigzag in spacetime. Don't become too enamored of an interpretation.
If you assume there is a single photon interfering with itself, how is it that this one
particle can evaluate a problem whose computational complexity would exceed that of any
conventional computer using all the matter in the universe?
Has such a problem been solved? Anyway, the answer is that the one particle cycles back through time, so it appears to us as many particles.
However, according to Vilenkin, Greene, and Tegmark, a generic prediction of the theory of inflation is that there is an *infinite* number of Hubble volumes (what you are calling universes). Let's call the hypothesis that all quantum-physical possibilities are realized infinitely many times "the hypothesis of Cosmic Repetition". Brian Greene argues for this hypothesis quite persuasively. He says, "In an infinitely big universe, there are infinitely many patches [i.e., Hubble volumes]; so, with only finitely many different particle arrangements, the arrangements of particles within patches must be duplicated an infinite number of times." (The Hidden Reality, pg. 33)
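A rough way to see the counting behind that claim (the numbers here are only order-of-magnitude estimates, not figures from the book): the holographic bound limits a Hubble volume to something like S ~ 10^122 bits of information, so a patch can be in at most roughly N ~ 2^(10^122) distinguishable particle arrangements. With infinitely many patches but only finitely many possible arrangements, the pigeonhole principle forces at least one arrangement to occur in infinitely many patches.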
It's plausible - but not logically required. Suppose all the infinite universes are numbered 1, 2, ... Number 1 is ours. Number 2 is something different. Numbers 3, 4, ... are exact copies of number 2. So there are only two arrangements of particles, in spite of there being infinitely many universes.
Not logically required, but I would say it is not consistent with our current theories and observations.
As for the probability distribution of matter and/or outcomes, I'll
let Tegmark do the explaining:
"Observers living in parallel universes at Level I observe the exact
same laws of physics as we do, but with different initial conditions
than those in our Hubble volume.
This is questionable. Most theories of the universe starting from a quantum fluctuation or tunneling from a prior universe assume that the universe must start very small - no more than a few Planck volumes.
The generalized theory of inflation is eternal inflation. It leads to an
exponentially growing volume which expands forever.
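In the simplest (de Sitter) idealization, which is the standard textbook picture rather than anything asserted above, the scale factor during inflation grows as a(t) proportional to e^(Ht), so the volume grows as e^(3Ht); given enough e-foldings, even a region that starts at the Planck scale is stretched to an arbitrarily large size.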
This limits the amount of information that can possibly be provided as initial conditions. So where does all the information come from?
I haven't heard the theory that there is an upper bound on the information content for this universe set by the big bang.
In one Planck volume there is only room for one bit. That's the
holographic principle.
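(More precisely, the usual statement of the bound is in terms of the bounding area rather than the volume: S <= A/(4 l_P^2) in natural units, which for a Planck-sized region comes out to about one bit.)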
Yet our universe appears to take more than 1 bit to describe, and it seems to have a
possibly infinite volume.
That's why I provided the (possible) explanation below.
As to where information comes from, if all possibilities exist, the total
information content may be zero, and the appearance of a large amount of
information is a local illusion.
QM allows negative information (hidden correlations), so one possibility is that the net information is zero or very small and the apparent information is created by the existence of the Hubble horizon.
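A standard illustration of how the bookkeeping can come out negative: for two qubits in a maximally entangled Bell state, the joint state is pure, so S(AB) = 0, while each qubit on its own looks maximally mixed, S(A) = S(B) = 1 bit. The conditional entropy S(A|B) = S(AB) - S(B) = -1 bit, so the parts appear to carry information even though the whole carries none.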
The currently favored theory is that the initial conditions (the densities and motions of different types of matter early on) were created by quantum fluctuations during the inflation epoch (see section 3). This quantum mechanism generates initial conditions that are for all practical purposes random, producing density fluctuations described by what mathematicians call an ergodic random field. Ergodic means that if you imagine generating an ensemble of universes, each with its own random initial conditions, then the probability distribution of outcomes in a given volume is identical to the distribution that you get by sampling different volumes in a single universe.
That's not what ergodic means. In the theory of stochastic processes it means that ensemble statistics are the same as temporal statistics. In the eternal expansion theory it is not assumed that the physics is the same in each bubble universe.
This one "bubble" is infinitely big according to eternal inflation.
I don't think it is necessarily spatially infinite. But in any case the theory of eternal inflation is that new bubble universes are eternally created. Some are finite and collapse in a big crunch. Others, like ours, expand indefinitely.
It is hypothesized that the spontaneous symmetry breaking that results in different coupling constants for the weak, strong, EM, and gravity forces is random. That's how it provides an anthropic explanation for "fine-tuning" - we're in the one where the random symmetry breaking was favorable to life.
This is one hypothesis to explain fine-tuning; I am not sure how well it is supported.
In other words, it means that everything that could
in principle have happened here did in fact happen somewhere else.
Inflation in fact generates all possible initial conditions
But it's not initial conditions. It's random symmetry breaking.
with non-zero probability, the most likely ones being almost uniform with fluctuations at the 10^-5 level that are amplified by gravitational clustering to form galaxies, stars, planets and other structures. This means both that pretty much all imaginable matter configurations occur in some Hubble volume far away, and also that we should expect our own Hubble volume to be a fairly typical one — at least typical among those that contain observers.
But this sort of undercuts the need for the anthropic explanation. If our universe is "typical" (i.e. probable) then there's no need to invoke infinitely many others to avoid the "fine-tuning" problem. You could just say it's the more probable one and so it's the one that happened.
Brent
"If an explanation could easily explain anything in the given field, then it
actually explains nothing."
Which explanation is this referring to?
Scientific explanations in general. The first chapter, "The Reach of Explanations," is about the difference between good explanations and bad explanations. He argues that it is not a question of testability, as sometimes claimed, but of scope and specificity.
Brent
Jason