Re: TIME warp

2011-06-03 Thread Travis Garrett
Hi Roc,

   Sure.  Let me go ahead and start by assuming that we need to exist
in an environment that began in a state of low entropy (so that life
can evolve during the increasing entropy phase - I could also
examine this assumption, but that's another discussion...).  GR then
does some interesting things.  First, gravity in GR couples to energy
and momentum, and everything has energy and momentum, so, er, it
couples to everything (binding them all together like the One Ring, I
suppose).  It can thus essentially get everybody on the same page
when things are starting out - forcing everybody (all the particle
species) to pay attention and synchronize their behavior...

  GR can then do something quite cool.  If you feed the Einstein
equations a scalar field that happens to have much more potential
energy than kinetic energy, then the spacetime responds by growing
exponentially: the curvature is in the time direction, the spatial
directions are driven to be very flat (i.e. the angles inside a
triangle add up to 180 degrees), and the overall scale factor (i.e.
the overall size of the triangle) grows exponentially in time.
Thus, consider some complex universe
with a lot of entropy.  Entropy is an extensive quantity, and thus if
we consider some tiny volume element dV then there can't be much
stuff inside dV, and therefore there is very little entropy inside
dV.  If we can get a scalar field inside that dV to satisfy the
condition that its potential energy is much larger than its kinetic
energy, then blammo, we get inflation and that dV region can grow
larger than our Hubble volume in a tiny fraction of a second (and then
the scalar field can decay, ending inflation, to be followed by a
standard big bang...).
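
  As a rough numerical illustration of that potential-dominated growth,
here is a minimal sketch in Python - the quadratic potential, the tiny
field mass, and the initial field value are invented for illustration,
not taken from any particular inflationary model:

import math

# Toy integration of a scalar field driving exponential expansion, in reduced
# Planck units (8*pi*G = c = 1).  The potential, mass m, and initial field
# value below are illustrative choices only.
m    = 1e-6     # field mass
phi  = 16.0     # initial field value: potential energy dominates
dphi = 0.0      # initial field velocity (kinetic energy starts at zero)
ln_a = 0.0      # log of the scale factor a(t), with a(0) = 1

def V(p):  return 0.5 * m * m * p * p    # potential energy
def dV(p): return m * m * p              # dV/dphi

while True:
    KE = 0.5 * dphi * dphi
    PE = V(phi)
    H  = math.sqrt((KE + PE) / 3.0)      # Friedmann equation: 3 H^2 = rho
    if KE > PE or ln_a > 200.0:          # inflation ends once KE catches up
        break
    dt    = 1e-3 / H                     # step a small fraction of a Hubble time
    ddphi = -3.0 * H * dphi - dV(phi)    # scalar field equation of motion
    phi  += dphi * dt
    dphi += ddphi * dt
    ln_a += H * dt                       # d(ln a)/dt = H  ->  exponential growth

print("e-folds of expansion: %.1f  (the scale factor grew by a factor e^%.0f)"
      % (ln_a, ln_a))

The log of the scale factor just keeps climbing linearly in time - i.e.
exponential expansion - until the kinetic energy finally catches up with
the potential energy and the loop exits.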

  It is by no means an open and shut case - there are lots of details
to be filled in - but I think the overall picture makes a lot of
sense...

Sincerely,
   Travis

On Jun 2, 6:35 am, Roc roc...@gmail.com wrote:
 nice answer.
 could you elaborate on this, though?

 "Why then should spacetime be curved?  There are at least 2 good reasons:
 1) it allows for a big bang to happen, thus starting things off in a state
 of low entropy."

 thanks




Re: TIME warp

2011-05-30 Thread Travis Garrett
Hi Selva,

   A straightforward and dry answer would be: it is a consequence of
the Einstein field equations of General Relativity (GR), and one
could then go on to do a derivation which demonstrates the time
dilation near a large dense mass.  The more interesting question
(which I think is what you are really getting at) is: Well, ok, fine,
but why do we exist in a universe which is governed by the equations
of GR?  I think the answer to this intriguing question lies in a
combination of (at least) three parts.

  The first is that GR includes Special Relativity (SR) as the limit
in flat spacetime (and also in small, local regions in curved
spacetime).  SR essentially stems from having an absolute speed limit
(in our case the speed of light), and an absolute speed limit is
useful because it makes causality well defined (e.g. the toddler
throws their juice on the floor because they weren't allowed any more
cookies, the dog then licks up the juice, the dog proceeds to pee on
the rug, the dad drives out to the beer store, etc. etc...).  SR then
links together space and time in a way which is
quite non-intuitive to us (which isn't too surprising since the speed
of light is so much faster than anything we deal with at the everyday
level) - so that for instance a clock moving past at high velocity
runs more slowly.
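
  For instance, the slowdown of a moving clock is just the Lorentz factor
gamma = 1/sqrt(1 - v^2/c^2); a minimal sketch, with the speeds chosen as
arbitrary examples:

import math

c = 299_792_458.0   # speed of light, m/s

def gamma(v):
    # Lorentz factor: a clock moving at speed v ticks slower by this factor
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for v in (30.0, 7_800.0, 0.5 * c, 0.99 * c):   # a car, low-Earth orbit, relativistic speeds
    g = gamma(v)
    print("v = %13.1f m/s   gamma = %.12f   (1 s of proper time ~ %.12f s for us)"
          % (v, g, g))

At everyday speeds gamma is indistinguishable from 1, which is exactly why
the effect feels so non-intuitive.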

  As noted, SR is then essentially embedded within the curved spacetime
of GR.  Why then should spacetime be curved?  There are at least 2
good reasons: 1) it allows for a big bang to happen, thus starting
things off in a state of low entropy.  And also: 2) GR includes
Newtonian gravity as the standard limiting case, which allows for very
long-lived orbits (in 3 spatial dimensions) as needed by biological
evolution to generate complex organisms.  And, now that I think about
it, eternal inflation (essentially preceding the big bang) allows for
viable effective field theories to be found among a landscape of
vacua, so that in total the big bang produces viable (~ Standard
Model) environments in an initial state of low entropy.

  I'd thus roughly guess that time dilation near massive bodies is
essentially a side effect of the equations that produce these other
vital effects... (although conceivably there could also be some
reason for time dilation to be useful at some distant point in the
future...)
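
  For completeness, the time dilation itself is easy to put numbers on: a
static clock at radius r outside a mass M runs slow by the standard
Schwarzschild factor sqrt(1 - 2GM/(r c^2)).  A minimal sketch - the bodies
below are just familiar examples:

import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0   # speed of light, m/s

def clock_rate(M, r):
    # d(tau)/dt for a static clock at radius r outside a mass M (Schwarzschild)
    return math.sqrt(1.0 - 2.0 * G * M / (r * c * c))

examples = [
    ("surface of the Earth",      5.972e24, 6.371e6),
    ("surface of the Sun",        1.989e30, 6.957e8),
    ("surface of a neutron star", 2.8e30,   1.2e4),
]
for name, M, r in examples:
    f = clock_rate(M, r)
    print("%-26s rate = %.9f   (loses %.3f ms per day)" % (name, f, (1.0 - f) * 86400e3))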

   Sincerely,
Travis

On May 29, 2:39 pm, selva selvakr1...@gmail.com wrote:
 why is there time dilation near a heavy mass ??

 On May 17, 12:31 am, selva selvakr1...@gmail.com wrote:



  hi everyone,

  can someone explain to me what a time warp is? or why there is a time
  warp?
  well yes, it is due to the curvature of the space-time graph near a
  heavy mass.
  but how does it point to the center of the mass, how does it find
  it..
  and an explanation at the atomic level please..




Re: Observers Class Hypothesis

2011-02-18 Thread Travis Garrett
Hi Stephen,

   Sorry for the slow reply, I have been working on various things and
also catching up on the many conversations (and naming conventions) on
this board.  And thanks for your interest!  -- I think I have
discovered a giant low hanging fruit, which had previously gone
unnoticed since it is rather nonintuitive in nature (in addition to
being a subject that many smart people shy away from thinking
about...).

  Ok, let me address the Faddeev-Popov, gauge-invariant information
issue first.  I'll start with the final conclusion reduced to its most
basic essence, and give more concrete examples later.  First, note
that any one structure can have many different descriptions.  When
counting over different structures, it is thus crucial to choose only
one description per structure, as including redundant descriptions
will spoil the calculation.  In other words, one only counts over the
gauge-invariant information structures.

  A very important lemma to this is that all of the random noise is
also removed when the redundant descriptions are cut, as the random
noise doesn't encode any invariant structure.  Thus, for instance, I
agree with COMP, but I disagree that white rabbits are therefore a
problem...  The vast majority of the output of a universal dovetailer
(which I call A in my paper) is random noise which doesn't actually
describe anything (despite optical illusions to the contrary...) and
can therefore be zapped, leaving the union of nontrivial, invariant
structures in U (which I then argue is dominated by the observer class
O due to combinatorics).

  Phrasing it differently, if "anything goes" then there is actually
nothing there!  This is despite the optical illusion of there being
a vast number of different possibilities as afforded by the
nonrestrictive policy of "anything goes".  One thus needs
*constraints* so that only some things go and not others in order
to generate nontrivial structures -- such as the constraints
introduced by existing within our physical universe (i.e. a complex,
nontrivial mathematical structure).

   Ok, time for examples.  I'll start with one so simple that it is
borderline silly...  Say professor X is tracking a species of squirrel
which comes in two populations: a short haired and a long haired
version (let's say the long hair version stems from a dominant
allele).  In one square kilometer of forest he counts 202 short haired
squirrels and 277 long haired.  2 years later, after 2 colder than
average winters, he sends out his 3 grad students to count the
populations again.  They all write down that there are 184 short
haired squirrels, but student 1 writes that there are 298 long haired
squirrels, student 2 writes down that there are 296 group B squirrels,
and student 3 records the existence of 301 shaggy squirrels.  In a
hurry prof X gathers the data, and seeing in the notes that the group
B and shaggy populations both have long hair, he adds them all up for
a total of 895 long-haired squirrels vs 184 short-haired -- a huge
change instead of a mild selection!  He rushes off his groundbreaking
paper to Science...  Anyways, one could concoct a more realistic
example (perhaps using more abstract labeling), but the main point
holds -- it doesn't matter which description is used (long-haired,
group B, or shaggy) but it does need to be consistent to avoid over-
counting and getting the wrong answer...
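
  The fix is mechanical: map every synonymous description onto one
canonical label before counting.  Here is a minimal sketch of the
bookkeeping in Python, using the made-up tallies from the story above:

# "long haired", "group B" and "shaggy" all describe the same population.
canonical = {
    "short haired": "short haired",
    "long haired":  "long haired",
    "group B":      "long haired",
    "shaggy":       "long haired",
}

# One notebook per grad student: label used -> count reported.
notebooks = [
    {"short haired": 184, "long haired": 298},
    {"short haired": 184, "group B":     296},
    {"short haired": 184, "shaggy":      301},
]

long_reports = [n for nb in notebooks for label, n in nb.items()
                if canonical[label] == "long haired"]

# The professor's tally: every distinct description is treated as a separate
# group, so the redundant long-hair reports all get added together.
naive_long = sum(long_reports)                       # 298 + 296 + 301 = 895

# After a "gauge choice" (one description per population), the three reports
# are just repeated measurements of a single number.
fixed_long = sum(long_reports) / len(long_reports)   # ~298

print("redundant descriptions counted separately: 184 short vs %d long" % naive_long)
print("one description per structure:             184 short vs %.0f long" % fixed_long)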

   A silly example, but things become much, much more subtle when
considering different  mathematical structures which can have various
mathematical descriptions!   Consider the case in general relativity.
Here the structure of spacetime can take different forms - you can
have flat, empty Minkowski space, the dimpled spacetime resulting
from a single star, or a double helix of dimples due to 2 stars
orbiting each other, or a wormhole geometry for a black hole and so
forth...  Each one of these spacetime structures can then be described
by many different coordinate systems and associated metrics.  For
instance, flat space can be mapped out by a Cartesian coordinate
system, or cylindrical coordinates, or spherical coordinates, or an
infinite number of alternatives, most of which completely obscure the
simple Minkowski spacetime structure.  Likewise for the black hole -
one can use Schwarzschild, or Eddington-Finkelstein, or Kruskal-
Szekeres or an infinite number of other variations...  Thus, consider
a complex metric which describes some highly warped spacetime geometry
as expressed in an intricate coordinate system.  It is natural to
wonder what components of the metric are directly informing on the
exotic shape of the spacetime, and which parts are just artifacts of
the peculiar coordinate system that has been chosen.  The answer is:
it's not clear in general!  Relativists thus depend heavily on scalar
invariants (the Ricci scalar, the Kretschmann scalar, and so forth), which
are the same in all coordinate systems for a given geometry, and on
asymptotic quantities defined at 

Re: Observers Class Hypothesis

2011-02-18 Thread Travis Garrett
Hmmm, it garbled my ascii-art integral even though it looked fine in
the preview - let me try that again...

Z(a) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} e^{-a x^2} \, dx \, dy
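
  The point of the toy integral can also be checked numerically - assuming
(as in the usual Faddeev-Popov warm-up example) that the flat y direction is
playing the role of the redundant gauge direction.  Regulate the y
integration with a cutoff L: the divergence is just the gauge volume 2L,
which can be divided out to leave a finite, cutoff-independent answer.  A
minimal sketch, with a chosen arbitrarily:

import math

a = 2.0   # an arbitrary positive constant

def gaussian_x_integral(X):
    # integral of exp(-a x^2) for x in (-X, X), via the error function
    return math.sqrt(math.pi / a) * math.erf(math.sqrt(a) * X)

def Z_regulated(L, X=50.0):
    # the toy partition function with the flat y direction cut off at |y| < L
    return 2.0 * L * gaussian_x_integral(X)

for L in (10.0, 1e3, 1e6):
    Z = Z_regulated(L)
    print("cutoff L = %9.0f   Z = %14.2f   Z/(2L) = %.6f" % (L, Z, Z / (2 * L)))

print("gauge-fixed answer sqrt(pi/a) =", math.sqrt(math.pi / a))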




Re: Observers and Church/Turing

2011-01-31 Thread Travis Garrett
Hi Russell,

   No problem at all - I myself confess to having skimmed papers in
the past, perhaps even in the last 5 minutes...  That I took a bit of
umbrage just shows that I haven't yet transcended into a being of pure
thought :-)

  Let me address your 3rd paragraph first.  Consider the statements:
"3 is a prime number" and "4 is a prime number".  Both of these are
well formed (as opposed to, say, "=3==prime4!=!"), but the first is
true and the second is false.  To be slightly pedantic, I would count
over the first statement (that is, in the process of counting all
information structures) and not the second.  Note that the first
statement can be rephrased in an infinite number of different ways:
"2+1 is a prime number", "the square root of 9 is not composite", and
so forth.  However, we should not count over all of these
individually, but rather just the invariant information that is
preserved from translation to translation (this is the meta-lesson
borrowed from Faddeev and Popov).

  Consider then "4 is a prime number" - which we can perhaps rephrase
as "the square root of 16 is a prime number".  In this case we are now
carefully translating a false statement - but as it is false there is
no longer any invariant core that must be preserved - it would be fine
to also say "the square root of 17 is a prime number" or any other
random nonsense...  There is no there there, so to speak.  The same
goes for all of the completely random sequences - there seems to be a
huge number of them at first, but none of them actually encode
anything nontrivial.  When I choose to only count over the nontrivial
structures - that which is invariant upon translation - they all
disappear in a puff of smoke.  Or rather (being a bit more careful),
there really never was anything there in the first place: the
appearance that the random structures carry a lot of information (due
to their incompressibility) was always an illusion.
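
  One can make that bookkeeping explicit with a toy "translator": reduce
each phrasing to the thing it refers to, keep only the statements that are
actually true, and count each distinct fact once.  A minimal sketch, using
sympy purely as a convenient evaluator (the phrasings are the ones above):

from sympy import isprime, sympify

phrasings = {
    "3 is a prime number":            "3",
    "2+1 is a prime number":          "2 + 1",
    "the square root of 9 is prime":  "sqrt(9)",
    "27^(1/3) is not composite":      "27**Rational(1,3)",
    "4 is a prime number":            "4",
    "the square root of 16 is prime": "sqrt(16)",
}

canonical_facts = set()
for sentence, expr in phrasings.items():
    n = sympify(expr)          # reduce each description to the number it refers to
    if isprime(n):             # a true statement has an invariant core...
        canonical_facts.add((int(n), "is prime"))
    else:                      # ...a false one does not, so nothing survives
        print("no invariant content:", repr(sentence))

print("distinct facts counted:", canonical_facts)

All the true phrasings collapse to the single fact (3, "is prime"), and the
false ones leave nothing behind to count.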

   Thus, when I propose only counting over the gauge invariant stuff,
it is not that I am skipping over a bunch of other stuff because I
don't want to deal with it right now - I really am only counting over
the real stuff.  Let me give an example that I thought about including
in the paper.  Say ETs show up one day - the solution to the Fermi
paradox is just that they like to take long naps.  As a present they
offer us the choice of 2 USB drives.  USB A) contains a large number
of mathematical theorems - some that we have derived, others that we
haven't (perhaps including an amazing solution of the Collatz
conjecture).  For concreteness say that all the theorems are less than
N bits long as the USB drive has some finite capacity.  In contrast,
USB B) contains all possible statements that are N bits long or less.
One should therefore choose B) because it has everything on A), plus a
lot more stuff!  But of course by filling in the gaps we have not
only not added any more information, but have also erased the
information that was on A): the entire content of B) can be
compactified to the program: "print all sequences N bits long or
less".
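
  The "filling in the gaps erases the information" point can be made
concrete with a crude compression test - zlib standing in for "length of
the shortest description", and a list of small arithmetic facts standing in
for the theorems on drive A (the real drives being hypothetical):

import zlib

N = 12   # small cutoff so the exhaustive drive stays manageable

# Drive B: every binary string of length 1..N bits -- "fill in all the gaps".
drive_b = "\n".join(format(i, "0%db" % k) for k in range(1, N + 1) for i in range(2 ** k))

# Drive A: a specific sparse subset, standing in for a list of actual results
# (here just the binary expansions of the primes below 2**N).
def is_prime(p):
    return p > 1 and all(p % d for d in range(2, int(p ** 0.5) + 1))

drive_a = "\n".join(format(p, "b") for p in range(2, 2 ** N) if is_prime(p))

for name, text in (("A (specific facts)", drive_a), ("B (all strings)   ", drive_b)):
    raw    = len(text)
    packed = len(zlib.compress(text.encode(), 9))
    print("drive %s  raw %7d bytes  ->  compressed %6d bytes" % (name, raw, packed))

# ...and the whole of drive B is really equivalent to a program this short:
generator = "print every binary string of length <= %d" % N
print("one-line generator for B:", len(generator), "characters")

Drive B is much bigger in raw form, but it should collapse far more under
compression - and its true description is really just the one-line generator
at the end, which knows nothing about which statements are theorems.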

  The nontrivial information thus forms a sparse subset of all
sequences.  The sparseness can be seen through combinatorics.  Take
some very complex nontrivial structure which is composed of many
interacting parts: say, a long mathematical theorem, or a biological
creature like a frog.  Go in and corrupt one of the many interacting
parts - now the whole thing doesn't work.  Go and randomly change
something else instead, and again the structure no longer works: there
are many more ways to be wrong than to be right (with complete
randomness emerging in the limit of everything being scrambled).
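
  The "many more ways to be wrong than to be right" claim is easy to check
on a small scale.  A minimal sketch: take a tiny working Python function as
the stand-in for a structure with interacting parts, corrupt one character
at random, and see how often it still works (the target function and the
mutation scheme are arbitrary toy choices):

import random
random.seed(0)

source = (
    "def det2(a, b, c, d):\n"
    "    return a * d - b * c\n"
)

def still_works(src):
    env = {}
    try:
        exec(src, env)                          # must still parse and define det2
        return env["det2"](3, 5, 2, 7) == 11    # ...and still give the right answer
    except Exception:
        return False

alphabet = "abcdefghijklmnopqrstuvwxyz0123456789 *+-,():=\n"
trials, survivors = 2000, 0
for _ in range(trials):
    i = random.randrange(len(source))
    mutated = source[:i] + random.choice(alphabet) + source[i + 1:]
    survivors += still_works(mutated)

print("%d of %d one-character mutations still compute det2 correctly" % (survivors, trials))

The overwhelming majority of mutations break the structure outright, which
is the combinatorial sparseness being described here.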

  Note that it is a bit more subtle than this however - for instance
in the case of the frog, small changes in its genotype (and thus in
its phenotype) can slightly improve or decrease its fitness (depending
on the environment).  There is thus still a degree of randomness
remaining, as there must be for entities created through iterative
trial and error: the boundary between the sparse subset of nontrivial
structures and the rest of sequence space is therefore somewhat
blurry.  However, even if we add a very fat blurry buffer zone the
nontrivial structures still comprise a tiny subset of statement space
- although they dominate the counting after a gauge choice is made
(which removes the redundant and random).

  Does that make sense?



 Sorry about that, but it's a sad fact of life that if I don't get the
 general gist of a paper by the time the introduction is over, or get
 it wrong, I am unlikely to delve into the technical details unless a)
 I'm especially interested (as in I need the results for something I'm
 doing), or b) I'm reviewing the paper.

 I guess I don't see why there's a problem to solve in why we observe
 ourselves as being observers. It kind of follows as a truism. However,
 there is a problem of why we observe ourselves at all, as opposed to
 disorganised random information 

JOINING: Travis Garrett

2011-01-27 Thread Travis Garrett
Hi everybody,

   My name is Travis - I'm currently working as a postdoc at the
Perimeter Institute.  I got an email from Richard Gordon and Evgenii
Rudnyi pointing out that my recent paper: http://arxiv.org/abs/1101.2198
is being discussed here, so yeah, I'm happy to join the conversation.
I'll respond to some specific points in the discussion thread, but
what the heck, I'll give an overview of my idea here...

  The idea flows from the assumption that one can do an arbitrarily
good simulation of arbitrarily large regions of the universe inside a
sufficiently powerful computer -- more formally I assume the physical
version of the Church-Turing Thesis.  Everything that exists can then
be viewed as different types of information.  The Observer Class
Hypothesis then proposes that observers collectively form by far the
largest set of information, due to the combinatorics that arise from
absorbing information from many different sources (the observers
thereby roughly resemble the power set of the set of all
information).  One thus exists as an observer because it is by far the
most probable form of existence.
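
  The combinatorial intuition is simply that bundles dwarf individual
pieces: with n independent pieces of information there are only n of them,
but roughly 2^n ways of absorbing several of them into one composite
structure.  A trivial sketch of the counting (purely illustrative numbers):

# n "single" structures vs. 2**n - n - 1 ways to bundle two or more of them.
for n in (10, 30, 100):
    singles = n
    bundles = 2**n - n - 1
    print("n = %3d:  %3d individual pieces   vs   %.3e composite bundles" % (n, singles, bundles))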

  A couple of caveats are of crucial importance: when I say "information",
I mean non-trivial, gauge-invariant, "real" information, i.e.
information that has a large amount of effective complexity (Gell-Mann
and Lloyd) or logical depth (Bennett).  I focus on gauge-invariant
because I can then borrow the Faddeev-Popov procedure from quantum
field theory: in essence, one does not count over redundant
descriptions.  I also borrow the idea of regularization from quantum
field theory: when considering systems where infinities occur, it can
be useful to introduce a finite cutoff, and then study the limiting
behavior as the cutoff goes to infinity.  For instance, regulating the
integers shows that the density of primes goes like 1/log(N) - without
the cutoff one can only say that there are a countable number of
primes and composites.  These ideas are well known in theoretical
physics, but perhaps not outside, and I am also using them in a new
setting...
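
  The prime-density remark is easy to check with exactly this kind of
cutoff - put a ceiling N on the integers, measure the density of primes
below it, and compare to 1/log(N) as N grows.  A minimal sketch with a
simple sieve:

from math import log

def prime_count(N):
    # count primes below N with a simple sieve (fine for modest cutoffs)
    sieve = bytearray([1]) * N
    sieve[0] = sieve[1] = 0
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytes(len(range(p * p, N, p)))
    return sum(sieve)

for N in (10**3, 10**5, 10**7):
    print("N = %10d   primes/N = %.6f   1/log(N) = %.6f"
          % (N, prime_count(N) / N, 1 / log(N)))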

  Let me give a simple example of the use of gauge invariance from the
paper - consider the mathematical factoid: {3 is a prime number}.
This can be re-expressed in an infinite number of different ways: {2+1
is a prime number}, {27^(1/3) is not composite}, etc, etc...  Thus, at
first it seems that just this simple factoid will be counted an
infinite number of times!  But no, follow Faddeev and Popov, and pick
one particular representation (it's fine to use, say, {27^(1/3) is not
composite}, but later we will want to use the most compact
representations when we regularize), and just count this small piece
of information once, which removes all of the redundant descriptions.
To reiterate, we only count over the gauge-invariant information.

  Consider a more complex example, say the Einstein equations: G_ab =
T_ab.  Like "3 is a prime number", they can be expressed in an
infinite number of different ways, but let's pick the most compact
binary representation x_EE (an undecidable problem in general, but say
we get lucky).  Say the most compact encoding takes one million bits.
Basic Kolmogorov complexity would then say that x_EE contains the
same amount of information as a random sequence r_i one million bits
long - both are not compressible.  But x_EE contains a large amount of
nontrivial, gauge invariant information that would have to be
preserved in alternative representations, while the random sequence
has no internal patterns that must be preserved in different
representations: x_EE has a large amount of effective complexity, and
r_i has none.  Focusing on the gauge-invariant structures thus not
only removes the redundant descriptions, but also removes all of the
random noise, leaving only the real information behind.  For
instance, I posit that the uncomputable reals are nothing more than
infinitely long random sequences, which also get removed (along with
the finite random sequences) by the selection of a gauge.

In some computational representation, the real information structures
will thus form a sparse subset among all binary strings.  In the paper
I consider 3 cases - 1) there are a finite number of finitely complex
real information structures (which could be viewed as the null
assumption), 2) there are an infinite number of finitely complex
structures, and 3) there are irreducibly infinitely complex
information structures.  I focus on 1) and 2), with the assumption
that 3) isn't meaningful (i.e. that hypercomputers do not exist).
Even case 2) is extremely large, and it leads to the prediction of
universal observers: observers that continuously evolve in time, so
that they can eventually process arbitrarily complex forms of
information.  The striking fact that a technological singularity may
only be a few decades away lends support to this extravagant idea...

  Well anyways, that's probably enough for now.  I am interested in
seeing what people think of the 

Re: Observers and Church/Turing

2011-01-27 Thread Travis Garrett
I am somewhat flabbergasted by Russell's response.  He says that he is
"completely unimpressed" - uh, ok, fine - but then he completely
ignores entire sections of the paper where I precisely address the
issues he raises.  Going back to the abstract I say:

We then argue that the observers
collectively form the largest class of information
(where, in analogy with the Faddeev Popov procedure,
we only count over ``gauge invariant'' forms of
information).

The stipulation that one only counts over gauge-invariant (i.e.
nontrivial) information structures is absolutely critical!  This is a
well known idea in physics (which I am adapting to a new problem) but
it probably isn't well known in general.  One can see the core idea
embedded in the wikipedia article: 
http://en.wikipedia.org/wiki/Faddeev–Popov_ghost
- or in, say, "Quantum Field Theory in a Nutshell" by A. Zee, or
"Quantum Field Theory" by L. Ryder, which is where I first learned
about it.  In general a number of very interesting ideas have been
developed in quantum field theory (also including regularization and
renormalization) to deal with thorny issues involving infinity, and I
think they can be adapted to other problems.  In short, all of the
uncountable number of uncomputable reals are just infinitely long
random sequences, and they are all eliminated (along with the
redundant descriptions) by the selection of some gauge.  Note also in
the abstract that I am equating the observers with the *nontrivial*
power set of the set of all information - which is absolutely distinct
from the standard power set!  I am only counting over nontrivial forms
of information - i.e. that which, say, you'd be interested in paying
for (at least in pre-internet days!).

I am also perfectly well aware that observers are more than just
passive information absorbers.  As I say in the paper:

Observers are included among these complex structures,
and we will grant them the special name $y_j$
(although they are also
another variety of information structure $x_i$).
For instance a young child $y_{c1}$ may know about
$x_{3p}$ and $x_{gh}$:
$x_{3p}, x_{gh} \in y_{c1}$, while having not yet
learned about $x_{eul}$ or $x_{cm}$.
This is the key feature of the observers that we will utilize:
the $y_j$ are entities that can absorb various
$x_i$ from different regions of $\mathcal{U}$.

That is: "this is the key feature of the observers that we will
utilize".

And 4 paragraphs from the 3rd section:

 Consider then the proposed observer $y_{r1}$
 (i.e. a direct element of $\mathcal{P}(\mathcal{U})$):
  $y_{r1} = \{ x_{tang}, x_3, x_{nept} \}$,
 where $x_{tang}$ is a tangerine, $x_{3}$ is the
 number 3, and $x_{nept}$ is the planet Neptune.
 This random collection of various information structures
 from $\mathcal{U}$ is clearly
 not an observer, or any other form of nontrivial information:
 $y_{r1}$ is redundant to its three elements, and would thus
 be cut by the selection of a gauge.
 This is the sense in which most of the direct elements of the
 power set of $\mathcal{U}$ do not add any new real information.

 However, one could have a real observer $y_{\alpha}$
 whose main interests happened to include types of fruit, the
integers, and
 the planets of the solar system and so forth.
 The 3 elements of $y_{r1}$ exist as a simple list,
 with no overarching structure actually uniting them.
 A physically realized computer, with some finite
 amount of memory and a capacity to receive
 input, resolves this by providing a
 unified architecture for the nontrivial
 embedding of various forms of information.
 A physical computer thus provides the glue to combine, say,
 $x_{tang}$, $x_{3}$, and $x_{nept}$ and
 form a new nontrivial structure in $\mathcal{U}$.

It is possible to also consider the existence
 of ``randomly organized computers''
 which indiscriminately embed arbitrary
 elements of $\mathcal{U}$ -- these
 too would conform to no real $x_i$.
 This leads to the specification of ``physically realized
 computers'', as the restrictions that
 arise from existing within a mathematical
 structure like $\Psi$ results in
 computers that process information in
 nontrivial ways.
 Furthermore, a structure like $\Psi$ allows for
 these physical computers to spontaneously
 arise as it evolves forward from an initial state of
 low entropy.
 Namely it is possible for replicating
 molecular structures to emerge, and
 Darwinian evolution can then drive them
 to higher levels of complexity as they
 compete for limited resources.
 A fundamental type of evolutionary
 adaptation then becomes possible:
 the ability to extract pertinent information
 from one's environment so that it can
 be acted upon to one's advantage.
 The requirement that one extracts useful
 information
 is thus one of the key constraints that
 has guided the evolution of the
 sensory organs and nervous systems
 of the species in the animal kingdom.

 This evolutionary process has reached its current
 apogee with our species,
 as our brains are 

Re: JOINING: Travis Garrett

2011-01-27 Thread Travis Garrett
Hi Russell,

   You'll see that I immediately followed my joining post with an ever-
so-slightly irate response to your comment ;-)  I need to go have
dinner with my family, so let me quickly say that taking existing as
an observer for granted is a very easy thing to do, but it well may
need an explanation :-)

   Sincerely,
  Travis

On Jan 27, 5:18 pm, Russell Standish li...@hpcoders.com.au wrote:
 Hi Travis,

 Welcome to the list. Its great to see some new blood. I did get around
 to reading your paper a few days ago, and had a couple of comments
 which I posted.

 1) Your usage of the term "Physical Church-Turing Thesis". What I thought
 you were assuming seemed more accurately captured by Bruno's COMP
 assumption, or Tegmark's Mathematical Universe Hypothesis. For
 instance, Wikipedia, following Piccinini, states the PCTT as:

 "According to Physical CTT, all physically computable functions are
 Turing-computable."

 I guess one can argue about what precisely constitutes a physically
 computable function, but one implication of the PCTT would be that
 real random number generators are impossible, and that beta decay is
 not really random, but pseudo random. This is contradicted by COMP.

 But, this is only a debate about nomenclature, not about the worth of
 your paper.

 2) There can only be a countable number of observers, but an
 uncountable number of bits of information, so I was suspicious of your
 Observer Class Hypothesis. However, it looks like I missed your use of
 the Faddeev-Popov procedure, which eliminates most of those uncountable
 bits of information, so the ball is definitely back in my court!

 BTW - I don't think the problem you are trying to solve with the OCH
 is a problem that needs solving - the reference class of Anthropic
 Reasoning must always be a subset of the set of observers (or observer
 moments depending on how strong your self-sampling assumption is).

 But it would nevertheless be intriguing if the OCH were true, and I
 could see it having other applications. Thanks for the notion.

 On Thu, Jan 27, 2011 at 01:10:50PM -0800, Travis Garrett wrote:
  Hi everybody,

     My name is Travis - I'm currently working as a postdoc at the
  Perimeter Institute.  I got an email from Richard Gordon and Evgenii
  Rudnyi pointing out that my recent paper:http://arxiv.org/abs/1101.2198
  is being discussed here, so yeah, I'm happy to join the conversation.
  I'll respond to some specific points in the discussion thread, but
  what the heck, I'll give an overview of my idea here...

 --

 ----------------------------------------------------------------------------
 Prof Russell Standish                  Phone 0425 253119 (mobile)
 Mathematics                              
 UNSW SYDNEY 2052                         hpco...@hpcoders.com.au
 Australia                                http://www.hpcoders.com.au
 ----------------------------------------------------------------------------
