Matt: I understand your point #2, but it is a grand sweep without any detail. To
give you an example of what I have in mind, consider the photon double-slit
experiment again. You have a photon emitter operating at such low intensity that
photons come out one at a time. There is an average rate for the emitted photons,
but the moment of each emission is random - this is where the non-deterministic
feature of nature enters. At this point, why doesn't the emitted photon just go
through one slit or the other? Instead, what we find is that the photon goes
through a specific slit if someone is watching, but if no one is watching it
somehow goes through both slits and interferes with itself, producing the
observed interference pattern. Now my question: can it be demonstrated that this
scenario of two alternative behaviours minimizes computational resources (or
whatever Occam's razor requires) and so is a necessary feature of a simulation?
We already have a probabilistic event at the very start, when the photon is
emitted; how does the other behaviour fit into the simulation scheme? Wouldn't it
be computationally simpler to just follow the photon like a billiard ball instead
of maintaining two variations in behaviour, with observers thrown in?
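
To make the comparison concrete, here is a minimal sketch in Python (with made-up
wavelength, slit spacing, and screen geometry) of the two bookkeeping rules being
contrasted: adding complex amplitudes from both slits when no which-path record
exists, versus adding probabilities when the slit is known. It is only an
illustration of the two rules, not a claim about what a simulation would actually
compute.

import math, cmath

wavelength  = 500e-9     # illustrative numbers throughout: 500 nm light
slit_sep    = 250e-6     # slit separation, m
screen_dist = 1.0        # slits-to-screen distance, m

def amp(slit_y, x):
    # complex amplitude at screen position x from a unit source at height slit_y
    path = math.sqrt(screen_dist**2 + (x - slit_y)**2)
    return cmath.exp(2j * math.pi * path / wavelength)

for i in range(11):
    x = (i - 5) * 1e-3                       # screen positions -5 mm .. +5 mm
    a1 = amp(+slit_sep / 2, x)
    a2 = amp(-slit_sep / 2, x)
    both   = abs(a1 + a2) ** 2               # no which-path record: add amplitudes, then square
    marked = abs(a1) ** 2 + abs(a2) ** 2     # slit is known: add probabilities
    print(f"x = {x * 1e3:+.0f} mm   interference: {both:.2f}   which-path: {marked:.2f}")

At this toy level neither rule is obviously cheaper than the other, which is part
of why the question seems worth asking.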
   
  Eric B. Ramsay

Matt Mahoney <[EMAIL PROTECTED]> wrote:
  
--- "Eric B. Ramsay" wrote:

> Matt: I would prefer to analyse something simple such as the double slit
> experiment. If you do an experiment to see which slit the photon goes
> through you get an accumulation of photons in equal numbers behind each
> slit. If you don't make an effort to see which slit the photons go through,
> you get an interference pattern. What, if this is all a simulation, is
> requiring the simulation to behave this way? I assume that this is a forced
> result based on the assumption of using only as much computation as needed
> to perform the simulation. A radioactive atom decays when it decays. All we
> can say with any certainty is what its probability distribution in time is
> for decay. Why is that? Why would a simulation not maintain local causality
> (EPR paradox)? I think it would be far more interesting (and meaningful) if
> the simulation hypothesis could provide a basis for these observations.

This is what I addressed in point #2. A finite state simulation forces any
agents in the simulation to use a probabilistic model of their universe,
because an exact model would require as much memory as is used for the
simulation itself. Quantum mechanics is an example of a probabilistic model. 
The fact that the laws of physics prevent you from making certain predictions
is what suggests the universe is simulated, not the details of what you can't
predict.
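
A toy version of that counting argument, in Python (my own construction, purely
illustrative): a deterministic 10-bit "universe" whose embedded agent can store
only 4 of those bits. Every memory state is compatible with 2^6 distinct universe
states, so the agent's best forecast of the next observation is a probability,
even though the underlying dynamics are deterministic.

import random
from collections import defaultdict

N_BITS = 10   # toy universe: 2**10 possible states
M_BITS = 4    # the embedded agent can store only 4 of those bits

random.seed(0)
states = list(range(2 ** N_BITS))
successor = states[:]
random.shuffle(successor)          # a fixed, deterministic (but arbitrary) dynamics

# The agent remembers only the low M_BITS of the current state and wants to
# predict one observable bit (the lowest bit) of the *next* state.
outcomes = defaultdict(list)
mask = (1 << M_BITS) - 1
for s in states:
    outcomes[s & mask].append(successor[s] & 1)

for view in sorted(outcomes):
    bits = outcomes[view]
    print(f"memory state {view:2d}: P(next bit = 1) = {sum(bits) / len(bits):.2f}"
          f"  ({len(bits)} universe states look identical to the agent)")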

If the universe were simulated by a computer with infinite memory (e.g. real
valued registers), then the laws of physics might have been deterministic,
allowing us to build infinite memory computers that could make exact
predictions even if the universe had infinite size, mass, age, and resolution.
However, this does not appear to be the case.

A finite simulation does not require any particular laws of physics. For all
you know, tomorrow gravity may cease to exist, or time will suddenly have 17
dimensions. However, the AIXI model makes this unlikely because unexpected
changes like this would require a simulation with greater algorithmic
complexity.
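
To give a rough numeric sense of that (the bit counts here are invented for
illustration, not measured): under a 2^-K prior of the kind AIXI uses, every
extra bit needed to specify a one-off exception to the laws halves its prior
weight, so a "gravity switches off tomorrow" universe is penalized exponentially.

# Invented bit counts, purely for illustration.
base_bits      = 400   # rough description length of a universe with stable laws
exception_bits = 60    # extra bits to pin down "gravity switches off at step t"
                       # (just naming one moment out of ~2**60 costs ~60 bits)

weight_stable    = 2.0 ** -base_bits
weight_exception = 2.0 ** -(base_bits + exception_bits)

# Relative prior weight of the exceptional universe: 2**-60, about 1e-18.
print(weight_exception / weight_stable)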

This is not a proof that the universe is a simulation, nor are any of my other
points. I don't believe that a proof is possible.

> 
> Eric B. Ramsay
> Matt Mahoney wrote:
> --- "Eric B. Ramsay" wrote:
> 
> > Apart from all this philosophy (non-ending as it seems), Table 1 of the
> > paper referred to at the start of this thread gives several consequences of
> > a simulation that offer to explain what's behind current physical
> > observations, such as the upper speed limit of light, relativistic and
> > quantum effects, etc. Without worrying about whether we are a simulation of
> > a simulation of a simulation etc., it would be interesting to work out all
> > the qualitative/quantitative (?) implications of the idea and see if
> > observations strongly or weakly support it. If the only thing we can do
> > with the idea is discuss philosophy then the idea is useless.
> 
> There is plenty of physical evidence that the universe is simulated by a
> finite state machine or a Turing machine.
> 
> 1. The universe has finite size, mass, age, and resolution. Taken together,
> the universe has a finite state, expressible in approximately c^5 T^2 / (hG)
> = 1.55 x 10^122 bits ~ 2^406 bits, where h is Planck's constant, G is the
> gravitational constant, c is the speed of light, and T is the age of the
> universe (a numeric check of this figure appears further below). By
> coincidence, if the universe is divided into 2^406 regions, each is the size
> of a proton or neutron. This is a coincidence because h, G, c, and T don't
> depend on the properties of any particles.
> 
> 2. A finite state machine cannot model itself deterministically. This is
> consistent with the probabilistic nature of quantum mechanics.
> 
> 3. The observation that Occam's Razor works in practice is consistent with the
> AIXI model of a computable environment.
> 
> 4. The complexity of the universe is consistent with the simplest possible
> algorithm: enumerate all Turing machines until a universe supporting
> intelligent life is found. The fastest way to execute this algorithm is to
> run each of the 2^n universes with complexity n bits for 2^n steps (a
> schematic of this schedule appears further below). The
> complexity of the free parameters in many string theories plus general
> relativity is a few hundred bits (maybe 406).
> 
> 
> -- Matt Mahoney, [EMAIL PROTECTED]
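
Regarding point 1 above, the figure can be checked in a few lines of Python
(rounded constants; T taken as about 13.8 billion years, so the exact result
shifts a little with that choice and with h versus h-bar):

import math

h = 6.626e-34          # Planck's constant, J s
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
T = 13.8e9 * 3.156e7   # ~13.8 billion years, in seconds

states = c ** 5 * T ** 2 / (h * G)   # dimensionless; (T / Planck time)^2 up to a factor of 2*pi
print(f"c^5 T^2 / (hG) ~ {states:.2e}")                  # about 1.0e122 with these values
print(f"log2 of that   ~ {math.log2(states):.0f} bits")  # about 405

With these values it lands near 10^122, roughly 2^405, in the same ballpark as
the 1.55 x 10^122 ~ 2^406 quoted; the difference is just the choice of T and
constants.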
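
And for point 4, the schedule described ("run each of the 2^n universes with
complexity n bits for 2^n steps") has the following skeleton. run_universe() is
a placeholder I have invented so the loop has something to call; a real version
would be an interpreter for whatever program encoding is used.

from itertools import product

def run_universe(program_bits, steps):
    # Placeholder: "run" the universe encoded by program_bits for `steps` steps
    # and report whether intelligent life appeared. Always False here.
    return False

def enumerate_universes(max_n):
    # Phase n: every one of the 2**n programs of length n gets 2**n steps.
    for n in range(1, max_n + 1):
        for bits in product("01", repeat=n):
            program = "".join(bits)
            if run_universe(program, 2 ** n):
                return program
    return None

print(enumerate_universes(10))

Total work through phase n is sum over k of 2^k * 2^k ~ 4^n steps, dominated by
the last phase.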


-- Matt Mahoney, [EMAIL PROTECTED]


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=85351041-e7d6ad
