On Sun, Jul 27, 2014 at 10:46 AM, David Nyman <[email protected]> wrote:

> On 23 July 2014 17:49, Jesse Mazer <[email protected]> wrote:
>
> > So, why not adopt a Tegmark-like view where a "physical universe" is
> > *nothing more* than a particular abstract computation, and that can give
> us
> > a well-defined notion of which sub-computations are performed within it
> by
> > various "physical" processes?
>
> Essentially because of the argument of Step 7 of the UDA. The
> assumption here is that consciousness (i.e. the logic of the
> first-person) is derived from computation. It then follows that we
> cannot ignore the possibility in principle of "building a computer"
> that not only implements a UD but also runs it for long enough to
> generate its infinite trace, UD* (incorporating, by the way, a
> "fractal-like" infinity of such dovetailing). If denying such a
> possibility on grounds of a lack of "primitively-physical" resources
> is evasive, to deny it on grounds of a lack of "mathematical"
> resources is surely merely incoherent.
>
> But if we do not deny it, but rather embrace it, we can see that such
> a structure would inevitably dominate any "observational reality".



I don't see why that should follow at all. As long as there are multiple
infinite computations running, rather than the UD being the only one,
there's no particular reason why the UD computation should "dominate" in
terms of its contribution to measure. See my most recent post to Bruno at
http://www.mail-archive.com/[email protected]/msg55617.html
, particularly this paragraph where I suggest one possible way to define
"physical measure":


'For example, say after N steps of the universal computation U, we can
count the number of times that some computation A has been executed within
it, and the number of times that another computation B has been executed
within it, and take the ratio of these two numbers; if this ratio
approaches some limit as N goes to infinity, then this limit ratio could be
defined as the ratio of the "physical" measure of A and B within the
universe/multiverse. So if A and B are two possible future observer-moments
for my current observer-moment (say, an observer-moment finding itself in
Washington and another finding itself in Moscow in your thought-experiment),
then the ratio of their physical measure could be the subjective
probability that "I" will experience one or the other as my next
observer-moment.'
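
As a toy illustration of this counting procedure (everything here is hypothetical; a real universal computation would need some instrumentation for detecting when a sub-computation has completed), the ratio estimator could be sketched like this:

```python
# Toy sketch of the "physical measure" ratio described above. The function
# step_events stands in for instrumentation of the universal computation U:
# it reports whether a run of computation A or B completed at a given step.
# Purely illustrative; nothing here is a real dovetailer.

def measure_ratio(step_events, n_steps):
    """Count completed runs of A and B over the first n_steps of U and
    return count(A)/count(B); the limit as n_steps grows without bound
    would be the 'physical measure' ratio, if that limit exists."""
    count_a = count_b = 0
    for step in range(n_steps):
        event = step_events(step)  # returns 'A', 'B', or None
        if event == 'A':
            count_a += 1
        elif event == 'B':
            count_b += 1
    return count_a / count_b if count_b else float('inf')

# Hypothetical U in which runs of A complete more often than runs of B:
def toy_events(step):
    if step % 3 == 0:
        return 'A'
    if step % 5 == 1:
        return 'B'
    return None

print(measure_ratio(toy_events, 300))   # 2.5 for this toy schedule
```

The point of the sketch is just that the ratio is taken at a finite cutoff N and then the cutoff is pushed to infinity; whether the limit exists at all depends on the scheduling of U, which is exactly the issue raised below.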


Would you say that even if we define "physical measure" this way, and even
if multiple infinite computations are running alongside the UD computation,
the UD computation will for some reason still dominate? Consider the
situation I imagined in this paragraph of the same post:


'Also note that even if we have two different candidates for the "physical
universe" computation, call them U and U', and even if both contain a
never-ceasing universal dovetailer computation within them, it seems to me
this is not enough to guarantee that U and U' will both assign the same
physical measure to any two computations A and B, if we use a procedure
like the one I outlined to define "physical measure". Even though U and U'
will both compute all the same programs eventually since they both contain
a universal dovetailer, some programs might be computed more frequently
(more copies have been run after N steps) in U than in U'. For example, U
might be a physical simulation of a universe containing one physical
computer that's computing the universal dovetailer along with 1000 physical
computers computing copies of my brain experiencing being in Washington,
while U' might be a physical simulation of a universe containing one
physical computer that's computing the universal dovetailer along with 1000
physical computers computing copies of my brain experiencing being in
Moscow.'


To be more specific, imagine that these 1000 other simulated computers are
running *infinite* iterations of the "me in Washington" simulation. For
example, such a computer could first spawn a copy of me arriving in
Washington at 3 PM and simulate my first hour there, from 3 PM to 4 PM;
then spawn a newly-minted copy #2 of my brain and a newly-minted copy of
Washington at 3 PM and re-simulate that brain's first hour in Washington
from 3 PM to 4 PM; then go back to copy #1 and simulate its second hour in
Washington; then simulate copy #1's third hour; then copy #2's second hour;
then spawn a new copy #3 and simulate its first hour; and keep going this
way, following the same zigzag ordering Cantor used to enumerate the
rational numbers, as shown at
http://www.homeschoolmath.net/teaching/rationals-countable.gif (with the
numerator as the copy # and the denominator as the hour #). Such a computer
is constantly simulating copies of me in Washington, while the UD only very
occasionally simulates copies of me in Washington or Moscow in between all
the other Turing machine programs it must simulate. So if I want to compare
the measure of "me experiencing Washington" vs. "me experiencing Moscow",
the contribution of the computers dedicated solely to "me experiencing
Washington" should dwarf the contribution of the UD. At least, that should
be true if "physical measure" is defined the way I suggested above, by
comparing how many copies of each program have been run after N steps of
the universal program U.
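
For concreteness, the zigzag schedule described above (copy # as numerator, hour # as denominator) can be sketched as follows; the pair ordering is the only substantive content, and the function name is just illustrative:

```python
def dovetail_schedule(n_jobs):
    """Return the first n_jobs (copy #, hour #) simulation jobs in the
    zigzag diagonal order described above, so that every copy eventually
    has every one of its hours simulated."""
    jobs = []
    d = 1  # pairs on diagonal d satisfy copy + hour == d + 1
    while len(jobs) < n_jobs:
        copies = range(1, d + 1)
        if d % 2 == 0:      # zigzag: even diagonals run copy # downward
            copies = reversed(copies)
        for copy in copies:
            if len(jobs) == n_jobs:
                break
            jobs.append((copy, d + 1 - copy))
        d += 1
    return jobs

print(dovetail_schedule(6))  # [(1, 1), (2, 1), (1, 2), (1, 3), (2, 2), (3, 1)]
```

The first six jobs match the order in the paragraph above: copy #1's first hour, copy #2's first hour, copy #1's second hour, copy #1's third hour, copy #2's second hour, then copy #3's first hour.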

Jesse
