On 10/24/2014 9:04 AM, Bruno Marchal wrote:
Hi Jesse,
Sorry for replying late.
On 27 Jul 2014, at 18:27, Jesse Mazer wrote:
On Sun, Jul 27, 2014 at 10:46 AM, David Nyman <[email protected]> wrote:
On 23 July 2014 17:49, Jesse Mazer <[email protected]> wrote:
> So, why not adopt a Tegmark-like view where a "physical universe" is
> *nothing more* than a particular abstract computation, and that can give
> us a well-defined notion of which sub-computations are performed within
> it by various "physical" processes?
Essentially because of the argument of Step 7 of the UDA. The
assumption here is that consciousness (i.e. the logic of the
first-person) is derived from computation. It then follows that we
cannot ignore the possibility in principle of "building a computer"
that not only implements a UD but also runs it for long enough to
generate its infinite trace, UD* (incorporating, by the way, a
"fractal-like" infinity of such dovetailing). If denying such a
possibility on grounds of a lack of "primitively-physical" resources
is evasive, to deny it on grounds of a lack of "mathematical"
resources is surely merely incoherent.
But if we do not deny it, but rather embrace it, we can see that such
a structure would inevitably dominate any "observational reality".
I don't see why that should follow at all. As long as there are multiple infinite
computations running rather than the UDA being the only one, there's no particular
reason why the UDA computation should "dominate" in terms of its contribution to measure.
The UD computation is the one appearing at step 7 of the "UD Argument" (UDA). The UD
is the complete set of possible executions of a universal machine, including extreme
redundancy among those computations, and it does not depend on the choice of
universal base. To fix things, I choose either Robinson Arithmetic
(predicate logic plus seven axioms, using the non-logical symbols s, 0, +, *) or the
combinators, using only the parentheses, =, S, and K.
"Formal provability" in those theories is Sigma_1 complete. By the intensional Church
thesis, which follows from the usual extensional one, that is equivalent to a
universal dovetailing; those theories instantiate it in arithmetic.
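The dovetailing itself is a simple interleaving scheme, and can be sketched in a few lines of toy code (the names `dovetail` and `toy_program` and the counting "programs" are illustrative stand-ins, not part of the UD formalism):

```python
from itertools import count

def toy_program(i):
    # Program i just counts 0, 1, 2, ... forever -- a stand-in for an
    # arbitrary, possibly non-halting computation.
    for t in count():
        yield t

def dovetail(make_program, phases):
    """Interleave countably many programs: in phase N, give one step to
    each of the first N programs, so every program eventually receives
    unboundedly many steps even though none needs to halt."""
    machines = {}  # index -> running generator
    trace = []
    for phase in range(1, phases + 1):
        for i in range(phase):
            if i not in machines:
                machines[i] = make_program(i)
            trace.append((i, next(machines[i])))
    return trace

print(dovetail(toy_program, 4))
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0), (0, 3), (1, 2), (2, 1), (3, 0)]
```

Each pair is (program index, step result); no program is ever starved, which is the essential property of a universal dovetailer.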
See my most recent post to Bruno at
http://www.mail-archive.com/[email protected]/msg55617.html ,
particularly this paragraph where I suggest how one could define
"physical measure":
'For example, say after N steps of the universal computation U, we can count the number
of times that some computation A has been executed within it, and the number of times
that another computation B has been executed within it,
What would "some computation" be? To compare with experience we need the relative measure
of some event conditioned on our experiencing it. As I understand it, in the UD this will
be represented by infinitely many threads of computation. Since the computation never
goes to completion, the relative number of these threads compared to those of some other
event's threads doesn't exist.
and take the ratio of these two numbers; if this ratio approaches some limit in the
limit as N goes to infinity, then this limit ratio could be defined as the ratio of the
"physical" measure of A and B within the universe/multiverse. So if A and B are two
possible future observer-moments for my current observer moment (say, an
observer-moment finding itself in Washington and another finding itself in Moscow in
your thought-experiment), then the ratio of their physical measure could be the
subjective probability that "I" will experience either one as my next observer-moment.'
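Jesse's counting definition can be made concrete with a toy sketch. Here the "universal computation" is just a stand-in stream of labelled events (whether the real UD's stream yields a convergent ratio is exactly what is under dispute below):

```python
def running_ratio(stream, N):
    # Count occurrences of computations 'A' and 'B' among the first N
    # events of the stream, and return count_A / count_B -- the proposed
    # ratio of "physical" measures.
    a = b = 0
    for _, event in zip(range(N), stream()):
        if event == 'A':
            a += 1
        elif event == 'B':
            b += 1
    return a / b if b else float('inf')

def stand_in_stream():
    # Stand-in "universal computation": it executes A twice for every
    # execution of B, so the measure ratio should converge to 2.
    while True:
        yield 'A'
        yield 'A'
        yield 'B'

for N in (30, 300, 3000):
    print(N, running_ratio(stand_in_stream, N))  # stays at 2.0
```

For this well-behaved stream the ratio is stable at every N; the definition only yields a measure when such a limit actually exists.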
Not exactly, because this could lead to an equivocation between "[]p" (the machine
utters/believes p) and "[]p & p" (the machine utters/believes-truly p, which is not
really definable by the machine). The latter obeys a different logic, and structures
the observer moments differently from the "third person" description of the machine.
The frequentist approach works locally in the normal worlds,
How do you get frequentist measures from the UD?
but to have the normal worlds in the first place, we must take into account that the
logic(s) are constrained by the cognitive abilities of the machine, notably in
perception and observation.
Would you say that even if we define "physical measure" this way, and even if multiple
infinite computations are running alongside the UDA computation, for some reason the
UDA computation will dominate? Consider the situation I imagined in this paragraph of
the same post:
The UD, with the Church thesis, defines the battlefield where the winners will win. The Z
logics suggest that the winner is the quantum universal machine. If QM is
correct, the Z logic might explain why it has to be like that, and, through the difference
between Z and Z*, it might explain the qualia.
So are you simply assuming there is a "winner", i.e. that the relevant statistics exist in
the limit? Even if they do, it's not clear that they exist for our experience, which is
not "in the limit". It seems that you are assuming something like "the probability of a
number being even is 1/2."
'Also note that even if we have two different candidates for the "physical universe"
computation, call them U and U',
Oops! I interpreted your "U" above as the running of the UD, i.e. the sigma_1 truth.
and even if both contain a never-ceasing universal dovetailer computation within them,
it seems to me this is not enough to guarantee that U and U' will both assign the same
physical measure to any two computations A and B, if we use a procedure like the one I
outlined to define "physical measure". Even though U and U' will both compute all the
same programs eventually since they both contain a universal dovetailer, some programs
might be computed more frequently (more copies have been run after N steps) in U than
in U'. For example, U might be a physical simulation of a universe containing one
physical computer that's computing the universal dovetailer along with 1000 physical
computers computing copies of my brain experiencing being in Washington, while U' might
be a physical simulation of a universe containing one physical computer that's
computing the universal dovetailer along with 1000 physical computers computing copies
of my brain experiencing being in Moscow.'
To be more specific, imagine that these 1000 other simulated computers are running
*infinite* iterations of the "me in Washington" simulation--for example, first it could
spawn a copy of me arriving in Washington at 3 PM and simulate my 1st hour experienced
in Washington from 3 PM to 4 PM, then it could spawn a newly-minted copy #2 of my brain
and newly-minted copy of Washington at 3 PM and re-simulate my brain's 1st hour in
Washington from 3 PM to 4 PM, then it could go back to copy #1 and simulate its second
hour in Washington, then it could simulate copy #1's third hour, then it could simulate
copy #2's second hour, then it could spawn a new copy #3 and simulate its first hour,
and keep going this way following the same ordering that Cantor used to order the
rational numbers as shown at
http://www.homeschoolmath.net/teaching/rationals-countable.gif (with the numerator as
the copy # and the denominator as the hour #). Since such a computer is constantly
simulating copies of me in Washington, while the UDA is only very occasionally
This seems to rely on time as measured by the number of steps of the UD, which is quite
different from experienced time. Does the measure of one's experience depend on the
number of times the UD computes that experience? I expect that is infinitely many times,
so there is the problem of taking the relative measure of infinities, which have
one-to-one maps onto their proper subsets.
simulating copies of me in Washington or Moscow between all the other Turing machine
programs it must simulate, then if I want to compare the measure of "me experiencing
Washington" vs. "me experiencing Moscow", the contribution of the computers dedicated
solely to the "me experiencing Washington" should dwarf the contribution of the UDA. At
least that should be true if "physical measure" is defined the way I suggested above,
where you compare how many copies of each program have been run so far after N steps of
the universal program U.
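The Cantor-style scheduling of (copy #, hour #) pairs described above is just an anti-diagonal walk over a grid, and can be sketched as follows (this version omits the zig-zag reversal of the classic picture, which doesn't change which pairs get covered):

```python
def cantor_order(n_pairs):
    # Walk the (copy, hour) grid along anti-diagonals -- the traversal
    # Cantor used to enumerate the positive rationals, with the copy #
    # as numerator and the hour # as denominator.
    pairs = []
    d = 2  # copy + hour is constant along each anti-diagonal
    while len(pairs) < n_pairs:
        for copy in range(1, d):
            pairs.append((copy, d - copy))
            if len(pairs) == n_pairs:
                break
        d += 1
    return pairs

print(cantor_order(10))
# [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1), (1, 4), (2, 3), (3, 2), (4, 1)]
```

Every (copy, hour) pair appears after finitely many steps, so the simulated computer is indeed "constantly" producing Washington experiences while still reaching every copy's every hour.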
I don't think this comparison has a limit; i.e. it varies arbitrarily between 0 and 1 as
the UD runs. It will have some definite value after a finite number of steps N, but those
steps don't map into a contiguous interval of the calculated (physical) time.
Brent
Every universal dovetailing will bring about such U and U', but also their counterparts.
Such extravagance exists in every UD; you can't beat them algorithmically.
But those sigma_1 sentences and proofs are structured by the machines' need for
self-referential ability and for exploring their possible universal neighbors.
Bruno
Jesse
--
You received this message because you are subscribed to the Google Groups "Everything
List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to
[email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
http://iridia.ulb.ac.be/~marchal/