Sorry for the delayed comment, Jesse. (Also, I sent this yesterday, but it seems not to have gone through.)


On 25 Jul 2014, at 23:22, Jesse Mazer wrote:




On Thu, Jul 24, 2014 at 2:44 PM, Bruno Marchal <[email protected]> wrote:
Hi Jesse, David,

On 23 Jul 2014, at 18:49, Jesse Mazer wrote:

Had some trouble following your post (in part because I don't know all the acronyms), but are you talking about the basic problem of deciding which computations a particular physical process can be said to "implement" or "instantiate"? If so, see my post at http://www.mail-archive.com/everything-list%40googlegroups.com/msg43484.html and Bruno's response at http://www.mail-archive.com/everything-list%40googlegroups.com/msg43489.html . I think from Bruno's response that he agrees that there is a well-defined way of deciding whether one abstract computation implements/instantiates some other abstract computation "within itself" (like if I have computation A which is a detailed molecular-level simulation of a physical computer, and the simulated computer is running another simpler computation B, then the abstract computation A can be said to implement computation B within itself).

So, why not adopt a Tegmark-like view where a "physical universe" is *nothing more* than a particular abstract computation, and that can give us a well-defined notion of which sub-computations are performed within it by various "physical" processes? This approach could also perhaps allow us to define the "number of separate instances" of a given sub-computation within the larger computation that we call "the universe", giving some type of measure on different subcomputations within that computational universe (useful for things like Bostrom's self-sampling assumption, which in this case would say we should reason as if we were randomly chosen from all self-aware subcomputations). So for example, if many copies of a given AI program are run in parallel in a computational universe, that AI could have a larger measure within that computational universe than an AI program that is only ever run once within it...of course, this does not rule out the possibility that there are other "parallel" computational universes where the second program is run more often, as would be implied by Tegmark's thesis and also by Bruno's UDA. But there is still at least the theoretical possibility that the multiverse is false and that only one unique computational universe exists, so the idea that all possible universes/computations are equally real cannot be said to follow logically from COMP.



To have the computations, all you need is a sigma_1 complete theory and/or a Turing universal machine, or system, or language.

Not sure I understand what you mean by "have the computations",


We need to start from assuming something (if we want to do fundamental science).

By "to have the computation" I meant, to have the theory in which we assume enough so that we can define and prove the existence of the computations. Elementary arithmetic is enough, but there are other theories, like the combinators, or the abstract billiard ball, or quantum topology, etc.






and I didn't understand the mathematical arguments you made following that. My point above is basically that even if one accepts steps 1-6 of your argument, which together imply that I should identify my self/experience with a particular computation (or perhaps a finite sequence of computational steps rather than an infinite computation, but I'll just call such a finite sequence a 'computation' to save time), it still seems to me that there is an open possibility that the *measure* on different computations is defined by how often each one is "physically" instantiated.

With steps 1-6, yes. But less so with steps 7 and 8 (which still follow from the CTM).

With step 7, yes again, assuming a "small" primitive physical universe (one without a big portion of UD*). That already looks like avoiding a question (the measure problem). If I dig into your theory, I will eventually have to ask what you mean by "primitive physical universe", as it looks like "and now there is a miracle".

And step 8 just makes it worse. It shows that the miracle asks for an infinite amount of magic, so you need an especially weak Occam's razor to expect this from reality.






Are you talking about deriving some unique measure on all computations when you say "to have the computations, all you need...", or are you not talking about the issue of measure at all?

I was talking about what we have to assume in order to define the computations and reason about them, and to study the expectations of a "simple" person (like the one described by the 8 arithmetical points of view on arithmetic).






The idea I'm suggesting for a "physically" based measure involves identifying the physical universe/multiverse with a particular unique computation--basically, consider a computation corresponding to something like a Planck-level simulation of our universe, or an exact simulation of the evolution of the universal wavefunction, then say that this computation *is* what we mean by the "physical universe/multiverse".


That is not excluded, but if that is the case, it would mean that there is a computation which wins the competition among all computations, and that would need to be derived from arithmetic alone, through the logic of self-reference, to clearly distinguish the observable from the conceivable, the believable, the knowable, the true, the false, etc.

It would mean that one computation wins over all computations: that there exists a number p such that phi_p emulates the physical universe, along with its infinitely many twins, the q such that phi_q = phi_p.

I am quite open on this. Probably this plus the random oracle, which you get for free in arithmetic through the self-multiplication.
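The "infinitely many twins" q with phi_q = phi_p mentioned above is the padding lemma of computability theory. A minimal sketch in Python, with programs represented as source strings (this representation, and every name below, are my own illustrative choices, not anything from the post):

```python
# A toy illustration of the padding lemma: from any program (here, a
# Python source string) we can mechanically generate infinitely many
# syntactically distinct programs computing the same function.
# All names here are illustrative, not part of any standard API.

def pad(source: str, n: int) -> str:
    """Return a program with the same behavior as `source`, prefixed
    with n distinct no-op assignments (each n gives a different text)."""
    padding = "\n".join(f"_unused_{i} = {i}" for i in range(n))
    return padding + "\n" + source

base = "def f(x):\n    return x * x\n"

# Two "twins" of the base program: different texts, same function.
ns1, ns2 = {}, {}
exec(pad(base, 3), ns1)
exec(pad(base, 7), ns2)

assert pad(base, 3) != pad(base, 7)          # distinct programs
assert ns1["f"](12) == ns2["f"](12) == 144   # same computed function
```

Since n can be any natural number, this already gives infinitely many distinct indices for the same function.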



Then, if you agree there is some well-defined notion of whether a given computation "contains within it" some other computation (and that we can count the number of times some sub-computation has run within the larger computation after N steps of the larger computation), the measure on all computations could be determined by how frequently they each appear in the unique computation that we identify with the physical universe/multiverse.

We can always compare both (insofar as you can identify that physical universe). As long as that fits, we get indirect evidence for the parts which are not studied by the physicists: the non-propositional knowledge of the knower, the non-justifiable bets; well, a sort of toy theology of the ideally correct person emulated by a body/machine.





For example, say after N steps of the universal computation U, we can count the number of times that some computation A has been executed within it, and the number of times that another computation B has been executed within it, and take the ratio of these two numbers; if this ratio approaches some limit as N goes to infinity, then this limit ratio could be defined as the ratio of the "physical" measure of A and B within the universe/multiverse. So if A and B are two possible future observer-moments for my current observer moment (say, an observer-moment finding itself in Washington and another finding itself in Moscow in your thought-experiment), then the ratio of their physical measure could be the subjective probability that "I" will experience either one as my next observer-moment.
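A toy sketch of this frequency-ratio measure, assuming a hypothetical universal computation given as a stream of step tags (the schedule and all names below are illustrative inventions, not a real dovetailer):

```python
# A toy sketch of the proposed "physical measure": treat the universal
# computation U as a stream of steps, each tagged with the
# sub-computation it advances, and estimate the measure of A relative
# to B by the ratio of their counted runs after N steps.
# The schedule below is purely illustrative.

from itertools import cycle, islice

def relative_measure(schedule, a, b, n_steps):
    """Ratio (#runs of a) / (#runs of b) within the first n_steps."""
    counts = {a: 0, b: 0}
    for tag in islice(schedule, n_steps):
        if tag in counts:
            counts[tag] += 1
    return counts[a] / counts[b]

# Suppose U's schedule runs sub-computation "A" twice for every run of
# "B" (plus other activity "C" that we ignore):
U = cycle(["A", "A", "B", "C"])

r = relative_measure(U, "A", "B", 4000)
print(r)  # 2.0 -- and the ratio stays 2.0 in the limit of large N
```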


OK if U is any UD, as this is given for free once we believe in anything complex enough to behave like a computer.

Then you come and tell me that the UD does not count; only a special U (executed by the UD too, of course) wins the game.

You can explain this to me in two ways:

1) by showing that such a U indeed fits the 1p and 3p requirements, but this will lead you to derive U from some "integral" on the UD.

2) by saying that the U is implemented by a special-purpose God, called "primitive matter", so we don't need to show that he wins the UD game.






Also note that even if we have two different candidates for the "physical universe" computation, call them U and U', and even if both contain a never-ceasing universal dovetailer computation within them, it seems to me this is not enough to guarantee that U and U' will both assign the same physical measure to any two computations A and B, if we use a procedure like the one I outlined to define "physical measure". Even though U and U' will both compute all the same programs eventually since they both contain a universal dovetailer, some programs might be computed more frequently (more copies have been run after N steps) in U than in U'.


I think you are wrong on this. That is a point seen by Schmidhuber, and by theoretical computer scientists in general: you can't change what a UD does if you want it to remain a UD. The measure will not depend on any choice you make in implementing the UD. You can bias it on a very large initial fragment, but the 1p measure is unchanged in the limit.

So, again, you can save physicalism by making a NON-computable transform of the UD, but that's the type of move the MGA shows to be "adding complexity" to bias the solution of a problem.
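The claim that biasing a large initial fragment leaves the limiting 1p ratio unchanged can be illustrated (not proved) with a toy schedule; everything below is a hypothetical sketch:

```python
# Toy illustration (not a proof) of the claim that biasing a large
# initial fragment of a dovetailer's schedule does not change the
# limiting frequency ratio. The schedules are purely hypothetical.

from itertools import chain, cycle, islice

def ratio(schedule, n):
    """(#occurrences of "A") / (#occurrences of "B") in the first n steps."""
    counts = {"A": 0, "B": 0}
    for tag in islice(schedule, n):
        counts[tag] += 1
    return counts["A"] / counts["B"]

fair   = cycle(["A", "B"])                       # a plain alternation
biased = chain(["A"] * 1000, cycle(["A", "B"]))  # same, after 1000 extra A's

r_fair   = ratio(fair, 2_000_000)
r_biased = ratio(biased, 2_000_000)

print(r_fair)    # 1.0
print(r_biased)  # ~1.001: the finite bias washes out as n grows
```

At small n the biased schedule gives a very different ratio, but the fixed-size bias contributes nothing in the limit.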






For example, U might be a physical simulation of a universe containing one physical computer that's computing the universal dovetailer along with 1000 physical computers computing copies of my brain experiencing being in Washington, while U' might be a physical simulation of a universe containing one physical computer that's computing the universal dovetailer along with 1000 physical computers computing copies of my brain experiencing being in Moscow.

That will change nothing, by the compiler theorem. Technically, this is related to the closure of the partial computable functions under Cantor-Kleene diagonalization, and to why Gödel called this a miracle, as it protects the machine's soul from all formal reductionism. It is related to the conceptual argument in favor of the Church thesis.




I don't necessarily advocate deriving measure from some unique "physical universe" computation U myself, but do you see anything basically incoherent about the idea?

I think it is coherent with steps 1-6, but already much less so with step 7, due to that generality (summed up in the Church-Turing thesis). And it is hardly rational at all, given even a very light Occam's razor, after step 8.

I believe that there is a physical winner, and that it is a multi-worlds one, but to solve the mind-body problem, and to keep all views in the process (including the non-justifiable ones), we have to extract that physical winner from the study of the universal mind and its relation with consciousness.




If not, that would suggest that accepting steps 1-6 does not necessarily require accepting later steps, where (unless I misunderstand your argument) you argue that the measure on all computations can be derived from pure mathematics.

No, I argue that if we assume the computationalist theory of mind (CTM), then the observable (and lawful/invariant) part which results from the measure on all computations has to be derivable from "pure mathematics", specifically from the mathematical theology of the universal (Löbian) numbers.


Bruno





Jesse




It would take many pages to describe elementary arithmetic formally (including the formal predicate calculus), which is indeed already such a sigma_1 complete system/machine/theory, but a simpler one can be given in fewer lines, like the Putnam-Davis-Robinson-Matiyasevich-Jones universal diophantine polynomial. Or the combinators, whose sigma_1 complete theory is given by the axioms:

x = x
x = y & y = z -> x = z

xy = xz / y = z
yx = zx / y = z

Kxy = x
Sxyz = xz(yz)

(I recall that a combinator is either K or S, or a combination (X Y) of combinators X and Y; so a combinator is, for example,

((K(KS))S), which we abbreviate K(KS)S, as we can suppress all the left parentheses for ease of readability.)

You can compute ((KK)K), or better KKK. By the axiom Kxy = x you get KKK = K. But K(KK) does not match any axiom, and thus stays calm: K(KK) gives K(KK) as its stopping result.

For the theory of everything, we need no more. Oh well, to avoid having just one combinator, you can add the axiom
 ~(K = S),
but for the ontology it is not really needed. It is a Turing universal language, and all universal interpreters can be coded as combinators. In particular, you can easily find combinators which mirror faithfully the sigma_1 complete part of arithmetic, just as you can find combinators which solve the PDRMJ universal diophantine polynomial equation.


In that theory, we can define those very theories formally, and they are all instances of universal combinators, or universal numbers. A computation is what a number does relative to a universal number. But by the FPI, a physical computation will be the one done, strictly speaking, by infinities of universal (and non-universal too) numbers/combinators.

The absolute (relative-measure) laws do not depend on choosing arithmetic or combinators; that is, the laws of physics will not depend on the choice of the universal numbers. But the "winner", or "winners", which support(s) and stabilize(s) your current state of mind, is "unknown" today. Except that when you interview the löbian machines, which are those that know that they are sigma_1 complete (that they know that they are Turing universal), you get that the winner has some quantum flavor, as do the many dreams in the combinatoric reality (the FPI on the sigma_1 complete reality).

David, I think that with the combinators I might more quickly explain different senses of "going to a higher-level description", making it possible to better delineate where I agree with Brent and where I agree with you.

I mean I will think about it.

Liz? Is it not time to study at least one conceptually simple computer programming language?

The point is that we need a universal system, and then we can define computations, the UD, and all finite portions of UD*, guided through the FPI on the limit "winning measure(s)".

Or we use the phi_i?

All this already gives a very rich 3p reality, and does not address the 1p, except through that FPI. But the löbian numbers have the introspective ability to distinguish the representational (belief) from the non-representational (knowledge, hope).

The löbian numbers can already understand that they are "lost" in the arithmetical reality, and confronted with the arithmetical truth, most of which is non-justifiable. Yet those non-justifiable truths obey a mathematics of their own, constraining different modes of the apprehension of truth (arithmetical truth, combinatorial truth, C++-al truth, whatever).

David, the modes (like []p, []p & p, etc.) emerge from the fact that universal numbers develop complex 3p objects, as when proving theorems, or building planes, or getting self-referential (even just in the 3p way). We have to be platonist about the 3p constructions (the programs, the data, the activity of a machine relative to another machine) to be able to be platonist about the 1p realities given by []p & p, which are much more evanescent, as they create an umbilical cord with truth, and pay the big price, as they lose their name/identity/description.

Here your argument against Brent is at risk of being extended into an argument against CTM. You gently save me by observing that I do justify the existence of the modes, but that works only because ultimately we say "yes" to the doctor, and bet that the doctor gets the right "[]p" concerning us. We "know" already that the *truth* of p will take care of itself (by the minimal realism required to have a notion of numbers or combinators to begin with).

It is 31° Celsius here, without much air, and my head is boiling hot; apologies for the typos.

Bruno













Jesse


On Wed, Jul 23, 2014 at 9:38 AM, David Nyman <[email protected]> wrote:

Recent discussions, mainly with Brent and Bruno, have really got me thinking again about the issues raised by CTM and the UDA. I'll try to summarise some of my thoughts in this post. The first thing to say, I think, is that the assumption of CTM is equivalent to accepting the existence of an effectively self-contained "computationally-observable regime" (COR). By its very definition, the COR sets the limits of possible physical observation or empirical discovery. In principle, any physical phenomenon, whatever its scale, could be brought under observation if only we had a big enough collider. But by the same token, no matter how big the collider, no such observable could escape its confinement within the limits of the COR.

If we accept that the existence of a COR is entailed by assuming CTM, we come naturally to the question of what might be "doing the computation". In terms of the UDA, by the time we get to Step 7, it should be obvious that, in principle, we could build a computer from "primitive" physical components that would effectively implement the infinite trace of the UD (UD*). Furthermore, if such a computer were indeed to be implemented, the COR would necessarily exist in its entirety somewhere within the infinite redundancy of that trace. This realisation alone might well persuade us, on grounds of explanatory parsimony and the avoidance of somewhat strained or ad hoc reservations, to accept FAPP that UD*->COR. Should we be so persuaded, any putative underlying "physical computer" would have already become effectively redundant to further explanation.

Notwithstanding this, we may still feel the need to retain reservations of practicability. Perhaps the physical universe isn't actually sufficiently "robust" to permit the building of such a computer? Or, even if that were granted, could it not just be the case that no such computer actually exists? Reservations of this sort can indeed be articulated, although worryingly, they may still seem to leave us rather vulnerable to being "captured" by Bostrom-type simulation scenarios. The bottom line however seems to be this: Under CTM, can we justify the "singularisation", or confinement, of a computation, and hence whatever is deemed to be observable in terms of that computation, to some particular physical computer (e.g. a brain)? More generally, can we limit all possibility of observation to a particular class of computations wholly delimited by the activity of a corresponding sub-class of physical objects (uniquely characterisable as "physical computers") within the limits of a definitively "physical" universe?

This is where Step 8 comes in. Step 7 seeks to destabilise our naive intuition about an exclusive 1-to-1 relationship between computations and particular physical objects by pointing to the consequences of a physical implementation of UD*. Step 8 however is a change of tactic. First, it postulates a scenario where physical tokens have been contrived to represent a "conscious computation" (either in terms of a brain or in terms of a substitute "computer"). Then it sets out to show how all putatively "computational" relations between such tokens could in principle be disrupted without change in the net physical action or environmental relations of the system that embodies them. Step 8 differs from Step 7 in that it seeks in the first instance to undermine the very notion that physical activity can robustly embody *any* second-order relations above and beyond those of net physical action. Accepting such a stringent conclusion would then seem to rule out CTM prima facie. The only possibility of salvaging it would lie in an explanatory strategy in terms of which computational relations take logical precedence over physical ones. Given that computational relations are effectively arithmetical, this in turn leads to the conclusion that CTM->UD*->COR (or more generally, that each implies the others).

Notwithstanding this it would seem that Step 8 is not wholly persuasive to everybody, so is there yet another tack? The line of argument that I've been pursuing with Brent has led me to consider the following analogy, which I'm sure you'll recognise. Consider something like an LCD screen as constituting the "universe of all possible movie-dramas". In terms of this analogy, what are the referents of any "physical observations" on the part of the dramatis personae featured in such presentations? IOW what are we to suppose Joe Friday to be referring to when he asks for "Just the facts, ma'am"? Well, the one thing we can be sure of is that NO such reference can allude to the "underlying physics" (i.e. the pixels and their relations) of the LCD display. If this analogy holds, at least in general outline, what justification, under CTM, could remain for any assumption that our own observations and references might "accidentally" allude to some "LCD-physics" postulated, mutatis mutandis, as underlying the COR? Would it not seem extraordinary that any such underlying physics could contrive to "refer to itself" through the medium of its merely computational derivatives?

This last point might seem determinative, but might there not still be a last-ditch redemption, of a physics underlying computation, in terms of "evolution"? IOW, might it not be argued that the acquisition of internal "computational" models of their physical environment confers a survival advantage on the physical creatures that embody them? But any such argument would, of course, be completely circular; assuming CTM, it begins and ends in the COR. IOW, arguing in this way would be to ignore the fact that the history of such creatures, their survival, and the environment in which this is supposed to take place, all lie within the COR, not the putative regime of any "underlying physics". THAT "physics" would necessarily be entirely inscrutable and inaccessible for reference at the level of the COR (think of the LCD analogy). And hence we simply would have no a priori justification for assuming the observational physics of the COR to be isomorphic with some notional underlying "LCD-physics". In fact, once having assumed CTM, we would have no further basis for assigning THAT physics any role whatsoever in our explanatory strategy.

David

--
You received this message because you are subscribed to the Google Groups "Everything List" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.



http://iridia.ulb.ac.be/~marchal/










