2017-05-08 9:14 GMT+02:00 Bruce Kellett <bhkell...@optusnet.com.au>:

> On 8/05/2017 5:01 pm, Quentin Anciaux wrote:
>
> 2017-05-08 8:58 GMT+02:00 Bruce Kellett <bhkell...@optusnet.com.au>:
>
>> On 8/05/2017 4:41 pm, Quentin Anciaux wrote:
>>
>> 2017-05-08 8:26 GMT+02:00 Bruce Kellett <bhkell...@optusnet.com.au>:
>>
>>> On 8/05/2017 3:59 pm, David Nyman wrote:
>>>
>>> On 8 May 2017 4:53 a.m., "Bruce Kellett" <bhkell...@optusnet.com.au> wrote:
>>>
>>> On 8/05/2017 3:14 am, David Nyman wrote:
>>>
>>>
>>> I've been thinking a bit more about this and I'd like to set out some
>>> further tentative remarks about the above. Your professional expertise in
>>> these matters is orders of magnitude greater than mine and consequently any
>>> comments you might make would be very helpful. By the way, it would also be
>>> helpful if you would read beyond the next paragraph before commenting
>>> because I hope to get to the fly in the ointment myself.
>>>
>>> Firstly, and "assuming computationalism" on the basis of CT + YD, we are
>>> led to the view that UD* must include all possible "physical" computational
>>> continuations (actually infinitely reiterated). This of course is also to
>>> assume that all such continuations are finitely computable (i.e. halting).
>>> Now, again on the same assumptions, it might seem reasonable that our
>>> observing such a physics in concrete substantial form is evidence of its
>>> emergence (i.e. epistemologically) as the predominant computational
>>> mechanism underlying those very perceptions. Hence it might seem equally
>>> reasonable to conclude that this is the reason that these latter
>>> correspondingly appear to supervene on concrete physical manifestations in
>>> their effective environment.
>>>
>>> Now wait a minute. We cannot escape the question of measure. Why would
>>> it be reasonable to assume that a physics of this sort should predominate
>>> in the manner outlined above? Well, firstly, it would seem that the
>>> generator of the set of possible physical computations is infinitely
>>> reiterative and hence very robust (both in the sense of computational
>>> inclusiveness a la step 7, and that of internal self-consistency). But who
>>> is to say that the generators of "magical" or simply inconsistent
>>> continuations aren't equally or even more prevalent? After all we're
>>> dealing with a Library of Babel here and the Vast majority of any such
>>> library is bound to be gibberish. Well, I'm wondering about an analogy
>>> with Feynman's path integral idea (comments particularly appreciated here).
>>> Might a kind of least action principle be applicable here, such that
>>> internally consistent computations self-reinforce, whereas inconsistent
>>> ones in effect self-cancel?
>>>
>>> Also, absence of evidence isn't evidence of absence. I'm thinking here
>>> about the evaluation of what we typically remember having experienced. I
>>> can't help invoking Hoyle here again (sorry). Subjectively speaking,
>>> there's a kind of struggle always in process between remembering and
>>> forgetting. So on the basis suggested above, and from the abstract point of
>>> view of Hoyle's singular agent (or equally Bruno's virgin machine),
>>> inconsistent paths might plausibly tend to result, in effect, in a net
>>> (unintelligible) forgetting and contrariwise, self-consistent paths might
>>> equally plausibly result in a net (intelligible) remembering. I'm speaking
>>> of consistent and hence intelligible "personal histories" here. But perhaps
>>> you would substitute "implausibly" above. Anyway, your comments as ever
>>> particularly appreciated.
>>>
>>>
>>> I think the problem here is the use of the word "consistent". You refer
>>> to "internally consistent computations" and "consistent and hence
>>> intelligible 'personal histories'." But what is the measure of such
>>> consistency? You cannot use the idea of 'consistent according to some
>>> physical laws', because it is those laws that you are supposedly deriving
>>> -- they cannot form part of the derivation. I don't think any notion of
>>> logical consistency can fill the bill here. It is logically consistent that
>>> my present conscious moment, with its rich record of memories of a physical
>>> world, stretching back to childhood, is all an illusion of the momentary
>>> point in a computational history: the continuation of this computation back
>>> into the past, and forward into the future, could be just white noise! That
>>> is not logically inconsistent, or computationally inconsistent. It is
>>> inconsistent only with the physical laws of conservation and persistence.
>>> But at this point, you do not have such laws!
>>>
>>> In fact, just as Boltzmann realized in the Boltzmann brain problem,
>>> states of complete randomness both before and after our current conscious
>>> moment are overwhelmingly more likely than that our present moment is
>>> immersed in a physics that involves exceptionless conservation laws, so
>>> that the past and future can both be evolved from our present state by the
>>> application of persistent and pervasive physical laws.
>>>
>>> Unless you can give some meaning to the concept of "consistent" that
>>> does not just beg the question, then I think Boltzmann's problem will
>>> destroy your search for some 'measure' that makes our experience of
>>> physical laws (any physical laws, not just those we actually observe)
>>> overwhelmingly likely.
>>>
>>>
>>> Thanks for this. However I'm not sure you've fully addressed my "path
>>> integral" point, for what it's worth. Feynman's idea, if I've got the gist
>>> of it, was that an electron could be considered as taking every possible
>>> path from A to B, but that the direct or short paths could be considered as
>>> mutually reinforcing and the indirect or longer paths as mutually
>>> cancelling.
>>>
>>>
>>> Feynman's idea relies on a physical theory within which one can
>>> calculate the phase change along each possible path. The upshot is that
>>> paths far away from the path of least action have phases that cancel in the
>>> quantum superposition sense. Note that the crucial input into this picture
>>> is that there is an underlying physical theory, in terms of which one can
>>> calculate the phase changes along each path. Also, it is important to
>>> remember that Feynman's path integral is only one means of calculating
>>> probability amplitudes in QM -- there are many other means of calculating
>>> these, and all give the same results.
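>>>
>>> To make the cancellation mechanism concrete, here is a toy sketch (Python;
>>> a single free particle with one intermediate point as the only degree of
>>> freedom, arbitrary units -- just stationary phase in one variable, not a
>>> real path integral):
>>>
>>> import numpy as np
>>>
>>> hbar, m, dt = 1.0, 1.0, 1.0     # toy units
>>> a, b = 0.0, 1.0                 # fixed endpoints of the path
>>>
>>> def action(x):
>>>     # Discretised free-particle action for the path a -> x -> b.
>>>     return (m / (2 * dt)) * ((x - a) ** 2 + (b - x) ** 2)
>>>
>>> xs = np.linspace(-20.0, 21.0, 4001)       # candidate intermediate points
>>> phases = np.exp(1j * action(xs) / hbar)   # one phase per "path"
>>>
>>> x_cl = (a + b) / 2                        # stationary (classical) value
>>> near = np.abs(xs - x_cl) < 1.0
>>> print(abs(phases[near].sum()) / near.sum())      # ~0.96: nearly coherent
>>> print(abs(phases[~near].sum()) / (~near).sum())  # ~0.02: mostly cancels
>>>
>>> Per path, the contributions near the stationary-action value add almost
>>> coherently, while those far from it largely cancel one another -- but the
>>> action that does the selecting is, as said, supplied by the physics.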
>>>
>>> Hence the derivation of the principle of least action. So the analogy,
>>> more or less, that I have in mind is that Boltzmann-type random subjective
>>> states would, computationally speaking, mutually reinforce identical states
>>> supervening on the generator of "consistent" physical continuations (bear
>>> with me for a moment on the applicable sense of "consistent" here). IOW "If
>>> I am a machine I cannot know which machine I am". So as long as the
>>> generator of those consistent states is encapsulated by UD* - which is
>>> equivalent to saying as long as the computable evolution of physical states
>>> is so encapsulated (which it is by assumption) - then we can plausibly
>>> suppose that the net subjective consequences would be indistinguishable.
>>>
>>>
>>> I agree that conscious states, whether Boltzmann brains or parts of a
>>> longer calculation that does not start and end in white noise, are
>>> indistinguishable insofar as those states are conscious. But the
>>> Boltzmann brain-type states cannot reinforce the path that leads to
>>> coherent physics, because the continuations are disjoint.
>>>
>>> As to your most reasonable request for a non-question-begging notion of
>>> "consistent" in this context, my tentative answer rests on my remarks about
>>> the "struggle between remembering and forgetting". Here's where I use
>>> Hoyle's pigeon hole analogy, which is pretty much equivalent to Barbour's
>>> time capsule one (as he acknowledges in TEOT).
>>>
>>>
>>> Both Hoyle's pigeon holes and Barbour's time capsules assume that there
>>> is a coherent underlying physics with regular exceptionless laws. Until you
>>> have something like that, you cannot define consistent continuations.
>>>
>>>
>> I think that white noise -> consciousness -> white noise must be of
>> infinitesimal measure among programs generating conscious moments, because
>> it takes much more information to go from a conscious moment to white
>> noise than from one conscious moment to the next, hence there must be
>> shorter programs that go from one instant to the next than from one thing
>> to something totally different...
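>>
>> A rough way to see the information gap (just a toy sketch, with zlib
>> standing in for "length of the program that produces the next state"):
>>
>> import os, zlib
>>
>> lawful_step = b"state[t+1] = f(state[t]); " * 100   # highly regular update
>> noise_step  = os.urandom(2600)                       # jump to "white noise"
>>
>> print(len(zlib.compress(lawful_step)))   # a few dozen bytes
>> print(len(zlib.compress(noise_step)))    # ~2600 bytes: incompressible
>>
>> The regular continuation compresses down to almost nothing; the jump to
>> noise costs its full length.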
>>
>>
>> What has program length got to do with the UD? All programs, of any
>> length at all, are present, and there are infinitely more programs that will
>> go from some chance occurrence of a conscious moment to something that is
>> not conscious, be it white noise or white rabbits, than there are programs
>> that produce consistent physical laws generating the sequence of conscious
>> moments. I think Russell's "Occam's catastrophe" is relevant here. The
>> information content of white noise is much greater than the content of
>> physical laws -- program length is not an issue.
>>
>
> Something like the speed prior... yes, the UD has all of them, but the
> measure function (which we don't have) must account for the consistency;
> things like complexity and size could be a way to explain why
> consciousness -> white noise has low measure.
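>
> Just to illustrate the kind of weighting I have in mind, a toy sketch in
> the spirit of Schmidhuber's speed prior (the lengths and times below are
> made up; the only point is that an incompressible jump to white noise gets
> exponentially less weight than a lawful next moment):
>
> from math import log2
>
> def toy_log2_weight(program_length_bits, run_time_steps):
>     # log2 of (2**-length / time), kept in log space to avoid underflow.
>     return -program_length_bits - log2(run_time_steps)
>
> lawful_next = toy_log2_weight(1_000, 10**6)    # small extra description
> white_noise = toy_log2_weight(10_000, 10**6)   # huge incompressible jump
>
> print("lawful continuation outweighs white noise by 2**%d"
>       % (lawful_next - white_noise))           # 2**9000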
>
>
> Those are just arbitrary assumptions, designed to give you some handle on
> what you want. For consistency, the definition of 'consistent
> continuations' for the measure must come from logic and/or arithmetic alone.
>
>
> As for the anthropic principle, you can use the same move as in the MWI:
> you are not where you're dead; in fact you are only found in "you"
> moments, hence consciousness -> white noise is not a valid continuation and
> should be discarded for the measure counting.
>
>
> How do you know that you will not be white noise in the next instant?
>
> Because white noise is by definition not me, so I only have to consider
> "me" moments as continuations.
>
>
> You have some insight into the future? How do you know that "you" will
> continue in any form? If you use this criterion for consistency, then you
> have built physics into your derivation of physics.
>

If computationalism is true, every "me" moment exists, so of course there
are next "me" moments with computationalism... no need for insight into the
future; the thing is to have a measure function on them.
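
Just to separate the two issues, here is a minimal dovetailer sketch in
Python (only the scheduling idea, not Bruno's actual UD): every program
eventually gets arbitrarily many steps, but nothing in the scheduling
itself supplies a measure over the states it produces.

def make_program(n):
    # Toy stand-in for "program number n": an endless counter.
    def run():
        k = 0
        while True:
            yield "prog%d step%d" % (n, k)
            k += 1
    return run()

programs = []
trace = []
for stage in range(1, 6):                      # dovetailing stages
    programs.append(make_program(stage - 1))   # start one new program
    for prog in programs:                      # give every started program one step
        trace.append(next(prog))

print(trace)   # in the limit, every (program, step) pair is reached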

Quentin


>
>
> Bruce
>
>
> Quentin
>
>
>> The anthropic principle assumes regular physical laws, and those are
>> exactly what we are trying to derive.
>>
>> Bruce
>>



-- 
All those moments will be lost in time, like tears in rain. (Roy
Batty/Rutger Hauer)

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
