On 8 May 2017 4:53 a.m., "Bruce Kellett" <[email protected]> wrote:

On 8/05/2017 3:14 am, David Nyman wrote:

On 6 May 2017 11:04 p.m., "Brent Meeker" <[email protected]> wrote:


On 5/6/2017 2:45 PM, David Nyman wrote:

On 6 May 2017 10:16 p.m., "Brent Meeker" <[email protected]> wrote:



But that's what I mean when I say Bruno's theory has no predictive
success.  QM (and Everett) would correctly predict that alcohol molecules
in the blood will interfere with neuronal function, and THEN, invoking the
physicalist theory of mind, i.e. that mind supervenes on material events,
it predicts that your ability to do arithmetic will be impaired by drinking
tequila.  It will NOT predict the contrary with more than infinitesimal
probability.  So it's misdirection to say that it's just a measure
problem.  Without the right measure, a probabilistic theory is just
fantasy...or magic, as Bruno would say.


I have no idea why you say that. I thought it was clear that if
computationalism doesn't (ultimately) predict that its predominating
computational mechanism (i.e. the one effectively self-selected by complex
subjects, in this case, like ourselves) is the physics those selfsame
subjects observe,


That would certainly be an accomplishment - which in another post Bruno
says is trivially accomplished even in RA (I don't see it).  But to succeed
in prediction it is not enough to show that some world exists in which mind
and physics are consistent (that the physics that mind infers is also the
real physics that predicts effects on the mind).  You need also to show
this has large measure relative to contrary worlds.  One can make a logic
chopping argument that it must be that way for otherwise minds would not be
making sense of the physics they perceived - but that makes the whole
computational argument otiose.


I've been thinking a bit more about this and I'd like to set out some
further tentative remarks about the above. Your professional expertise in
these matters is orders of magnitude greater than mine and consequently any
comments you might make would be very helpful. By the way, it would also be
helpful if you would read beyond the next paragraph before commenting
because I hope I will come by myself to the fly in the ointment.

Firstly, and "assuming computationalism" on the basis of CT + YD, we are
led to the view that UD* must include all possible "physical" computational
continuations (actually infinitely reiterated). This of course is also to
assume that all such continuations are finitely computable (i.e. halting).
Now, again on the same assumptions, it might seem reasonable that our
observing such a physics in concrete substantial form is evidence of its
emergence (i.e. epistemologically) as the predominant computational
mechanism underlying those very perceptions. Hence it might seem equally
reasonable to conclude that this is why those perceptions correspondingly
appear to supervene on concrete physical manifestations in their effective
environment.

Now wait a minute. We cannot escape the question of measure. Why would it
be reasonable to assume that a physics of this sort should predominate in
the manner outlined above? Well, firstly, it would seem that the generator
of the set of possible physical computations is infinitely reiterative and
hence very robust (both in the sense of computational inclusiveness a la
step 7, and that of internal self-consistency). But who is to say that the
generators of "magical" or simply inconsistent continuations aren't equally
or even more prevalent? After all we're dealing with a Library of Babel
here and the Vast majority of any such library is bound to be gibberish.
Well, I'm wondering about an analogy with Feynman's path integral idea
(comments particularly appreciated here). Might a kind of least action
principle be applicable here, such that internally consistent computations
self-reinforce, whereas inconsistent ones in effect self-cancel?

Also, absence of evidence isn't evidence of absence. I'm thinking here
about the evaluation of what we typically remember having experienced. I
can't help invoking Hoyle here again (sorry). Subjectively speaking,
there's a kind of struggle always in process between remembering and
forgetting. So on the basis suggested above, and from the abstract point of
view of Hoyle's singular agent (or equally Bruno's virgin machine),
inconsistent paths might plausibly tend to result, in effect, in a net
(unintelligible) forgetting and contrariwise, self-consistent paths might
equally plausibly result in a net (intelligible) remembering. I'm speaking
of consistent and hence intelligible "personal histories" here. But perhaps
you would substitute "implausibly" above. Anyway, your comments as ever
particularly appreciated.


I think the problem here is the use of the word "consistent". You refer to
"internally consistent computations" and "consistent and hence intelligible
'personal histories'." But what is the measure of such consistency? You
cannot use the idea of 'consistent according to some physical laws',
because it is those laws that you are supposedly deriving -- they cannot
form part of the derivation. I don't think any notion of logical
consistency can fill the bill here. It is logically consistent that my
present conscious moment, with its rich record of memories of a physical
world, stretching back to childhood, is all an illusion of the momentary
point in a computational history: the continuation of this computation back
into the past, and forward into the future, could be just white noise! That
is not logically inconsistent, or computationally inconsistent. It is
inconsistent only with the physical laws of conservation and persistence.
But at this point, you do not have such laws!

In fact, just as Boltzmann realized in the Boltzmann brain problem, states
of complete randomness both before and after our current conscious moment
are overwhelmingly more likely than that our present moment is immersed in
a physics that involves exceptionless conservation laws, so that the past
and future can both be evolved from our present state by the application of
persistent and pervasive physical laws.

Unless you can give some meaning to the concept of "consistent" that does
not just beg the question, then I think Boltzmann's problem will destroy
your search for some 'measure' that makes our experience of physical laws
(any physical laws, not just those we actually observe) overwhelmingly
likely.


Thanks for this. However I'm not sure you've fully addressed my "path
integral" point, for what it's worth. Feynman's idea, if I've got the gist
of it, was that an electron could be considered as taking every possible
path from A to B, but that paths close to the classical (stationary-action)
path could be considered as mutually reinforcing and paths far from it as
mutually cancelling. Hence the derivation of the principle of least action. So the
analogy, more or less, that I have in mind is that Boltzmann-type random
subjective states would, computationally speaking, mutually reinforce
identical states supervening on the generator of "consistent" physical
continuations (bear with me for a moment on the applicable sense of
"consistent" here). IOW "If I am a machine I cannot know which machine I
am". So as long as the generator of those consistent states is encapsulated
by UD* - which is equivalent to saying as long as the computable evolution
of physical states is so encapsulated (which it is by assumption) - then we
can plausibly suppose that the net subjective consequences would be
indistinguishable.
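For what it's worth, the reinforcement/cancellation mechanism I'm gesturing at can be sketched numerically. This is only a toy illustration, not Feynman's actual formulation: the quadratic action, the curvature value, and the sampling grid are all illustrative assumptions of mine. It just shows that phase factors from paths near the stationary-action path add up coherently, while those from paths far away largely cancel:

```python
import cmath

def action(a, curvature=50.0):
    # Toy action, quadratic about the classical (stationary) path a = 0.
    # The curvature value is an arbitrary illustrative choice.
    return curvature * a * a

def mean_amplitude(a_values):
    # Average of the phase factors exp(i * S(a)) over a family of paths,
    # each labelled by its deviation `a` from the classical path.
    total = sum(cmath.exp(1j * action(a)) for a in a_values)
    return abs(total) / len(a_values)

# Sample path deviations uniformly on [-2, 2].
n = 2001
grid = [-2.0 + 4.0 * k / (n - 1) for k in range(n)]

near = [a for a in grid if abs(a) < 0.1]   # paths near the classical one
far = [a for a in grid if abs(a) >= 0.1]   # paths far from it

amp_near = mean_amplitude(near)  # phases nearly aligned: they reinforce
amp_far = mean_amplitude(far)    # phases wind rapidly: they largely cancel

print(f"near: {amp_near:.3f}  far: {amp_far:.3f}")
```

Running this, the per-path contribution near the stationary path comes out close to 1, while the far contribution is close to 0 - which is the sense in which "consistent" paths might self-reinforce while the rest self-cancel.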

As to your most reasonable request for a non question begging notion of
consistent in this context, my tentative answer rests on my remarks about
the "struggle between remembering and forgetting". Here's where I use
Hoyle's pigeon hole analogy, which is pretty much equivalent to Barbour's
time capsule one (as he acknowledges in "The End of Time") or for that matter the "point
of view" of a machine computing a partitioned multitasking OS. All of these
analogies, or heuristics as I prefer to think of them, enable one to think
about the entirety of subjective experience as though from the first person
perspective of a single agent - one of course with a massive case of
multiple personality accompanied by extreme dissociation between each of
the personalities.

The only connectivity between discrete states of the overall system is that
which is logically internal to each state. Of course on reflection we
realise that most plausibly the brain must somehow contrive just such
relations between states, as becomes most obvious when this mechanism goes
wrong in dementia and other neurocognitive insults. So "consistency" here
would reflect the fact that these very conversations, for example, form
part of a coherent, internally linked history of remembering, whereas
innumerable incoherent states simply *cannot be recalled* from the
perspective of such consistent histories. Hence what is consistent is
equivalent to what is, in the net, remembered (recalling in passing the
etymology of this word) as distinct from what is, in the net, disremembered.

I'm reasonably confident that this justification isn't merely circular, or
that if it is, it may well be one of Brent's virtuously circular
explanations. What do you think?

David



Bruce


-- 
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
