2010/1/7 Brent Meeker <meeke...@dslextreme.com>:

>> A program that generates S2 as it were out of nowhere, with false
>> memories of an S1 that has not yet happened or may never happen, is a
>> perfectly legitimate program and the UD will generate it along with
>> all the others. If the UD is allowed to run forever, this program will
>> be a lower measure contributor to S2 than the program that generates
>> it sequentially;
> How do you know this?

The question of why S2 is unlikely to appear out of nowhere is
equivalent to the White Rabbit problem in ensemble theories, which has
often been discussed on this list over the years. Russell's "Theory of
Nothing" book
provides a summary. The general idea is that structures generated by
simpler algorithms have higher measure, and it is simpler to write a
program that computes a series of mental states iteratively than one
that computes a set of disconnected mental states from ad hoc data.
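
To make that concrete, here is a toy sketch in Python (entirely my own
illustration, not the UD itself): two programs that print the same
sequence of 1000 "states", one by iterating a short update rule and one
by listing the states as literal data. Using compressed source length
as a crude stand-in for program length, and 2^-length as a crude
stand-in for the weight a program contributes, the rule-based version
comes out vastly more heavily weighted.

import zlib

# Rule-based "program": each state is derived from its predecessor by a
# short update rule, so the source stays tiny however many states it
# produces.
iterative_src = (
    "s = 0\n"
    "for i in range(1000):\n"
    "    s = (s * 1103515245 + 12345) % 2**31\n"
    "    print(s)\n"
)

# "Ad hoc" program: the same 1000 states written out as literal data,
# with no generating rule connecting them.
s, states = 0, []
for _ in range(1000):
    s = (s * 1103515245 + 12345) % 2**31
    states.append(s)
literal_src = "for s in " + repr(states) + ":\n    print(s)\n"

for name, src in (("iterative", iterative_src), ("literal", literal_src)):
    # Compressed length as a rough proxy for description length;
    # 2**-length plays the role of the program's contribution to the
    # measure of the output it generates.
    k = len(zlib.compress(src.encode()))
    print("%-9s ~%5d bytes  =>  weight ~ 2^-%d" % (name, k, k))

Real algorithmic probability sums over all programs on a prefix-free
universal machine rather than using zlib, but the asymmetry is the
same: the rule-based description is a few dozen bytes, the ad hoc one
runs to thousands, and the weight falls off exponentially with length.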

>> and similarly in any physicalist theory. But although
>> S2 may guess from such considerations that he is more likely to have
>> been generated sequentially, the point remains that there is nothing
>> in the nature of his experience to indicate this. That is, the fact
>> that S2 remembers S1 as being in the past and remembers a smooth
>> transition from S1 to S2 is no guarantee that S1 really did happen in
>> the past, or even at all.
> We're assuming that thought is a kind of computation, a processing of
> information.  And we're also assuming that this processing can consist of
> static states placed in order.  So given two static states, what is the
> relation  that makes their ordering into a computational process?  One
> answer would be that they are successive states generated by some program.
> But you seem to reject that.  To say that S2 remembers S1 doesn't seem to
> answer the question because "remembering" is itself a process, not a static
> state.  I tried to phrase it in terms of the entropy, or information
> content, of S1 and S2 which would be a static property - as for example, if
> S2 simply contained S1.  But that hardly seems a proper representation of
> states of consciousness - I'm certainly not conscious of my memories most of
> the time.  Even as I type this I obviously remember how to type (though
> maybe not how to spell :-) ) but I'm not conscious of it.

You've made this point in the past but I still don't understand it. If
S1 and S2 are periods of experience generated consecutively in your
brain in the usual manner, do you agree that you would still
experience them as consecutive if they were generated by chance by
causally disconnected processes? The requirement would be only that
the respective experiences have the same subjective content in both
cases. Memory is only one aspect of subjective content, albeit an
important one. If S1-S2 spans the typing of a sentence, then both S1
and S2 have to remember how to type and what the sentence they are
typing is. It may seem to be unconscious, but obviously it can't be
completely unconscious; otherwise it could be left out without making
any difference. Your digestion is an example of a completely
unconscious process that need not be taken into account in a
simulation of your mind. Another example is your name: you may have no
awareness at all of your name during S1-S2, so it could safely be left
out of the simulation, although at S3, when you reach the end of your
post and need to sign it, you do need to remember what it is.

Stathis Papaioannou