On 3/29/2012 7:37 PM, Stephen P. King wrote:
On 3/29/2012 9:20 PM, David Nyman wrote:
On 29 March 2012 20:47, meekerdb<meeke...@verizon.net>  wrote:

You don't know that.  It's an assumption based on the idea that conscious
experience is something a certain physical body, a brain, does.  But if
conscious experience is a process then it is certainly possible to create a
process that is aware of being in both Washington and Moscow at the same
time.  Think of a brain wired via RF links to eyeballs in M and W.   Or The
Borg of Star Trek.  Of course that experience would be strange and we would
tend to say, "Yes but it's still one consciousness."  So then the question
becomes what do you mean by not experiencing duplication?  Is it a mere
tautology based on how you define 'consciousness'?
Surely it's just a necessary prerequisite for accepting the
possibility of either MWI or comp?  IOW, if one rejects, on whatever
grounds, that a unique subjective perspective could be consistent with
the objective existence of multiple copies (as I think is the case
with Kent) then one is forced also to reject both MWI and comp.  Given
such a view, neither theory could be a viable explanation for one's
lived experience of observing "one universe at a time".

AFAICS, the more exotic examples you give above, e.g. a distributed
process, or a Borg-type group-mind, present no difficulties beyond
that for "ordinary" consciousness.  Again, either one accepts that
duplication of these states of affairs would be compatible, mutatis
mutandis, with the corresponding "single universe" perspective
(however exotic) or not.

Given the above, what makes it difficult to make sense of John's
objections to Bruno's argument is precisely that he accepts the
possibility of multiple copies in a comp or MWI scenario, whilst
ignoring the necessity of recovering a singular perspective.  But the
latter step is a prerequisite, in any scenario, for reproducing the
empirically uncertain state of affairs.  Without it, the "probability"
of every outcome - as John has continually reiterated - can only ever
be 100%!

"Selection", even if only implicit, is an ineliminable feature of any
theory seeking to explain the empirical facts.  Kent's proposal is a
process that eliminates all branches but one, albeit on a somewhat
different basis than Copenhagen.  Similarly, the heuristic I suggested
in an earlier post entails "selection", but in a non-destructive
manner.  BTW, I had long retained a dim recollection of a similar
selection metaphor involving "pigeon holes" from my youthful SF
reading, which I recently re-discovered to be Fred Hoyle's 1960s
novella "October the First is Too Late".  I also found that John
Gribbin refers to this very notion in his recent Multiverse book
(apparently he was a student of Hoyle's), relating it to the ideas of
Deutsch and Barbour.  This reinforced my suspicion that they do rely
implicitly on such a selection principle, though AFAICS neither of
them acknowledges it explicitly.


Hi David and Brent,

I have a question. Could it be that the "sense of self as being-in-the-world" (à la Nagel's bat) is a phenomenon not at all unlike the uniqueness of a fixed point <http://en.wikipedia.org/wiki/Fixed_point_%28mathematics%29> on a manifold?
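For reference, a minimal statement of the standard fixed-point notion; which mapping is meant is exactly the open part of the question:

    \[ f(x^*) = x^*, \qquad x^* \in X, \qquad f : X \to X \]

Note that uniqueness is not automatic: e.g. the Banach fixed-point theorem guarantees a unique x* only when X is a complete metric space and f is a contraction,

    \[ d(f(x), f(y)) \le k\, d(x, y), \quad 0 \le k < 1 . \]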

Under what mapping?

It seems to me that one of the key aspects of the sense of self or "I" is that it is unique in its association with its location and its memories.

Location is just part of one's model of one's own body. If you had two bodies, one in Moscow and one in Washington, you'd have two viewpoints in 3-space and you'd develop a model of having bodies in both places.

Being in two places at the same time would at least be confusing. Try navigating with a combination of two maps: overlay the maps of Washington and Moscow and try to figure out where you are.

Ask someone who flies radio-controlled planes: they become able to 'place themselves' in the plane.

I offer the movie "12 Monkeys <http://en.wikipedia.org/wiki/12_Monkeys>" as a fictional narrative exploring what happens when one's localization is split in the time sense. In my studies, I have considered how the various pathologies of consciousness sketch for us some of the fundamental aspects of consciousness in a 3-p'ish way. For example, the various dissociative disorders, ranging from phantom limb to multiple personality disorder and schizophrenia, tell us that an individual's sense of self is strongly correlated with the synchronization of temporal and spatial cues, both internal to the brain and tied to the person's location. Thus "being in two places at the same time" is not that much different from "having two sets of sense data that are disjoint" (as in being in both Moscow and Washington), such that unless a single reconciliation of the two is possible, there will inevitably be a splitting of the "I"'s, or 1-p.

Not at all. I take the opposite lesson. Your brain invents your sense of self. The difference between the schizophrenic hearing voices in his head and you thinking thoughts is that you recognize the voice as being yourself. The reason people have phantom limbs is that the body they experience is an internal model.

BTW, this line of reasoning argues strongly against the "Borg" group-mind idea as possibly yielding a consciousness of the same kind as the one that we have, because of the inability to define a single "point of view" given the wide and even disjoint panorama of sense data that would be involved.

It might not be possible for humans, because they evolved to integrate the vision from two eyes; but in principle I think a person could learn to see from points of view at two different locations. I certainly see nothing logically contradictory or nomologically impossible about it.

Consciousness is, I argue, fixed to a single point and cannot be distributed. Distributed behavior would correlate more with what psychologists like to call "the unconscious". Consciousness, or at least self-aware consciousness, emerges from the unconscious only if and when a fixed-point-like function can be defined on it. (Compactness and closure properties must hold...)
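For what it's worth, the classical result behind the compactness/closure remark is Brouwer's fixed-point theorem (the application to consciousness is, of course, the conjecture here, not part of the theorem):

    \[ f : K \to K \text{ continuous}, \; K \subset \mathbb{R}^n \text{ nonempty, compact, convex} \;\Rightarrow\; \exists\, x^* \in K : f(x^*) = x^* . \]

Brouwer gives existence, not uniqueness; uniqueness needs stronger hypotheses, e.g. the contraction condition mentioned earlier.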

Take my favorite thought experiment. Suppose I design two Mars Rovers and I want them to coordinate their movements in order to round up Martian sheep. I can easily distribute the artificial intelligence between the two of them, using data links so that whatever one sees, the other sees (incidentally this, minus the AI, is what combat aircraft software does now), and so there is a single top-level decision routine on top of local decision routines for maneuvering around obstacles and managing internal states.
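A minimal sketch of that architecture in Python, with hypothetical names throughout (a pooled sensor view, local maneuvering routines, one top-level herding routine); it is only an illustration of the division of labour being described, not actual rover or avionics software:

from dataclasses import dataclass, field

@dataclass
class Rover:
    name: str
    position: tuple                      # (x, y) on the Martian plain
    obstacles: list = field(default_factory=list)

    def sense(self):
        # Local perception: what this rover can see from where it is.
        return {"rover": self.name, "pos": self.position,
                "obstacles": list(self.obstacles)}

    def maneuver(self, waypoint):
        # Local decision routine: steer around locally known obstacles.
        # (Path planning elided; here we just move to the waypoint.)
        self.position = waypoint

def top_level_decision(pooled_views, sheep_positions, pen=(0.0, 0.0)):
    # Single top-level routine: it sees the pooled data from *both* rovers
    # (the "data link") and hands each one a waypoint. For illustration it
    # simply pairs each rover with a sheep in order and places the rover on
    # the far side of that sheep, so the sheep gets pushed toward the pen.
    orders = {}
    for view, (sx, sy) in zip(pooled_views, sheep_positions):
        orders[view["rover"]] = (sx + 0.1 * (sx - pen[0]),
                                 sy + 0.1 * (sy - pen[1]))
    return orders

if __name__ == "__main__":
    rovers = [Rover("rover_A", (10.0, 5.0)), Rover("rover_B", (-8.0, 12.0))]
    sheep = [(6.0, 4.0), (-5.0, 9.0)]

    # One control cycle: pool what both rovers see, decide once at the top,
    # then let each rover execute its own local maneuver.
    pooled = [r.sense() for r in rovers]
    orders = top_level_decision(pooled, sheep)
    for r in rovers:
        r.maneuver(orders[r.name])
        print(r.name, "->", r.position)

The point of the sketch is just that perception is shared and low-level control stays local, while there is exactly one decision routine at the top.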


Speaking of SciFi writers, I recommend books by James P. Hogan, such as Paths to Otherwhere <http://www.amazon.com/Paths-Otherwhere-James-P-Hogan/dp/0671877674/ref=pd_sim_b_2> and The Proteus Operation <http://www.amazon.com/The-Proteus-Operation-James-Hogan/dp/0671877577/ref=sr_1_19?s=books&ie=UTF8&qid=1333074338&sr=1-19> which consider some aspects of the questions that we are asking here.


