On 11/10/2012 11:43 PM, meekerdb wrote:
On 11/10/2012 8:00 PM, Russell Standish wrote:
On Sat, Nov 10, 2012 at 07:02:04PM -0800, meekerdb wrote:
On 11/10/2012 5:44 PM, Russell Standish wrote:
I think the argument is that association with a body (or brain)
is required for intersubjectivity between minds.
But how does the requirement for intersubjectivity follow from COMP?
Is it just an anthropic selection argument?
I'm not sure how Bruno argues for it, but my version goes something like this:
1) Self-awareness is a requirement for consciousness
2) We expect to find ourselves in an environment sufficiently rich and
complex to support self-aware structures (by Anthropic Principle), but
not more complex than necessary (Occam's Razor). Sort of like
Einstein's principle: "As simple as possible, but no simpler."
But this is the step I questioned. Why not be like the Borg, i.e. one
consciousness with many bodies?
Don't forget the problem of "whose point of view is that of the
consciousness" of the Borg! I guess we can think of each Borg cyberbody
as a sense organ for the Collective, but how is all that data correlated
into a single Boolean-satisfiable whole? Satisfiability requires that
all of the propositions of the Boolean algebra (BA) are mutually consistent, no?
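To make "mutually consistent" concrete: a set of propositions over a Boolean algebra is satisfiable iff some single truth assignment makes all of them true at once. A minimal brute-force sketch in Python (the function name and the encoding of propositions as functions are my own, purely illustrative):

```python
from itertools import product

def mutually_consistent(props, num_vars):
    """Return True if some truth assignment satisfies every proposition.

    `props` is a list of functions, each mapping a tuple of booleans
    (one per variable) to True/False.  This is just brute-force SAT:
    the set is mutually consistent iff at least one assignment makes
    all propositions true simultaneously.
    """
    return any(all(p(a) for p in props)
               for a in product([False, True], repeat=num_vars))

# Two compatible propositions: x0, and (x0 or x1) -- satisfiable together.
print(mutually_consistent([lambda a: a[0], lambda a: a[0] or a[1]], 2))  # True

# A contradictory pair: x0 and not-x0 -- no assignment satisfies both.
print(mutually_consistent([lambda a: a[0], lambda a: not a[0]], 2))  # False
```

The point of the analogy: a single consciousness (one "whole") could only incorporate the data from many Borg bodies if all the resulting propositions admit one such joint assignment.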
I think we only 'expect' to find ourselves as we are because we
don't have good theory about how we might be otherwise.
COMP proposes to explain how we are by the UDA, but it needs to
explain why we are associated with bodies - not just assume it to be so.
Absolutely! This is more than the arithmetic body problem; this is
a bookkeeping problem: how do the bodies locate themselves such that,
even if they have identical minds, they can use their differences in
location to define an 'external', 3p-ish difference?
3) The simplest environment generating a given level of complexity is
one that has arisen as a result of evolution from a much simpler
initial state. This is the "evolution in the multiverse" observation:
that evolution is the only creative (or information-generating) process known.
4) Evolutionary processes work with populations, so automatically
you must have other self-aware entities in your world, which rules out solipsism.
Note that Bruno does not agree with 1). So I'm not quite sure how he
gets to the anti-solipsist viewpoint.
1) comes from the fact that applying 2), without something like 1)
being true, leads to the Occam catastrophe, namely we should expect to
find ourselves in a very simple boring world with nothing complex like
brains in it. Given that we can conceive of ourselves as being born
into a virtual reality (e.g. Matrix-style) where the virtual reality
generator renders nothing at all, the Occam catastrophe situation is
certainly conceivable. Hence my interest in what happens in sensory
deprivation experiments. If you put a newborn baby in one of those, it
may never become conscious (not that that experiment would be ethical!).
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to email@example.com.