On Tue, Mar 25, 2008 at 11:48:17AM -0700, Jason wrote:
> Thanks for your answers. I think my description and understanding of
> the Doomsday argument was overly simplified, but there is a very
> similar anthropic reasoning problem I heard before. The situation is
> something like this:
> There are 5,000 females and 5 males that are created as part of some
> experiment. They are the only humans that exist at the time. 200
> years later, after all the humans in the study have died, the alien
> experimenter creates 5 females and 5,000 males. Now if you are in
> this experiment and find yourself to be male, you could reasonably
> guess that you are part of the second batch of humans, since 99.9% of
> males in the experiments belong to the second batch.
> The original 5 males, short of being given extra information, will
> also conclude they are part of the second group. What would they
> conclude, however, if they were told they are part of the first batch?
> Personally I would conclude, "Well it had to be someone, there was
> after all, an original group. I just happen to be one of those few
> rare ones."
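The arithmetic behind that 99.9% figure can be made explicit. The sketch below is a minimal Bayesian reading of the thought experiment, assuming each observer is equally likely to be any of the humans of their sex across both batches (a self-sampling assumption); the helper name `p_second_given` is mine, not from the discussion:

```python
# Two-batch thought experiment: counts of each sex created in each batch.
first_batch = {"female": 5000, "male": 5}
second_batch = {"female": 5, "male": 5000}

def p_second_given(sex):
    """Probability of being in the second batch, given one's sex,
    under a uniform self-sampling assumption over all such humans."""
    total = first_batch[sex] + second_batch[sex]
    return second_batch[sex] / total

print(p_second_given("male"))    # 5000/5005, roughly 0.999
print(p_second_given("female"))  # 5/5005, roughly 0.001
```

On this reading, a male observer's 0.1% chance of being in the first batch is exactly the sense in which the original 5 males are "rare ones".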
That you are even in such an experiment at all is a rare situation,
and a cause to look for explanations elsewhere.
> ASSA looks for explanations elsewhere, such as concluding
> that perhaps the aliens likely never go through with the second half
> of the experiment in the future, or maybe the aliens mess up the
> creation of the humans and create non-conscious zombie males. The
> danger I see in using SSAs to determine what animals can be conscious
> is that inevitably there are human generated OMs, regardless of there
> being ant OMs or not. We may be extremely rare in the set of all OMs,
> but I don't see the probability that my OM is experienced as some
> value between 0 and 1; I see it as 1. This, I think, is the root of
> the difference in conclusions between you and me.
I don't think such thought experiments work: we cannot arbitrarily
define the reference class the way you did above. Sorry, but I don't
find this line of argument convincing.
> Regarding the mirror test, I see two problems with it. The first is
> that it has a built-in bias: it favours animals that are visually
> oriented. Dogs, for example, do not pass. However, can a dog
> recognize its own scent? I believe it can; dogs can follow their own
> tracks. If dogs ruled the world and subjected humans to a self-smell
> test, would humans be capable of passing? Would it be valid for the
> dogs to conclude humans weren't conscious because they couldn't
> identify themselves by their scent?
Absolutely agree. I comment on this very thing on page 95 of my book.
> My second objection is that I don't think self-awareness is a
> necessary requirement for consciousness. I equate awareness with
> consciousness. One could be aware of many things without being aware
> of the self. The phenomenon of "ego death" is a case in which humans
> can lose the sense of self, yet they don't lose consciousness in the
> process.
As I have said before, the requirement is only that one be capable of
self-awareness in order to be capable of consciousness (this comes
from anthropic-principle reasoning), not that self-awareness is
required every waking moment. Nevertheless, it is still of passing
interest to see what people who've experienced this phenomenon
actually report of their consciousness, just as it is interesting to
see what people report of their experiences of sensory deprivation.
> I think there is some process in the
> brain that generates the sensation of the self being a distinct actor
> within an environment but I think this is just a tool that evolution
> My view on consciousness is similar to Chalmers's, in that perhaps
> all informational processes might be conscious in some manner:
> Chalmers also mentions the more conservative view that only certain
> types of informational processes are conscious.
An informational process is just a physical process that is
interpreted in a symbolic way by some observer. Consciousness must be
an objective property of a process. In order to restrict
consciousness to just information processes, independently of all
other observers, the process has to self-interpret symbolically, ie
be self-aware.
Of course informational processes need not be self-aware, but in that
case they are information only by virtue of being so interpreted by
another conscious entity.
> Perhaps some type of
> self-reference is required, but I am not yet convinced. I have Godel,
> Escher, Bach on my reading queue and may change my opinion after
> reading it, as my understanding of it is that it says consciousness is
> the result of "strange loops".
Hofstadter has some extremely interesting speculations along these
lines, and GEB is a very entertaining read, however you won't find the
idea of strange loops well developed in that book. You would probably
learn more from the Wikipedia page. Apparently, he has written a more
recent book on the topic, or you could try Ian Stewart and Jack
Cohen's "Figments of Reality".
Incidentally, strange loops are a recurring theme in my Theory of Nothing.
A/Prof Russell Standish Phone 0425 253119 (mobile)
UNSW SYDNEY 2052 [EMAIL PROTECTED]
You received this message because you are subscribed to the Google Groups
"Everything List" group.