Kory Heath wrote:
> On Nov 20, 2008, at 3:33 PM, Brent Meeker wrote:
>> Doesn't the question go away if it is nomologically impossible?
> I'm sort of the opposite of you on this issue. You don't like to use
> the term "logically possible", while I don't like to use the term
> "nomologically impossible". I don't see the relevance of nomological
> possibility to any philosophical question I'm interested in. For
> anything that's nomologically impossible, I can just imagine a
> cellular automaton or some other computational or mathematical
> "physics" in which that thing is nomologically possible. And then I
> can just imagine physically instantiating that universe on one of our
> real computers. And then all of my philosophical questions still apply.
> I can certainly imagine objections to that viewpoint. But life is
> short. My point was that, since you already agreed that it's
> nomologically possible for a random robot to outwardly behave like a
> conscious person for some indefinite period of time, we can sidestep
> the (probably interesting) discussion we might have about nomological
> vs. logical possibility in this case.
>> Does a random number generator have computational functionality just in case
>> it (accidentally) computes something? I would say it does not. But tying the
>> concept of a zombie to a capacity, rather than to observed behavior, makes a
>> difference to Bruno's question.
> I think that Dennett explicitly refers to computational capacities
> when talking about consciousness (and zombies), and I follow him. But
> Dennett's point is that computational capacity is always, in
> principle, observed behavior - or, at least, behavior that can be
> observed. In the case of Lucky Alice, if you had the right tools, you
> could examine the neurons and see - based on how they were behaving! -
> that they were not causally connected to each other. (The fact that a
> neuron is being triggered by a cosmic ray rather than by a neighboring
> neuron is an observable part of its behavior.) That observed behavior
> would allow you to conclude that this brain does not have the
> computational capacity to compute the answers to a math test, or to
> compute the trajectory of a ball.
>> I would regard it as an empirical question how the robot's brain worked. If
>> the brain processed perceptual and memory data to produce the behavior, as in
>> Jason's causal relations, I would say it is conscious in some sense (I think
>> there are different kinds of consciousness, as evidenced by Bruno's list of
>> first-person experiences). If it were a random number generator, i.e.
>> accidental behavior, I'd say not.
> I agree. But why do you say you're puzzled about how to answer Bruno's
> question about Lucky Alice? I think you just answered it - for you,
> Lucky Alice wouldn't be conscious. (Or do you think that Lucky Alice
> is different from a robot with a random-number generator in its head?
> I don't.)
I think Alice is different. She has the capacity to be conscious. That capacity
is temporarily interrupted by some mysterious failure of gates (or neurons) in
her brain - but wait, these failures are serendipitously canceled out by a
burst of cosmic rays, so every gate gets the same input/output as if nothing
had happened. So, functionally, it's as if the gates didn't fail at all. This
functionality goes beyond external behavior; it includes forming
memories, paying attention, etc. Of course we may say it is not causally
related to Alice's environment, but this depends on a certain theory of
causality, a physical theory. If the cosmic rays exactly replace all the gate
functions, maintaining the same causal chains, then from an informational
perspective we might say the rays were caused by the relations to her
You received this message because you are subscribed to the Google Groups
"Everything List" group.