Colin Geoffrey Hales wrote:
> Hi Brent,
> Please see the post/replies to Quentin/LZ.
> I am trying to understand the context in which I can be wrong and how
> other people view the proposition. There can be a mixture of mistakes and
> poor communication and I want to understand all the ways in which these
> things play a role in the discourse.
> So...
> >> So, I have my zombie scientist and my human scientist and I
> >> ask them to do science on exquisite novelty. What happens?
> >> The novelty is invisible to the zombie, who has the internal
> >> life of a dreamless sleep.
> >
> > Scientists don't literally "see" novel theories - they invent
> > them by combining other ideas.  "Invisible" is just a metaphor.
> I am not talking about the creative process. I am talking about the
> perception of a natural-world phenomenon that has never before been
> encountered. There can be no a-priori scientific knowledge in such
> situations. It is as far from a metaphor as you can get. I mean literal
> invisibility. See the red photon discussion in the LZ posting. If all you
> have is a-priori abstract (non-phenomenal) rules of interpretation of
> sensory signals to go by, then one day you are going to misinterpret,
> because the same signals arrived from a completely different source
> and you'd never know it. That is the invisibility I claim at the center of
> the zombie's difficulty.
> >
> >> The reason it is invisible is because there is no phenomenal
> >> consciousness. The zombie has only sensory data to use to
> >> do science. There are an infinite number
> >> of ways that same sensory data could arrive from an infinity
> >> of external natural world situations. The sensory data is
> >> ambiguous - it's all the same - action potential pulse trains
> >> traveling from sensors to brain. The zombie cannot possibly
> >> distinguish the novelty from the sensory data
> >
> > Why can it not distinguish them as well as the limited human scientist?
> Because the human scientist is distinguishing them within the phenomenal
> construct made from the sensory data, not directly from the sensory data -
> which all the zombie has. The zombie has no phenomenal construct of the
> external world. It has an abstraction entirely based on the prior history
> of non-phenomenal sensory input.

All the evidence indicates that humans have only an
abstraction entirely based on the prior history
of phenomenal sensory input -- which itself contains only information
previously present in an abstraction entirely based on the prior history
of non-phenomenal sensory input.

> >
> >> and has no awareness of the external world or even its own boundary.
> >
> > Even simple robots like the Mars Rovers have awareness of the
> > world, where they are, their internal states, and
> No they don't. They have an internal state sufficiently complex to
> navigate according to the rules of the program (a-priori knowledge) given
> to them by humans, who are the only beings that are actually aware where
> the rover is. Look at what happens when the machine gets hung up on
> novelty... like the rock nobody could allow for... who digs it out?
> Not the rover... humans do...

Because it lacks phenomenality? Or because it is not
a very smart robot?

> The rover has no internal life at all. Going
> 'over there' is what the human sees. 'Actuate this motor until this
> number equals that number' is what the rover does.
> >
> > No.  You've simply assumed that you know what "awareness" is and you
> > have then defined a zombie as not having it.  You might as
> > well have just defined "zombie" as "just like a person, but can't do
> science" or "can't whistle".  Whatever definition you give
> > still leaves the question of whether a being whose internal
> > processes (and a fortiori the external processes) are
> > functionally identical with a human's is conscious.
> This is the nub of it. It's where I struggle to see the logic others see.
> I don't think I have done what you describe. I'll walk myself through it.

> What I have done is try to figure out a valid test for phenomenal
> consciousness.
> When you take away phenomenal consciousness what can't you do? It seems
> science is a unique/special candidate for a variety of reasons. Its
> success is critically dependent on the existence of a phenomenal
> representation of the external world.

So is art. So is walking around without bumping into things.
So, no, science is not unique.

> The creature that is devoid of such constructs is what we typically call a
> zombie. May be a mistake to call it that. No matter.
> OK, so the real sticking point is the 'phenomenal construct'. The zombie
> could have a 'construct' with as much detail in it as the human phenomenal
> construct, but that is phenomenally inert (a numerical abstraction). Upon
> what basis could the zombie acquire such a construct?

The same way a human did, but without the phenomenality, I suppose.

>  It can't get it from
> sensory feeds without knowing already what sensory feeds relate to what
> part of the natural world.

Why would it have to "know already"? Do humans "know already"?

And surely the whole point of a "model" or "construct" is that
it is exploratory, hypothetical.

> That a-priori knowledge is not available.

And it is to humans? Why should apriori knowledge
be available to humans?

> what the zombie is trying to find out. This is the logical loop from my
> perspective.

hypothesise -> test -> falsify -> reject -> hypothesise again
*is* a loop.
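That loop can even be sketched in code. A minimal illustration, where the
"hidden law", the candidate hypothesis space, and the observations are all
invented for the example:

```python
import random

random.seed(0)  # deterministic for illustration

def falsified(hypothesis, observations):
    """A hypothesis is rejected as soon as one observation contradicts it."""
    return any(hypothesis(x) != y for x, y in observations)

# Toy world: the hidden law is y = 2*x; the candidates are simple functions.
observations = [(x, 2 * x) for x in range(5)]
space = [lambda x: x, lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]

hypothesis = random.choice(space)           # hypothesise
while falsified(hypothesis, observations):  # test -> falsify -> reject
    hypothesis = random.choice(space)       # hypothesise again
```

Note the loop only terminates because a survivable hypothesis exists in the
space; nothing in it requires phenomenal access, only comparisons between
predictions and incoming data.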

> So who's in the logical loop here? I am assuming zero a-priori scientific
> knowledge in the human and the zombie.
>  How does each get to a state of
> non-zero scientific knowledge of the external natural world? For this is
> what has actually happened in an evolutionary sense. We have phenomenal
> consciousness for a reason.

How do you get to be a complex biological organism?

That starts from "nothing", too.

> If you zero out all a-priori knowledge in two entities, one with and one
> without phenomenal consciousness, the only one that can make any progress
> is the one with phenomenal consciousness - the one that has experiences
> of the external world generated in their head.

As usual, you have just assumed that out of thin air.

> In a sense the a-priori knowledge that the human has is 'hard-wired' in a
> capability to construct phenomenal scenes from sensory data. That a-priori
> 'knowledge' is not scientific knowledge of the type found by using that
> faculty. The phenomenal scenes make some assumptions and they can
> mis-inform. But they do connect the scientist with the world outside the
> scientist in a direct way that means that when something acts in
> contradiction to previous behaviour that novelty is phenomenally visible.

But that process is only apriori for the individual. As far as
life in general is concerned, it started from nothing.

> I have made no assumptions of a-priori scientific knowledge found using
> phenomenal consciousness. I can show technically how the lack of
> phenomenal consciousness prevents the zombie'd scientist from ever
> accurately getting at laws of the external natural world.

I don't think so.

> On the other hand if you assume a computation (a numerical abstraction)
> capable of doing it then you are assuming phenomenal consciousness,

Of course not.

> because all of the a-priori knowledge inherent in such a device has to be
> bestowed upon the creature by humans.

Humans didn't get their apriori knowledge from other humans
ad infinitum. If the evolutionary process generated it from scratch,
then it can be regenerated by an artificial evolutionary process.
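The point can be illustrated with a minimal artificial-evolution loop. The
genome, the fitness function, and all the parameters below are toy choices
invented for this sketch, not a model of biology:

```python
import random

random.seed(0)  # deterministic for illustration

TARGET = 42  # stand-in for a regularity the process must discover from scratch

def fitness(genome):
    """Closeness to the hidden regularity (higher is better)."""
    return -abs(genome - TARGET)

def mutate(genome):
    """Blind variation: a small random change, with no knowledge of the target."""
    return genome + random.choice([-1, 1])

# Start from "nothing": a random population with no built-in knowledge.
population = [random.randint(0, 100) for _ in range(20)]

for _ in range(500):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # selection keeps the fitter half
    offspring = [mutate(random.choice(survivors)) for _ in range(10)]
    population = survivors + offspring

best = max(population, key=fitness)
```

Variation plus selection is enough to accumulate "knowledge" of the target
that no individual genome started with, which is all the regeneration claim
needs.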

> That model has been created with
> scientific exploration made possible with phenomenal consciousness. That
> such a creature then automatically has access to the external world is
> just an assumption.

Sensory feeds *are* access.

> Sensory feeds have no phenomenal content! Sensory feeds poking an
> abstraction have nothing to say about the world external to the zombie.

Yes they do. They convey information about it. They
wouldn't meaningfully be "sensory" otherwise.

> I can't see I have assumed anything. Indeed I see everyone else as
> assuming something about the nature of sensory feeds and the availability
> of a-priori knowledge in a situation where there is none.

You're assuming "there will be none". Since apriori
knowledge is not phenomenality, a zombie would have it, and a robot
could have it.

> colin

 You received this message because you are subscribed to the Google Groups 
"Everything List" group.