Brent Meeker writes:

> >>Why not? Can't we map bat conscious-computation to human conscious-computation, since you suppose we can map any computation to any other? But, you're thinking, since there is a practical infinity of maps (even a countable infinity if you allow one->many) there is no way to know which is the correct map. There is if you and the bat share an environment.
> > 
> > 
> > You're right that the correct mapping is the one in which you and the bat share the environment. That is what interaction with the environment does: forces us to choose one mapping out of all the possible ones, whether that involves talking to another person or using a computer. However, that doesn't mean I know everything about bats if I know everything about bat-computations. If it did, that would mean there was no difference between zombie bats and conscious bats, no difference between first person knowledge and third person or vicarious knowledge.
> > 
> > Stathis Papaioannou
> 
> I don't find either of those conclusions absurd. Computationalism is generally thought to entail both of them. Bruno's theory that identifies knowledge with provability is the only form of computationalism that seems to allow the distinction in a fundamental way.

The Turing test would seem to imply that if it behaves like a bat, it has the mental states of a bat. Maybe this is a good practical test, but I think we can keep computationalism/strong AI and still allow that something might have different mental states yet behave the same. A person given an opioid drug still experiences pain, although less intensely, and could easily fool the Turing tester into believing that he is experiencing the same pain as in the undrugged state. By extension, it is logically possible, though unlikely, that the subject may have no conscious experiences at all.

The usual argument against this is that by the same reasoning we cannot be sure that our fellow humans are conscious. This is strictly true, but we have two reasons for assuming other people are conscious: they behave as we do and their brains are similar to ours. I don't think it would be unreasonable to wonder whether a digital computer that behaves as we do really has the same mental states as a human, while still believing that it is theoretically possible for a close enough analogue of a human brain to have the same mental states.

Stathis Papaioannou