Pete Carlton writes:
> Let's say that you were able to completely specify one Eric, by giving
> a (possibly infinitely) long description.  Let's call the entity you
> have thus specified "Eric01".  Our point of difference seems to be
> this: You believe that when Eric01 says "I", he is referring precisely
> to Eric01.  I believe that when Eric01 says "I", he is referring to the
> entire ensemble of Erics who are identical to Eric01 in all the ways
> Eric01 is capable of detecting.  Because each member of this ensemble
> is also saying "I", and meaning the same thing by it.
Here is the question I wonder about.  Is it meaningful for Eric01 to
consider the concept of precisely the one Eric that he is?  Or would you
say that it is fundamentally impossible for a system (e.g. Eric01) to
accurately conceive of the concept of itself as a completely specified
and single entity, since this requires discrimination beyond its powers
of perception, and, as you note, a possibly infinitely detailed
description?

Perhaps we could consider a simpler example: a conscious computer
program, an AI.  Run the same program in lock step on two computers.
Suppose the program is aware of these circumstances.  Is it meaningful
for that program to have a concept of "the particular computer that is
running this program"?

Hal
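A minimal sketch of the lockstep scenario (my own illustration, not something from the thread): a deterministic toy "program" whose entire state is a function of its inputs.  Anything the program can compute about itself, including any candidate self-concept, is likewise a function of that state, so the two runs cannot differ on it.  The names `run_program` and `self_concept` are invented for the example.

```python
def run_program(seed: int) -> dict:
    """A toy deterministic 'AI': its entire state derives from its input."""
    state = {"memory": [seed * 2, seed + 7], "step": 0}
    for _ in range(3):
        state["step"] += 1
        state["memory"].append(sum(state["memory"]) % 100)
    # The only 'self-concept' available to the program is something it can
    # compute from its own state -- here, a digest of its memory contents.
    state["self_concept"] = hash(tuple(state["memory"]))
    return state

# Two computers running the program in lock step on the same input:
computer_a = run_program(42)
computer_b = run_program(42)

# Every question the program can ask -- including "which computer am I
# on?" -- gets the same answer in both runs; the states are identical.
assert computer_a == computer_b
```

The point the sketch makes concrete: unless the environment injects some distinguishing input (a hostname, a clock skew), no computation available to the program can pick out "the particular computer that is running this program."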