Stathis Papaioannou wrote:
> Brent Meeker writes:
>>> You're implying that the default assumption should be that 
>>> consciousness correlates more closely with external behaviour
>>> than with internal activity generating the behaviour: the tape
>>> recorder should reason that as the CD player produces the same
>>> audio output as I do, most likely it has the same experiences as
>>> I do. But why shouldn't the tape recorder reason: even though the
>>> CD player produces the same output as I do, it does so using
>>> completely different technology, so it most likely has completely
>>> different experiences from my own?
>> Here's my reasoning: We think other people (and animals) are
>> conscious, have experiences, mainly because of the way they behave
>> and to a lesser degree because they are like us in appearance and
>> structure.  On the other hand we're pretty sure that consciousness
>> requires a high degree of complexity, something supported by our
>> theories and technology of information.  So we don't think that
>> individual molecules or neurons are conscious - it must be
>> something about how a large number of subsystems interact.  This
>> implies that any one subsystem could be replaced by a functionally
>> similar one, e.g. a silicon "neuron", without changing consciousness.
>> So our theory is that what matters is not the technology, digital
>> vs analog, but the information processing in some functional sense.
>> So given two things that have the same behavior, the default
>> assumption is they have the same consciousness (i.e. little or none
>> in the case of CD and tape players).  If I look into them deeper
>> and find they use different technologies, that doesn't do much to
>> change my opinion - it's like a silicon neuron vs a biochemical
>> one.  If I find the flow and storage of information is different,
>> e.g. one throws away more information than the other, or one adds
>> randomness, then I'd say that was evidence for different
>> consciousness.
> I basically agree, but with qualifications. If the attempt to copy
> human intelligence is "bottom up", for example by emulating neurons
> with electronics, then I think it is a good bet that if it behaves
> like a human and is based on the same principles as the human brain,
> it probably has the same types of conscious experiences as a human.
> But long before we are able to build such artificial brains, we will
> probably have the equivalent of characters in advanced computer games
> designed to pass the Turing Test using technology nothing like a
> biological brain. If such a computer program is conscious at all, I
> would certainly not bet that it was conscious in the same way as a
> human is conscious, just because it is able to fool us into thinking
> it is human.

Such computer personas will probably be very different from us in terms of 
information storage and processing - although we may not know it when they are 
developed, simply because we still won't know how humans do it.  A good example 
would be a neural net versus a production system.  At some level I'm sure you 
can get the same behavior out of them, but at the information processing level 
they're very different.
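
To make that concrete, here's a toy sketch of my own (in Python; the XOR 
example and the function names are just illustrative choices, not anything 
from the thread): a rule-based production system and a hand-wired two-unit 
threshold network compute exactly the same function, so their external 
behavior is identical, while internally one matches symbolic rules and the 
other sums weighted activations.

# Two systems with identical input/output behavior but very
# different internal information processing.

# System 1: a production system - explicit symbolic rules
# that match a condition and fire an action.
def xor_rules(a, b):
    if a == 0 and b == 0: return 0   # rule 1
    if a == 0 and b == 1: return 1   # rule 2
    if a == 1 and b == 0: return 1   # rule 3
    if a == 1 and b == 1: return 0   # rule 4

# System 2: a tiny hand-wired neural net computing the same function.
def step(x):                  # threshold activation: 1 if input > 0
    return 1 if x > 0 else 0

def xor_net(a, b):
    h1 = step(a - b)          # hidden unit: fires only for (1, 0)
    h2 = step(b - a)          # hidden unit: fires only for (0, 1)
    return step(h1 + h2)      # output unit: OR of the hidden units

# Externally the two are indistinguishable:
for a in (0, 1):
    for b in (0, 1):
        assert xor_rules(a, b) == xor_net(a, b)

XOR is trivially simple, of course; the point is only that sameness of 
behavior at the input/output level says nothing, by itself, about sameness 
of the processing underneath.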

Incidentally, I wonder if anybody remembers that the test Turing proposed was 
for an AI and a man each to try to fool an interrogator by pretending to be a 
woman.

Metaphysics is a restaurant where they give you a 30,000 page menu and no food.
        --- Robert Pirsig
