Stathis Papaioannou wrote:
2010/1/14 Brent Meeker <>:

Yes, I can see that.  By aggregating the brain into one computation do you
mean replacing it with a synchronous digital computer whose program would
not only reproduce the I/O of individual neurons, but also the instantaneous
state of signals which were traveling between them (since presumably timing
is important to the neuron's function)?  Or do you mean replacing it with a
synchronous digital computer which produces the same I/O at the afferent and
efferent nerves?  In the former case, it seems that "thoughts" would be
distributed over many, not necessarily sequential, computational steps.  In
the latter it would not be possible to map the computational steps to
brain states at all, since they are only required to be the same at the I/O;
and hence it would be difficult to say what constituted a thought.
Given these two possible models of functionalism, I'm not clear on what
"the same computation" means.  Are these two doing the same computation
because they have the same I/O?  Over what range of I does the O have to be
the same - all possible?  All actually experienced?  Those experienced in
the last 2 minutes?

I think it would be enough for the AI to reproduce the I/O of the
whole brain in aggregate. That would involve computing a function
controlling each efferent nerve, accepting as data input from the
afferent nerves. The behaviour would have to be the same as the brain's
for all possible inputs, otherwise the AI might fail the Turing test.

To have the same output for all possible inputs is a very strong condition and seems to go beyond functionalism. Suppose (as seems likely) there are inputs that "crash" the brain (e.g. induce epileptic seizures). Would the AI brain be less conscious because it didn't experience these seizures? Passing or failing the Turing test is a rather crude measure - after all, the interlocutor might simply guess right.

It's not clear if the modelling would have to be at the molecular,
cellular or some higher level in order to achieve this, but in any
case I expect that there would be many different programs that could
do the job even if the hardware and operating system are kept the
same. It could therefore be a case of multiple computations leading to
the same experience. Pinning down a thought to a location in time and
space would pose no more of a problem for the AI than for the brain.

Then among those AI brains with different computations but the same I/O, you would have to find the same OMs (observer moments) constituted by different sequences of computational steps.

My intuition is that having the same O for "most" (some very large set of) I would be enough to instantiate consciousness - just not the same consciousness. I think there may be different kinds of consciousness, so a look-up-table (like Searle's Chinese Room) may be conscious but in a different way.
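The contrast being drawn here - a system that computes its outputs versus a look-up table that merely retrieves the same outputs over a large but finite set of inputs - can be made concrete with a small sketch. This is only an illustration; the function and names below are invented for the example, not anything from the discussion itself.

```python
# Sketch: two implementations with identical I/O over a finite input set,
# illustrating "same O for most I" via different computations.
# All names and the toy function are illustrative assumptions.

def computed_response(stimulus: int) -> int:
    """Produce the output by actually computing it (an arbitrary toy rule)."""
    return (stimulus * 3 + 1) % 17

# The look-up-table implementation (the Chinese Room style system):
# precompute the same mapping once, then only retrieve answers.
STIMULI = range(100)  # the "very large set of" inputs covered
lookup_table = {s: computed_response(s) for s in STIMULI}

def table_response(stimulus: int) -> int:
    return lookup_table[stimulus]

# Identical I/O over the covered inputs...
assert all(computed_response(s) == table_response(s) for s in STIMULI)

# ...but the table has no answer outside its covered set, whereas the
# computed version keeps working - loosely analogous to inputs that
# "crash" one system but not the other.
try:
    table_response(1000)
except KeyError:
    print("lookup table has no answer outside its input set")
```

The two functions are behaviourally indistinguishable over `STIMULI`, yet the sequences of computational steps that produce each output are entirely different - which is exactly why mapping computational steps to "thoughts" is problematic when only the I/O is constrained.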

You received this message because you are subscribed to the Google Groups 
"Everything List" group.