On 2/3/2011 5:17 AM, Bruno Marchal wrote:
On 03 Feb 2011, at 01:18, Brent Meeker wrote:
On 2/2/2011 2:00 AM, Stathis Papaioannou wrote:
On Wed, Feb 2, 2011 at 6:45 PM, Brent Meeker wrote:
I think it very likely that the brain can be so modeled. But the meaning of
that simulated brain, as expressed in its output decisions in response to
inputs, is dependent on the rest of the world, or at least that part of it
with which the brain will interact - including the past evolutionary history
leading up to the brain. Its computations have no canonical interpretation in
isolation.
You can connect the simulated brain to transducers which convert
environmental inputs into electrical signals. But then, what would
happen if the same electrical signals were input from data on disk
rather than the environment? Would the brain's experience be
different? If so, how would it know where the data was coming from?
It wouldn't know; and its responses would have no meaning except to
someone who did know. Context is essential. Otherwise you get the
rock that calculates everything.
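[The indistinguishability point above can be made concrete with a toy sketch. The function and values below are hypothetical illustrations, not anything from the thread: a deterministic simulation's state depends only on the bytes it receives, not on whether they arrived from a transducer or a disk replay.]

```python
def simulate_brain(input_stream):
    """Toy stand-in for a deterministic brain simulation: its final
    state is a function of the received signals alone, carrying no
    record of where those signals originated."""
    state = 0
    for signal in input_stream:
        # Deterministic state update; any such rule makes the point.
        state = (state * 31 + signal) % 2**32
    return state

live_signals = [12, 7, 255, 3]     # "transducer" output from the environment
replayed = list(live_signals)      # the same bytes read back from disk

# Identical inputs yield identical states, whatever their source.
assert simulate_brain(live_signals) == simulate_brain(replayed)
```

[Of course, this only shows that the simulation cannot distinguish the sources; whether its computations thereby *mean* anything is exactly the contested question.]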
If the context is needed and is not Turing emulable, then comp is just
false. If it is Turing emulable, then the reasoning goes through, unless
you have an objection, and it would be nice if you tried to say where.
My reservation is that the context will be Turing emulable, but it will
have to be so large as to constitute a whole world. That is what is
required for it to be self-interpreting.
You received this message because you are subscribed to the Google Groups
"Everything List" group.