on 05.02.2011 02:27 Colin Hales said the following:

Stathis Papaioannou wrote:
On Fri, Feb 4, 2011 at 12:05 PM, Colin Hales
<c.ha...@pgrad.unimelb.edu.au> wrote:


I understand that this is your position but I would like you to
consider a poor, dumb engineer who neither knows nor cares about
philosophy of mind. All he cares about is making an accurate model
which will predict the pattern of motor neuron firings for a human
brain given a certain initial state. Doing this is equivalent to
constructing a human level AI, since the simulation could be given
information and would respond just as a human would given the same
information. Now, I take it that you don't believe that such
predictions can be made using a mathematical model. Is that right?
I am also a poor dumb engineer (one who has examined far too much
philosophy of mind; enough to be quite irritated by it :-). I started
as an engineer with the 'black box' idea and eventually found enough
evidence in human behaviour (specifically scientific behaviour) to
doubt that we can make an AGI that can do science like us when the
black box contains only a computer running software. I use the
scientist as my target because a scientist's behaviour is testable. I
conclude that I am more likely to succeed if the 'black box' includes
more than mere software models of a brain.

I have recently read Jaron Lanier's You Are Not a Gadget: A Manifesto. I believe it should be pretty helpful for a dumb engineer. For example, there is a nice critique of the Turing test there, along with many other interesting thoughts on open culture, swarm intelligence, and the singularity. It is not directly related to the current discussion, though.


You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.