On Mon, Aug 15, 2011 at 2:06 AM, Colin Geoffrey Hales
<cgha...@unimelb.edu.au> wrote:

Read all your comments....cutting/snipping to the chase...

It is a little unfortunate you did not answer all of the questions.  I
hope that you will answer both questions (1) and (2) below.


Yeah sorry about that... I'm really pressed at the moment.


        [Jason ]

        Your belief that AGI is impossible to achieve through computers
depends on at least one of the following propositions being true:
        1. Accurate simulation of the chemistry or physics underlying
the brain is impossible
        2. Human intelligence is something beyond the behaviors
manifested by the brain
        Which one(s) do you think is/are correct and why? 




        I think you've misunderstood the position in ways that I suspect
are widespread...


        1) simulation of the chemistry or physics underlying the brain
is impossible

Question 1:

Do you believe correct behavior, in terms of the relative motions of
particles, is possible to achieve in a simulation?




YES, BUT only if you simulate the entire universe. Meaning you already
know everything, so why bother?


So NO, not in the real, practical world, where we are computing an agent
X that is ignorant of NOT_X.


For a computed cognitive agent X, this will come down to how much the
natural processes of NOT_X (the external world) involve themselves in
the natural processes of X.


I think there is a nonlocal direct impact of NOT_X on the EM fields
inside X. The EM fields are INPUT, not OUTPUT.

But this will only be settled experimentally. I aim to do that.


Take the example of the Millennium Run.  The simulation did not produce
dark matter, but the representation of dark matter behaved like dark
matter does in the universe (in terms of relative motion).  If we can
accurately simulate the motions of particles, predicting where they
will be at time T given where they are now, then we can peek into the
simulation to see what is going on.

Please answer if you agree the above is possible.  If you do not, then I
do not see how your viewpoint is consistent with the fact that we can
build simulations like the Millennium Run, or test aircraft designs
before building them, etc.
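The prediction step described here can be sketched as a toy integrator. This is a minimal illustration with invented particles and parameters (it has nothing to do with the actual Millennium Run code): the simulation is just arithmetic, and "peeking into it" means reading the predicted positions back out at time T.

```python
# Toy particle-motion predictor (illustrative only, invented parameters).
# Nothing physical is created: we advance numbers and read them out.

def simulate(positions, velocities, accel, dt, steps):
    """Advance particles with simple semi-implicit Euler steps."""
    pos = [list(p) for p in positions]
    vel = [list(v) for v in velocities]
    for _ in range(steps):
        for i in range(len(pos)):
            a = accel(pos, i)
            for k in range(len(pos[i])):
                vel[i][k] += a[k] * dt      # update velocity first...
                pos[i][k] += vel[i][k] * dt  # ...then position
    return pos  # "peeking into the simulation" is just reading these numbers

# A free particle (zero acceleration) moves in a straight line, so the
# predicted position at T = steps * dt is start + velocity * T.
free = lambda pos, i: [0.0, 0.0]
final = simulate([[0.0, 0.0]], [[1.0, 0.5]], free, dt=0.1, steps=10)
print(final)  # the particle ends up near [1.0, 0.5]
```

The point of the sketch is only that predicted particle locations are ordinary data that can be extracted and inspected, exactly as the fly-through rendering of the Millennium Run was extracted from stored positions.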

Question 2:

Given the above (that we can predict the motions of particles in
relation to each other), we can extract data from the simulation to
see how things are going inside, much as a large array of
floating-point values representing particle positions in the
Millennium simulation had to be converted in order to render a
fly-through video.  If the only information we can extract is the
predicted particle locations, then even though the simulation does not
create EM fields or fire in this universe, we can at least determine
how the different particles will be arranged after running it.

Therefore, if we simulated a brain answering a question on a
standardized test, we could peer into the simulation to determine in
which bubble the graphite particles are concentrated (from the
simulated pencil, controlled by the simulated brain in a model of
particle interactions within an entire classroom).  We would thus have
a model which tells us what an intelligent person would do, based
purely on the positions of particles in a simulation.
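The "which bubble got marked" readout can be sketched in a few lines. The bubble regions and graphite coordinates below are invented for illustration (a real readout would come from a full brain-plus-classroom model); the sketch only shows that the answer is recoverable from bare positions.

```python
# Reading the test answer from pure particle positions (invented data).
bubbles = {"A": (0.0, 1.0), "B": (1.0, 2.0), "C": (2.0, 3.0)}  # x-ranges

def marked_bubble(graphite_x):
    """Return the bubble whose x-range contains the most graphite particles."""
    counts = {name: 0 for name in bubbles}
    for x in graphite_x:
        for name, (lo, hi) in bubbles.items():
            if lo <= x < hi:
                counts[name] += 1
    return max(counts, key=counts.get)

# Suppose the simulated pencil deposited most particles between x=1 and x=2:
answer = marked_bubble([1.2, 1.5, 1.7, 0.3, 2.4])
print(answer)  # -> "B"
```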

What is wrong with the above reasoning?  It seems to me if we have a
model that can be used to determine what an intelligence would do, then
the model could stand in for the intelligence in question.



I think I already answered this. You can simulate a human if you already
know everything, just as you can simulate flight if you simulate the
environment you are flying in. In the equivalent case applied to human
cognition, you have to simulate the entire universe for the simulation
to be accurate. But we are trying to create an artificial cognition that
can be used to find out about the universe outside the artificial
cognition ... like humans, you don't know what's outside ... so you
can't do the simulation. The reasoning fails at this point, IMO.


The above issue about the X/NOT_X interrelationship stands, however.


The solution is: there is/can be no simulation in an artificial
cognition. It has to use the same processes a brain uses: literally.
This is the replication approach.


Is it really such a big deal that you can't get AGI with computation?
Who cares? The main thing is we can do it using replication. We are in
precisely the same position the Wright Bros were when creating
artificial flight.

This situation is kind of weird. Insisting that simulation/computation
is the only way to solve a problem is like saying 'all buildings must be
constructed out of paintings of bricks, and only people doing it this
way will ever build a building.' For 60 years, every building made like
this has fallen down.


Meanwhile I want to build a building out of bricks, and I have to
justify my position?


Very odd.




I literally just found out that I passed my PhD examination! Woohoo!

So that's .....


Very odd.


Dr. Colin



You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.