On Mon, Aug 15, 2011 at 7:21 PM, Colin Geoffrey Hales <cgha...@unimelb.edu.au> wrote:

> On Mon, Aug 15, 2011 at 2:06 AM, Colin Geoffrey Hales <cgha...@unimelb.edu.au> wrote:
>
> Read all your comments... cutting/snipping to the chase...
>
> It is a little unfortunate you did not answer all of the questions.  I hope
> that you will answer both questions (1) and (2) below.
>
> Yeah sorry about that... I’m really pressed at the moment.
>
>
No worries.


>
> [Jason]
>
>
> Your belief that AGI is impossible to achieve through computers depends on
> at least one of the following propositions being true:
> 1. Accurate simulation of the chemistry or physics underlying the brain is
> impossible
> 2. Human intelligence is something beyond the behaviors manifested by the
> brain
> Which one(s) do you think is/are correct and why?
>
>
> Thanks,
>
> Jason
>
> [Colin]
>
> I think you’ve misunderstood the position in ways that I suspect are
> widespread...
>
> 1) simulation of the chemistry or physics underlying the brain is
> impossible
>
>
> Question 1:
>
> Do you believe correct behavior, in terms of the relative motions of
> particles, is possible to achieve in a simulation?
>
>
> [Colin]
>
> YES, BUT *Only if you simulate the entire universe*. Meaning you already
> know everything, so why bother?
>

Interesting idea.  But do you really think the happenings of some asteroid
floating in interstellar space in the Andromeda galaxy make any difference
to your intelligence?  Could we get away with only simulating the light cone
for a given mind instead of the whole universe?


>
> So NO, in the real practical world of computing an agency X that is
> ignorant of NOT_X.
>
> For a computed cognitive agent X, this will come down to how much the
> natural processes of NOT_X (the external world) involve themselves in the
> natural processes of X.
>
> I think there is a nonlocal direct impact of NOT_X on the EM fields inside
> X. The EM fields are INPUT, not OUTPUT.
>
> But this will only be settled experimentally. I aim to do that.
>

I think I have a faint idea of what you are saying, but it is not fully
clear.  Are you hypothesizing there are non-local effects between every
particle in the universe which are necessary to explain the EM fields, and
these EM fields are necessary for intelligent behavior?


>
> For example, take the Millennium Run.  The simulation did
> not produce dark matter, but the representation of dark matter behaved like
> dark matter did in the universe (in terms of relative motion).  If we can
> accurately simulate the motions of particles, to predict where they will be
> after time T given where they are now, then we can peek into the simulation
> to see what is going on.
>
> Please answer if you agree the above is possible.  If you do not, then I do
> not see how your viewpoint is consistent with the fact that we can build
> simulations like the Millennium Run, or test aircraft designs before building
> them, etc.
>
> Question 2:
>
> Given the above (that we can predict the motions of particles in relation
> to each other), we can extract data from the simulation to see how
> things are going inside.  Much like we had to convert a large array of
> floating point values representing particle positions in the Millennium
> simulation in order to render a video of a fly-through.  If the only
> information we can extract is the predicted particle locations, then even
> though the simulation does not create EM fields or fire in this universe, we
> can at least determine how the different particles will be arranged after
> running the simulation.
>
> Therefore, if we simulated a brain answering a question in a standardized
> test, we could peer into the simulation to determine in which bubble the
> graphite particles are concentrated (from the simulated pencil, controlled
> by the simulated brain in the model of particle interactions within an
> entire classroom).  We would then have a model which tells us what an
> intelligent person would do, based purely on the positions of particles in a
> simulation.
>
> What is wrong with the above reasoning?  It seems to me that if we have a model
> that can be used to determine what an intelligence would do, then the model
> could stand in for the intelligence in question.
>
>
> [Colin]
>
> I think I already answered this. You can simulate a human if you already
> know everything,
>

We would need to know everything to be certain it is an accurate simulation,
but we don't need to know everything to attempt to build a model based on
our current knowledge, and then see whether or not it works.  If the design
fails, then we are missing something; if it does work like a human mind
does, then it would appear we got the important details right.
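
To make the "peek into the simulation" point concrete, here is a minimal sketch in Python (a toy two-body system with made-up masses and step sizes, nothing taken from the actual Millennium Run code): we step the relative motions forward, and "extracting" the result is nothing more than reading the predicted positions back out of the state array.

import numpy as np

# Toy two-body gravitational simulation (arbitrary units, hypothetical values).
# The only point: once the relative motions are computed, the "result" is just
# the state array, which we can inspect ("peek into") at any step.
G = 1.0
dt = 0.001
steps = 10_000

mass = np.array([1.0, 0.001])                 # hypothetical masses
pos = np.array([[0.0, 0.0], [1.0, 0.0]])      # initial positions
vel = np.array([[0.0, 0.0], [0.0, 1.0]])      # initial velocities

def accelerations(p):
    """Newtonian gravity between the two bodies."""
    r = p[1] - p[0]
    d3 = np.linalg.norm(r) ** 3
    return np.array([G * mass[1] * r / d3,
                     -G * mass[0] * r / d3])

for _ in range(steps):
    # Symplectic-Euler update: advance velocities, then positions, to predict
    # where the particles will be after the next time step.
    vel = vel + accelerations(pos) * dt
    pos = pos + vel * dt

# "Peeking into the simulation" is just reading the predicted particle
# positions out of the state array.
print("Predicted positions after time T:", pos)

The Millennium Run does the same thing at a vastly larger scale; rendering the fly-through video was just a conversion of those stored coordinates.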


> just like you can simulate flight if you simulate the environment you are
> flying in.
>

But do we need to simulate the entire atmosphere in order to simulate
flight, or just the atmosphere in the immediate area around the surfaces of
the plane?  Likewise, it seems we could take shortcuts in simulating the
environment surrounding a mind and get the behavior we are after.


> In the equivalent case applied to human cognition, you have to simulate the
> entire universe in order for the simulation to be accurate. But we are trying
> to create an artificial cognition that can be used to find out about the
> universe outside the artificial cognition ... like humans, you don’t know
> what’s outside... so you can’t do the simulation.
>

Why couldn't we simulate a space station, with a couple of intelligent
> agents on it, and place that space station in a finite volume of vacuum in
> which, once particles pass a certain boundary, we stop simulating them?  They
would see no stars, but I don't know why seeing stars would be necessary for
intelligence.
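
As a rough sketch of what "stop simulating them" could look like in practice (the cutoff radius, particle count, and free-drift dynamics below are all hypothetical, chosen only to show the bookkeeping), particles that pass the boundary are simply dropped from the state and never updated again:

import numpy as np

rng = np.random.default_rng(0)

CUTOFF_RADIUS = 100.0    # hypothetical edge of the finite simulated volume
dt = 0.1
steps = 1_000

# Hypothetical cloud of free particles around the "space station" at the origin.
pos = rng.normal(scale=5.0, size=(500, 3))
vel = rng.normal(scale=1.0, size=(500, 3))

for _ in range(steps):
    pos = pos + vel * dt                 # free drift; forces omitted for brevity
    inside = np.linalg.norm(pos, axis=1) < CUTOFF_RADIUS
    # Particles beyond the cutoff are removed from the state and are never
    # updated again (the "stop simulating them" rule).
    pos, vel = pos[inside], vel[inside]

print(len(pos), "particles still being simulated inside the volume")

Whether such a truncated boundary changes anything that matters for the agents inside is exactly the question at issue, but the bookkeeping itself is straightforward.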


> The reasoning fails at this point, IMO.
>

The idea that something outside this universe is necessary to explain the
goings-on in this universe is like the idea that an invisible, undetectable
(from inside this universe) soul exists and is necessary to explain why some
things are conscious while others are not.

If something is truly outside the universe, then it can't make a difference
within this universe.  Are you suggesting that the intervention of forces
outside this universe determines whether or not a process can be intelligent?


>
> The above issue about the X/NOT_X interrelationship stands, however.
>
>
> The solution is: there is/can be no simulation in an artificial cognition.
> It has to use the same processes a brain uses: literally. This is the
> replication approach.
>
>

If we replicate the laws of physics in a simulation, then a brain in that
simulation is a replication of a real physical brain, is it not?


>
> Is it really such a big deal that you can’t get AGI with computation?
>

It would be a very surprising theoretical result.


> Who cares? The main thing is *we can do it using replication*.
>


What is the difference between simulation and replication?  Perhaps all our
disagreement stems from this difference in definitions.


> We are in precisely the same position the Wright Bros were in when making
> artificial flight.
>
> This situation is kind of weird. Insisting that simulation/computation is
> the only way to solve a problem is like saying ‘*all buildings must be
> constructed out of paintings of bricks, and only people doing it this way
> will ever build a building*.’ For 60 years every building made like this
> has fallen down.
>

It's not that all brains are computers, it's that the evolution of all finite
processes can be determined by a computer.  There is a subtle difference
between saying the brain is a computer, and saying a computer can determine
what a brain would do.

I think your analogy is a little off.  It is not that proponents of strong
AI suggest that houses need to be made of paintings of bricks; it is that
the anti-strong-AI side suggests that there are some bricks whose image
cannot be depicted by a painting.

A process that cannot be predicted by a computer is like a sound that cannot
be replicated by a microphone, or an image that can't be captured by a
painting or photograph.  It would be very surprising for such a thing to
exist.


>
> Meanwhile I want to build a building out of bricks, and I have to justify
> my position?
>
>

You can build your buildings out of bricks, but don't tell the artists that
it is impossible for some bricks to be painted (or that they have to paint
every brick in the universe for their painting to look right!), unless
you have some reason or evidence why that would be so.



>
> Very odd.
>
> Colin
>
>
> I literally just found out my PhD examination passed! Woohoo!
>
> So that’s .....
>
> Very odd.
>
> Dr. Colin
>
> :-)
>
>
Congratulations! :-)

Jason
