On Feb 4, 1:05 am, Colin Hales <c.ha...@pgrad.unimelb.edu.au> wrote:
> Stathis (....Down below...)
> Stathis Papaioannou wrote:
> > On Thu, Feb 3, 2011 at 9:35 AM, Colin Hales
> > <c.ha...@pgrad.unimelb.edu.au> wrote:
> >> This means we are hooked into the external world in ways that are not
> >> present in the peripheral nerves. Looking at the (nerve pulse) signals,
> >> it is impossible to tell if they are vision, smell, touch or anything
> >> else. Those who think that a computer can add this extra bit of
> >> connectivity to the external world believe in comp/COMP. When you
> >> replace the brain with a model of a brain using a computer, that
> >> "extra" bit - the connection with the outside world we get from our
> >> qualia, the qualia created by the brain matter itself - is replaced by
> >> the qualia you get by 'being' the computer. If you believe comp/COMP,
> >> then you believe that the computer's model - or the computer hardware
> >> itself - somehow replaces the function of the qualia by analysing the
> >> sensory signalling, which is fundamentally degenerately related to the
> >> external world. Only a human with qualia can, from sensory signals,
> >> provide any sort of model for our 'computer-in-a-vat' that might stand
> >> in for an external world. Having done that, the world being explored by
> >> our computer-in-a-vat is the world of the human model generated from
> >> the sensory signals, not the world itself. When an encounter with the
> >> unknown happens, the unknown will be characterized by a human model's
> >> response to the unknown, not by the (unknown) actual world. The extent
> >> to which these things are different is the key.
> >> Neuroscience is beginning to progress from NCC (neural correlates of
> >> consciousness) to EMCC (electromagnetic correlates of consciousness).
> >> Researchers are slowly discovering that certain aspects of cognition
> >> and behaviour correlate better with the LFP (local field potential /
> >> extracellular field) than with mere action potentials.
> >> If the EM fields are the difference, then when you replace the fields
> >> of the brain with the fields of a computer running a model, your
> >> qualia/cognition go with them.
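
On the NCC-to-EMCC point above: "correlates better with the LFP than
with spikes" is, operationally, just a comparison of two correlations.
A minimal Python sketch of that comparison, with synthetic per-trial
numbers standing in for real recordings (the variable names and effect
sizes are invented for illustration):

import numpy as np

rng = np.random.default_rng(1)
n_trials = 500
lfp_power = rng.normal(0.0, 1.0, n_trials)                 # per-trial LFP band power
spike_count = 0.3*lfp_power + rng.normal(0, 1, n_trials)   # spikes loosely track the LFP
behaviour = 0.8*lfp_power + rng.normal(0, 1, n_trials)     # behaviour tied to the LFP here

r_lfp = np.corrcoef(lfp_power, behaviour)[0, 1]
r_spk = np.corrcoef(spike_count, behaviour)[0, 1]
print("behaviour vs LFP power:   r = %.2f" % r_lfp)
print("behaviour vs spike count: r = %.2f" % r_spk)

In this toy setup the LFP wins by construction; whether real recordings
look like that is exactly the empirical question.
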
> >> So when you think of the 'input/output' relations for a computer, the
> >> sensory signalling is only part of it. There is another complete set of
> >> 'input' relations - qualia - that, together with the sensory signals,
> >> forms our real connection to the outside world. So the old black-box
> >> replacement idea is right, but only if the black box has a whole other
> >> set of 'input' signals, from the qualia. The only way you can
> >> computationally replace these signals is to already know everything
> >> about the external world. Your alternative? Keep the qualia in your
> >> 'black box'. To me that means generating the fields as well.
> >> Don't get me wrong. Lots of really nifty AI can result from the
> >> 'computer-in-a-vat'. However, that's not what I am aiming at. I want AGI. G
> >> for General.
> > Can the behaviour of the neurons including the electric fields be
> > simulated? For example, is it possible to model what will happen in
> > the brain (and what output will ultimately go to the muscles via
> > peripheral nerves) if a particular sequence of photons hits the
> > retina? If that is a theoretical impossibility then where exactly is
> > the non-computable physics, and what evidence do you have that it is
> > non-computable?
> Lots of aspects to your questions.... and I'll try and answer Bruno at
> the same time.
> 1) I am in the process of upgrading neural modelling to include the
> fields, in the traditional sense of simulating the fields. The way to
> think of it is that the little capacitor in the Hodgkin-Huxley
> equivalent circuit is about to get a whole new role.
> 2) Having done that, one can do simulations of single units, multiple
> units, populations, etc. You may be able to extract something
> verifiable in the wet-lab.
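
For concreteness, here is the role the "little capacitor" already plays
in a standard single-compartment Hodgkin-Huxley model (classic 1952
squid-axon parameters), written as a minimal Python sketch. The
E_field(t) hook is purely hypothetical - a placeholder for whatever
extra field coupling the upgraded model would add - and returns zero
here, which recovers the classic model:

import numpy as np

C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3      # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387            # mV

def a_m(V): return 0.1*(V+40)/(1-np.exp(-(V+40)/10))
def b_m(V): return 4.0*np.exp(-(V+65)/18)
def a_h(V): return 0.07*np.exp(-(V+65)/20)
def b_h(V): return 1.0/(1+np.exp(-(V+35)/10))
def a_n(V): return 0.01*(V+55)/(1-np.exp(-(V+55)/10))
def b_n(V): return 0.125*np.exp(-(V+65)/80)

def E_field(t):
    # hypothetical extra field-coupling current (uA/cm^2); zero = classic HH
    return 0.0

dt, T = 0.01, 50.0                               # ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32
for step in range(int(T/dt)):
    t = step*dt
    I_ext = 10.0 if 5.0 <= t <= 45.0 else 0.0    # injected current, uA/cm^2
    I_ion = g_Na*m**3*h*(V-E_Na) + g_K*n**4*(V-E_K) + g_L*(V-E_L)
    V += dt * (I_ext + E_field(t) - I_ion) / C_m  # capacitor equation: C_m dV/dt = I
    m += dt * (a_m(V)*(1-m) - b_m(V)*m)
    h += dt * (a_h(V)*(1-h) - b_h(V)*h)
    n += dt * (a_n(V)*(1-n) - b_n(V)*n)
print("final membrane potential: %.1f mV" % V)

Scaling this up to multiple units and populations is mostly a matter of
adding compartments and coupling terms; the substantive question in the
proposal is what E_field(t) would actually be.
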
> 3) However, I would hold that no matter how comprehensive the models, no
> matter how many neurons ... even the whole brain and the peripheral
> nerves...they will NOT behave like the real thing in the sense that such
> a brain model cannot ever 'be' a mind. The reason is that we 'BE' the
> fields. We do not 'BE' a description of the fields. The information
> delivered by 'BE'ing the field acts in addition to that described by the
> 3rd-person-validated system of classical partial differential equations
> that are Maxwell's equations.
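
For reference, the 3rd-person "description of the fields" being
contrasted with 'being' them is just the classical Maxwell system,
which in LaTeX notation (microscopic/vacuum form) reads:

\nabla \cdot \mathbf{E} = \rho / \varepsilon_0
\nabla \cdot \mathbf{B} = 0
\nabla \times \mathbf{E} = -\partial \mathbf{B} / \partial t
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \, \partial \mathbf{E} / \partial t

The claim above is that solving these equations numerically yields a
description of the field configuration, not the field itself; the whole
argument rests on that distinction.
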
> 4) A given set of photons can result from an infinity of different
> configurations of the distal world. A single red photon can come across
> the room from your xmas decorations or across the galaxy from a
> supernova. It is a fundamentally degenerate relationship. Yet the brain
> inherits enough information to converge on a visual scene that captures
> the difference. HOW?

a) It guesses. It isn't always right.
b) The eyes receive millions of photons. The brain doesn't build a scene
from them individually; it comes up with a hypothesis that explains them
jointly, a gestalt. That greatly constrains the problem space (although
not always uniquely, as the Necker cube shows).
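
Point (b) can be made concrete as inference over joint hypotheses. A
minimal Python sketch, in which the two scene hypotheses, the wavelength
numbers and the Gaussian likelihoods are all invented for illustration -
only the Bayes step is the point:

import numpy as np

rng = np.random.default_rng(0)
# hypothetical photon-wavelength distributions (nm) for two distal causes
scenes = {"xmas_light": (620.0, 30.0), "supernova": (650.0, 30.0)}

def log_likelihood(data, mean, sd):
    # joint log-likelihood of all readings under one scene hypothesis
    return np.sum(-0.5*((data - mean)/sd)**2 - np.log(sd*np.sqrt(2*np.pi)))

data = rng.normal(620.0, 30.0, size=200)     # photons actually from the lamp
log_post = {name: log_likelihood(data, m, s) for name, (m, s) in scenes.items()}
z = np.logaddexp(*log_post.values())         # flat prior: posterior ~ joint likelihood
for name, lp in log_post.items():
    print(name, "posterior = %.3f" % np.exp(lp - z))

Any single reading is compatible with either hypothesis; a couple of
hundred of them jointly are not, which is the sense in which the
gestalt constrains an otherwise degenerate mapping.
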

> I think I know, but that explanation is too long
> and doesn't matter. The fact is that the EM fields deliver _extra_
> information inherited from their relationship with space itself. It has
> to. There's no place else for it to come from!

That is, again, a magical solution to a non-existent problem.

> 5) Regardless of my wacky ideas about space, I'd like to reinforce the
> implications of the particular case of the scientist, who is trying to
> find out about the distal natural world from a position of fundamental
> ignorance. If you claim that we have enough information to overcome the
> degeneracy, then you already have what the scientist wants...knowledge
> of the unknown external distal world....so you are not actually doing
> science. You already know.

You are conflating the idea that we have enough
information to get started with the idea that we already know
everything.

> This is the killer logical position. If you
> say a computer can do it, you are saying, in effect, that science does
> nothing/already knows everything.
> ============
> In the end, then, I am not saying that there is something uncomputable
> in the sense that it is impossible to 'simulate' it. You can simulate
> anything! What I am saying is that if you could _you wouldn't bother_
> because you'd already know everything. To accurately simulate a
> scientist you have to simulate (a) the scientist and (b) the entire
> environment of the scientist, when the scientist is trying to uncover
> the unknown, and you can't simulate it because you don't know it. I use
> the scientist as a model for generally intelligent behaviour.

And that conflates not being able to simulate intelligence with
not being able to simulate an environment.
