Stathis (....Down below...)

Stathis Papaioannou wrote:
On Thu, Feb 3, 2011 at 9:35 AM, Colin Hales
<> wrote:

This means we are hooked into the external world in ways that are not
present in the peripheral nerves. Looking at the signals (nerve pulses), it
is impossible to tell whether they are vision, smell, touch or anything else.
Those who think that a computer can add this extra bit of connectivity to
the external world believe in comp/COMP. When you replace the brain with a
model of a brain using a computer, that "extra" bit, the connection with the
outside world we get from our qualia...the qualia created by the brain
matter itself...is replaced by the qualia you get by 'being' the computer.

If you believe comp/COMP, then you believe that the computer's model -or -
the computer hardware itself -  somehow replaces the function of the qualia,
by analysing the sensory signalling, which is fundamentally degenerately
related to the external world. Only a human with qualia can, from sensory
signals, provide any sort of model for our 'computer-in-a-vat' that might
stand-in for an external world. Having done that, the world being explored
by our computer-in-a-vat is the world of the human model generated from the
sensory signals, not the world itself. When an encounter with the unknown
happens, the unknown will be characterized by a human model's response to
the unknown, not the (unknown) actual world. The extent to which these
things are different is the key.

Neuroscience is beginning to progress from NCC (Neural correlates of
consciousness) to EMCC (electromagnetic correlates of consciousness).
Researchers are slowly discovering that certain aspects of cognition and
behaviour correlate better with the LFP (local field potential/extracellular
field) than mere action potentials.

If the EM fields are the difference, then in replacing the fields of the
brain with the fields of a computer running a model, you replace the
fields...and your qualia/cognition go with them.

So when you think of the 'input/output' relations for a computer, the
sensory signalling is only part of it. There is another complete set of
'input' relations, qualia, that together with the sensory signals, form our
real connection to the outside world. So the old black-box replacement idea
is right - but only if the black box has a whole other set of 'input'
signals, from the qualia. The only way you can computationally replace these
signals is to already know everything about the external world. Your
alternative? Keep the qualia in your 'black box'. To me that means
generating the fields as well.

Don't get me wrong. Lots of really nifty AI can result from the
'computer-in-a-vat'. However, that's not what I am aiming at. I want AGI. G
for General.

Can the behaviour of the neurons including the electric fields be
simulated? For example, is it possible to model what will happen in
the brain (and what output will ultimately go to the muscles via
peripheral nerves) if a particular sequence of photons hits the
retina? If that is a theoretical impossibility then where exactly is
the non-computable physics, and what evidence do you have that it is

Lots of aspects to your questions.... and I'll try and answer Bruno at the same time.

1) I am in the process of upgrading neural modelling to include the fields, in the traditional sense of simulating the fields. The way to think of it is that the little capacitor in the Hodgkin-Huxley equivalent circuit is about to get a whole new role.

2) Having done that, one can do simulations of single units, multiple units, populations, etc. You may be able to extract something verifiable in the wet-lab.

3) However, I would hold that no matter how comprehensive the models, no matter how many neurons ... even the whole brain and the peripheral nerves...they will NOT behave like the real thing in the sense that such a brain model cannot ever 'be' a mind. The reason is that we 'BE' the fields. We do not 'BE' a description of the fields. The information delivered by 'BE'ing the field acts in addition to that described by the 3rd-person-validated system of classical partial differential equations that are Maxwell's equations.

4) A given set of photons can result from an infinity of different configurations of the distal world. A single red photon can come across the room from your Xmas decorations or across the galaxy from a supernova. It is a fundamentally degenerate relationship. Yet the brain inherits enough information to converge on a visual scene that captures the difference. HOW? I think I know, but that explanation is too long and doesn't matter here. The fact is that the EM fields deliver _extra_ information inherited from their relationship with space itself. They have to. There's no place else for it to come from!

5) Regardless of my wacky ideas about space, I'd like to reinforce the implications of the particular case of the scientist, who is trying to find out about the distal natural world from a position of fundamental ignorance. If you claim that we have enough information to overcome the degeneracy, then you already have what the scientist wants...knowledge of the unknown external distal world...and you are not actually doing science. You already know. This is the killer logical position. If you say a computer can do it, you are saying, in effect, that science does nothing/already knows everything.
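To make point (1) concrete, here is a minimal sketch of the membrane-capacitor equation in the classic single-compartment Hodgkin-Huxley model, the circuit whose capacitor the author says is "about to get a whole new role". This uses the standard squid-axon parameters and a simple forward-Euler integrator; it does NOT include the proposed field extension, and all parameter values are the textbook defaults, not the author's.

```python
# Minimal single-compartment Hodgkin-Huxley simulation (forward Euler).
# Standard squid-axon parameters; the proposed "field" extension from the
# text is NOT implemented -- this only shows the membrane-capacitor term
# (C * dV/dt) in the classical equivalent circuit.
import math

C = 1.0                                 # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3       # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4     # reversal potentials, mV

# Gate rate functions (rates in 1/ms, V in mV, -65 mV resting convention)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate(I_inj=10.0, t_max=50.0, dt=0.01):
    """Voltage trace (mV) for a constant injected current (uA/cm^2)."""
    V = -65.0
    # start the gates at their steady-state values for the resting potential
    m = alpha_m(V) / (alpha_m(V) + beta_m(V))
    h = alpha_h(V) / (alpha_h(V) + beta_h(V))
    n = alpha_n(V) / (alpha_n(V) + beta_n(V))
    trace = []
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        # the capacitor equation: C dV/dt = I_inj - ionic currents
        V += dt * (I_inj - I_Na - I_K - I_L) / C
        m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
        trace.append(V)
    return trace

trace = simulate()
print("peak membrane potential (mV):", max(trace))
```

A sustained 10 uA/cm^2 current drives repetitive spiking, so the peak of the trace rises well above 0 mV. Whatever extra role the extracellular field plays would have to enter through (or alongside) that capacitive term.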

In the end, then, I am not saying that there is something uncomputable in the sense that it is impossible to 'simulate' it. You can simulate anything! What I am saying is that if you could, _you wouldn't bother_, because you'd already know everything. To accurately simulate a scientist you have to simulate (a) the scientist and (b) the entire environment of the scientist; when the scientist is trying to uncover the unknown, you can't simulate it because you don't know it. I use the scientist as a model for generally intelligent behaviour.

Finally, for Bruno .... abstract/ideal numbers, the universe and everything....

To me, the universe is a massively parallel theorem prover. Literally. I am literally a 'truth' in a system of 10^(a lot) interacting equations. The symbols are not 'about' the universe. They ARE the universe. These systems of formal statements are not made of abstract/ideal numbers. They are made of natural entities of some kind. Something. I can guess, but it doesn't matter what. The point is they are not abstract numbers. At least, presupposing ideal numbers is not justified.

So: my 'presupposition' is that the universe is made of _something_. That _something_ is NOT numbers in the sense we use in computer science to make abstract descriptions of the world. You presuppose they are abstract ideal numbers. My way of thinking is more logically justified from the point of view of understanding how we might describe such a system. You might be able to say that something is going on in a system of ideal abstract numbers that might 'be'/stand in for, say, a 1st person perspective. So what? It says nothing about the real world of _something_.

But then the UDA logic goes further. It says that if you get sufficient computation...meaning a computer MADE OF WHATEVER OUR UNIVERSE IS MADE OF (the above _something_)...and run a bunch of symbols that represent abstract ideal numbers, then the entity in the computer will have the 1st person perspective evident in the descriptions.

This is the bit that cannot be justified. None of the numbers in the program are actually reified. You can point at the parts of the running program that 'represent' a first person perspective. But there is no actual _something_ implemented in the form of the program. There are merely symbols flying about according to the rules for a 1st person perspective.

This means you will be 100% right, Bruno. In the UDA, you can point at the machinations of a model of 1st person perspective. We can all agree with you. What you have not delivered is an actual first person perspective, from the point of view of 'being' the UDA/Turing megalith.

What we know, empirically, is that a 1st person perspective exists in us. Science would be impossible without it. Science proves it exists merely by existing. There's no 'law' that has to be produced by scientists. We ARE the evidence.

Scientists produce models. In relation to these models:

A) I cannot and will not ever confuse a model of a thing with the thing itself. There is no justification for claiming that the universe is somehow literally made of our descriptions!

B) I cannot logically claim that descriptions of how a universe appears and descriptions of systems of 'interacting _something_' are the same descriptions.

C) I cannot ever logically claim that either set of descriptions in (B), implemented on a computer as a collection of _something_, literally IS the universe it purports to describe. (That a simulated thing is the thing.)

Horrible and convoluted, but it is a self-consistent position that facilitates scientific decisions leading to testable outcomes.

The massive blind spot that science currently has is (B). If you get stuck in the (B) confusion, you start making the mistake that (C) avoids.


You received this message because you are subscribed to the Google Groups 
"Everything List" group.