Alan,

Thanks for the references and comments. I read some of the Wikipedia article
and some of the research article about the cat experiment. The idea I had in
mind is a little different. You see, in Neuroscience, the most critical
point about my theory that needs to be proved to neuroscientists is that
neurons do EI. Or, in other words, that the brain binds information as it
processes it. It is not enough to tell them that neural cliques prove the
presence of EI in the brain. An observational proof is needed: an experiment
involving very few neurons that receive clearly unbound information and
output the same information, now clearly bound. I thought a small piece
of a retina could help to do that. Another possible way would be with a
brain-on-a-dish experiment. 

The cat experiment, if I understand it correctly (somewhat unlikely, because
the terminology alone makes me dizzy), does not help. They have their sensing
electrodes at a point far removed from the source information, and a large
number of neuronal circuits are involved. But that's not the problem. The
problem is, in my limited understanding, that even after all that long path
that the information has followed, it still remains unbound, or at the very
least not fully bound (they say the images are fuzzy). Here's why. There is
a difference between 100M dots of light, and "hi mom." Dots of light do not
make hi mom. What they capture from the cat's brain are dots of light, not
an image. Then they print those dots of light in their paper, I
look at them, my 100M dots of light are formed, and MY BRAIN, not the cat's,
makes the final hi mom (or hi whoever is in the picture). The experiment
only proves that the 100M dots are transmitted, relatively unbound, all the
way from the retina to the electrodes. 

The feature I want to reveal is how compression works in the retina. I know
there is compression in the retina because the optic nerve is too narrow
to carry all the information captured by the eye to the brain. And I also
know that EI compresses information (all the way down to Kolmogorov
complexity, I believe but have not yet proved). Fig. 3 of my Complexity
paper shows a hierarchy with 5 levels. Each level contains exactly the same
information, but the information is progressively more and more compressed
as you move to higher and higher levels. The compression factor from level
to level is in the range 1.43 to 2.0 in this figure. The example in Fig.
1(b) of my AGI-11 paper has 7 levels with compression in the range 1.25 to
2.0. I would be a very stupid scientist if I didn't compare the two things I
know - that the retina compresses information and that EI compresses
information. It is only legitimate to ask if they have something to do with
each other. For example, it would help to know what the compression
factor of the retina is, and whether it falls in the range of just one level or
would require several levels to explain (is it not true that the retina has
4 layers of neurons? perhaps one level per layer?). 
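To make the comparison concrete, here is a back-of-the-envelope calculation in Python. The receptor and fiber counts are the usual textbook order-of-magnitude figures (roughly 126 million photoreceptors feeding roughly 1 million optic-nerve fibers), an assumption of mine rather than a measurement from either paper; the per-level factors are the ones quoted above.

```python
import math

# Rough sanity check: if the retina compresses like an EI hierarchy,
# how many levels would its overall compression ratio correspond to?
# These counts are textbook order-of-magnitude figures (an assumption,
# not a measurement).
photoreceptors = 126_000_000
optic_nerve_fibers = 1_000_000
overall = photoreceptors / optic_nerve_fibers  # ~126x overall compression

# Per-level compression factors taken from the two figures cited above.
for per_level in (1.25, 1.43, 2.0):
    levels = math.log(overall) / math.log(per_level)
    print(f"per-level factor {per_level}: ~{levels:.1f} levels")
```

With these numbers, a per-level factor of 2.0 would require about 7 levels and a factor of 1.43 about 13.5, so four neuronal layers alone would only account for the whole ratio at compression factors well above 2 per layer.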

You see, neuroscientists don't know about EI. They don't know it exists.
They are trying to explain binding (the 100M to hi mom conversion) as a
feature of a very complicated neural implementation that does not involve
EI. I know, without a doubt in my mind, they will fail. And I feel
responsible for telling them about EI. 

The situation is a little different with your own calculations. You
mentioned you had been studying compression in the retina. This is exactly
what I want to look into. Can you dig that up, or perhaps just remember what
it was that you were trying to do? I'd like to review that material offline,
if you don't mind. Also, it would be very useful to find experiments
where the actual output from the retina itself, without involving any other
neurons, was directly measured with electrodes. We could then compare that
output with the input and determine whether it is bound or not.


ALAN> At higher levels of cognition, those kinds of chains are far from
apparent, furthermore there is a relatively massive amount of endogenous
activity in the brain.
SERGIO> This is consistent with what I would have predicted. It's nothing
but the n! effect. When the causet becomes very large, it takes the
brain a very long time to bind it and draw final structures. You see this in
all thinkers. They sometimes take information in and then think for months,
even years, about their ideas, in an endogenous manner, while taking very little
if any additional information in. Compare with modern teenagers, who text
and play and watch TV and talk all the time but think very little. They
prefer to deal with small causets. 
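As a purely illustrative sketch (my own reading of the "n! effect," not a quote from the papers): if binding a causet of n events means, in the worst case, discriminating among the n! possible orderings of those events, the cost explodes quickly even for modest causets.

```python
import math

# The number of possible orderings of a causet with n events grows as n!,
# so the work needed to bind a large causet explodes combinatorially.
for n in (5, 10, 20):
    print(f"n = {n:2d}: n! = {math.factorial(n)}")
```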
EI is a map, only I didn't make it. It's a natural map. 

Sergio


-----Original Message-----
From: Alan Grimes [mailto:[email protected]] 
Sent: Saturday, June 16, 2012 3:40 PM
To: AGI
Subject: Re: [agi] Issues

Sergio Pissanetzky wrote:
> Alan,
> Regarding embodiment and grounding. You may have heard of the blind 
> mountain climber who can "see" with the help of a camera connected to 
> a matrix of electrodes pressed against his tongue. The camera is 
> located on the side of his head, and moves with the head. This 
> climber's brain has learned to recognize images captured by the camera 
> at that location, and he can climb without help from others. However, 
> the brain has not been "notified" that the images come from a sensor 
> different from his eyes, and located at a position different than his 
> eyes. The brain itself was able to figure that out. It follows that, 
> if you want an AGI that is grounded and embodied, you do not need to 
> hard-code into it the geometrical position of the sensors. EI will 
> figure that out by itself. In fact, you cannot engineer an AGI in any 
> way. I realize how hard to accept this must be for a person who is 
> dedicated to engineering, and that's why I say that AGI requires the 
> ultimate sacrifice: the sacrifice of yourself. You must just let go, in
> the same way you let go of your grown-up child.

I'm sorry, you missed my point. I wasn't insisting that visual information
must always reach V1 of the occipital lobe, rather that spatial information
not be discarded in the process. The re-mapping of spatial signals to the
densely innervated tongue is not very remarkable because the signals sent to
the tongue are mapped to the homunculus in the post-central gyrus, which
again is a spatially mapped representation of your somatic nerves (except
pain). So even though you are sending the information to a different part of
the brain, one that wouldn't seem suited to processing visual information,
you are still preserving most of the spatial information and, from that,
using the common algorithms all areas of the neocortex use, you obtain
usable computer-mediated vision.

I'm not disputing that part of that algorithm DOES resemble EI, only that EI
must be extended and augmented in order to account for observed fact.

> Perhaps inadvertently, you have suggested a very interesting 
> experiment. It can be done with a piece of retina from some animal. 
> Shine some light on it and measure the output. The light can be 
> controlled very precisely, but measuring the output may not be easy. 
> Then, apply EI to the input, calculate the output, and compare with 
> whatever measurements are available. EI should design the retina. There 
> is no need for great computer power. One can shine just one or a few 
> dots at a time. I'm sure people must have tried this, I mean measuring 
> the output, so I would recommend starting with a literature search.

Yeah, those experiments are well documented and have been carried out over
the span of decades. The understanding of the retina that I find most
plausible is that it accomplishes basic pattern enhancement and compression
operations. Here's Wikipedia's coverage of the subject.
http://en.wikipedia.org/wiki/Receptive_field

I did some experimentation with this back in 2004 and I found that if I
simulated neural convergence by repeatedly averaging 2x2 blocks of pixels, I
could reconstruct an edge-enhanced version of the original image by
re-compositing the scaled images. There's probably a more elegant way of
doing the same thing; furthermore, I believe this to be an enhancement
operation and not really crucial for successful machine vision.
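For what it's worth, the procedure described above can be sketched in a few lines of pure Python. This is my reconstruction of the idea (2x2 averaging to simulate neural convergence, then subtracting the blurred result from the original), not the actual 2004 code; the function names and the re-compositing rule are my own guesses.

```python
def downsample_2x2(img):
    """Average each 2x2 block (simulated neural convergence).
    Image dimensions must be even."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x+1] + img[y+1][x] + img[y+1][x+1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def upsample_2x(img):
    """Nearest-neighbour upsampling back to double size."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def edge_enhanced(img):
    """Original minus its blurred (down-then-up-sampled) version:
    flat regions go to ~0 while edges survive."""
    coarse = upsample_2x(downsample_2x2(img))
    return [[img[y][x] - coarse[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]

# A 4x4 image with a vertical edge inside the left 2x2 blocks.
test_img = [[0, 10, 10, 10]] * 4
detail = edge_enhanced(test_img)
print(detail[0])  # -> [-5.0, 5.0, 0.0, 0.0]: flat areas vanish, the edge survives
```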

Recent laboratory results include:
http://berkeley.edu/news/media/releases/99legacy/10-15-1999.html, which
proves that spatial information is preserved and is important. Now if you
can show me how to encode this in posets, my interest in EI will be renewed.

> Causality is of the essence. Causality does not exclude learning, of
> course.
> The ability to learn is fundamental in artificial intelligence. 
> Learning takes place when an outside event is captured by sensors, (or 
> by sensory organs, in the case of the brain). The sensor receives a 
> signal, such as a beam of light, and originates some internal 
> response, for example an action potential in a neuron. This 
> constitutes a cause-effect relationship: the beam of light causes the 
> action potential at the location where the sensor is. The system sees 
> this as a *spontaneous event*, because it could not have been 
> predicted. There are other spontaneous events, such as the random 
> firing of a neuron by itself, or anything else that is random. I am 
> saying this because you may have concluded from my writings that 
> causality excludes spontaneous events. But not only have I considered 
> learning a spontaneous event, I have even published details years ago.
> That case is learning with a teacher, but with EI, teacher or no teacher
> makes no difference.

Any time you over-use a word (take the word "capability" in the context of
computer security, or Bean in the context of enterprise Java programming), it
becomes meaningless. The critical question here is whether focusing on the
"causal nature" of things adds any useful or important information to the
mix. In the first article linked above, the brain doesn't care about flat
areas because retinal cells are unreliable and because a region of some
given color doesn't communicate any information. It is the edges that give
an object shape and form, which is why our nervous systems are tuned to
detect them. Regardless of the literal truth of causality, one must ask
whether it is such a primary constituent of cognition or whether it can be
discarded in favor of an even more powerful theory.

> Any spontaneous event starts a causal chain of events. That's why you 
> write programs for computer simulations like this:
> IF (event) THEN ...
> ELSE  .....
> and after the THEN, and also after the ELSE, you write a sequence of 
> statements with no logical interruptions. Those are the causal chains, 
> and they are different.

At higher levels of cognition, those kinds of chains are far from apparent,
furthermore there is a relatively massive amount of endogenous activity in
the brain. My challenge to you is to answer whether you've finally found the
underlying mechanism or whether you've made yet another map. The notion of
causality, as just stated, does seem to have some descriptive power, but
that doesn't mean it can actually stand in for what it describes.


--
E T F
N H E
D E D

Powers are not rights.

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/18883996-f0d58d57
Modify Your Subscription:
https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com




