Sergio Pissanetzky wrote:
> Alan, 
> Regarding embodiment and grounding. You may have heard of the blind mountain
> climber who can "see" with the help of a camera connected to a matrix of
> electrodes pressed against his tongue. The camera is located on the side of
> his head, and moves with the head. This climber's brain has learned to
> recognize images captured by the camera at that location, and he can climb
> without help from others. However, the brain has not been "notified" that
> the images come from a sensor different from his eyes, and located at a
> position different than his eyes. The brain itself was able to figure that
> out. It follows that, if you want an AGI that is grounded and embodied,
> you do not need to hard-code into it the geometrical position of the
> sensors. EI will figure that out by itself. In fact, you can not engineer an
> AGI in any way. I realize how hard to accept this must be for a person who
> is dedicated to engineering, and that's why I say that AGI requires the
> ultimate sacrifice: the sacrifice of yourself. You must just let go, in the
> same way you let go of your grown-up child. 

I'm sorry, you missed my point. I wasn't insisting that visual
information must always reach V1 of the occipital lobe, but rather that
spatial information not be discarded in the process. The re-mapping of
spatial signals to the densely innervated tongue is not very remarkable,
because the signals sent to the tongue are mapped to the homunculus in
the post-central gyrus, which again is a spatially mapped representation
of your somatic nerves (except pain). So even though you are sending the
information to a different part of the brain, one that wouldn't seem
suited to processing visual information, you are still preserving most
of the spatial information, and from that, using the common algorithms
shared by all areas of the neocortex, you obtain usable computer-mediated
vision.

I'm not disputing that part of that algorithm DOES resemble EI, only
that EI must be extended and augmented in order to account for the
observed facts.

> Perhaps inadvertently, you have suggested a very interesting experiment. It
> can be done with a piece of retina from some animal. Shine some light on it
> and measure the output. The light can be controlled very precisely, but
> measuring the output may not be easy. Then, apply EI to the input, calculate
> the output, and compare with whatever measurements are available. EI should
> design the retina. There is no need for great computer power. One can shine
> just one or a few dots at a time. I'm sure people must have tried this, I
> mean measuring the output, so I would recommend starting with a literature
> search. 

Yeah, those experiments are well documented and have been carried out
over the span of decades. The understanding of the retina that I find
most plausible is that it accomplishes basic pattern-enhancement and
compression operations. Here's Wikipedia's coverage of the subject:
http://en.wikipedia.org/wiki/Receptive_field

I did some experimentation with this back in 2004 and I found that if I
simulated neural convergence by repeatedly averaging 2x2 blocks of
pixels, I could reconstruct an edge-enhanced version of the original
image by re-compositing the scaled images. There's probably a more
elegant way of doing the same thing; furthermore, I believe this to be
an enhancement operation and not really crucial for successful machine
vision.
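For what it's worth, the operation is easy to sketch. This isn't my 2004
code; it's a minimal NumPy reconstruction of the idea (function names are
mine), assuming a square grayscale image whose side is a power of two. It
amounts to summing the band-pass differences between adjacent scales, a
Laplacian-pyramid-style recomposition:

```python
import numpy as np

def avg2x2(a):
    # Simulate neural convergence: each coarse pixel is the mean of a 2x2 block.
    return (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0

def edge_enhance(img, levels=3):
    """Repeatedly average 2x2 blocks, then re-composite the differences
    between adjacent scales.  Assumes a square image whose side is a
    power of two and at least 2**levels."""
    img = np.asarray(img, dtype=float)
    stack = [img]
    for _ in range(levels):
        stack.append(avg2x2(stack[-1]))
    out = np.zeros_like(img)
    for fine, coarse in zip(stack[:-1], stack[1:]):
        up = np.kron(coarse, np.ones((2, 2)))   # nearest-neighbour upsample
        diff = fine - up                        # band-pass detail at this scale
        scale = img.shape[0] // fine.shape[0]
        out += np.kron(diff, np.ones((scale, scale)))  # back to full resolution
    return out
```

On a flat image the result is identically zero; only intensity transitions
survive, which is what makes the recomposition look edge-enhanced.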

Recent laboratory results include:
http://berkeley.edu/news/media/releases/99legacy/10-15-1999.html which
demonstrates that spatial information is preserved and is important. Now
if you can show me how to encode this in posets, my interest in EI will
be renewed.
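To be concrete about what I'd want to see: here's a toy encoding (certainly
not EI's, and `grid_poset` is my name, not Sergio's) of a pixel grid as a
poset, using the product order on two chains. The point is only that a
partial order *can* carry 2D structure rather than discarding it the way
flattening to a vector does: the covering relations of this poset are
exactly the grid's unit steps.

```python
from itertools import product

def grid_poset(w, h):
    """Toy encoding of a w x h pixel grid as a partial order:
    (x1, y1) <= (x2, y2)  iff  x1 <= x2 and y1 <= y2
    (the product order on two chains)."""
    elems = list(product(range(w), range(h)))
    leq = {(a, b) for a in elems for b in elems
           if a[0] <= b[0] and a[1] <= b[1]}
    return elems, leq

def covers(elems, leq, a, b):
    """True when b covers a: a < b with no element strictly between.
    In this poset the covers are exactly the grid's unit steps, so
    neighbourhood information survives the encoding."""
    if a == b or (a, b) not in leq:
        return False
    return not any(z not in (a, b) and (a, z) in leq and (z, b) in leq
                   for z in elems)
```

Whether EI's causal sets can do something equivalent for actual retinal
input is exactly the open question.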

> Causality is of the essence. Causality does not exclude learning, of course.
> The ability to learn is fundamental in artificial intelligence. Learning
> takes place when an outside event is captured by sensors, (or by sensory
> organs, in the case of the brain). The sensor receives a signal, such as a
> beam of light, and originates some internal response, for example an action
> potential in a neuron. This constitutes a cause-effect relationship: the
> beam of light causes the action potential at the location where the sensor
> is. The system sees this as a *spontaneous event*, because it could not have
> been predicted. There are other spontaneous events, such as the random
> firing of a neuron by itself, or anything else that is random. I am saying
> this because you may have concluded from my writings that causality excludes
> spontaneous events. But not only have I considered learning as a spontaneous
> event, I even published the details years ago. That case is learning
> with a teacher, but with EI, teacher or no teacher makes no difference. 

Any time you over-use a word (take the word "capability" in the context
of computer security, or Bean in the context of enterprise Java
programming), it becomes meaningless. The critical question here is
whether focusing on the "causal nature" of things adds any useful or
important information to the mix. In the first article linked above, the
brain doesn't care about flat areas because retinal cells are unreliable
and because a region of uniform color communicates little information.
It is the edges that give an object shape and form, which is why our
nervous systems are tuned to detect them. Regardless of the literal
truth of causality, one must ask whether it is truly a primary
constituent of cognition or whether it can be discarded in favor of an
even more powerful theory.

> Any spontaneous event starts a causal chain of events. That's why you write
> programs for computer simulations like this:
> IF (event) THEN ...
> ELSE  .....
> and after the THEN, and also after the ELSE, you write a sequence of
> statements with no logical interruptions. Those are the causal chains, and
> they are 
> different.

At higher levels of cognition, those kinds of chains are far from
apparent; furthermore, there is a relatively massive amount of
endogenous activity in the brain. My challenge to you is to answer
whether you've finally found the underlying mechanism or whether you've
made yet another map. The notion of causality, as just stated, does seem
to have some descriptive power, but that doesn't mean it can actually
serve as the mechanism behind what it describes.


> Sergio

> -----Original Message-----
> From: Alan Grimes [mailto:[email protected]] 
> Sent: Wednesday, June 13, 2012 9:52 PM
> To: AGI
> Subject: [agi] Re: Issues
> 
> I've been talking with Sergio off-list for several weeks now. I am bringing
> this back on-list because the discussion of these topics are quite active. I
> will be replying to both the thread and to several related themes.
> 
> Sergio is a great mathematician, I can't dispute that. However, his ideas
> are showing increasing evidence of crackpotitis.
> 
> There's a place called mathland. It's a perfect world where every object is
> a perfect representative of a class of objects, all positions are discrete,
> and everything adds up to some kind of ideal. That's great; there are plenty
> of useful analogies to be made between the real world and mathland. However,
> the instant you try to take something out of math-land, you must conform it
> to the real world. That's called engineering. "Engineering" is not a bad
> word, when practiced correctly.
> It merely is the art of applying theory to real microprocessors and real
> environments. Therefore, engineering must be embraced.
> 
> Furthermore, you need to have a good understanding of what role your theory
> plays in the system you're building. I agree that there are only a few
> stacked algorithms in the cortex. Something resembling EI might indeed have
> a place in that stack, but it is not, alone, sufficient to do anything
> useful. I'm not saying that Goertzel is even close to right with his
> OpenCog framework; far from it. It's also not the case that Sergio's EI
> framework can stretch all the way from receptor cells to muscle fibers.
> (!!!)  That seems to be his actual position. I tried to see if I could place
> some wedges and shims in place to make room for the rest of the system that
> is obviously present in human neuro-anatomy, but that doesn't seem to be
> the case.
> 
> Sergio Pissanetzky wrote:
>> Alan,
> 
> [Causets as a representation ]
>> REPLY TO 2. 
>> When light impinges on a cone on the surface of the retina, it 
>> generates an electric pulse. That's causality, right there. Everything 
>> else that follows, the processing in the retina, transmission in the 
>> optical nerve (irrespective of its structure), processing in the 
>> brain, reaction to the stimulus ("hi, mom") is causal. The 
>> reconstruction algorithm is EI, and is not an algorithm. What you are 
>> really doing here, you are proposing an experiment. It is the same I 
>> did with my 167 points. How far along are you with the development of the
>> code that you'll need for the experiment?
> 
> I think I understand your code well enough to write a slow O(N!) algorithm
> based on numbering permutations. Such an algorithm is not worth either my
> time or my CPU's, for two basic reasons: 1. it is O(N!); 2. the theory
> doesn't seem to be applicable to anything without a way to reversibly
> encode basic sensory data.
> 
> In the machine learning class, one of the algorithms recognized numbers by
> converting the image to a vector of 400 elements. When it did this, all
> spatial information was lost (or rather it became inaccessible to the
> algorithm). So the algorithm basically learned to recognize features of the
> numbers based on where the number was usually drawn on the image.
> Our eyes move around several times a second (movements called saccades).
> Obviously, any successful visual system must work independently of the
> hardware used to sample the incoming light. Therefore, the spatial
> relationships of light and dark patterns are absolutely essential. If these
> can't be encoded, then the algorithm can't be salvaged.
> Furthermore, retinal cells basically report back a statistical average of
> the quantity of light that they've been exposed to (there's a bit more going
> on, but basically...). Saying a photon causes a neural spike (hey! look,
> it's causative!) is silly at best.
> 
> Furthermore, you don't seem to be taking into account learning and internal
> states. How do you obtain a mental image of your mom that you can recognize?
> Fixating on causation doesn't seem to help. Identifying structure, such as
> block systems in an otherwise noisy input stream, on the other hand, seems
> to be essential. (hence my interest.)
> 
> EI does look like it should be useful as a tool for analysis. However, a
> complete system must combine both synthesis and analysis to be successful.
> You don't seem to be leaving any room for synthesis, therefore your idea
> must be flawed.
> 
>> REPLY TO 3. 
>> Well, not quite exactly. The asymptotic complexity of an algorithm is 
>> a limit case. If I keep growing n to approach the limit, while keeping 
>> the brain constant, the problem will revert to n! complexity (or 
>> rather (n/m)!, where m = number of neurons).
> 
>> I can write several pages to explain in full what I said. In brief, 
>> there is a combination of attenuating circumstances. The first, is the 
>> fact that the functional is local, it is the positive sum of positive 
>> numbers, each of which depends on the connections of a single neuron. 
>> Then, each neuron can, in principle, minimize its own contribution 
>> independently from the others, and they all work at the same time. 
>> Remember, n! is a worst-case scenario, the actual number of legal
> permutations is small.
> 
> That sounds kinda nice. If that's workable, that's the kind of approach I
> would want to try first. I'm not sure how each of those neurons could
> efficiently get a list of permutations to try and the set of distances
> needed for the computation.
> 
>> The second, is that the neurons are not *completely* independent. 
>> There is a certain amount of coupling among them. So, as  neurons 
>> adjust their positions, they affect each other, and may have to 
>> readjust. This iteration converges very rapidly in "most" cases, when 
>> the interactions are "first neighbour" only,  as we know from 
>> observing the brain, and you can recognize a retinal image of your mother
> in ~0.5 sec as Hofstadter says.
> 
> Yes, this coupling means that the spatial relationships of the signals
> reaching the cortex are essential, even though you seem to want to dismiss
> them.
> 
>> I still believe we must first get a grip on causality and EI before we 
>> start studying possible violations.
> 
> OK. I still like a number of features of the approach, and agree that those
> features will exist, in some form, in any valid AGI solution. However, I
> still need some grounding, and proof that I can actually feed real
> information into it and get a meaningful result back.
> 
>> You are not a software developer. You are a thinker. Developers don't 
>> think this much. But, on a different note, and even as I am delighted 
>> with the thinking, I really have to go back to work.
> 
Yeah, philosophy is great but it doesn't pay, so I have to pass myself off
as a programmer to make a decent living.

>> Sergio


-- 
E T F
N H E
D E D

Powers are not rights.




