Alan, 

I am very glad you took this back online. I was going to do it myself next
time. Here, I will touch only on some key points. 

There is a difference between a mathematical model of a physical system, and
the physical system itself. I explained this before, but later I may have
been sloppy when I wrote "neuron" where I should have said artificial
neuron or model neuron. I also emphasized that we can and should learn
from the brain, because it is the only existing intelligent system. Still,
the brain is a physical system, and there is a theory of Physics
supporting its operation. It is that theory that I am after. The brain
itself is an implementation of that theory. I don't need to know all the
details of the implementation to move forward. I appreciate your
explanation about the optic nerve, but I rejected it because I don't need
it. 

Another key point is Engineering. I have the utmost respect for Engineers.
I enjoy their work every second of my life, and I complain about poor
engineering each time a reclosable bag doesn't reclose (perhaps
manufacturers fire Engineers to "save" money). But I want to define the
difference between AI and AGI a little better than it is defined now. To
me, the difference is in where EI is running. If it is running in a
person's brain, and the conclusions are used to engineer an "intelligent"
machine, then the machine is not intelligent and this is narrow AI. AGI is
defined as a machine with a human level of intelligence. Such a machine
must have EI running on it and be able to do the job by itself. Just as
you cannot open the lid of someone's brain and plant knowledge inside, but
are instead forced to *teach* him whatever knowledge you may have and
expect him to "absorb" it (meaning, to EI it by himself), you cannot
"engineer" an AGI. You can only teach an AGI. 

So Engineering is what makes AGI different from narrow AI. Sharply
different. Engineers use their own inference to draw conclusions (in my
language, they use EI in their brains). Then they write software that
simulates their conclusions. They design machines that play chess or win
Jeopardy contests or drive cars, but they are smart people; they know this
is not intelligence, hence narrow AI. In AGI, the inference that draws the
conclusions (EI in my language) is supposed to run in the machine and draw
those conclusions on its own, independently of humans. So I use EI to tell
narrow AI from real AGI (crackpotitis? we'll see about that). 

Regarding embodiment and grounding. You may have heard of the blind
mountain climber who can "see" with the help of a camera connected to a
matrix of electrodes pressed against his tongue. The camera is located on
the side of his head, and moves with the head. This climber's brain has
learned to recognize images captured by the camera at that location, and
he can climb without help from others. However, the brain was never
"notified" that the images come from a sensor different from his eyes, and
located at a position different from his eyes. The brain figured that out
by itself. It follows that, if you want an AGI that is grounded and
embodied, you do not need to hard-code into it the geometrical position of
the sensors. EI will figure that out by itself. In fact, you cannot
engineer an AGI in any way. I realize how hard this must be to accept for
a person who is dedicated to engineering, and that's why I say that AGI
requires the ultimate sacrifice: the sacrifice of yourself. You must just
let go, the same way you let go of your grown-up child. 

Perhaps inadvertently, you have suggested a very interesting experiment.
It can be done with a piece of retina from some animal. Shine some light
on it and measure the output. The light can be controlled very precisely,
but measuring the output may not be easy. Then, apply EI to the input,
calculate the output, and compare with whatever measurements are
available. EI should design the retina. There is no need for great
computer power. One can shine just one or a few dots at a time. I'm sure
people must have tried this, I mean measuring the output, so I would
recommend starting with a literature search. 
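For what it's worth, the protocol itself fits in a few lines of Python.
Everything here is hypothetical: fake_retina() is a crude center-surround
stand-in for the measured output, which the real experiment would replace
with data.

```python
# Toy version of the experiment: shine one dot at a time on a small
# grid and record the response of a single model cell.

def fake_retina(grid, x, y):
    """Response of a model cell at (x, y): excited by light at its
    center, inhibited by light in the 8 surrounding positions."""
    h, w = len(grid), len(grid[0])
    center = grid[y][x]
    surround = sum(grid[j][i]
                   for i in range(x - 1, x + 2)
                   for j in range(y - 1, y + 2)
                   if (i, j) != (x, y) and 0 <= i < w and 0 <= j < h)
    return center - surround / 8.0

def probe(size=5, cell=(2, 2)):
    """Shine one dot at a time and tabulate the cell's responses."""
    responses = {}
    for y in range(size):
        for x in range(size):
            grid = [[0.0] * size for _ in range(size)]
            grid[y][x] = 1.0                      # one dot of light
            responses[(x, y)] = fake_retina(grid, *cell)
    return responses

r = probe()
print(r[(2, 2)], r[(1, 2)], r[(0, 0)])   # 1.0 -0.125 0.0
```

The point is only that the input side is trivial to control; the table of
responses is what EI would then have to reproduce.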

Causality is of the essence. Causality does not exclude learning, of course.
The ability to learn is fundamental in artificial intelligence. Learning
takes place when an outside event is captured by sensors (or by sensory
organs, in the case of the brain). The sensor receives a signal, such as a
beam of light, and originates some internal response, for example an
action potential in a neuron. This constitutes a cause-effect
relationship: the beam of light causes the action potential at the
location where the sensor is. The system sees this as a *spontaneous
event*, because it could not have been predicted. There are other
spontaneous events, such as the random firing of a neuron by itself, or
anything else that is random. I am saying this because you may have
concluded from my writings that causality excludes spontaneous events. On
the contrary: not only do I consider learning a spontaneous event, I even
published the details years ago. That case is learning with a teacher, but
with EI, teacher or no teacher makes no difference. 

Any spontaneous event starts a causal chain of events. That's why you
write programs for computer simulations like this:

IF (event) THEN ...
ELSE ...

After the THEN, and also after the ELSE, you write a sequence of
statements with no logical interruptions. Those are the causal chains, and
they are different. It's as if you were considering two different worlds,
one where the event happened and one where it didn't; and indeed you are.
The informed reader should immediately see an analogy with Schroedinger's
famous "box with a cat", where the cat is both dead and alive at the same
time. The event is "the cat has died", but you don't know that unless you
open the box. 
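In code, a sketch of this looks as follows; the names and the traced
steps are mine, purely illustrative:

```python
import random

def causal_chain(event_happened):
    """Two causal chains, one per 'world'. Each branch is an
    uninterrupted sequence of cause-effect steps; which branch
    runs depends only on the spontaneous event."""
    if event_happened:
        trace = ["sensor fired"]
        trace.append("action potential")        # caused by the firing
        trace.append("downstream processing")   # caused by the potential
    else:
        trace = ["quiescent"]
        trace.append("no response")
    return trace

# The spontaneous event itself is unpredictable from inside the
# system; here a coin flip stands in for "the cat has died".
print(causal_chain(random.random() < 0.5))
```

Until the flip is observed, both chains are possible; once it is, exactly
one runs, without logical interruption.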

Great Mathematician?!! Well, thanks. Mathematician perhaps, but I am not
sure about great. 

I will pause pursuing this thread for now because I have work to do. It is
only temporary. Regards. 

Sergio



-----Original Message-----
From: Alan Grimes [mailto:[email protected]] 
Sent: Wednesday, June 13, 2012 9:52 PM
To: AGI
Subject: [agi] Re: Issues

I've been talking with Sergio off-list for several weeks now. I am bringing
this back on-list because the discussion of these topics is quite active. I
will be replying to both the thread and to several related themes.

Sergio is a great mathematician, I can't dispute that. However, his ideas
are showing increasing evidence of crackpotitis.

There's a place called mathland. It's a perfect world where every object is
a perfect representative of a class of objects, all positions are discrete,
and everything adds up to some kind of ideal. That's great; there are
plenty of useful analogies to be made between the real world and mathland.
However, the instant you try to take something out of mathland, you must
conform it to the real world. That's called engineering. "Engineering" is
not a bad word, when practiced correctly. It is merely the art of applying
theory to real microprocessors and real environments. Therefore,
engineering must be embraced.

Furthermore, you need to have a good understanding of what role your theory
plays in the system you're building. I agree that there are only a few
stacked algorithms in the cortex. Something resembling EI might indeed have
a place in that stack, but it is not, alone, sufficient to do anything
useful. I'm not saying that Goertzel is even close to right with his
OpenCog framework; far from it. It's also not the case that Sergio's EI
framework can stretch all the way from receptor cells to muscle fibers.
(!!!) That seems to be his actual position. I tried to see if I could place
some wedges and shims in place to make room for the rest of the system that
is obviously present in human neuroanatomy, but that doesn't seem to be the
case.

Sergio Pissanetzky wrote:
> Alan,

[Causets as a representation ]
> REPLY TO 2. 
> When light impinges on a cone on the surface of the retina, it 
> generates an electric pulse. That's causality, right there. Everything 
> else that follows, the processing in the retina, transmission in the 
> optical nerve (irrespective of its structure), processing in the 
> brain, reaction to the stimulus ("hi, mom") is causal. The 
> reconstruction algorithm is EI, and is not an algorithm. What you are 
> really doing here, you are proposing an experiment. It is the same I 
> did with my 167 points. How far along are you with the development of the
code that you'll need for the experiment?

I think I understand your code well enough to write a slow O(N!) algorithm
based on numbering permutations. Such an algorithm is not worth either my
time or my CPU's, for two basic reasons: 1. It is O(N!); 2. The theory
doesn't seem to be applicable to anything without a way to reversibly
encode basic sensory data.
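To be concrete, here is roughly what I mean by the brute-force version; the
cost function is a toy of my own invention, not Sergio's actual functional:

```python
from itertools import permutations

def total_cost(order, edges):
    """Toy stand-in for the functional: for each edge, the distance
    between its two endpoints in the given linear ordering."""
    pos = {node: i for i, node in enumerate(order)}
    return sum(abs(pos[a] - pos[b]) for a, b in edges)

def brute_force_min(nodes, edges):
    """O(N!) search over all orderings: fine up to N of about 9,
    hopeless beyond that, which is exactly the objection above."""
    return min(permutations(nodes), key=lambda p: total_cost(p, edges))

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
best = brute_force_min(nodes, edges)
print(total_cost(best, edges))   # prints 3: the chain laid out in order
```

At N = 20 that loop would already be visiting about 2.4e18 permutations,
which is why I won't spend my CPU's time on it.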

In the machine learning class, one of the algorithms recognized numbers by
converting the image to a vector of 400 elements. When it did this, all
spatial information was lost (or rather, it became inaccessible to the
algorithm). So the algorithm basically learned to recognize features of the
numbers based on where the number was usually drawn on the image.
Our eyes move around several times a second (these movements are called
saccades). Obviously, any successful visual system must work independently
of the hardware used to sample the incoming light. Therefore, the spatial
relationships of light and dark patterns are absolutely essential. If these
can't be encoded, then the algorithm can't be salvaged.
Furthermore, retinal cells basically report back a statistical average of
the quantity of light that they've been exposed to (there's a bit more
going on, but basically...). Saying a photon causes a neural spike (hey!
look, it's causative!) is silly at best.

Furthermore, you don't seem to be taking into account learning and internal
states. How do you obtain a mental image of your mom that you can recognize?
Fixating on causation doesn't seem to help. Identifying structure, such as
block systems in an otherwise noisy input stream, on the other hand, seems
to be essential. (hence my interest.)

EI does look like it should be useful as a tool for analysis. However, a
complete system must combine both synthesis and analysis to be successful.
You don't seem to be leaving any room for synthesis, therefore your idea
must be flawed.

> REPLY TO 3. 
> Well, not quite exactly. The asymptotic complexity of an algorithm is 
> a limit case. If I keep growing n to approach the limit, while keeping 
> the brain constant, the problem will revert to n! complexity (or 
> rather (n/m)!, where m = number of neurons).

> I can write several pages to explain in full what I said. In brief, 
> there is a combination of attenuating circumstances. The first, is the 
> fact that the functional is local, it is the positive sum of positive 
> numbers, each of which depends on the connections of a single neuron. 
> Then, each neuron can, in principle, minimize its own contribution 
> independently from the others, and they all work at the same time. 
> Remember, n! is a worst-case scenario, the actual number of legal
permutations is small.

That sounds kinda nice. If that's workable, that's the kind of approach I
would want to try first. I'm not sure how each of those could efficiently
get a list of permutations to try and the set of distances needed for the
computation.
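A toy version of that per-element relaxation, with my own stand-in cost
rather than Sergio's functional, might look like this:

```python
def local_relax(order, edges, max_sweeps=20):
    """Greedy local search: sweep the ordering, swapping adjacent
    elements whenever the swap lowers the total cost; repeat until
    a full sweep makes no improvement. My toy stand-in, not
    Sergio's actual per-neuron minimization."""
    order = list(order)

    def cost():
        pos = {n: i for i, n in enumerate(order)}
        return sum(abs(pos[a] - pos[b]) for a, b in edges)

    for _ in range(max_sweeps):
        improved = False
        for i in range(len(order) - 1):
            before = cost()
            order[i], order[i + 1] = order[i + 1], order[i]
            if cost() < before:
                improved = True
            else:                       # swap didn't help: undo it
                order[i], order[i + 1] = order[i + 1], order[i]
        if not improved:
            break
    return order

chain = [(0, 1), (1, 2), (2, 3)]
print(local_relax([3, 1, 0, 2], chain))   # [3, 0, 1, 2]
```

It converges in a couple of sweeps here, but it stalls at cost 5 where the
optimum (the chain in order) costs 3; whether the coupling Sergio describes
avoids such traps is exactly what I'd want demonstrated.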

> The second, is that the neurons are not *completely* independent. 
> There is a certain amount of coupling among them. So, as  neurons 
> adjust their positions, they affect each other, and may have to 
> readjust. This iteration converges very rapidly in "most" cases, when 
> the interactions are "first neighbour" only,  as we know from 
> observing the brain, and you can recognize a retinal image of your mother
in ~0.5 sec as Hofstadter says.

Yes, this coupling means that the spatial relationships of the signals
reaching the cortex are essential, even though you seem to want to dismiss
them.

> I still believe we must first get a grip on causality and EI before we 
> start studying possible violations.

OK. I still like a number of features of the approach, and agree that those
features will exist, in some form, in any valid AGI solution. However, I
still need some grounding, and proof that I can actually feed real
information into it and get a meaningful result back.

> You are not a software developer. You are a thinker. Developers don't 
> think this much. But, on a different note, and even as I am delighted 
> with the thinking, I really have to go back to work.

Yeah, philosophy is great but it doesn't pay, so I have to pawn myself off
as a programmer to make myself a decent living.


> Sergio

> -----Original Message-----
> From: Alan Grimes [mailto:[email protected]]
> Sent: Saturday, June 09, 2012 4:57 PM
> To: Sergio Pissanetzky
> Subject: Issues

> 1. Causality is a dogma; not a science.

> The notion of causality is an assertion that people make in order to 
> assert that the universe actually does fit their idea of logic. It has 
> no basis in actual science. Indeed, several experiments indicate that 
> time itself is more complex than previously thought. In the words of 
> the 10th, and second greatest, Doctor, "Time is not crystalline, it's 
> made up of wibbly-wobbly, timey-wimey stuff." In some experiments, 
> signals have been detected moving in a retro-causal direction from 
> their apparent cause. In the notorious 2-slit experiments, it has been 
> found that manipulating one beam causes a change in an EARLIER 
> measurement of the other beam. In psychology, it has been shown that
people exhibit reactions to events prior to the stimulus.
> Furthermore, training AFTER the test actually improves performance on 
> the test. I sometimes start thinking about something up to two days 
> before I encounter it. I, and many other people (much better at it 
> than myself, I might add), can guess the outcome of computerized 
> card-flip games better than chance.

> 2. Reversibly encoding plausible stimulus channels.

> You have argued that your posets can represent computations. I can 
> kinda see that in a data-dependency driven way. However, I'm far from 
> convinced that you can encode generalized sensory information. To 
> convince me, I require an algorithm that will encode some matrix or 
> lattice of pixels. To make it easy, I only require the luminance 
> channel for each pixel. The encoding must be reversible such that the 
> re-constructed image is accurate to within 5% for any given image.

> 3. O(N!) Time to O(1) P-time with finite processors; Really???

> You just asserted that a direct computation of your algorithm is NC 
> (!!). So you have proposed a N! algorithm that deals with permutations 
> in a nearly brute-force manner. You insist that the brain can do it in 
> close to constant time, yet the brain is much smaller than N! neurons, 
> only 20-30% of which are in the visual system. You suggested a 
> partitioning algorithm, (One processor per element; iirc) but you 
> haven't said much of anything specific about what each of those 
> processors was supposed to be doing; it would seem to contradict the claim
that the problem was in NC...


--
E T F
N H E
D E D

Powers are not rights.





-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/18883996-f0d58d57
Modify Your Subscription:
https://www.listbox.com/member/?&;
d2
Powered by Listbox: http://www.listbox.com




