I've been talking with Sergio off-list for several weeks now. I am
bringing this back on-list because the discussion of these topics is
quite active. I will be replying both to the thread and to several
related themes.
Sergio is a great mathematician; I can't dispute that. However, his
ideas are showing increasing evidence of crackpotitis.
There's a place called mathland. It's a perfect world where every object
is a perfect representative of a class of objects, all positions are
discrete, and everything adds up to some kind of ideal. That's great;
there are plenty of useful analogies to be made between the real world
and mathland. However, the instant you try to take something out of
mathland, you must conform it to the real world. That's called
engineering. "Engineering" is not a bad word, when practiced correctly.
It is merely the art of applying theory to real microprocessors and real
environments. Therefore, engineering must be embraced.
Furthermore, you need to have a good understanding of what role your
theory plays in the system you're building. I agree that there are only
a few stacked algorithms in the cortex. Something resembling EI might
indeed have a place in that stack, but it is not, alone, sufficient to
do anything useful. I'm not saying that Goertzel is even close to
right with his OpenCog framework; far from it. But it's also not the
case that Sergio's EI framework can stretch all the way from receptor
cells to muscle fibers. (!!!) That seems to be his actual position. I
tried to see if I could place some wedges and shims to make room for
the rest of the system that is obviously present in human neuroanatomy,
but that doesn't seem to be the case.
Sergio Pissanetzky wrote:
> Alan,
[Causets as a representation ]
> REPLY TO 2.
> When light impinges on a cone on the surface of the retina, it generates an
> electric pulse. That's causality, right there. Everything else that follows,
> the processing in the retina, transmission in the optical nerve
> (irrespective of its structure), processing in the brain, reaction to the
> stimulus ("hi, mom") is causal. The reconstruction algorithm is EI, and is
> not an algorithm. What you are really doing here, you are proposing an
> experiment. It is the same I did with my 167 points. How far along are you
> with the development of the code that you'll need for the experiment?
I think I understand your code well enough to write a slow O(N!)
algorithm based on numbering permutations. Such an algorithm is not
worth my time, or my CPU's, for two basic reasons: 1. it is O(N!);
2. the theory doesn't seem to be applicable to anything without a way
to reversibly encode basic sensory data.
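Concretely, the brute-force version I'd write looks something like this
(a minimal sketch; the `cost` argument is a stand-in for the functional,
which I'm not reproducing here):

```python
from math import factorial

def nth_permutation(elements, k):
    """Decode index k (0 <= k < n!) into the k-th permutation of
    `elements` in lexicographic order (factorial-base / Lehmer code)."""
    pool = list(elements)
    result = []
    for i in range(len(pool), 0, -1):
        f = factorial(i - 1)
        idx, k = divmod(k, f)
        result.append(pool.pop(idx))
    return result

def brute_force_minimize(elements, cost):
    # Scan all n! orderings -- exactly the O(N!) cost that makes this
    # approach not worth running beyond tiny n.
    n = len(elements)
    best = min(range(factorial(n)),
               key=lambda k: cost(nth_permutation(elements, k)))
    return nth_permutation(elements, best)
```

Even at n = 12 that's already ~479 million permutations, which is why I
want something better before writing any real code.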
In the machine learning class, one of the algorithms recognized numbers
by converting the image to a vector of 400 elements. When it did this,
all spatial information was lost (or rather it became inaccessible to
the algorithm). So the algorithm basically learned to recognize features
of the numbers based on where the number was usually drawn on the image.
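Here's roughly what I mean, as a toy sketch (the 400 elements
correspond to a 20x20 image; the blob is a made-up stand-in for a
digit):

```python
import numpy as np

# A toy 20x20 "image": a bright 5x5 blob standing in for a digit.
img = np.zeros((20, 20))
img[5:10, 5:10] = 1.0

# Flatten to the 400-element vector the classifier actually sees.
v = img.flatten()

# The same blob shifted one pixel to the right: to a human, the same
# shape; to the flat vector, a different set of lit indices.
shifted = np.zeros((20, 20))
shifted[5:10, 6:11] = 1.0
w = shifted.flatten()

# Fraction of lit elements the two vectors share.
overlap = np.sum((v > 0) & (w > 0)) / np.sum(v > 0)
print(overlap)  # 0.8 -- and a 5-pixel shift would drive it to 0
```

The vector still *contains* the spatial information, but nothing in the
representation tells the learner that element 105 and element 125 are
vertical neighbors, so the algorithm ends up keying on absolute
positions.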
Our eyes move around several times a second (movements called
saccades). Obviously, any successful visual system must work
independently of the hardware used to sample the incoming light.
Therefore, the spatial relationships of light and dark patterns are
absolutely essential. If these can't be encoded, then the algorithm
can't be salvaged.
Furthermore, retinal cells basically report back a statistical average
of the quantity of light that they've been exposed to (there's a bit
more going on, but basically...). Saying a photon causes a neural spike
(hey! look, it's causative!) is silly at best.
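A toy sketch of the point (not a physiological model; the window size
and photon counts are made up):

```python
import random

random.seed(0)

def receptor_response(photon_counts, window=10):
    # Toy model: the cell's output tracks a running average of light
    # intensity over a time window, not individual photon arrivals.
    out = []
    for t in range(len(photon_counts)):
        recent = photon_counts[max(0, t - window + 1): t + 1]
        out.append(sum(recent) / len(recent))
    return out

# A step in illumination: dim then bright, with made-up noise.
stimulus = ([random.randint(0, 2) for _ in range(50)] +
            [random.randint(8, 12) for _ in range(50)])
response = receptor_response(stimulus)
# The output ramps smoothly across the step; no single photon is
# visible as a discrete "cause" in the response.
```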
Furthermore, you don't seem to be taking into account learning and
internal states. How do you obtain a mental image of your mom that you
can recognize? Fixating on causation doesn't seem to help. Identifying
structure, such as block systems in an otherwise noisy input stream, on
the other hand, seems to be essential. (hence my interest.)
EI does look like it should be useful as a tool for analysis. However,
a complete system must combine both synthesis and analysis to be
successful. You don't seem to be leaving any room for synthesis, so
your idea, as it stands, must be flawed.
> REPLY TO 3.
> Well, not quite exactly. The asymptotic complexity of an algorithm is a
> limit case. If I keep growing n to approach the limit, while keeping the
> brain constant, the problem will revert to n! complexity (or rather (n/m)!,
> where m=nbr. of neurons).
> I can write several pages to explain in full what I said. In brief, there is
> a combination of attenuating circumstances. The first, is the fact that the
> functional is local, it is the positive sum of positive numbers, each of
> which depends on the connections of a single neuron. Then, each neuron can,
> in principle, minimize its own contribution independently from the others,
> and they all work at the same time. Remember, n! is a worst-case scenario,
> the actual number of legal permutations is small.
That sounds kind of nice. If that's workable, it's the kind of approach
I would want to try first. I'm not sure how each of those neurons could
efficiently get a list of permutations to try, or the set of distances
needed for the computation.
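If I've understood the scheme, a sketch of that kind of local,
iterative minimization might look like this. (The `local_cost` here is
my own hypothetical stand-in -- total permutation-distance to connected
elements -- not your actual functional, and the serial sweep only
simulates the neurons working "at the same time".)

```python
import random

random.seed(1)

def local_cost(pos, element, edges):
    # Hypothetical stand-in: one element's positive contribution is its
    # total distance, in the current ordering, to the elements it
    # connects to. The global functional is the sum of these terms.
    return sum(abs(pos[element] - pos[other]) for other in edges[element])

def greedy_local_minimize(n, edges, sweeps=50):
    pos = list(range(n))          # pos[e] = current slot of element e
    random.shuffle(pos)
    for _ in range(sweeps):
        improved = False
        for a in range(n):
            for b in range(n):
                # Swap a and b only if it lowers their combined local
                # terms -- each element adjusting its own contribution.
                before = local_cost(pos, a, edges) + local_cost(pos, b, edges)
                pos[a], pos[b] = pos[b], pos[a]
                after = local_cost(pos, a, edges) + local_cost(pos, b, edges)
                if after < before:
                    improved = True
                else:
                    pos[a], pos[b] = pos[b], pos[a]   # revert
        if not improved:
            break                 # the coupled readjustments settled
    return pos

# A chain 0-1-2-...-7 should relax toward consecutive positions.
edges = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
pos = greedy_local_minimize(8, edges)
total = sum(local_cost(pos, e, edges) for e in edges)
```

This avoids enumerating permutations outright, but it's exactly the
kind of thing that can stall in a local minimum, which is why I'd want
the distance bookkeeping spelled out.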
> The second, is that the neurons are not *completely* independent. There is a
> certain amount of coupling among them. So, as neurons adjust their
> positions, they affect each other, and may have to readjust. This iteration
> converges very rapidly in "most" cases, when the interactions are "first
> neighbour" only, as we know from observing the brain, and you can recognize
> a retinal image of your mother in ~0.5 sec as Hofstadter says.
Yes, this coupling means that the spatial relationships of the signals
reaching the cortex are essential, even though you seem to want to
dismiss them.
> I still believe we must first get a grip on causality and EI before we start
> studying possible violations.
OK. I still like a number of features of the approach, and agree that
those features will exist, in some form, in any valid AGI solution.
However, I still need some grounding, and proof that I can actually
feed real information into it and get a meaningful result back.
> You are not a software developer. You are a thinker. Developers don't think
> this much. But, on a different note, and even as I am delighted with the
> thinking, I really have to go back to work.
Yeah, philosophy is great but it doesn't pay, so I have to pass myself
off as a programmer to make a decent living.
> Sergio
> -----Original Message-----
> From: Alan Grimes [mailto:[email protected]]
> Sent: Saturday, June 09, 2012 4:57 PM
> To: Sergio Pissanetzky
> Subject: Issues
> 1. Causality is a dogma; not a science.
> The notion of causality is an assertion that people make in order to assert
> that the universe actually does fit their idea of logic. It has no basis in
> actual science. Indeed, several experiments indicate that time itself is
> more complex than previously thought. In the words of the 10th, and second
> greatest Doctor, "Time is not crystalline, it's made up wibbly-wobbly
> timey-wimey stuff." In some experiments, signals have been detected moving
> in a retro-causal direction from their apparent cause. In the notorious
> 2-slit experiments, it has been found that manipulating one beam causes a
> change in an EARLIER measurement of the other beam. In psychology, it has
> been shown that people exhibit reactions to events prior to the stimulus.
> Furthermore, training AFTER the test actually improves performance on the
> test. I sometimes start thinking about something up to two days before I
> encounter it. I, and many other people (much better at it than myself, I
> might add), can guess the outcome of computerized card-flip games better
> than chance.
> 2. Reversibly encoding plausible stimulus channels.
> You have argued that your posets can represent computations. I can kinda see
> that in a data-dependency driven way. However, I'm far from convinced that
> you can encode generalized sensory information. To convince me, I require an
> algorithm that will encode some matrix or lattice of pixels. To make it
> easy, I only require the luminance channel for each pixel. The encoding must
> be reversible such that the re-constructed image is accurate to within 5%
> for any given image.
> 3. O(N!) Time to O(1) P-time with finite processors; Really???
> You just asserted that a direct computation of your algorithm is NC (!!). So
> you have proposed a N! algorithm that deals with permutations in a nearly
> brute-force manner. You insist that the brain can do it in close to constant
> time, yet the brain is much smaller than N! neurons, only 20-30% of which
> are in the visual system. You suggested a partitioning algorithm, (One
> processor per element; iirc) but you haven't said much of anything specific
> about what each of those processors was supposed to be doing; it would seem
> to contradict the claim that the problem was in NC...
--
E T F
N H E
D E D
Powers are not rights.
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393