Sergio Pissanetzky wrote:

> The situation is a little different with your own calculations. You
> mentioned you had been studying compression in the retina. This is exactly
> what I want to look into. Can you dig that up, or perhaps just remember what
> it was that you were trying to do? I'd like to review that material off
> line, if you don't mind. Also, it would be very useful to find experiments
> where the actual output from the retina itself, without involving any other
> neurons, was directly measured with electrodes. Then, if we knew this, we
> would be able to compare that output with the input and determine if it is
> bound or not.  

Heh. ;)

My work in 2004 ended abruptly when my hard drive crashed. =P

Basically I created an M-by-N matrix, then populated it by scanning the
input picture with this kernel:

-1/12  -1/6   -1/12
-1/6     1    -1/6
-1/12  -1/6   -1/12

(with adjustments for edges and corners).

I then re-scaled the picture by 1/2 by averaging groups of 4 pixels and
re-applied my algorithm. I repeated the process until the input image
was uselessly tiny.
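For anyone who wants to play with this, here's a minimal sketch of the procedure as I described it, reconstructed in Python/NumPy rather than the original Squeak Smalltalk (which is long gone with that hard drive). The border handling here is simple edge replication; it's a stand-in for whatever "adjustments for edges and corners" the original used, so treat the details as assumptions:

```python
import numpy as np

# Center-surround kernel from above: center 1, edge neighbors -1/6,
# corner neighbors -1/12. Weights sum to zero, so a uniform region
# produces zero response (only edges/contrast survive).
KERNEL = np.array([
    [-1/12, -1/6, -1/12],
    [-1/6,   1.0, -1/6 ],
    [-1/12, -1/6, -1/12],
])

def convolve3x3(img, kernel):
    """Naive 3x3 convolution; borders handled by replicating edge
    pixels (a guess at the original edge/corner adjustments)."""
    padded = np.pad(img, 1, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0],
                                           dx:dx + img.shape[1]]
    return out

def halve(img):
    """Rescale by 1/2 by averaging each group of 4 pixels."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(float)
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def pyramid(img, min_size=4):
    """Filter, halve, repeat until the image is uselessly tiny."""
    levels = []
    while min(img.shape) >= min_size:
        levels.append(convolve3x3(img, KERNEL))
        img = halve(img)
    return levels
```

Because the kernel is zero-sum, running `pyramid` on a flat image gives (near-)zero output at every level, which is a quick sanity check that the filter is doing contrast extraction rather than just blurring.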

The algorithm introduced a very significant noise signal, but it
basically worked. I could probably improve performance considerably by
using vector operations; even so, the machine I had back then could do
it in about ten seconds, running on Squeak 3.6 (Smalltalk).

The magic happens in V1 of the occipital lobe, where the cortical
columns learn to detect edges, corners, curves, orientation, etc.:

https://www.google.com/search?q=cortical+column&hl=en&prmd=imvns&tbm=isch
http://en.wikipedia.org/wiki/Cortical_column
http://en.wikipedia.org/wiki/Ocular_dominance_column

> EI is a map, only I didn't make it. It's a natural map. 

What worries me about you is that you don't seem open to expanding your
toolbox any further at this point. It's possible my concerns are
ill-founded and that there is an explanation of how to deal with
spatial information within the framework of your theory, but I don't
see it yet. Furthermore, as I see it, everything in the brain must
either be implemented or explained at a higher level. The extremely
important process of applying learned knowledge doesn't seem to be
covered by EI. Also, one of the more celebrated features of human
intelligence is the ability to set logic aside from time to time and be
creative. Your algorithm doesn't seem to leave much room for that, and
yes, it is important: for getting off "false summits", in the
terminology of hill-climbing algorithms, and for communicating with
marginally rational agents.

So yeah, I'm going to need some evidence that you can broaden your
perspective or I'll be forced to write you off as a high-functioning
crackpot.

-- 
E T F
N H E
D E D

Powers are not rights.

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now