Alan, alright, but what are those numbers? What are their units? Why do they form a matrix? What picture did you scan? What does your algorithm do? No need to reply to all that in detail; I am just trying to understand how you think about a retina. If your work is unrecoverable, that's OK. Have you followed any literature on this experiment?
>ALAN SAID:
>What worries me about you is that you don't seem to be open to further
>expanding your toolbox at this point. It is possible that my concerns
>about it are ill-founded, that there is an explanation of how to deal
>with spatial information within the framework of your theory, but I
>don't see it yet. Furthermore, the way I see it, everything in the brain
>must either be implemented or explained at a higher level. The extremely
>important process of applying learned knowledge doesn't seem to be covered
>by EI. Also, one of the more celebrated features of human intelligence is
>the ability to set logic aside from time to time and be creative. Your
>algorithm doesn't seem to leave much room for that; and yes, it is important
>for getting off "false summits", in the terminology of hill-climbing
>algorithms, as well as for communicating with marginally rational agents.

SERGIO REPLIES:

Alan, you are accurately describing the branching point: the point where scientists are confronted with complexity, feel that there must be "something" there, some form of inference, don't know what to do about it, and branch out into a variety of approaches. At that point, Michael Behe went the way of Intelligent Design. Stuart Kauffman proposed a quantum-mechanical brain. Bertrand Russell proposed Boolean logic, later shown by Church and Turing to be incapable of doing the job. The AI'ers designated that as "the problem" and tried to "engineer" it. They wanted to use their own intelligence to "solve the problem" by way of a man-made solution. They got nowhere in 60 years. Ben proposed Solomonoff's inductive inference, but realized it's not enough and decided to engineer the rest, just like the AI'ers.

I proposed EI. Pure inference, no engineering. There is nothing bad about engineering; it's just that engineers were not there when evolution created the brain. The brain, and its intelligence, cannot be engineered.

The rest of your letter indicates to me that you are still refusing to let go.
You still want to put your skills to it; you think you can do it. You want to take charge of "that" and "solve" it yourself. You can't. You have to let go. That's the supreme sacrifice that AGI requires. It's not about you, sorry. AGI doesn't care about you. And you can't own an AGI.

Sergio

-----Original Message-----
From: Alan Grimes [mailto:[email protected]]
Sent: Monday, June 18, 2012 4:03 PM
To: AGI
Subject: Re: [agi] Issues

My work in 2004 ended abruptly when my hard drive crashed. =P

Basically, I created an M by N matrix, then populated it by scanning the input picture with this kernel:

    -1/12  -1/6  -1/12
     -1/6    1    -1/6
    -1/12  -1/6  -1/12

(with adjustments for edges and corners). I then rescaled the picture by 1/2 by averaging groups of 4 pixels and reapplied my algorithm. I repeated the process until the input image was uselessly tiny. The algorithm introduced a very significant noise signal, but it basically worked. I could probably do much better performance-wise by using vector operations; even so, the machine I had back then could do it in about ten seconds, running on Squeak 3.6 (Smalltalk).

The magic happens in V1 of the occipital lobe, where the cortical columns learn to detect edges, corners, curves, orientation, etc.:
https://www.google.com/search?q=cortical+column&hl=en&prmd=imvns&tbm=isch
http://en.wikipedia.org/wiki/Cortical_column
http://en.wikipedia.org/wiki/Ocular_dominance_column

> EI is a map, only I didn't make it. It's a natural map.

What worries me about you is that you don't seem to be open to further expanding your toolbox at this point. It is possible that my concerns about it are ill-founded, that there is an explanation of how to deal with spatial information within the framework of your theory, but I don't see it yet. Furthermore, the way I see it, everything in the brain must either be implemented or explained at a higher level. The extremely important process of applying learned knowledge doesn't seem to be covered by EI.
Also, one of the more celebrated features of human intelligence is the ability to set logic aside from time to time and be creative. Your algorithm doesn't seem to leave much room for that; and yes, it is important for getting off "false summits", in the terminology of hill-climbing algorithms, as well as for communicating with marginally rational agents.

So yeah, I'm going to need some evidence that you can broaden your perspective or I'll be forced to write you off as a high-functioning crackpot.

--
E T F N H E D E D
Powers are not rights.

-------------------------------------------
AGI Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/18883996-f0d58d57
Modify Your Subscription: https://www.listbox.com/member/?& d2
Powered by Listbox: http://www.listbox.com
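[Editor's note: the convolve-then-halve pyramid Alan describes earlier in the thread can be sketched as follows. This is a minimal reconstruction, not his original Smalltalk: NumPy is assumed, border pixels are handled by edge replication (one plausible reading of his "adjustments for edges and corners"), and the function names and stopping size are mine. Note the kernel weights sum to zero, so it is a center-surround filter that responds to edges and suppresses uniform regions.]

```python
import numpy as np

# The 3x3 center-surround kernel from Alan's message.
# Weights sum to zero: 1 - 4*(1/6) - 4*(1/12) = 0,
# so a uniform patch of the image produces zero response.
KERNEL = np.array([
    [-1/12, -1/6, -1/12],
    [-1/6,   1.0, -1/6 ],
    [-1/12, -1/6, -1/12],
])

def convolve(img):
    """Apply the 3x3 kernel; edge replication stands in for
    Alan's unspecified 'adjustments for edges and corners'."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    # Accumulate the nine shifted, weighted copies of the image.
    for dy in range(3):
        for dx in range(3):
            out += KERNEL[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def halve(img):
    """Rescale by 1/2 by averaging each group of 4 pixels."""
    h, w = img.shape
    h, w = h - h % 2, w - w % 2   # drop an odd trailing row/column
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def pyramid(img, min_size=2):
    """Repeat filter-then-halve until the image is uselessly tiny."""
    levels = []
    while min(img.shape) >= min_size:
        levels.append(convolve(img))
        img = halve(img)
    return levels
```

Run on an 8x8 image this yields three response levels (8, 4, and 2 pixels on a side); because the kernel sums to zero, a constant image yields all-zero responses at every level.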
