Alan, Ben, Steven,

Here is my analysis of a recent conversation about the vision system that some of us carried on in this blog.

Douglas Hofstadter said (not an exact quote): "The major question of AI is ... to convert 100M retinal dots into 'Hi Mom' in one tenth of a second."
DOUGLAS SAID> one tenth of a second

SERGIO REPLIES> If one knew the principle that makes that conversion work, it would be perfectly all right to implement it on a supercomputer even if it took a year to compute. Then, and only then, could one start engineering it: better algorithms, better hardware, perhaps new hardware... The year would shrink to one month, then to one day. If the purpose is to explain how 'Hi Mom' works, then 'one tenth of a second' is irrelevant.

AT SOME POINT IN THE CONVERSATION, ALAN SAID> Because of the optical properties of the eye, the image on your retina is upside down and backwards. Because there are more optic nerve fibers than somatic nerve fibers, there is no "pyramidal tract" in the optic nerve; it is a straight shot back to V1, which is upside down and backwards. To keep everything consistent with the brain, the somatic nerves ARE crossed over the center line of the body, and you find an upside-down and backward representation of your body on the postcentral gyrus of your brain. So yeah, your brain is upside down and backwards in your skull.

SERGIO REPLIES> There are many factors that influence the anatomy/topology/orientation of nerves. Perhaps the optic nerve is short because nerves are slow and 'Hi Mom' needs to be fast. It is narrow because the head must be small enough to allow birth. The retina does some compression to allow the nerve to be thinner. If the goal is to explain 'Hi Mom', there is no need to take all these details into account. Proof can be found in the blind climber who can see well enough to climb mountains alone using a camera with electrodes attached to his tongue. There is no optic nerve, no retina, no vision-specialized structure in the brain involved, and it still works. It works because neural matter organizes itself on the go. One of the first (the first?) brain-on-a-dish experiments, in Florida some 8-10 years ago, consisted of neurons dissociated from a rat brain, kept alive in vitro, and connected to electrodes.
Using only external signals applied to the electrodes, the neurons quickly learned how to fly a (simulated) plane. There is no brain here! No hypothalamus, nothing. Yet they can self-organize. I don't know much about the hypothalamus and other structures in the brain, but this makes me wonder: if neurons can self-organize on a dish, why couldn't neurons in the brain do the same? They do self-organize to some extent in the case of the blind climber. Could one explain the ENTIRE brain structure as the result of two mechanisms, evolution and self-organization? I am not the first to propose this. I see brain structure as too complicated to arise by evolution alone, and there must be a reason why evolution made neurons capable of self-organization.

AT ANOTHER POINT IN THE CONVERSATION, ADDRESSING ALAN, BEN SAID> How exactly do you suggest to bridge the functionality gap between visual pattern recognition and all the other things human beings do?

SERGIO'S COMMENTS> I propose to assume that EI is the principle behind 'Hi Mom' (the actual principle being symmetry/conservation laws), apply it to visual pattern recognition, duplicate my experiments, expand them, and publish the results, so that I am not the only one saying these things. If that works, then there is already a HUGE field of work for AGI, AGI will have gained reputation, and funding will follow - to improve from one year to one month to one day. Then, with all that in hand, consider other brain structures. They should be self-organizing too. How far will they self-organize? I don't know, but I don't see any reason why they should stop at some point. They will get slower at higher levels of the hierarchy, but why would they stop?

Alan has said: "It can be claimed that the entire nervous system is designed around the visual sense." This is a whole research program, for many years and many people. The critical requirement is to have a principle, start from the principle, and always work within the principle.
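The claim that neural matter "organizes itself on the go", given nothing but input signals, can be illustrated with a toy sketch. This is purely my illustration, not Sergio's EI/symmetry principle: a single linear neuron trained with Oja's rule starts from random wiring and, unsupervised, aligns itself with the dominant structure of whatever stimulus stream it happens to receive. All numbers here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stimulus stream: 2-D inputs whose variance is concentrated along the
# first axis (standing in for "whatever signals reach the dish").
X = rng.normal(size=(5000, 2)) * np.array([3.0, 0.5])

w = rng.normal(size=2)          # random initial "wiring"
eta = 0.01                      # learning rate
for x in X:
    y = w @ x                   # neuron's response
    w += eta * y * (x - y * w)  # Oja's rule: Hebbian term + self-normalization

# No supervisor and no wiring diagram: w has organized itself into an
# (approximately) unit vector along the high-variance axis of the input.
print("learned direction:", np.round(w, 2))
```

The design point of the sketch is that the final structure is determined by the statistics of the input, not by anything built into the neuron, which is the shape of the argument Sergio makes for the blind climber's tongue display.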
Sergio

-----Original Message-----
From: Ben Goertzel [mailto:[email protected]]
Sent: Saturday, June 30, 2012 9:27 AM
To: AGI
Subject: Re: [agi] Building high-level features using large scale unsupervised learning

Alan,

All that is a reasonably plausible-sounding AGI approach, and sounds a lot like the AGI approach Itamar Arel is always telling me about (and similar to the broad vision of Dileep George, Jeff Hawkins, and many others)... However, two quibbles:

1) The perceptual-hierarchy stuff that Ng and his group at Google have just reported is only a small portion of the architecture you've sketched...

2) You say "The first thing to note is that this is an unsupervised pattern learner. That should be pretty amazing all by itself. The second thing to note is that all it deals with are vectors of numbers. There is no reason on earth that it can't be made to work with any conceivable stimulus that can be encoded as a vector of numbers." but obviously, functionality at classification on data with a certain sort of statistical properties (visual data) does not necessarily imply similar functionality at classification on other data with other properties... The extent of generalizability of that network's functionality remains to be seen

-- Ben

On Fri, Jun 29, 2012 at 9:19 AM, Alan Grimes <[email protected]> wrote:
> Ben Goertzel wrote:
>> How exactly do you suggest to bridge the functionality gap between
>> visual pattern recognition and all the other things human beings do?
>
> =)
>
> Setting aside problems noted as still being unsolved, here's a crude
> sketch of how the system can be organized. For the sake of brevity,
> only the cortical-thalamic-cortical system will be considered.
>
> The first thing to note is that this is an unsupervised pattern learner.
> That should be pretty amazing all by itself. The second thing to note
> is that all it deals with are vectors of numbers.
> There is no reason
> on earth that it can't be made to work with any conceivable stimulus
> that can be encoded as a vector of numbers. There are some serious
> channel-dependence problems, previously noted, but the basic process
> is present.
>
> The third thing to note is that they could run their matrix stack in
> reverse and "imagine" what a face looks like. This is critical,
> especially for motor control! =P
>
> This is your basic algorithm. The next challenge is that you need to
> break channel dependence and introduce associations between patterns,
> i.e., between faces and the various representations of the word "face".
> I suspect that once channel dependence is fixed, then, at some high
> level in the network, these associations will emerge on their own.
>
> The next issue is topology. You could organize the topology like the
> human brain and, in theory, it should be human-equivalent. Motor
> control is implemented just like perception. It builds up complex
> sequences of actions from simple sequences of actions, exactly as
> complex perceptions are built up from simple perceptions. To do
> something, you just run the stack in reverse, as mentioned above.
> Combined with channel dependence and free association, you obtain
> arbitrary sequences of planned actions.
> Actions that are fully learned become habitual (simply initiate the
> top-level abstraction). Other actions require an iterative system-wide
> process for planning, but most of the mechanisms are already present.
>
> You obtain episodic memory by having a pipeline that associates
> concurrent perceptions, which appears to be what the hippocampus does.
>
> To obtain super-human intelligence, you need to make the topology of
> the system adaptive, or even accessible to the system itself. Ideally,
> you want a highly redundant, highly distributed, highly parallel and
> highly efficient architecture.
> This architecture does have a second
> class of scalability issues: each matrix, at each level of abstraction,
> is of fixed size. There needs to be a process that simplifies and
> consolidates knowledge into a more ideal representation. At that point
> you're off the edge of the (metaphorical) napkin I sketched this all
> out on. =P
>
> About 80% of everything else you need is already available off the
> shelf; the other 20% might have some important, perhaps even
> difficult, challenges, but then we're talking about emotions and
> motivation instead of intelligence.
>
> --
> E T F
> N H E
> D E D
>
> Powers are not rights.
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/212726-11ac2389
> Modify Your Subscription: https://www.listbox.com/member/?&
> Powered by Listbox: http://www.listbox.com

--
Ben Goertzel, PhD
http://goertzel.org
"My humanity is a constant self-overcoming" -- Friedrich Nietzsche
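Alan's "run the matrix stack in reverse" idea quoted above can be sketched as a tied-weight linear autoencoder. This is my own minimal illustration, since the thread names no specific model: the same weight matrix W encodes a stimulus vector into a small code (perception) and, applied transposed, decodes the code back into a stimulus ("imagining"). The encoder here is obtained by PCA for simplicity; a gradient-trained network would serve the same role.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake "stimuli": 16-dimensional vectors that actually live on a 4-D
# subspace, standing in for any sense channel encoded as numbers.
basis = rng.normal(size=(4, 16))
codes = rng.normal(size=(200, 4))
X = codes @ basis
mu = X.mean(axis=0)

# Top 4 principal directions give a linear encoder with orthonormal rows,
# so running the stack "in reverse" is just the transpose.
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = Vt[:4]                       # encoder: 16 inputs -> 4 hidden units

x = X[0]
y = W @ (x - mu)                 # perceive: stimulus -> abstraction
x_hat = W.T @ y + mu             # imagine:  abstraction -> stimulus

print(np.allclose(x, x_hat, atol=1e-6))  # prints True
```

Because these stimuli really do lie on a 4-D subspace, the round trip is (near) exact; on real data the "imagined" vector would be the model's best low-dimensional reconstruction, which is exactly the property Alan leans on for motor control.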
