Mike Tintner <[email protected]> wrote:
> How do you reckon that will work for an infant or anyone who has only
> seen an example or two of the concept class-of-forms?
I do not reckon that it will work for an infant or anyone (or anything) who (or that) has only seen an example or two of the concept class-of-forms. I haven't looked at your photos, but I did indicate that learning has to be able to advance with new kinds of objects of a kind. My previous comment specifically dealt with the problem of learning to recognize radically different instances of a kind.

There was once a time when it was thought that domain-specific AI, using general methods of reasoning, would be more feasible than general AI. That optimism was not borne out by experiment. The question is: why not? I believe that domain-specific AI needs to rely on so much general knowledge (AGI) as a base that, until a certain level of success in AGI is achieved, narrower domain-specific AI will be limited to calculation-based reasoning and the like (as in closed taxonomic AI or simple neural networks).

A similar situation occurred in space travel. At the dawn of the space age, some people intuitively thought that traveling to the moon would be 2,000 times more difficult than sending a space vehicle up 100 miles (since the moon is 2,000 times farther away), so if it took 10 years to get to the point where they could get a space capsule up 100 miles, it would take 20,000 years to reach the moon. It didn't work that way because, as the leading experts realized, getting away from Earth's gravity results in a significant and geometric decrease in the force needed to continue. Because this fact was not intuitive to the naive critic, it wasn't completely grasped by many people until the first space vehicle escaped Earth orbit a few years after the first space shots.

I think a similar situation probably lies at the center of the feasibility of basic AGI. As more and more examples are learned, the complications of storing and accessing that information in a wise and intelligent manner become more and more elusive.
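The "geometric decrease" in force mentioned above is the inverse-square law: gravitational pull falls off with the square of the distance from Earth's center. A quick numerical sketch (not from the original email; the distances are round approximations) shows why the naive linear extrapolation from a 100-mile shot to the moon was wrong:

```python
# Inverse-square falloff of Earth's gravity, versus the naive linear
# intuition that 2,000x the distance means 2,000x the difficulty.
# Distances are measured from Earth's center; figures are approximate.
EARTH_RADIUS_MI = 3959                  # Earth's mean radius, in miles
LOW_ORBIT_MI = EARTH_RADIUS_MI + 100    # a capsule 100 miles up
MOON_MI = 238900                        # average Earth-moon distance

def relative_gravity(r_miles):
    """Gravitational pull relative to the surface (inverse-square law)."""
    return (EARTH_RADIUS_MI / r_miles) ** 2

print(relative_gravity(LOW_ORBIT_MI))  # ~0.95: barely weaker 100 miles up
print(relative_gravity(MOON_MI))       # ~0.0003: almost negligible
```

So by the time a vehicle has climbed well past low orbit, the force needed to keep going has already collapsed, which is the non-intuitive fact the email alludes to.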
But, for example, if domain-specific information is dependent on a certain level of general knowledge, then you won't see domain-specific AI really take off until that level of AGI becomes feasible. Why would this relationship occur? Because each time you double *all* knowledge (as is implied by a doubling of general knowledge), you put a progressively more complicated load on the computer. To double that general knowledge twice, you would have to create an AGI program capable of dealing with four times as much complexity. To double it again, you would need an AGI program that could deal with eight times the complexity of your first prototype. Once you get your AGI program to work at a certain level of complexity, your domain-specific AI program might start to take off, and you would see the kind of dazzling results that would make the critics more wary of expressing their skepticism.

Jim Bromer

On Mon, Aug 9, 2010 at 8:13 AM, Mike Tintner <[email protected]> wrote:
> How do you reckon that will work for an infant or anyone who has only
> seen an example or two of the concept class-of-forms?
>
> (You're effectively misreading the set of photos - although this needs making
> clear - a major point of the set is: how will any concept/schema of chair,
> derived from any set of particular kinds of chairs, cope with a radically
> new kind of chair? Just saying - "well, let's analyse the chairs we have" -
> is not an answer. You can take it for granted that the new chair will have
> some feature[s]/form that constitutes a "radical departure" from existing
> ones (as is amply illustrated by my set of photos). And yet your - an AGI -
> mind can normally adapt and recognize the new object as a chair.)
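Jim's doubling argument can be made concrete with a tiny sketch (not from the original posts; the scaling rule is the one he states, namely that each doubling of total knowledge doubles the complexity the program must handle):

```python
# Jim's stated scaling: each doubling of total knowledge doubles the
# complexity load, so after n doublings the AGI program must handle
# 2**n times the complexity of the first prototype.
def complexity_multiplier(doublings):
    return 2 ** doublings

for n in range(4):
    print(f"{n} doublings -> {complexity_multiplier(n)}x complexity")
# 2 doublings -> 4x and 3 doublings -> 8x, the figures quoted above
```

The point of the sketch is only that the required capability grows exponentially in the number of doublings, which is why a threshold level of AGI capability would have to be crossed before domain-specific AI could "take off."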
> *From:* Jim Bromer <[email protected]>
> *Sent:* Monday, August 09, 2010 12:50 PM
> *To:* agi <[email protected]>
> *Subject:* Re: [agi] How To Create General AI Draft2
>
> The mind cannot determine whether or not -every- instance of a kind
> of object is that kind of object. I believe that the problem must be a
> problem of complexity, and it is just that the mind is much better at dealing
> with complicated systems of possibilities than any computer program. A
> young child first learns that certain objects are called chairs, and that
> the furniture objects that he sits on are mostly chairs. In a few cases,
> after seeing an odd object that is used as a chair for the first time (like
> an odd outdoor chair fashioned from twisted pieces of wood), he might not
> know that it is a chair, or upon reflection wonder whether it is. And think
> of odd furniture that appears, comes into fashion for a while, and then
> disappears (like the bean bag chair). The question for me is not what the
> smallest pieces of visual information necessary to represent the range and
> diversity of kinds of objects are, but how these diverse examples would be
> woven into highly compressed and heavily cross-indexed pieces of knowledge
> that could be accessed quickly and reliably, especially for the most common
> examples that the person is familiar with.
>
> Jim Bromer
>
> On Mon, Aug 9, 2010 at 2:16 AM, John G. Rose <[email protected]> wrote:
>> Actually this is quite critical.
>>
>> Defining a chair - which would agree with each instance of a chair in the
>> supplied image - is the way a chair should be defined and is the way the
>> mind processes it.
>> John

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
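The "highly compressed and heavily cross-indexed pieces of knowledge" in Jim's quoted message can be read, in part, as an inverted index: each feature points at every stored example that exhibits it, so familiar examples are retrieved quickly from any combination of features. A minimal sketch (not from the thread; the feature names and examples are invented for illustration):

```python
from collections import defaultdict

# Minimal inverted index over object features, one possible reading of
# the "heavily cross-indexed" knowledge store Jim describes. All the
# examples and feature names below are invented for illustration.
index = defaultdict(set)

def learn(name, features):
    """Cross-index an example under each of its features."""
    for f in features:
        index[f].add(name)

def recall(features):
    """Return the known examples sharing every queried feature."""
    sets = [index[f] for f in features]
    return set.intersection(*sets) if sets else set()

learn("kitchen chair", {"legs", "seat", "back"})
learn("bean bag chair", {"seat", "soft"})
learn("stool", {"legs", "seat"})

print(recall({"legs", "seat"}))  # {'kitchen chair', 'stool'}
print(recall({"seat"}))          # all three stored examples
```

Note what the sketch does not solve, which is exactly Jim's and Mike's point: a radically new chair whose features were never indexed (the twisted-wood chair) would simply fail to match.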
