On Sat, Mar 29, 2008 at 8:58 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
>>> Robert/Ben: In fact, I would suggest that AGI researchers start to
>>> distinguish themselves from narrow AI by replacing the over-ambiguous
>>> concepts from AI, one by one. For example:
>>>
>>> knowledge representation = world model
>>> learning = world model creation
>>> reasoning = world model simulation
>>> goal = life goal (to indicate that we have the ambition of building
>>> something really alive)
>>>
>>> If we say something like "world model creation", it seems pretty
>>> obvious that we do not mean anything like just tweaking a few bits in
>>> some function.
>>
>> Yet, those terms are used for quite shallow things in many Good Old
>> Fashioned robotics architectures ;-)
>
> IMO there is one key & in fact crucial distinction between AI & AGI -
> which hinges on "adaptivity".
>
> An AI program has "special(ised) adaptivity" - it can adapt its actions,
> but only within a known domain.
>
> An AGI has "general adaptivity" - it can also adapt its actions to deal
> with unknown, unfamiliar domains.

-------------------------------------------

That distinction in terms is not generally recognized. Most AI programs do not show a wide range of adaptivity in their learning; however, most of us interested in the field believe there will be greater achievements in the future. The term AGI, as used in this group, is meant to denote the general adaptivity you mention, which would be required for general artificial intelligence. AI, by contrast, is an inclusive term that carries different meanings, but it certainly encompasses the future of AI research, including general AI. The way you expressed "general adaptivity" is interesting. People, like computers, have only a constrained ability to learn, yet they can obviously learn in ways that computers cannot. And there is ample evidence that AI programming is improving.
So the issue is not just general adaptivity but the range of adaptivity, or the ranges of different kinds of adaptivity. The reason I am making this point is that by examining the problem with a little more precision, or at least a little more differentiation, some of the more obscure issues may eventually be revealed.

Jim Bromer

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com
