I was specifically referring to your comment ending in "BY ITSELF".
> Jeez, Will, the point of Artificial General Intelligence is that it
> can start adapting to an unfamiliar situation and domain BY ITSELF.

I believe this statement is just plain incorrect. Where you got "hard part of AGI" and "narrow AI parts" from my post, I have no idea. Using loaded words like "cheating" when someone suggests "teaching" an AGI seems more than a little over the top.

If you ask, "Show me your AGI and prove that it works," you won't get a positive answer from me or anyone else on this list. We all agree that AGI doesn't exist; no argument there. This list discusses the different ways of potentially getting there, and you seem very interested in actually seeing an AGI. Why, then, criticize those who are trying to produce exactly what you demand to be shown?

The problem with AGI is that no one has been there. No one can possibly know for certain that any solution is correct; hence the detailed and pointed discussions on this list, which generate new ideas and weed out approaches that others can convince you won't work. I think most people on this list don't actually get dissuaded from their beliefs, but they do come up with new and useful ideas even while defending their own positions. Everyone is very busy, yet most of the names on this list keep coming up year after year.

Below you demand "no excuses or distractions". Good luck with that. The lack of a satisfactory answer to any of your questions says nothing about whether the ideas discussed on this list are good, bad, or otherwise; it only means your question wasn't addressed to your satisfaction. You haven't shown that any of your problems are necessary or even relevant to AGI, and you demand something (output from a working AGI) that doesn't exist. Why not change your approach and contribute something of value to the others on the list?
I have many ideas about AGI, but you haven't shown any respect for the ideas that others have already generously sent your way, so why would I bother?

David Clark

> -----Original Message-----
> From: Mike Tintner [mailto:[EMAIL PROTECTED]
> Sent: March-03-08 1:18 PM
> To: [email protected]
> Subject: Re: [agi] Thought experiment on informationally limited
> systems
>
> Yes, an AGI will have to be able to do narrow AI.
>
> What you are doing here - and everyone is doing over and over and
> over - is saying: "Yes, I know there's a hard part to AGI, but can I
> please concentrate on the easy parts - the narrow AI parts - first?"
>
> If I give you a problem, I don't want to know whether you can take
> dictation and spell, I just want to know whether you can solve the
> problem - and not make excuses, or create distractions.
>
> It's simple - do you have any ideas about the problem of AGI - ideas
> for generalizing skills (see below) - "cross-over" ideas - or not?
>
> David:
>
> > How intelligent would any human be if it couldn't be taught by other
> > humans?
> >
> > Could a human ever learn to speak by itself? The few times this has
> > happened in real life, the person was permanently disabled and not
> > capable of becoming a normal human being.
> >
> > If humans can't become human without the help of other humans, why
> > should this be a criterion for AGI?
> >
> > David Clark
> >
> > PS I am not suggesting that explicitly programming 100% of an AGI is
> > either doable or desirable, but some degree of detailed teaching must
> > be a requirement for all on this list who dream of creating an AGI, no?
> >
> >> -----Original Message-----
> >> From: Mike Tintner [mailto:[EMAIL PROTECTED]
> >> Sent: March-02-08 5:36 AM
> >> To: [email protected]
> >> Subject: Re: [agi] Thought experiment on informationally limited
> >> systems
> >>
> >> Jeez, Will, the point of Artificial General Intelligence is that it
> >> can start adapting to an unfamiliar situation and domain BY ITSELF.
> >> And your FIRST and only response to the problem you set was to say:
> >> "I'll get someone to tell it what to do."
> >>
> >> IOW you simply avoided the problem and thought only of cheating. What
> >> a solution, or merest idea for a solution, must do is tell me how that
> >> intelligence will start adapting by itself - will generalize from its
> >> existing skills to cross over domains.
> >>
> >> Then, as my answer indicated, it may well have to seek some
> >> instructions and advice - especially and almost certainly if it wants
> >> to acquire a whole new major skill, as we do, by taking courses etc.
> >>
> >> But a general intelligence should be able to adapt to some unfamiliar
> >> situations entirely by itself - like perhaps your submersible
> >> situation. No guarantee that it will succeed in any given situation
> >> (as there isn't with us), but you should be able to demonstrate its
> >> power to adapt sometimes.
> >>
> >> In a sense, you should be appalled with yourself that you didn't try
> >> to tackle the problem - to produce a "cross-over" idea. But since
> >> literally no one else in the field of AGI has the slightest
> >> "cross-over" idea - i.e. is actually tackling the problem of AGI - and
> >> the whole culture is one of avoiding the problem, it's to be expected.
> >> (You disagree - show me one, just one, cross-over idea anywhere.
> >> Everyone will give you a v. detailed, impressive timetable for how
> >> long it'll take them to produce such an idea; they just will never
> >> produce one.
> >> Frankly, they're too scared).
> >>
> >> Mike Tintner <[EMAIL PROTECTED]> wrote:
> >> >
> >> >> You must first define its existing skills, then define the new
> >> >> challenge with some degree of precision - then explain the
> >> >> principles by which it will extend its skills. It's those principles
> >> >> of extension/generalization that are the be-all and end-all (and
> >> >> NOT, btw, as you suggest, any helpful info that the robot will
> >> >> receive - that, sir, is cheating - it has to work these things out
> >> >> for itself - although perhaps it could *ask* for info).
> >> >
> >> > Why is that cheating? Would you never give instructions to a child
> >> > about what to do? Taking instructions is something that all
> >> > intelligences need to be able to do, but it should be minimised. I'm
> >> > not saying it should take instructions unquestioningly either;
> >> > ideally, it should figure out whether the instructions you give are
> >> > any use for it.
> >> >
> >> > Will Pearson

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com
