2008/5/27 Mike Tintner <[EMAIL PROTECTED]>:
> Will:And you are part of the problem insisting that an AGI should be tested
> by its ability to learn on its own and not get instruction/help from
> other agents be they human or other artificial intelligences.
>
> I insist[ed] that an AGI should be tested on its ability to solve some
> *problems* on its own - cross-domain problems - just as we do. Of course, it
> should learn from others, and get help on other problems, as we do too.

But you don't test for that, and as the Loebner Prize shows, you only
tend to get what you test for.

> But
> if it can't solve many general problems on its own - which seemed OK by you
> (after setting up your initially appealing submersible problem - solutio
> interrupta!) - then it's only a narrow AI.
>
I am happy for the baby machine (which is what we will be dealing with
to start with) not to be able to solve general problems on its own.
Later on, I would be disappointed if it could not.

  Will Pearson


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?member_id=8660244&id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com