On Sat, Mar 29, 2008 at 3:58 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
>  IMO there is one key & in fact crucial distinction between AI & AGI - which
>  hinges on "adaptivity".
>
>  An AI program has "special(ised) adaptivity" -can adapt its actions but only
>  within a known domain
>
>  An AGI has "general adaptivity"- can also adapt its actions to deal with
>  unknown, unfamiliar domains.
>

There is no "general adaptivity" - all learning algorithms are
constrained to efficient learning only in narrow domains. The problem
with narrow AI systems is that their performance either relies on
people manually designing learning algorithms that work well on a
given narrow domain, or requires an enormous amount of data when less
biased algorithms are used. In the first case, each new problem
requires a human in the loop and months of thinking about it: in
effect, a human acquires information about the target domain using his
own intelligence and then encodes that information in parameterized
form, so that it can be tweaked a little to solve the "last mile"
problem of adapting to particular features of the target domain that
are hard to encode manually or differ from case to case. In the second
case, the algorithm is terrible at learning in the target domain, but
when you have a whole Internet of data it doesn't necessarily look
that way, especially since you don't have to tweak the algorithm to
the problem.
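To make the bias-versus-data tradeoff concrete, here is a minimal toy
sketch (my own illustration, not from any particular system): a learner
whose built-in bias matches the target domain generalizes from two
examples, while an unbiased memorizer needs to see every point.

```python
# Toy illustration of inductive bias: the target is an unknown linear
# function over a small domain. All names here are hypothetical.
a, b = 3, 7
domain = list(range(10))
truth = {x: a * x + b for x in domain}

# Biased learner: assumes linearity, so two examples pin it down.
def fit_linear(examples):
    (x0, y0), (x1, y1) = examples[:2]
    slope = (y1 - y0) // (x1 - x0)
    return lambda x: slope * (x - x0) + y0

# Unbiased learner: memorizes pairs, predicts nothing for unseen inputs.
def fit_table(examples):
    table = dict(examples)
    return lambda x: table.get(x)

examples = [(0, truth[0]), (1, truth[1])]  # only two data points
linear = fit_linear(examples)
table = fit_table(examples)

linear_correct = sum(linear(x) == truth[x] for x in domain)
table_correct = sum(table(x) == truth[x] for x in domain)
print(linear_correct, table_correct)  # 10 vs 2: bias buys generalization
```

The point is only that the linear learner "pays" for its data
efficiency by being useless on non-linear domains, which is the
conservation argument in miniature.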

Making a general AI learn requires understanding the "general AI"
domain, and this domain is not "more general" than those of some
narrow AIs. Probability is conserved: a general AI is necessarily
restricted in its learning ability. So, while the ability to represent
an arbitrary algorithm would be nice (and is absent from many popular
machine learning algorithms), it doesn't mean the system will guess
any algorithm given only partial data about it; it might need to
actually look at it. But it does need to be able to guess the kind of
information that people are good at guessing, and the core of the
problem is identifying exactly what it is that we learn, or infer from
an incomplete description, as opposed to merely memorizing when it is
presented in whole.
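A tiny sketch of why representational power alone doesn't settle the
question (again my own hypothetical illustration): even over a domain
of three inputs with boolean outputs, a single observed data point
leaves many representable hypotheses consistent, so something beyond
mere expressiveness must pick among them.

```python
from itertools import product

# Count the boolean functions over a 3-element domain that agree with
# one observed input-output pair. The numbers are illustrative only.
domain = [0, 1, 2]
observed = {0: True}  # partial data: a single example

hypotheses = [dict(zip(domain, outs))
              for outs in product([False, True], repeat=len(domain))]
consistent = [h for h in hypotheses
              if all(h[x] == y for x, y in observed.items())]
print(len(hypotheses), len(consistent))  # 8 total, 4 still consistent
```

Being able to write down all eight functions is the easy part; the
hard part is the prior that prefers the "right" four-way tie-breaker,
which is exactly the kind of information the email says we infer
rather than memorize.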


Here's a paper that may give some intuition about this issue:

http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf
Yoshua Bengio, Yann LeCun. 2007. Scaling learning algorithms towards AI.

-- 
Vladimir Nesov
[EMAIL PROTECTED]

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
