So if I have a system that is close to AGI, I have no real way of knowing
it, right?

Even if I believe that my system is a true AGI, there is no way of
convincing others irrefutably that the system is indeed an AGI and not just
an advanced AI system.

I have read about the toy box problem and the rock wall problem, but I am
sure they would still not convince many people.

I wanted to know whether there is any consensus on a general problem that
can be solved by a true AGI and only by a true AGI. Without such a test
bench, how will we know whether we are moving closer to our goal or further
from it? There is no map.
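
To make the idea concrete, here is a rough Python sketch of the kind of
test bench I have in mind. Everything in it - the Task class, the
attempt/feedback interface - is hypothetical and only illustrative; the
point is that the same unmodified agent must face tasks from domains it
has never seen.

# All names here are hypothetical - a sketch, not a real framework.

class Task:
    """One problem instance from some domain (toy box, rock wall, ...)."""

    def __init__(self, domain, description, check):
        self.domain = domain            # e.g. "rock-wall"
        self.description = description  # natural-language statement of the task
        self.check = check              # scorer: solution -> bool


def evaluate(agent, tasks, trials_per_task=5):
    """Score one unmodified agent across domains it has never seen.

    The agent exposes only a generic attempt(description) method and may
    learn from feedback between trials, but its code is never touched
    between domains - that restriction is the whole test.
    """
    results = {}
    for task in tasks:
        successes = 0
        for _ in range(trials_per_task):
            solution = agent.attempt(task.description)
            ok = task.check(solution)
            agent.feedback(task.description, solution, ok)  # learning signal
            successes += ok
        results[task.domain] = successes / trials_per_task
    return results

A narrow AI would need new code for each new domain; a true AGI should
improve its scores given nothing but the feedback signal.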

Deepak



On Sun, Jul 18, 2010 at 11:50 PM, Mike Tintner <tint...@blueyonder.co.uk> wrote:

>  I realised that what is needed is a *joint* definition *and*  range of
> tests of AGI.
>
> Benjamin Johnston has submitted one valid test - the toy box problem. (See
> archives.)
>
> I have submitted another, still simpler, valid test - build a rock wall
> from the rocks given (or fill a hole in the earth with rocks).
>
> However, I see that there are no valid definitions of AGI that explain
> what AGI is generally, and why these tests are indeed tests of AGI. Google
> it - there are very few definitions of AGI or Strong AI, period.
>
> The most common - AGI is human-level intelligence - is an embarrassing
> non-starter: what distinguishes human intelligence? No explanation is
> offered.
>
> The other two are also inadequate, if not as bad. Ben's - "solves a
> variety of complex problems in a variety of complex environments" - nope,
> so does a multitasking narrow AI; complexity does not distinguish AGI.
> Ditto Pei's - something to do with "insufficient knowledge and
> resources...". "Insufficient" is open to narrow-AI interpretations and
> reducible to mathematically calculable probabilities or uncertainties.
> That doesn't distinguish AGI from narrow AI.
>
> The one thing we should all be able to agree on (but who can be sure?) is
> that:
>
> ** an AGI is a general intelligence system, capable of independent
> learning**
>
> i.e. capable of independently learning new activities/skills with minimal
> guidance or even, ideally, with zero guidance (as humans and animals are) -
> and thus acquiring a "general", "all-round" range of intelligence.
>
> This is an essential AGI goal - the capacity to keep entering and
> mastering new domains of both mental and physical skills WITHOUT being
> specially programmed each time - and it crucially distinguishes AGI from
> narrow AIs, which have to be individually programmed anew for each new
> task. Ben's AGI dog exemplified this in a very simple way - the dog is
> supposed to be able to learn to fetch a ball with only minimal
> instruction, as real dogs do - they can learn a whole variety of new
> skills with minimal instruction. But I am confident Ben's dog can't
> actually do this.
>
> However, the independent-learning definition, while focussing on the
> distinctive AGI goal, is still not detailed enough by itself.
>
> It requires further identification of the **cognitive operations** which
> distinguish AGI, and which are exemplified by the above tests.
>
> [I'll stop there for interruptions/comments & continue another time].
>
>  P.S. Deepakjnath,
>
> It is vital to realise that the overwhelming majority of AGI-ers do not
> *want* an AGI test - Ben has never gone near one, and is merely typical in
> this respect. I'd put almost all AGI-ers here in the same league as the US
> banks, who only want mark-to-fantasy rather than mark-to-market tests of
> their assets.



-- 
cheers,
Deepak


