Deepak,

I think you would be much better off focusing on something more practical.
Understanding a movie and all the myriad things going on, their
significance, etc. - that's AI-complete. There is no way you are going to
get there without a hell of a lot of steps in between. So, you might as well
focus on the steps required to get there. Such a test is so complicated
that you cannot even start, except by looking for simpler test cases and goals.


My approach to testing AGI has been to define what AGI must accomplish,
which I break down into the following steps:
1) understand the environment
2) understand ones own actions and how they affect the environment
3) understand language
4) learn goals from other people through language
5) perform planning and attempt to achieve goals
6) other miscellaneous requirements.

Each step must be accomplished in a general way. By general, I mean that the
system can solve many, many problems with the same programming.

Each step must be done in order, because each step requires the previous
ones in order to proceed. So, to me, the most important place to start is
general environment understanding.
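
To make that ordering concrete, here is a rough sketch of how I think of
the layering - purely illustrative Python; the class and method names are
my own invention, not anyone's actual system:

    # Hypothetical sketch: the capability stack from the steps above.
    # Each layer only makes sense given the layers beneath it.

    class Agent:
        def understand_environment(self, observation):
            """Step 1: build an internal model of the world."""
            raise NotImplementedError

        def predict_own_effects(self, action):
            """Step 2: predict how an action changes that model."""
            raise NotImplementedError

        def understand_language(self, utterance):
            """Step 3: map language onto the model from step 1."""
            raise NotImplementedError

        def adopt_goal(self, utterance):
            """Step 4: turn an understood utterance into a goal state."""
            raise NotImplementedError

        def plan_and_act(self, goal):
            """Step 5: search over predicted effects (step 2) to reach the goal."""
            raise NotImplementedError

The point of writing it this way is that you cannot fill in a later method
in any general fashion until the earlier ones work.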

Then, once you know where to start, you pick more specific goals and
test cases. How do you develop and test general environment understanding?
What is a simple test case you can develop on? What are the fundamental
problems and principles involved? What is required to solve these problems?
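
As one purely hypothetical illustration of a simple test case (my own toy
example, not something anyone has implemented): generate several unrelated
little worlds with random dynamics, and score how well the *same unchanged
learner* predicts transitions in each of them. A first harness could be as
small as:

    # Toy harness for "general environment understanding": one learner,
    # many randomly generated worlds, no per-world reprogramming.
    import random

    def make_world(n_states, seed):
        """Random deterministic dynamics: (state, action) -> next state."""
        rng = random.Random(seed)
        return {(s, a): rng.randrange(n_states)
                for s in range(n_states) for a in range(4)}

    def evaluate(learner, world, n_states, trials=2000):
        """Show the learner transitions; score its held-out predictions."""
        rng = random.Random(0)
        hits = total = 0
        for _ in range(trials):
            s, a = rng.randrange(n_states), rng.randrange(4)
            guess = learner.predict(s, a)         # may be None early on
            if guess is not None:
                total += 1
                hits += (guess == world[(s, a)])
            learner.observe(s, a, world[(s, a)])  # learn from the outcome
        return hits / max(total, 1)

    class Memorizer:
        """Baseline that just remembers exact transitions."""
        def __init__(self):
            self.table = {}
        def predict(self, s, a):
            return self.table.get((s, a))
        def observe(self, s, a, s_next):
            self.table[(s, a)] = s_next

    for seed in range(5):                # generality = many unseen worlds
        print(seed, evaluate(Memorizer(), make_world(20, seed), 20))

A memorizer is obviously not general understanding - the point is only that
a harness like this lets you swap in candidate learners and compare them
across many worlds with the same programming.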

Those are the sorts of tests you should be considering. But that only comes
after you decide what AGI requires and what steps are required. Maybe you'll agree
with me, maybe you won't. So, that's how I would recommend going about it.

Dave

On Sun, Jul 18, 2010 at 4:04 PM, deepakjnath <deepakjn...@gmail.com> wrote:

> Let me clarify. As you all know, there are some things computers are good at
> doing and some things that humans can do but a computer cannot.
>
> One of the tests that I was thinking about recently is to have two movies
> shown to the AGI. Both movies would have the same story, but one would be a
> totally different remake of the film, probably in a different language and
> setting. If the AGI is able to understand the subplot and say that the
> storyline is similar in the two movies, then it could be a good test for
> AGI.
>
> The ability of a system to understand its environment and underlying
> subplots is an important requirement of AGI.
>
> Deepak
>
> On Mon, Jul 19, 2010 at 1:14 AM, Mike Tintner <tint...@blueyonder.co.uk> wrote:
>
>>  Please explain/expound freely why you're not "convinced" - and indicate
>> what you expect - and I'll reply, but it may not be till tomorrow.
>>
>> Re your last point, there definitely is no consensus on a general
>> problem/test OR a definition of AGI.
>>
>> One flaw in your expectations seems to be a desire for a single test -
>> almost by definition, there is no such thing as
>>
>> a) a single test - i.e. there should be at least a dual or serial test -
>> having passed any given test, like the rock or toy test, the AGI must be
>> presented with a new "adjacent" test for which it has had no preparation,
>> like say building with cushions or sandbags, or packing with fruit (and
>> neither the rock nor the toy test states that clearly)
>>
>> b) one kind of test - this is an AGI, so it should be clear that if it can
>> pass one kind of test, it has the basic potential to go on to many different
>> kinds, and it doesn't really matter which kind of test you start with - that
>> is partly the function of having a good definition of AGI.
>>
>>
>>  *From:* deepakjnath <deepakjn...@gmail.com>
>> *Sent:* Sunday, July 18, 2010 8:03 PM
>> *To:* agi <agi@v2.listbox.com>
>> *Subject:* Re: [agi] Of definitions and tests of AGI
>>
>> So if I have a system that is close to AGI, I have no way of really
>> knowing it, right?
>>
>> Even if I believe that my system is a true AGI, there is no way of
>> convincing others irrefutably that the system is indeed an AGI and not
>> just an advanced AI system.
>>
>> I have read the toy box problem and the rock wall problem, but I am sure
>> not many people would be convinced by them.
>>
>> I wanted to know if there is any consensus on a general problem which can
>> be solved, and only solved, by a true AGI. Without such a test bench, how
>> will we know whether we are moving closer to or further from our goal?
>> There is no map.
>>
>> Deepak
>>
>>
>>
>> On Sun, Jul 18, 2010 at 11:50 PM, Mike Tintner
>> <tint...@blueyonder.co.uk> wrote:
>>
>>>  I realised that what is needed is a *joint* definition *and* range of
>>> tests of AGI.
>>>
>>> Benjamin Johnston has submitted one valid test - the toy box problem (see
>>> the archives).
>>>
>>> I have submitted another, still simpler, valid test - build a rock wall
>>> from given rocks (or fill a hole in the earth with rocks).
>>>
>>> However, I see that there are no valid definitions of AGI that explain
>>> what AGI is generally, and why these tests are indeed tests of AGI. Google
>>> it - there are very few definitions of AGI or Strong AI, period.
>>>
>>> The most common - AGI is human-level intelligence - is an embarrassing
>>> non-starter: what distinguishes human intelligence? No explanation is
>>> offered.
>>>
>>> The other two are also inadequate, if not as bad. Ben's - "solves a variety
>>> of complex problems in a variety of complex environments" - nope: so does a
>>> multitasking narrow AI. Complexity does not distinguish AGI. Ditto Pei's -
>>> something to do with "insufficient knowledge and resources...".
>>> "Insufficient" is open to narrow-AI interpretations and is reducible to
>>> mathematically calculable probabilities or uncertainties. That doesn't
>>> distinguish AGI from narrow AI.
>>>
>>> The one thing we should all be able to agree on (but who can be sure?) is
>>> that:
>>>
>>> ** an AGI is a general intelligence system, capable of independent
>>> learning**
>>>
>>> i.e. capable of independently learning new activities/skills with minimal
>>> guidance or even, ideally, with zero guidance (as humans and animals are) -
>>> and thus acquiring a "general", "all-round" range of intelligence.
>>>
>>> This is an essential AGI goal - the capacity to keep entering and
>>> mastering new domains of both mental and physical skills WITHOUT being
>>> specially programmed each time - that crucially distinguishes it from
>>> narrow AIs, which have to be individually programmed anew for each new
>>> task. Ben's AGI dog exemplified this in a very simple way - the dog is
>>> supposed to be able to learn to fetch a ball, with only minimal
>>> instructions, as real dogs do - they can learn a whole variety of new
>>> skills with minimal instruction. But I am confident Ben's dog can't
>>> actually do this.
>>>
>>> However, the independent-learning definition, while focussing on the
>>> distinctive AGI goal, is still not detailed enough by itself.
>>>
>>> It requires further identification of the **cognitive operations** which
>>> distinguish AGI, and which are exemplified by the above tests.
>>>
>>> [I'll stop there for interruptions/comments & continue another time].
>>>
>>>  P.S. Deepakjnath,
>>>
>>> It is vital to realise that the overwhelming majority of AGI-ers do not
>>> *want* an AGI test - Ben has never gone near one, and is merely typical in
>>> this respect. I'd put almost all AGI-ers here in the same league as the US
>>> banks, who only want mark-to-fantasy rather than mark-to-market tests of
>>> their assets.
>>>
>>
>>
>>
>> --
>> cheers,
>> Deepak
>>
>
>
>
> --
> cheers,
> Deepak
>


