[EMAIL PROTECTED] wrote:
> What is the universal test for the ability of any given AI SYSTEM
> to Perceive, Reason, and Act?
>
> Is there such a test? 
>
> What is the closest test known to date? 
>
> Dan Goe
>
>
>
> ----------------------------------------------------
>
> From: William Pearson <[EMAIL PROTECTED]>
> To: [email protected]
> Subject: Re: [agi] AGI bottlenecks
> Date: Fri, 2 Jun 2006 14:30:20 +0100
>   
>> On 01/06/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>>
>>     
>>> I had similar feelings about William Pearson's recent message about
>>> systems that use reinforcement learning:
>>>
>>>       
>>>> A reinforcement scenario, from Wikipedia, is defined as
>>>>
>>>> "Formally, the basic reinforcement learning model consists of:
>>>>
>>>>  1. a set of environment states S;
>>>>  2. a set of actions A; and
>>>>  3. a set of scalar "rewards" in the Reals.
>>>> "
>>>>         
>>> Here is my standard response to Behaviorism (which is what the above
>>> reinforcement learning model actually is):  Who decides when the rewards
>>> should come, and who chooses what are the relevant "states" and
>>> "actions"?
>> The rewards I don't deal with; I am interested in external brain
>> add-ons rather than autonomous systems, so the reward system will be
>> closely coupled to a human in some fashion.
>>
>> In the rest of the post I was trying to outline a system that could
>> alter what it considered actions and states (and bias, learning
>> algorithms, etc.). The RL definition was just there as an example to
>> work against.
>>
>>     
>>> If you find out what is doing *that* work, you have found your
>>> intelligent system.  And it will probably turn out to be so enormously
>>> complex, relative to the reinforcement learning part shown above, that
>>> the above formalism (assuming it has not been discarded by then) will be
>>> almost irrelevant.
>>>       
>> The internals of the system will be enormously more complex compared
>> to the reinforcement part I described. But it won't make that
>> irrelevant. What goes on inside a PC is vastly more complex than the
>> system that governs the permissions of what each *nix program can do.
>> This doesn't mean the permission-governing system is irrelevant.
>>
>> Like the permissions system in *nix, the reinforcement system is
>> only supposed to govern who is allowed to do what, not what actually
>> happens. Unlike the permission system, it is supposed to get that from
>> the effect of the programs on the environment.  Without it, both sorts
>> of systems would be highly unstable.
>>
>> I see it as a necessity for complete modular flexibility. If you get
>> one of the bits that does the work wrong, or wrong for the current
>> environment, how do you allow it to change?
>>
>>     
>>> Just my deux centimes' worth.
>>>
>>>       
>> Appreciated.
>>
>>     
>>> On a more positive note, I do think it is possible for AGI researchers
>>> to work together within a common formalism.  My presentation at the
>>> AGIRI workshop was about that, and when I get the paper version of the
>>> talk finalized I will post it somewhere.
>>>
>>>       
>> I'll be interested, but sceptical.
>>
>>   Will
>>
>> -------
>> To unsubscribe, change your address, or temporarily deactivate your 
>>     
> subscription, 
>   
>> please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
>>     
This is a response, but it requires a full robot rather than a bare AGI:
drive a car across town without either having an accident or causing
one.  This requires reaching a destination given only an address and a
map.  (You could scan in a standard AAA map for this purpose.  Mapquest
is cheating.)  It's legal for the entity to know several of the major
streets ahead of time, but not the street on which the destination is
located.

Solving this requires, at minimum, the ability to perceive, reason, and
act.  It also has several other requirements, which I will summarize as
"judgment", and it demands lots of interaction between the various
components.

Note:  This is a task that many people find difficult.  Also note that
AAA maps are ALL inaccurate in their details.  This is partly because of
the tremendous effort involved in keeping them up to date, and partly
because of the legal need to prove that someone else copied your map.
(If they copied your errors, then that's good evidence that they copied
your map.)

You could probably construct an analogous task for an AGI involving
searching for information on the web, but this description is
intuitively obvious to a person.

-------
To unsubscribe, change your address, or temporarily deactivate your
subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
