I would say "rote memorization" and knowledge / data, IS understanding.

I look outside and I see a tree, I understand that it is a tree, I know it's a
tree, I know about leaves and grass and how it grows...  I haven't learned
anything new, I memorized all that from books and teaching etc.

I would further say that, given the level of knowledge and understanding about
the tree, I was intelligent in that area: you could ask me questions and I
could answer them, I could conjecture what would happen if I dug the tree up,
etc.

Learning does not seem to be a requirement for intelligence, though a good
intelligence, and a growing intelligence, would need to learn.

James Ratcliff

Mark Waser <[EMAIL PROTECTED]> wrote:
Hi James,

I'm going to handle your questions in reverse order . . . .
  
> Do you think learning is a requirement for understanding, or intelligence?

Yes, I believe that learning is a requirement for intelligence.  Intelligence
is basically how fast you learn.  Zero learning equals zero intelligence.
  
> a reservation service has a world model as well, it knows about 1000+
> airline routes and times, it talks to you, saves your preferences for an
> outgoing flight, and can use that to think and come up with a suggestion
> for an incoming flight, and which airline to take
  
A reservation service does indeed have a world model but it is a *very*
simple model with very few object types, relationships, and actions.  The
1000+ airline routes and times are merely data within the model and even if
they numbered a million they would not increase the size of the *model*.  But
the most important thing is that the model is absolutely fixed -- i.e. the
system doesn't learn.
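
To make the model-versus-data point concrete, here is a rough sketch (all
names and data are purely hypothetical, not taken from any real reservation
system) of what such a fixed model amounts to:

    # The *model* is the fixed set of object types, relations, and actions.
    # The routes are just *data* stored inside it -- a million more routes
    # never enlarge the model, and nothing below updates itself from experience.
    from dataclasses import dataclass

    @dataclass
    class Flight:                        # the one object type the model knows
        airline: str
        origin: str
        destination: str
        departs: str                     # "HH:MM", kept deliberately simple

    class ReservationModel:
        def __init__(self, flights):
            self.flights = flights       # data, not model
            self.preferred_airline = None

        def save_preference(self, airline):
            self.preferred_airline = airline

        def suggest(self, origin, destination):
            # The only "reasoning" the model supports: filter, then prefer.
            matches = [f for f in self.flights
                       if f.origin == origin and f.destination == destination]
            preferred = [f for f in matches
                         if f.airline == self.preferred_airline]
            return preferred or matches
        # Nothing anywhere adds a new object type, relation, or action: the
        # model itself never grows, however much data is loaded into it.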
  
> and an expert system as having more intelligence due to a richer world
> model and more ability to give answers.

I would say that the expert system is more capable but would disagree that it
has more intelligence (unless it has some sort of learning functionality).
  
> If we took a 10 year old child, and stopped their ability to learn, they
> would still have the ability to do all the things they did before, can go
> to the store, and play and fix breakfast etc.

Again, I would phrase this as the child still has their old capabilities but
their intelligence has dropped to zero -- because realistically, they would
not maintain the ability to do all the things they did before.  Initially, yes
-- BUT -- slowly but surely, as their environment changed, they would be less
and less capable of dealing with it as they couldn't learn what they needed to
cope with the change.
  
> But understanding itself doesn't have any special requirement that it
> understand New things, just the things that it is currently considering.

Have you seen the things that you're currently considering before?  If so,
how is rote memorization different from understanding?
  
          Mark
  
----- Original Message -----
From: James Ratcliff
To: [email protected]
Sent: Friday, May 04, 2007 11:24 AM
Subject: Re: [agi] rule-based NL system
   

Two problems unfortunately arise quickly there,
1. Internal World Model.
  An intelligence must have some form of internal world model, because this is
what it operates on internally; it is its memory.
  People have a complex world model including everything we have built up
over years, but a reservation service has a world model as well: it knows
about 1000+ airline routes and times, it talks to you, saves your
preferences for an outgoing flight, and can use that to think and come up with
a suggestion for an incoming flight, and which airline to take.  If the
system contains weather data as well, and can use it, then it could be more
intelligent.
  It has a world model built up there, not as complex, but definitely there,
and I would rate that as having some level of "intelligence", and an expert
system as having more intelligence due to a richer world model and more
ability to give answers.
2. Learning.
  Probably a controversial point here, but:
Do you think learning is a requirement for understanding, or intelligence?
For an intelligence, I don't believe it is.  If we took a 10 year old child,
and stopped their ability to learn, they would still have the ability to do
all the things they did before, can go to the store, and play and fix
breakfast etc.
  Now for an AGI to grow and be able to do more and more things, it needs to
have the ability to learn.  But understanding itself doesn't have any special
requirement that it understand New things, just the things it is
currently considering.

James Ratcliff

Mark Waser <[EMAIL PROTECTED]> wrote:
> What definition of intelligence would you like to use?

Legg's definition is perfectly fine for me.
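
(For reference -- I'm quoting from memory, so check Legg & Hutter's paper for
the exact statement -- the universal intelligence measure is roughly

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

i.e. an agent \pi is scored by the expected reward V_\mu^\pi it earns in every
computable environment \mu, weighted by that environment's simplicity
2^{-K(\mu)}, K being Kolmogorov complexity.)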

> How about the "answering      machine" test for intelligence? A machine 
> passes 
> the
> test      if people prefer talking to it over talking to a human. For 
> example,      
> I
> prefer to buy airline tickets online rather than talk to      a travel agent. 
> To
> pass the answering machine test, I would      make the same preference given 
> only
> voice communication,      even if I know I won't be put on hold, charged a 
> higher
>      price, etc. It does not require passing the Turing test. I may be 
>      perfectly
> aware it is a machine. You may substitute instant messages      for voice if 
> you
> wish.

What does "being preferred by      humans" have to do with (almost any 
definition 
of) intelligence? If you      mean that it can solve any problem (i.e. tell a 
caller how to reach any      goal -- or better yet even, assist them) then, 
sure, 
it works for me. If      it's only dealing with a limited domain, like being a 
travel agent, then      I'd call it a narrow AI. Intelligence is only as good 
as 
your model of      the world and what it allows you to do (which is pretty much 
a      
paraphrasing of Legg's definition as far as I'm concerned). And if      you're 
not using an expandable model, as a calculator is not, then      you're not 
intelligent.
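
As a rough illustration of what I mean by "expandable" (toy code with
hypothetical names, not a proposal for how to actually build it): a model
that can take on kinds of facts and relations it was never written to hold,
and then answer questions that use them -- something a calculator's closed
model of numbers and arithmetic can never do:

    class ExpandableModel:
        def __init__(self):
            self.facts = set()               # (subject, relation, object) triples

        def learn(self, subject, relation, obj):
            # New relation names are accepted freely -- the schema is open.
            self.facts.add((subject, relation, obj))

        def ask(self, subject, relation):
            return {o for s, r, o in self.facts
                    if s == subject and r == relation}

    m = ExpandableModel()
    m.learn("oak", "is_a", "tree")           # relations it had never seen before
    m.learn("oak", "has_part", "leaves")
    print(m.ask("oak", "has_part"))          # -> {'leaves'}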

> I claim that a system that can pass this test "understands" my words and
> knows what they mean, even if the words are not grounded in nonverbal
> sensorimotor experience. Its world model will be different than that of a
> human, but so what?

And I'll claim that it doesn't understand a thing UNLESS it has a model of
its world (which could be text-only for all I care but which has the
behavior necessary for it to accurately answer questions about the real
world) that it is relating your words to. If it has that and can add to its
world as new things are introduced to it from the "real" world, then I'm
very willing to say that it is intelligent and that it understands its
world. If not, you just have an unintelligent program.

> Its world model will be different than that of a human, but so what?

I've never claimed that an intelligence's world model has to be anything
like that of a human. All I require is that it be effective and expandable.


----- Original Message -----
From: "Matt Mahoney"
To: 
Sent: Wednesday, May 02, 2007 12:50 PM
Subject: Re: [agi] rule-based NL system


> --- Mark Waser wrote:
>
>> > OK, how about Legg's definition of universal intelligence as a measure
>> > of how a system "understands" its environment?
>>
>> OK. What purpose do you wish to use Legg's definition for? You
>> immediately discard it below . . . .
>
> What definition of intelligence would you like to use?
>
> How about the "answering machine" test for intelligence? A machine passes
> the test if people prefer talking to it over talking to a human. For
> example, I prefer to buy airline tickets online rather than talk to a
> travel agent. To pass the answering machine test, I would make the same
> preference given only voice communication, even if I know I won't be put
> on hold, charged a higher price, etc. It does not require passing the
> Turing test. I may be perfectly aware it is a machine. You may substitute
> instant messages for voice if you wish.
>
> I claim that a system that can pass this test "understands" my words and
> knows what they mean, even if the words are not grounded in nonverbal
> sensorimotor experience. Its world model will be different than that of a
> human, but so what?
>
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>





_______________________________________
James Ratcliff - http://falazar.com
Looking for something...



_______________________________________
James Ratcliff - http://falazar.com
Looking for something...
 

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
