Well, I will go with the high-level-of-intelligence condition, and I would 
think it is pretty obvious.

We know already that among humans there are gradations or levels of intelligence, 
so unless there is some specific "thing" you must have to be intelligent, 
I would consider a 20-year-old, a 10-year-old, and a 5-year-old all intelligent, 
and measure their intelligence with a "list" of things they can do -- they can 
walk, talk, move blocks around, etc. -- the extent to which they can accomplish 
what they want.

A quadriplegic who can't move and can only type is still intelligent.
What about a brain-damaged person with Alzheimer's? They can't remember well, 
but maybe they can still dress and eat by themselves, just not hold a job.

A savant who can be trained to water the flowers in a garden? He can't do 
anything else but this one function, but he can look and tell if they need 
water, and which ones to water, and can accept instruction. I think that is 
still intelligent behavior, but it is extremely limited.
Dogs can be trained to rescue people or to search out drugs, which is 
intelligent, but a narrow usage.
Expert systems are quite smart in their domains, and thermostats have a range 
of intelligence. Ours here at the house has one box upstairs and one 
downstairs, controlled by a main unit, and it can do a range of things.

High-level or approaching-human-level intelligence is what most of us are 
concerned with here, but I think in defining intelligence we have to be able 
to look all the way up and down the range it offers and recognize these as 
having intelligence.

If you don't call a thermostat intelligent, then you have to define what it 
does in some other way, either by saying it's an object that "makes decisions 
based on input", or is "simply programmed", or whatnot; these all boil down 
and start looking like our various definitions of intelligence: "accept input, 
make decisions, give output, try to reach a goal".
Anything lacking one of those four components I might not think of as intelligent.
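To make that concrete, here is a minimal sketch (my own illustration, not any 
real product's logic) of a thermostat written as exactly those four components 
-- it accepts an input reading, makes a decision, gives an output, and tries 
to reach a goal temperature:

```python
# A minimal, purely illustrative thermostat built from the four components:
# accept input, make decisions, give output, try to reach a goal.

class Thermostat:
    def __init__(self, goal_temp):
        self.goal_temp = goal_temp  # the goal it tries to reach

    def step(self, current_temp):
        """Accept input (a temperature reading) and return an output decision."""
        if current_temp < self.goal_temp - 1:
            return "heat on"        # decision that moves toward the goal
        elif current_temp > self.goal_temp + 1:
            return "heat off"
        else:
            return "hold"           # already within the goal band

t = Thermostat(goal_temp=20)
print(t.step(17))   # heat on
print(t.step(22))   # heat off
print(t.step(20))   # hold
```

It meets all four components, however trivially -- which is exactly why the 
definitional argument about thermostats keeps coming up.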

James Ratcliff

Mark Waser <[EMAIL PROTECTED]> wrote:

My view of intelligence is rather different. I don't believe that a thermostat 
has intelligence (and saying so tends to invite ridicule, which is bad public 
relations). I *do* understand your point, but saying that a thermostat has 
intelligence violates the common man's understanding of intelligence -- and 
that is not a good thing to do unless you have very good reason.
  
Maybe you should just assume that my intelligence is equivalent to your 
"high-level of intelligence". If you're willing to do so, though, I'll 
immediately ask why you need to call a non-high level of intelligence 
intelligent. :-)
  
Mark

----- Original Message -----
From: James Ratcliff
To: [email protected]
Sent: Saturday, May 05, 2007 1:33 AM
Subject: Re: [agi] rule-based NL system
   

It's mainly that I believe there is a full range of intelligences available, 
from a simple thermostat, to a complex one that measures and controls humidity 
and knows if a person is in the room, and has specific settings for different 
people, to an expert system, to a human, to an AI and super AGI, all having 
some level of intelligence.
The ones we are concerned with are the 1/2-human level and anything above.
Learning, I would say, plays a key role in having a high level of intelligence 
-- probably the main building block, learning and reasoning, both tied tightly 
together.

James Ratcliff

Mark Waser <[EMAIL PROTECTED]> wrote:

>> I would say "rote memorization" and knowledge / data, IS understanding.

OK, we have a definitional difference then. My justification for my view is 
that I believe that you only *really* understand something when you have 
predictive power on cases that you haven't directly seen yet (sort of like 
saying that, in order to be useful or have any value, a hypothesis must have 
predictive power).
      
>> I look outside and I see a tree; I understand that it is a tree, I know 
>> it's a tree, I know about leaves and grass and how it grows... I haven't 
>> learned anything new, I memorized all that from books and teaching etc.

I don't think so. I think that you have a lot of information that you derived 
from generalizations, analogies, etc. (i.e. learning).
     

>> I would further say that, given the level of knowledge and understanding 
>> about the tree, I was intelligent in that area; you could ask me questions 
>> and I could answer them, I could conjecture what would happen if I dug the 
>> tree up, etc.

Are you *sure* that you've been directly told what would happen if you dug a 
tree up? What do you think would happen if you dug up a planticus imaginus? 
I'm sure that you haven't been specifically told what would happen then. :-) 
I think that you have some serious predictive power that is *not* just rote 
memorization.
      
>> Learning does not seem to be a requirement for intelligence, though a good 
>> intelligence, and a growing intelligence, would need to learn.

Your definition of intelligence is apparently (and correct me if I'm wrong) 
how well something deals with its environment. My contention is that anything 
that doesn't learn will necessarily undergo a degradation of its ability to 
deal with its environment. If you agree with this, then why don't you agree 
with learning being a requirement for intelligence?
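That degradation argument can be illustrated with a toy simulation (entirely 
my own construction, with made-up numbers): a frozen agent and a trivially 
learning agent both chase a slowly drifting target, and only the learner 
keeps coping with the environment:

```python
# Toy sketch (purely illustrative): in a drifting environment, an agent
# that cannot learn loses its ability to cope, while even a crude learner
# keeps tracking the change.

target = 0.0            # the environment's current "correct answer"
fixed_guess = 0.0       # the non-learner's answer, frozen forever
learned_guess = 0.0     # the learner's answer, updated each step

fixed_hits = 0
learned_hits = 0
for step in range(1000):
    target += 0.01      # the environment slowly drifts
    if abs(fixed_guess - target) < 1.0:      # "coping" = staying within 1.0
        fixed_hits += 1
    if abs(learned_guess - target) < 1.0:
        learned_hits += 1
    learned_guess += 0.5 * (target - learned_guess)  # crude learning rule

print(fixed_hits, learned_hits)  # the non-learner copes only early on
```

The non-learner does fine at first (Mark's "initially, yes") and then falls 
behind permanently, which is the whole point of the objection.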
      
Mark

----- Original Message -----
From: James Ratcliff
To: [email protected]
Sent: Friday, May 04, 2007 4:56 PM
Subject: Re: [agi] rule-based NL system
       

I would say "rote memorization" and knowledge / data, IS understanding.

I look outside and I see a tree; I understand that it is a tree, I know it's 
a tree, I know about leaves and grass and how it grows... I haven't learned 
anything new, I memorized all that from books and teaching etc.

I would further say that, given the level of knowledge and understanding about 
the tree, I was intelligent in that area; you could ask me questions and I 
could answer them, I could conjecture what would happen if I dug the tree up, 
etc.

Learning does not seem to be a requirement for intelligence, though a good 
intelligence, and a growing intelligence, would need to learn.

James Ratcliff

Mark Waser <[EMAIL PROTECTED]> wrote:

Hi James,

I'm going to handle your questions in reverse order . . . .

> Do you think learning is a requirement for understanding, or intelligence?

Yes, I believe that learning is a requirement for intelligence. Intelligence 
is basically how fast you learn. Zero learning equals zero intelligence.
          
> a reservation service has a world model as well, it knows about 1000+ 
> airline routes and times, it talks to you, saves your preferences for 
> outgoing flights, and can use that to think and come up with a suggestion 
> for an incoming flight, and which airline to take

A reservation service does indeed have a world model, but it is a *very* 
simple model with very few object types, relationships, and actions. The 
1000+ airline routes and times are merely data within the model, and even if 
they numbered a million they would not increase the size of the *model*. But 
the most important thing is that the model is absolutely fixed -- i.e. the 
system doesn't learn.
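The data/model distinction can be sketched in code (a hypothetical toy of my 
own, not a real reservation system; the airport codes and function names are 
invented): the route data can grow without limit, but the set of concepts and 
actions is frozen when the program is written.

```python
# Illustrative sketch: the *data* grows freely at runtime, but the *model*
# -- the object types, relations, and actions the system can ever represent
# -- is fixed when the code is written.

routes = {}  # data: (origin, dest) -> list of departure times

def add_route(origin, dest, depart_time):
    """Adding a route adds data, not new concepts."""
    routes.setdefault((origin, dest), []).append(depart_time)

def suggest(origin, dest):
    """The only 'reasoning' possible is the one action hard-coded here."""
    return sorted(routes.get((origin, dest), []))

add_route("IAD", "SFO", "08:15")
add_route("IAD", "SFO", "17:40")
# A million more add_route() calls enlarge the data, but the system still
# knows only (origin, dest, time) -- it cannot acquire 'weather', 'price',
# or any concept not already in the model.
```

No amount of added data teaches it a new relationship, which is the sense in 
which the system doesn't learn.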
          
> and an expert system as having more intelligence due to a richer world 
> model and more ability to give answers.

I would say that the expert system is more capable, but would disagree that 
it has more intelligence (unless it has some sort of learning functionality).
          
> If we took a 10 year old child, and stopped their ability to learn, they 
> would still have the ability to do all the things they did before, can go 
> to the store, and play and fix breakfast etc.

Again, I would phrase this as: the child still has their old capabilities, 
but their intelligence has dropped to zero -- because realistically, they 
would not maintain the ability to do all the things they did before. 
Initially, yes -- BUT -- slowly and surely, as their environment changed, 
they would be less and less capable of dealing with it, as they couldn't 
learn what they needed to cope with the change.
          
> But understanding itself doesn't have any special requirement that it 
> understand New things, just the things that it is currently considering.

Have you seen the things that you're currently considering before? If so, how 
is rote memorization different from understanding?
          
Mark

----- Original Message -----
From: James Ratcliff
To: [email protected]
Sent: Friday, May 04, 2007 11:24 AM
Subject: Re: [agi] rule-based NL system
           

Two problems unfortunately arise quickly there:
1. Internal World Model.
  An intelligence must have some form of internal world model, because this 
is what it operates on internally -- its memory.
  People have a complex world model including everything we have built up 
over years, but a reservation service has a world model as well; it knows 
about 1000+ airline routes and times, it talks to you, saves your preferences 
for outgoing flights, and can use that to think and come up with a suggestion 
for an incoming flight, and which airline to take. If the system contains 
weather data as well, and can use it, then it could be more intelligent.
  It has a world model built up there -- not as complex, but definitely there 
-- and I would rate that as having some level of "intelligence", and an 
expert system as having more intelligence due to a richer world model and 
more ability to give answers.
2. Learning.
  Probably a controversial point here, but:
Do you think learning is a requirement for understanding, or intelligence?
For an intelligence, I don't believe it is. If we took a 10 year old child, 
and stopped their ability to learn, they would still have the ability to do 
all the things they did before -- they can go to the store, and play, and fix 
breakfast, etc.
  Now for an AGI to grow and be able to do more and more things, it needs to 
have the ability to learn. But understanding itself doesn't have any special 
requirement that it understand New things, just the things that it is 
currently considering.

James Ratcliff

Mark Waser <[EMAIL PROTECTED]> wrote:

> What definition of intelligence would you like to use?

Legg's definition is perfectly fine for me.

> How about the "answering machine" test for intelligence? A machine passes
> the test if people prefer talking to it over talking to a human. For
> example, I prefer to buy airline tickets online rather than talk to a
> travel agent. To pass the answering machine test, I would make the same
> preference given only voice communication, even if I know I won't be put on
> hold, charged a higher price, etc. It does not require passing the Turing
> test. I may be perfectly aware it is a machine. You may substitute instant
> messages for voice if you wish.

What does "being preferred by humans" have to do with (almost any definition 
of) intelligence? If you mean that it can solve any problem (i.e. tell a 
caller how to reach any goal -- or, better yet, even assist them) then, sure, 
it works for me. If it's only dealing with a limited domain, like being a 
travel agent, then I'd call it a narrow AI. Intelligence is only as good as 
your model of the world and what it allows you to do (which is pretty much a 
paraphrasing of Legg's definition as far as I'm concerned). And if you're not 
using an expandable model, as a calculator is not, then you're not intelligent.

> I claim that a system that can pass this test "understands" my words and
> knows what they mean, even if the words are not grounded in nonverbal
> sensorimotor experience. Its world model will be different than that of a
> human, but so what?

And I'll claim that it doesn't understand a thing UNLESS it has a model of 
its world (which could be text-only for all I care, but which has the 
behavior necessary for it to accurately answer questions about the real 
world) that it is relating your words to. If it has that, and can add to its 
world as new things are introduced to it from the "real" world, then I'm very 
willing to say that it is intelligent and that it understands its world. If 
not, you just have an unintelligent program.

> Its world model will be different than that of a human, but so what?

I've never claimed that an intelligence's world model has to be anything like 
that of a human. All I require is that it be effective and expandable.


----- Original Message -----
From: "Matt Mahoney"
To:
Sent: Wednesday, May 02, 2007 12:50 PM
Subject: Re: [agi] rule-based NL system


> --- Mark Waser wrote:
>
>> > OK, how about Legg's definition of universal intelligence as a measure
>> > of how a system "understands" its environment?
>>
>> OK. What purpose do you wish to use Legg's definition for? You immediately
>> discard it below . . . .
>
> What definition of intelligence would you like to use?
>
> How about the "answering machine" test for intelligence? A machine passes
> the test if people prefer talking to it over talking to a human. For
> example, I prefer to buy airline tickets online rather than talk to a
> travel agent. To pass the answering machine test, I would make the same
> preference given only voice communication, even if I know I won't be put on
> hold, charged a higher price, etc. It does not require passing the Turing
> test. I may be perfectly aware it is a machine. You may substitute instant
> messages for voice if you wish.
>
> I claim that a system that can pass this test "understands" my words and
> knows what they mean, even if the words are not grounded in nonverbal
> sensorimotor experience. Its world model will be different than that of a
> human, but so what?
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]











_______________________________________
James Ratcliff - http://falazar.com
Looking for something...
       

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
