[agi] Connectionists: ANNOUNCE: PASCAL Visual Object Classes Recognition Challenge 2007

2007-05-04 Thread Eugen Leitl
- Forwarded message from Chris Williams [EMAIL PROTECTED] -

From: Chris Williams [EMAIL PROTECTED]
Date: Mon, 30 Apr 2007 18:10:41 +0100 (BST)
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED], John Winn [EMAIL PROTECTED],
[EMAIL PROTECTED], Mark Everingham [EMAIL PROTECTED]
Subject: Connectionists: ANNOUNCE: PASCAL Visual Object Classes Recognition
Challenge 2007


  PASCAL Visual Object Classes Recognition Challenge 2007

We are running a third PASCAL Visual Object Classes Recognition
Challenge. This time there are more classes (twenty), more challenging
images, and the possibility of more confusion between classes with
similar visual appearance (cars/bus/train, bicycle/motorbike).

As before, participants can recognize any or all of the classes, and there
are classification and detection tracks. There are also two taster
competitions, on pixel-wise segmentation and on person layout (detecting
head, hands, feet).

The development kit (Matlab code for evaluation, and baseline algorithms)
and the training data are now available at:

http://www.pascal-network.org/challenges/VOC/voc2007/index.html

where further details are given. The timetable of the challenge is:

* April 2007: Development kit and training data available.

* 11 June 2007: Test data made available.

* 17 Sept 2007, 11pm GMT: DEADLINE for submission of results.

* 15 October 2007: Visual Recognition Challenge workshop (Caltech 256 and
PASCAL VOC2007) to be held as part of ICCV 2007 in Rio de Janeiro, Brazil,
see http://www.robots.ox.ac.uk/~vgg/misc/iccv07/
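
For readers unfamiliar with the scoring, results in the classification and
detection tracks are typically summarised by average precision. Below is a
minimal sketch of the idea of an 11-point interpolated AP computation in
Python; it is only an illustration (the function name and toy data are
invented), and the official Matlab development kit remains authoritative.

# Sketch of VOC-2007-style average precision (11-point interpolation).
def average_precision_11pt(scores, labels):
    """scores: classifier confidences; labels: 1 for positive, 0 for negative."""
    # Rank test images by decreasing confidence.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp = fp = 0
    recall, precision = [], []
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        recall.append(tp / total_pos)
        precision.append(tp / (tp + fp))
    # Interpolate precision at 11 equally spaced recall levels (0.0 .. 1.0).
    ap = 0.0
    for t in [x / 10.0 for x in range(11)]:
        p = max((p for r, p in zip(recall, precision) if r >= t), default=0.0)
        ap += p / 11.0
    return ap

# Toy usage: three positives among five test images.
print(average_precision_11pt([0.9, 0.8, 0.7, 0.6, 0.5], [1, 0, 1, 1, 0]))
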

Mark Everingham
Luc Van Gool
Chris Williams
John Winn
Andrew Zisserman


- End forwarded message -
-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] Trouble implementing my AGI Algorithm

2007-05-04 Thread Matt Mahoney
--- a [EMAIL PROTECTED] wrote:

 Help me with the algorithm. Thank you

Dear a for anonymous (are you related to Ben?),

Before you worry about whether an AGI should be friendly or selfish or
religious, first you have to solve some lower level problems in language,
vision, hearing, navigation, etc.  You might make some progress in each field
but eventually you will run into the problem that you can't fully solve any of
the problems without solving all of them.  For example, images and sound
contain writing and speech, so you need to solve language.  Then, in order to
communicate effectively with a machine, it must have a world model similar to
yours, and a lot of this knowledge comes from the other senses.

After you have done that, then the next problem is that you are not building a
human.  You are building a slave.  Its sole purpose is to be useful to humans.
 A human body is not necessarily the best form for serving this purpose.  You
might build a robot with 4 arms and wheels for legs and sonar instead of
vision.  Or it might not have a body at all, or maybe thousands of insect
sized robots controlled as one.  The problem is that this creature will have a
world model that is nothing like yours, and that will make communication
difficult.  With currently available computers we cope with this problem by
inventing new terminology or by using existing words in new ways.  For
example, we talk about an operating system process as running or sleeping even
though it has no legs and does not dream.  Then there are other mental states,
like running in privileged mode, that have no equivalent in humans.

In humans, selfishness, friendliness, and religion are secondary to our main
goal, which, as in all species, is to propagate our DNA.  For example,
religion achieves this goal by making taboo any form of sex that does not
contribute to making children.  Therefore, it is inappropriate to program
religion into an AGI whose goal is not reproduction, but to serve humans.  In
your AGI design, you need to choose an appropriate set of emotions and mental
states, inventing new ones as needed.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] The role of incertainty

2007-05-04 Thread James Ratcliff
This is similar to the thread I was working on recently about Goals, which
didn't get quite as far as I would have liked either.

1. Whether for use as testing metrics or as our personal goals for what an AGI
should achieve, or what is important, these goals or classes of problems should
be defined as well as we possibly can.

A couple of obvious upper-level classes are:
  Navigation - in a real or virtual world.
  Natural Language - speaking, reading, talking.
  Basic Problem Solving - given a simple problem, find a solution.

2. More specifically for your AGI:
  What do you see the virtual pets doing?  Specifically, what are the end-user
functions for the consumer, the selling points you would give them, and how
would the AGI support these functions?
  Is it going to be a rich enough situation in general to display more than
just a blocks-world type of pet ("go fetch me the blue ball over there")?

James Ratcliff

Benjamin Goertzel [EMAIL PROTECTED] wrote: 

On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote: No, I keep saying -
I'm not asking for the odd narrowly-defined task - but rather defining CLASSES
of specific problems that your/an AGI will be able to tackle.



Well, we have thought a lot about

-- virtual agent control in simulation worlds (both pets and humanlike avatars)
-- natural language question answering 
-- recognition of patterns in large bodies of scientific data

 

 Part of the definition task should be to  explain how if you can solve one 
kind of problem, then you will be able to solve  other distinct kinds.
  




We can certainly explain that re Novamente, but IMO it is not the best way to 
get across how the system works to others with a technical interest in AGI.  It 
may well be a useful mode of description for marketing purposes, however. 

ben g
 


___
James Ratcliff - http://falazar.com
Looking for something...
 


Re: [agi] The University of Phoenix Test [was: Why do you think your AGI design will work?]

2007-05-04 Thread Mike Tintner

Richard,

What's the point here? You seem to be just being cussed. You're not really 
interested in the structure of the sciences, are you?


Psychosemiotics, first off, does NOT EXIST  -  so how cognitive science 
could already cover it is interesting. It has been mooted vaguely - in a 
book esp. by Howard Smith:


psychosemiotics, defined as the study of how we learn, understand, and use 
the signs of culture (p. 2), offers a way to understand cognition by 
examining how humans use signs to make meaning of their everchanging 
physical and cultural environments (p. 3). 


I posit a more ambitious formulation - that it should be especially about how
the structure of sign systems reflects the structure of the human brain. I
doubt that you're really into this area, because if you were, you'd have
noticed that the structure/division I use (symbols/graphics/images) is
not a recognized division. No, this whole area is still virgin territory -
if you disagree, point out the research or relevant branch(es) of science.


Vis a vis:

There is an actual picture tree
in the brain -- see above quote from you -- which is a direct, 
unambiguous description of the position defended by the group associated 
with Kosslyn.


- I take that more seriously, although I am v. confident of my position. 
Link me to a statement of this position of the group associated with 
Kosslyn, and I will reply in detail.






- Original Message - 
From: Richard Loosemore [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, May 04, 2007 3:14 PM
Subject: Re: [agi] The University of Phoenix Test [was: Why do you think 
your AGI design will work?]




Mike Tintner wrote:
Er Richard, you are opening too many too large areas - we could be here 
till the end of the month.


It seems to me you are using language rather loosely at times -
inevitably you are going to have problems with what I am saying. If I say
psychoSEMIOTICS, for example, that's exactly what I mean, and it's v.
different from psychoLINGUISTICS. The latter is concerned with how
LANGUAGE use reflects brain/mind structures - the former would obviously
be concerned with how ALL sign systems' use reflects mind structures -
including all symbolic systems (words, numbers, morse and other codes,
programming languages), all graphic systems (maps, icons, cartoons,
geometry, etc.), and all image systems (photographs, videos, statues,
3D-models, etc.) - and why our total body of sign systems keeps evolving
along certain lines.
The way you have just defined it, psychoSEMIOTICS is no different than 
cognitive science/AI.  If it is different, specify how (that was my 
original question).



Re Kosslyn etc, my basic concern is not so much with the relative merits 
of different sign systems - of language vs images - but of how the brain 
actually processes information - of what it does to make sense of words 
and numbers - how it actually works, when you read this text for 
instance. There is an actual picture tree in the brain, I would 
suggest - it processes information on at least three levels 
simultaneously (and not just as it may appear to, on just one).  The 
immediate point here is that this whole area has NOT been covered before 
by Kosslyn or anyone else (although there may be odd allusions in some 
places). You wouldn't have had all the arguments we had about this  area, 
if it had been covered.


I addressed the arguments you were actually having at the time, which were 
all focussed on statements like There is an actual picture tree in the 
brain -- see above quote from you -- which is a direct, unambiguous 
description of the position defended by the group associated with Kosslyn.


If you are interested in the more general issue of how the brain actually 
processes information, regardless of whether it uses images to do so or 
not, then welcome to the club:  but THAT question is cognitive science, 
and it is not the same as the question of whether the brain does so using 
picture trees.



Re embodied cognition, you'll just have to look it up - it's a still 
growing field, still contentious.


Eh?  I am a cognitive psychologist/cognitive scientist, Mike.  I asked you 
where this growing field is, because I don't see any sign of it.   I would 
be happy to look it up if you would point to it.















Re: [agi] The role of incertainty

2007-05-04 Thread Benjamin Goertzel




2. More specific for your AGI,
  What do you see the virtual pets doing?  Specifically as end user
functions for the consumer, the selling points you would give them, and how
the AGI would help these functions.
  Is it going to be a rich enough situation in general to display more
than just a blocks world type of pet, go fetch me the blue ball over there

James Ratcliff




Well, it's a commercial project so I can't really talk about what the
capabilities of the version 1.0 virtual pets will be.

But the idea will be to start simple, and then incrementally roll out
smarter and smarter versions.  And, the idea is to make the pets flexible
and adaptive learning systems, rather than just following fixed behavior
patterns.

One practical limitation is that we need to host a lot of pets on each
server...

However, we can do some borg mind stuff to work around this problem -- so
that each pet retains its own personality, yet benefits from collective
learning done on the basis of the totality of all the pets' memories...

-- Ben
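
To make the borg mind idea concrete, here is a minimal sketch of one way
"collective learning with individual personalities" could be structured. It is
purely illustrative: the class names, the reward scheme, and the personality
bias are all assumptions for the sake of the example, not Novamente's actual
design.

# Illustrative sketch only: shared learned knowledge + per-pet personality.
from collections import defaultdict

class SharedMind:
    """Pooled experience: action values learned from every pet's memories."""
    def __init__(self):
        self.value = defaultdict(float)   # (situation, action) -> estimated reward
        self.counts = defaultdict(int)

    def learn(self, situation, action, reward):
        key = (situation, action)
        self.counts[key] += 1
        # Incremental mean of the rewards observed across the whole population.
        self.value[key] += (reward - self.value[key]) / self.counts[key]

class Pet:
    """Each pet has its own personality bias but reads the shared knowledge."""
    def __init__(self, name, shared, playfulness):
        self.name, self.shared, self.playfulness = name, shared, playfulness

    def choose(self, situation, actions):
        def score(action):
            bias = self.playfulness if action == "play" else 0.0
            return self.shared.value[(situation, action)] + bias
        return max(actions, key=score)

mind = SharedMind()
rex, fido = Pet("Rex", mind, playfulness=1.5), Pet("Fido", mind, playfulness=0.0)
# Rex's owner rewards fetching; Fido immediately benefits from that experience,
# yet the two pets can still behave differently because of their personalities.
mind.learn("owner_waves", "fetch", reward=1.0)
print(rex.choose("owner_waves", ["fetch", "play", "sleep"]))   # -> play
print(fido.choose("owner_waves", ["fetch", "play", "sleep"]))  # -> fetch

In this toy version the learned action values are shared by all pets, while the
per-pet bias is what keeps each pet's behaviour distinct.
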


Re: [agi] The role of incertainty

2007-05-04 Thread Derek Zahn

Ben Goertzel writes:

Well, it's a commercial project so I can't really talk about what the 
capabilities of the version 1.0 virtual pets will be.


I did spend a few evenings looking around Second Life.  From
that experience, I think that virtual prostitutes would be
a more profitable product :)




Re: [agi] The University of Phoenix Test [was: Why do you think your AGI design will work?]

2007-05-04 Thread Mark Waser
Mike,

Richard is not being difficult.  He is trying to ascertain the basis for 
your beliefs (and get pointers to it).  Only from this e-mail did *I* ascertain 
that you believe that you had made up psychosemiotics.  Previously, it looked 
to me as if you thought you were pointing at established science -- and in that 
context, Richard's questions were more than reasonable.

Further, your statement that "Psychosemiotics, first off, does NOT EXIST -
so how cognitive science could already cover it is interesting" just shows
*your* ignorance.  Just because you believe that you've invented something new
and attached a name to it doesn't mean that it isn't already established 
science under a different name.  I am quite aware of Richard's background and 
can assure you that you are extremely unlikely to be correct when you're trying 
to correct him on something in basic cognitive science (especially since you 
clearly lack a solid grounding in the field).

So let me repeat the ending of my last e-mail -- We want to welcome new 
members to this group but your assumptions and communications style are not 
making it easy for us (and hopefully, you can recognize the time and effort 
spent bringing you up to speed).  A total novice debating an expert may be a 
great experience for the novice but does *very* little for the group as a whole 
except expend time and attention (since the novice is very unlikely to 
contribute to the expert's understanding until he gets up to speed).  I would 
suggest that it would be most effective if you would adopt a course of LEARNING 
what the group believes and how it communicates FIRST and DEBATING LATER (after 
you both have something to debate about *and* the ability to effectively 
communicate it).

As a first step, why don't you try asking specific questions rather than 
being insulting?

Mark


Re: [agi] The role of incertainty

2007-05-04 Thread Benjamin Goertzel

Second Life also has a teen grid, by the way, which is not very
active right now, but which virtual pets could enhance significantly.

Virtual prostitutes are not in the plans anytime soon ;-)

On 5/4/07, Derek Zahn [EMAIL PROTECTED] wrote:


Ben Goertzel writes:

Well, it's a commercial project so I can't really talk about what the
capabilities of the version 1.0 virtual pets will be.

I did spend a few evenings looking around Second Life.  From
that experience, I think that virtual protitutes would be
a more profitable product :)






Re: [agi] The role of incertainty

2007-05-04 Thread Mike Tintner
Is there any already existing competition in this area - virtual adaptive pets 
- that we can look at?


Re: [agi] The role of incertainty

2007-05-04 Thread Derek Zahn

On a less joking note, I think your ideas about applying your
cognitive engine to NPCs in RPG type games (online or otherwise)
could work out really well.  The AI behind the game entities
that are supposedly people is depressingly stupid, and games
are a bazillion-dollar business.

I hope your business direction works out well for you!




Re: [agi] rule-based NL system

2007-05-04 Thread Mark Waser
Hi James,

I'm going to handle your questions in reverse order . . . . 

 Do you think learning is a requirement for understanding, or intelligence?

Yes, I believe that learning is a requirement for intelligence.  Intelligence 
is basically how fast you learn.  Zero learning equals zero intelligence.

 a reservation service has a world model as well, it knows about 1000+ airline
 routes and times, it talks to you, saves your preferences for an outgoing
 flight, and can use that to think and come up with a suggestion for an
 incoming flight, and which airline to take

A reservation service does indeed have a world model but it is a *very* simple 
model with very few object types, relationships, and actions.  The 1000+ 
airline routes and times are merely data within the model and even if they 
numbered a million they would not increase the size of the *model*.  But the 
most important thing is that the model is absolutely fixed -- i.e. the system 
doesn't learn.

 and an expert system as having more intelligence due to a richer world model 
 and more ability to give answers.

I would say that the expert system is more capable but would disagree that it 
has more intelligence (unless it has some sort of learning functionality).

 If we took a 10 year old child, and stopped their ability to learn, they 
 would still have the ability to do all the things they did before, can go to 
 the store, and play and fix breakfast etc.

Again, I would phrase this as the child still has their old capabilities but 
their intelligence has dropped to zero -- because realistically, they would not 
maintain the ability to do all the things they did before.  Initially, yes -- 
BUT -- slowly and surely, as their environment changed, they would be less and 
less capable of dealing with it as they couldn't learn what they needed to cope 
with the change.

 But understanding itself doesn't have any special requirement that it
 understand New things, just the things that it is currently considering.

Have you seen the things that you're currently considering before?  If so, how 
is rote memorization different from understanding?

Mark

  - Original Message - 
  From: James Ratcliff 
  To: agi@v2.listbox.com 
  Sent: Friday, May 04, 2007 11:24 AM
  Subject: Re: [agi] rule-based NL system


  Two problems unfortunately arise quickly there,
  1. Internal World Model.
    An intelligence must have some form of internal world model, because this
is what it operates on internally, its memory.
    People have a complex world model including everything we have built up
over years, but a reservation service has a world model as well; it knows about
1000+ airline routes and times, it talks to you, saves your preferences for an
outgoing flight, and can use that to think and come up with a suggestion for an
incoming flight, and which airline to take.  If the system contains weather
data as well, and can use it, then it could be more intelligent.
    It has a world model built up there, not as complex, but definitely there,
and I would rate that as having some level of intelligence, and an expert
system as having more intelligence due to a richer world model and more ability
to give answers.
  2. Learning.
    Probably a controversial point here, but
  Do you think learning is a requirement for understanding, or intelligence?
  For an intelligence, I don't believe it is.  If we took a 10 year old child,
and stopped their ability to learn, they would still have the ability to do all
the things they did before; they can go to the store, and play, and fix breakfast, etc.
    Now for an AGI to grow and be able to do more and more things, it needs to
have the ability to learn.  But understanding itself doesn't have any special
requirement that it understand New things, just the things that it is currently
considering.

  James Ratcliff

  Mark Waser [EMAIL PROTECTED] wrote:
 What definition of intelligence would you like to use?

Legg's definition is perfectly fine for me.

 How about the answering machine test for intelligence? A machine passes 
 the
 test if people prefer talking to it over talking to a human. For example, 
 I
 prefer to buy airline tickets online rather than talk to a travel agent. 
 To
 pass the answering machine test, I would make the same preference given 
 only
 voice communication, even if I know I won't be put on hold, charged a 
 higher
 price, etc. It does not require passing the Turing test. I may be 
 perfectly
 aware it is a machine. You may substitute instant messages for voice if 
 you
 wish.

What does being preferred by humans have to do with (almost any 
definition 
of) intelligence? If you mean that it can solve any problem (i.e. tell a 
caller how to reach any goal -- or better yet even, assist them) then, 
sure, 
it works for me. If it's only dealing with a limited domain, like being a 
travel agent, then I'd 

Re: [agi] The role of incertainty

2007-05-04 Thread Mark Waser
 However, we can do some borg mind stuff to work around this problem -- so
 that each pet retains its own personality, yet benefits from collective
 learning done on the basis of the totality of all the pets' memories...

Nice!




Re: [agi] The University of Phoenix Test [was: Why do you think your AGI design will work?]

2007-05-04 Thread Richard Loosemore

Mike Tintner wrote:

Richard,

What's the point here? You seem to be just being cussed. You're not 
really interested in the structure of the sciences, are you?


Is this ad hominem remark really necessary?

Psychosemiotics, first off, does NOT EXIST  -  so how cognitive science 
could already cover it is interesting. It has been mooted vaguely - in a 
book esp. by Howard Smith:


Okay, I'll try to phrase it as carefully as I can:  what you suggest as 
the subject matter of 'psychosemiotics' does not seem to differ from the 
subject matter of cognitive science/psychology, because the latter 
already is committed to understanding cognition in all its aspects, 
including the rather small aspect of cognition that is the human use of 
signs ... so if you think there is something special about
psychosemiotics that makes it distinct from what cognitive science is 
already doing, please specify this.



psychosemiotics, defined as the study of how we learn, understand, and 
use the signs of culture (p. 2), offers a way to understand cognition 
by examining how humans use signs to make meaning of their everchanging 
physical and cultural environments (p. 3). 


I posit a more ambitious formulation,  -  that it should be esp. about 
how the structure of sign systems reflects  the structure of the human 
brain. I doubt that you're really into this area, because if you were, 
you'd have noticed that the structure/ division I use (symbols/ 
graphics/ images) is not a recognized division. No, this whole area is 
still virgin territory - if you disagree, point out the research or 
relevant branch(es) of science.


All you have done so far is to declare that Semiotics should be used to 
shed light on the structure of the human mind, and that this should be 
called psychosemiotics, and that this is virgin territory.


My response to you is the same as the response I would give to someone 
who might claim that the human use of restaurants should be used to shed 
light on the structure of the human mind, and that this should be called 
psychobistromathics, and that this is virgin territory.


I would ask:  why is this different from the general use of all kinds of
human behaviors to study the mind ... a field that is already named,
and is called cognitive science?  Most people would say that it has to 
be a good deal more than just a vague declaration of intent, to be a 
scientific field with a new name.



(BTW Someone already did employ the human use of restaurants as a way to 
shed light on the structure of the human mind, but they were never 
inclined to declare it a new field of study, or promise, before they had 
even started on it, that it was a virgin territory).




Vis a vis:

There is an actual picture tree
in the brain -- see above quote from you -- which is a direct, 
unambiguous description of the position defended by the group 
associated with Kosslyn.


- I take that more seriously, although I am v. confident of my position. 
Link me to a statement of this position of the group associated with 
Kosslyn, and I will reply in detail.


Try any basic undergraduate text on cognitive science, or, if you are in 
a hurry, I am sure you will be able to find a statement of their 
position somewhere in these, or a thousand other places:


http://en.wikibooks.org/wiki/Cognitive_Psychology_and_Cognitive_Neuroscience/Imagery

http://www.iep.utm.edu/i/imagery.htm

http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/Dictionary/contents/I/imagery.html

http://users.ecs.soton.ac.uk/~harnad/Hypermail/Foundations.Cognition/0091.html

http://ruccs.rutgers.edu/faculty/pylyshyn-mehler.htm

http://mitpress.mit.edu/catalog/item/default.asp?tid=7103&ttype=2

http://www.gis.net/~tbirch/mi11.htm

http://www-static.cc.gatech.edu/~jimmyd/summaries/kosslyn1994.html



Re: [agi] What would motivate you to put work into an AGI project?

2007-05-04 Thread Charles D Hixson

What would motivate you to put work into an AGI project?

1) A reasonable point of entry into the project

2) The project would need to be FOSS, or at least communally owned.  
(FOSS for preference.)  I've had a few bad experiences where the project 
leader ended up taking everything, and don't intend to have another.


3)  The project would need to adopt a multiplex approach.  I don't 
believe in single solutions.  AI needs to represent things in multiple 
ways, and to deal with those ways in quasi-independent channels.  My 
general separation is:  Goals (desired end states), Desires (desired 
next states), Models, and logic.  I recognize that everything is 
addressed by a mixture of these approaches...but people seem to use VERY 
different mixtures (both from person to person and in the same person 
from situation to situation).


4) I'd need to have a belief that the project had a sparkplug.  
Otherwise I might as well keep fumbling around on my own.  Projects need 
someone to inspire the troops.


5) There would need to be some way to communicate with the others on the 
project that didn't involve going to a restaurant.  (I'm on a diet, and 
going to restaurants frequently is a really BAD idea.)  (N.B.:  One 
project I briefly joined had a chat list...which might have worked well 
if it had actually been the means of communication.  Turned out that the 
inner circle met frequently at a restaurant and rarely visited the 
chat room.  But I think a mailing list or a newsgroup is a better choice 
anyway.  [The project was successful, but I think that the members on 
the chat group were mainly a diversion from the actual work of the 
project.])


6)  Things would need to be reasonably documented.  This comes in lots 
of forms, but for a work in progress there's a lot to be said for 
comments inserted into the code itself, and automatically extracted to 
create documentation.  (Otherwise I prefer the form that Python 
uses...but nobody else does that as well.)


7) LANGUAGES:  Using a language that I feel is not completely unsuitable.
After LOTS of searching I've more or less settled on Java as the only 
wide-spread language with decent library support that can run 
distributed systems with reasonable efficiency.  There are many other 
contenders (e.g., C, C++, Fortran, Alice, Erlang, and D each have their 
points), and I don't really *like* Java, but  Java, C, and C++ appear to 
be the only widely used languages that have the ability to run across a 
multi-processor with reasonable efficiency.  (And even there the 
techniques used can hardly be called widespread.)
7a)  Actually C and C++ can be suitable if there are appropriate 
libraries to handle such things as garbage collection, and protocols for 
how to save persistent data and then remember it later.  But I still 
don't like the way they make free use of wild pointers.
7b)  I wonder to what extent the entire project needs to be in the same 
language.  This does make understanding things easier, as long as it's 
small enough that someone can understand everything at a low level, or 
if the entity should ever want to understand itself.  But there are 
plausible arguments for writing things in a rapid development language, 
such as Python or Ruby, and then only translating the routines that 
later need to be translated for efficiency.  (If only those languages 
could execute across multiple processors!)




Re: [agi] rule-based NL system

2007-05-04 Thread Charles D Hixson

J. Storrs Hall, PhD. wrote:

On Wednesday 02 May 2007 15:08, Charles D Hixson wrote:
  

Mark Waser wrote:


... Machines will know
the meaning of text (i.e. understand it) when they have a coherent
world model that they ground their usage of text in.
  

...
But note that in this case world model is not a model of the same
world that you have a model of.  



After reading the foregoing discussions of subjects such as intelligence, 
language, meaning, etc, it is quite clear to me that the various members of 
this list do not have models of the same world. This is entirely appropriate: 
consider each of us as a unit in a giant GA search for useful ways of 
thinking about reality...


Josh
  
Well, that's true.  E.g., when I was 3 I had one eye patched for 3 months
in a vain attempt to cure amblyopia.  This caused me to be relatively
detached from visual imagery, and more attached to kinesthetic imagery.
But still, all normal people have a world model where when their eyes 
are covered they can't see, but where the eyes cannot be removed and 
then replaced.  So there are relatively small degrees of difference 
between the world models of normal humans and those which will be 
learned by AGIs.  This is even true in the case of AGIs which are raised 
with the intention of having them have approximately normal maturation.  
The attempt is essentially futile.  Humans will come to resemble AGIs 
before AGIs come to resemble people.  (Admittedly, though, the AGIs that 
people eventually come to resemble won't bear much resemblance to the 
early model AGIs.)





Re: [agi] The University of Phoenix Test [was: Why do you think your AGI design will work?]

2007-05-04 Thread James Ratcliff
  I think at some point in time an AGI will need to be embodied. I know many
intend to use robots in the future, and to copy the software into them; as a
step towards that, embodiment in a virtual environment could prove useful.
  One thing I intend to do with mine is give it autonomy as soon as possible,
and allow it to explore and try things out.  This is a crucial route in
learning, and by letting a bot loose in a simulated office environment, it
could act and interact with many objects and learn in that fashion.
  There is only so much we can do with text alone; many things will be learned
by experience. With text we can pull in some initial material and many facts,
but not all of the life experience.
  A simplified version of vision can be used, going ahead and letting a bot
know or read the names of many of the objects, to get around the specific
vision problems that are known.
  Second Life now has a programming API as well, and I just found a damage
function which could be used for role-playing agents.

James Ratcliff

YKY (Yan King Yin) [EMAIL PROTECTED] wrote: 

 On 4/27/07, James Ratcliff [EMAIL PROTECTED] wrote: 
 *navigation,
 *manipulation,
 *communication/ languages

   
 You're talking about stepping stone tasks to bootstrap an AGI, so I'd say 
navigation and manipulation are not necessary.  In fact, the whole 
embodiment thing can be ditched, because language is sufficient.  In fact, a 
robot or VR-bot cannot navigate / manipulate without vision, so the embodiment 
route is too complex and inefficient compared to the NL route. 
  
 I'd add:
 * basic reasoning abilities
 * memorizing facts and maintain a coherent body of knowledge
 otherwise the thing would be useless.
  
 Agree?
  
 YKY


   


Re: [agi] rule-based NL system

2007-05-04 Thread James Ratcliff
Yeah, I am trying to get his to run, but no luck yet; I wish it wasn't only
Linux-based.

But even a general 3-D graphical app is not that hard to write - I have done a
few - and I am also looking at something like using a Second Life interface, as
much of the graphics and interface design has already been done, and there is a
rich environment and interface there that could be built upon.
  I also wrote a bot for World of Warcraft, though I don't believe that
environment is rich enough for the full interactions needed by an AGI.

Once you could get to a level of telling the AGI to do something like "Fill up
that bucket full of water", having it respond with "? Don't know how, please
show me", and then being able to use your character to specifically show it how
to do tasks, you would be in a good position to have a teachable robot that
could then generalize on these tasks to learn how to do many different things.

James Ratcliff
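
A rough sketch of that "ask, show me, then generalize" loop is below. The task
names and the skill store are invented purely for illustration; a real Second
Life or robot interface would of course be far more involved, and the
generalization step is the hard, open part.

# Minimal sketch of a teach-by-demonstration loop, as described above.
# Everything here (task names, the skill store) is hypothetical.
class TeachableAgent:
    def __init__(self):
        self.skills = {}   # task name -> list of primitive actions

    def request(self, task):
        if task in self.skills:
            return self.perform(task)
        # The agent admits ignorance and asks to be shown.
        return f"? Don't know how to '{task}', please show me."

    def demonstrate(self, task, actions):
        """The teacher's avatar performs the actions; the agent records them."""
        self.skills[task] = list(actions)

    def perform(self, task):
        return f"performing '{task}': " + ", ".join(self.skills[task])

agent = TeachableAgent()
print(agent.request("fill bucket with water"))          # asks to be shown
agent.demonstrate("fill bucket with water",
                  ["pick up bucket", "walk to well", "lower bucket", "raise bucket"])
print(agent.request("fill bucket with water"))          # now performs it
# Generalizing (e.g. to "fill bucket with sand") would mean reusing and
# adapting the recorded action sequence rather than replaying it verbatim.
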

YKY (Yan King Yin) [EMAIL PROTECTED] wrote: 
 On 5/4/07, James Ratcliff [EMAIL PROTECTED] wrote: 
 The point of most of this is that humans and an AI would need to construct an
 imaginary world environment in their mind.  Most people make a typical
 elephant, and a typical chair, and then interact the two as directed.
 
 A blind person still gets her information from experience... if she reads
 about an elephant, it probably says a big animal the size of a car, and her
 experience lets her know about cars and animals, and she has sat in chairs
 and knows how big they are.
 But both of those are tied to the physical experiences that she has.  You can
 only get so much from the words alone unless you have an infinite database
 where everything possible has been described fully.
  
 But many many things can be gathered from the text alone as well.

  
 A VR interface would certainly be nice, but it takes a lot of time to build 
one and I'm not good at that area.  Maybe Ben's AGI-Sim can be used by another 
AGI?  If so we can save a lot of efforts. 
  
 YKY
 


___
James Ratcliff - http://falazar.com
Looking for something...
   


Re: [agi] rule-based NL system

2007-05-04 Thread Mark Waser
 I would say rote memorization and knowledge / data, IS understanding.

OK, we have a definitional difference then.  My justification for my view 
is that I believe that you only *really* understand something when you have 
predictive power on cases that you haven't directly seen yet (sort of like 
saying that, in order to be useful or have any value, a hypothesis must have 
predictive power).

 I look outside and I see a tree, I understand that it is a tree, I know it's
 a tree, I know about leaves and grass and how it grows...  I haven't learned
 anything new, I memorized all that from books and teaching etc.

I don't think so.  I think that you have a lot of information that you 
derived from generalizations, analogies, etc (i.e. learning).


  I would further say that, given the level of knowledge and understanding
  about the tree, I was intelligent in that area; you could ask me
  questions and I could answer them, I could conjecture what would happen if I
  dug the tree up, etc.

Are you *sure* that you've been directly told what would happen if you dug 
a tree up?  What do you think would happen if you dug up a planticus imaginus?  
I'm sure that you haven't been specifically told what would happen then.  :-)  
I think that you have some serious predictive power that is *not* just rote 
memorization.

  Learning does not seem to be a requirement for intelligence, though a good
  intelligence, and a growing intelligence, would need to learn.

Your definition of intelligence is apparently (and correct me if I'm wrong) how
well something deals with its environment.  My contention is that anything
that doesn't learn will necessarily undergo a degradation of its ability to
deal with its environment.  If you agree with this, then why don't you agree
with learning being a requirement for intelligence?

Mark

  - Original Message - 
  From: James Ratcliff 
  To: agi@v2.listbox.com 
  Sent: Friday, May 04, 2007 4:56 PM
  Subject: Re: [agi] rule-based NL system


  I would say rote memorization and knowledge / data, IS understanding.

  I look outside and I see a tree, I understand that it is a tree, I know it's a
tree, I know about leaves and grass and how it grows...  I haven't learned
anything new, I memorized all that from books and teaching etc.

  I would further say that, given the level of knowledge and understanding
about the tree, I was intelligent in that area; you could ask me questions
and I could answer them, I could conjecture what would happen if I dug the tree
up, etc.

  Learning does not seem to be a requirement for intelligence, though a good
intelligence, and a growing intelligence, would need to learn.

  James Ratcliff


Re: [agi] Trouble implementing my AGI Algorithm

2007-05-04 Thread a
I do not believe that the algorithm must be more complex. The more complex the 
algorithm, the more ad hoc it is. Complex algorithms are not able to perform 
generalized tasks. I believe the reason that n-digit addition was a failure was
that there is no vision system, NOT that the algorithm is too simple. Because the
algorithm searches the database recursively, I believe that my simple algorithm
can perform any computation (trained by operant conditioning). The failure of
n-digit addition was because there are no "eyes" that can move to concentrate
on each digit.

The database is remarkably similar to the human brain. It can learn easily by
remembering only the difference between an external stimulus and a similar
stimulus already remembered in the database. Therefore, the algorithm compresses
the learned knowledge efficiently. Pattern recognition and abstract reasoning
are also easy because of the incremental learning.


I am having trouble with the fuzzy database representation. So it's best to 
test the algorithm in a specific subfield (like n-digit addition) and then 
generalize it into real-world tasks. 

In general, my algorithm behaves like the brain of an animal. Animals learn by
operant conditioning, and it is also difficult to teach them multiple-digit
addition.

I believe that the environment must be fuzzy in order for the operant 
conditioning method to work.

I know that the database has to remember pain and pleasure for stimuli. But I 
have difficulty making a fuzzy database representation, even for some 
subfields.
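
As one reading of the algorithm described above - store only the difference
from the most similar remembered stimulus, tag it with pain/pleasure, and
predict reward from the closest memory - here is a small illustrative sketch.
All names are invented, and this is an interpretation of the description, not
the poster's actual design or code.

# Toy sketch of a difference-based stimulus memory with operant conditioning.
def similarity(a, b):
    """Fraction of matching features between two stimuli (dicts of feature -> value)."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys) if keys else 1.0

class DifferenceMemory:
    def __init__(self):
        self.entries = []   # each entry: (base_index, diff, reward)

    def store(self, stimulus, reward):
        # Find the most similar remembered stimulus and keep only the difference.
        best, best_sim = None, 0.0
        for i, _ in enumerate(self.entries):
            s = similarity(self.reconstruct(i), stimulus)
            if s > best_sim:
                best, best_sim = i, s
        base = self.reconstruct(best) if best is not None else {}
        diff = {k: v for k, v in stimulus.items() if base.get(k) != v}
        self.entries.append((best, diff, reward))

    def reconstruct(self, i):
        base_index, diff, _ = self.entries[i]
        base = self.reconstruct(base_index) if base_index is not None else {}
        return {**base, **diff}

    def expected_reward(self, stimulus):
        # Operant conditioning: predict pleasure/pain from the closest memory.
        scored = [(similarity(self.reconstruct(i), stimulus), e[2])
                  for i, e in enumerate(self.entries)]
        return max(scored)[1] if scored else 0.0

mem = DifferenceMemory()
mem.store({"colour": "red", "hot": True}, reward=-1.0)     # painful stimulus
mem.store({"colour": "red", "hot": False}, reward=+0.5)    # only the 'hot' bit is stored
print(mem.expected_reward({"colour": "red", "hot": True}))  # -> -1.0
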

- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, May 3, 2007 5:06:33 PM
Subject: Re: [agi] Trouble implementing my AGI Algorithm

Interesting e-mail.  I agree with most of your philosophy but believe that 
the algorithm you are requesting is far, far more complex than you realize.

Is there any particular reason why you're remaining anonymous?

- Original Message - 
From: a [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, May 03, 2007 4:57 PM
Subject: [agi] Trouble implementing my AGI Algorithm


 Hello,

 I have trouble implementing my AGI algorithm:

 The below paragraphs might sound ridiculous, because they are my original 
 ideas.

 We are all motivated by selfish thoughts. We help others so others can 
 help us back. We help others to cope
  with our pleasurable chemical addiction. We help others because 
  helpfulness is encoded in our genetic makeup.

 We experience pain. Pain is to help us defend damage. When we touch 
 something hot we can draw back. But we
 have the free will to not react to it. I believe there is no free will.

  I will explain what I mean. Assume that pain is a constraint. But this 
 constraint is not absolute. Other
  thoughts can override the constraint. For example, when you help some 
  animal being eaten by a monster, you
 can fight with the monster to save the

 animal's life. But you will experience pain in the fight. Therefore pain 
 is not a constraint. Your goal to save the animal's
 life overrides the pain constraint. (your goal to save the

 animal's life is also motivated by selfish actions) Therefore, pain is not 
 a constraint. But if there is no goal that overrides the pain constraint, 
 you will do anything to avoid the pain. We have proven there is no free 
 will--whether we choose to react or not react to pain depends on our goals or 
 our knowledge. Therefore, implementing pain as a constraint in friendly AI 
 will not help many lives. Our brains are doing things to get the highest 
 pleasure as possible. We get a chemical addiction to save that animal. 
 That pleasure is more pleasant than avoiding the pain by not fighting. We 
 trust ourselves. We can gamble pain for future pleasure. Therefore, I 
 believe that emotion can be implemented by an ordinary computer. Emotion 
 can be implemented by an algorithm that searches for the highest pleasure. 
 The algorithm must also has the ability to gamble pain for pleasure (by 
 applying goals or knowledge). There is no right or wrong. We kill 
 insects all the time. But we usually do
 not sympathize with them. This is because our religion says that 
 bugs are not as important as other animals. It's
 a byproduct of natural selection. We have to hunt animals to survive.

 Without religion, we would brood over this question: Is it better to save 
 a human by sacrificing 1000 insects
 or vice versa?

 Therefore we assume that religion is natural. Religion helps us survive. 
 Some religions help us believe there
 is afterlife and reincarnation. Because we believe these, we do not fear 
 death. We are not afraid to
 sacrifice ourselves for others. For example, we will not be afraid to 
 participate in wars and spread our
 religion. Religion is a virus. Most of the world is religious because of 
 that.

 Therefore, some religions are dangerous. But religion is essential for our 
 daily survival. Some religious
 thoughts are 

Re: [agi] rule-based NL system

2007-05-04 Thread James Ratcliff
  It's mainly that I believe there is a full range of intelligences available,
from a simple thermostat, to a complex one that measures and controls humidity
and knows if a person is in the room, and has specific settings for different
people, to an expert system, to a human, to an AI and super AGI, all having
some level of intelligence.
  The ones we are concerned with are the 1/2-human level and anything above.
  Learning, I would say, plays a key role in having a high level of intelligence,
probably the main building block - learning and reasoning, both tied tightly
together.

James Ratcliff
