Re: [agi] Early Apps.

2002-12-30 Thread Damien Sullivan
 Gary Miller wrote:

> That being said, other than Cyc I am at a loss to name any serious AI
> efforts which are over a few years in duration and have more than 5
> man-years worth of effort (not counting promotional and fundraising).

No offense, but I suspect you need to read more of the literature.  I still am
rather clueless about the field, and I can name a few such projects.  In
Hofstadter's lab both the Metacat and Letter Spirit projects are each the
product of roughly a man-decade of effort, one man (or woman) at a time.  The
Tabletop project might count as more effort in the same design, not to mention
Copycat's precursors.  It's likely that someone will be working on extending
Metacat soon.

Elsewhere, there's the ACT-R project at CMU, formerly ACT-*, about which I
know very little, but it seems to have been around for a while.  At Indiana
University, David Leake's case-based reasoning project seems to have multiple
grad students, probably pushing it over 5 man-years quickly, although if by
"serious AI" you meant general AI it might not qualify.

-xx- Damien X-) 




Re: [agi] Early Apps.

2002-12-30 Thread Pei Wang
As I posted to this mailing list a few months ago, I have a list (now
including 10 projects) of projects meeting three criteria:

  a) Each of them has the plan to eventually grow into a thinking machine
     or artificial general intelligence (so it is not merely about part of AI);
  b) Each of them has been carried out for more than 5 years (so it is more
     than a PhD project);
  c) Each of them has prototypes or early versions finished (so it is not
     merely a theory), and there are some publications explaining how it
     works (so it is not merely a claim).
Ben has a similar list at http://www.agiri.org/agilinks.htm.

If by "serious AI efforts" you don't restrict the field to AGI (or "strong
AI", "real AI", and so on), then there are hundreds of projects with more
than 5 man-years worth of effort.

Pei




Re: [agi] Early Apps.

2002-12-30 Thread Pei Wang

Sorry, I forgot to mention that my list is at
http://www.cis.temple.edu/~pwang/203-AI/Lecture/203-1126.htm.

Happy New Year to everyone!

Pei




RE: [agi] Early Apps.

2002-12-29 Thread Ben Goertzel


Gary Miller wrote:
> I agree that as humans we bring a lot of general knowledge with us when
> we learn a new domain.  That is why I started off with the general
> conversational domain and am now branching into science, philosophy,
> mathematics and history.  And of course the AI cannot make all the
> connections without being extensively interviewed on a subject and
> having a human help clarify its areas of confusion, just as a parent
> answers questions for a child or a teacher for a student.  I am not in
> fact trying to take the exhaustive, one-domain-at-a-time approach, but
> rather to teach it the most commonly known and requested information
> first.  My last email just used that description to identify my thoughts
> on grounding.  I am hoping that by doing this and repeating the
> interviewing process in an iterative development cycle, the bot will
> eventually be able to discuss many different subjects at a somewhat
> superficial level, much the same as most humans are capable of.  This is
> a lot different from the exhaustive definition that Cyc provides for
> each concept.

Gary, I respect the hypothesis you're making here: it is a scientific
hypothesis in the sense of Karl Popper, i.e. it is pragmatically
falsifiable.  You can try this approach and see how it works.  It is
not identical to the expert systems approach, though it has some
commonalities.

My own intuition is that this approach will not succeed -- that conversing
with humans is not going to get across enough of the tacit, implicit
knowledge that a mind needs to have to really converse intelligently in any
nontrivial subject area.  I think that even if the implicit knowledge seems
to *us* to be there in the conversations, it won't be there *for the system*
unless the system has had some experience gaining implicit knowledge of its
own via nonlinguistic world-interaction.

> I don't think AI is absent sufficient theory, just sufficient execution.

Well, here I profoundly disagree with you.  I think that the
generally-accepted AI theories are profoundly wrong, and extremely limited
in their view of how intelligence must operate.  I think AI's failure to
execute is directly based on the failure of its theories to accept and
encompass the full complexity of the mind.

> I feel like the Cyc Project's heart was in the right place and the level
> of effort was certainly great, but perhaps the purity of their vision
> took priority over usability of the end result.  Is any company actually
> using Cyc as anything other than a search engine yet?
>
> That being said, other than Cyc I am at a loss to name any serious AI
> efforts which are over a few years in duration and have more than 5
> man-years worth of effort (not counting promotional and fundraising).

My Novamente project certainly fits this description.  The Webmind AI
project had about 70 man-years of effort go into it between 1997-early 2001.
Novamente is Webmind's successor -- different code, different mathematics,
different software architecture, but the same spirit, and building on
Webmind's successes and mistakes.  Novamente has had maybe 7 man-years of
effort go into it since mid-2001.

-- Ben G




RE: [agi] Early Apps.

2002-12-28 Thread Ben Goertzel

Gary Miller wrote:
***
I guess I'm still having trouble with the concept of grounding.  If I
teach/encode a bot with 99% of the knowledge about hydrogen, using facts
and information available in books and on the web, it is now an idiot
savant: it knows all about hydrogen and nothing about anything else, and
it is not grounded.  But if I then examine the knowledge learned about
hydrogen for other mentioned topics like gases, elements, water, atoms,
etc., and teach/encode 99% of the knowledge on these topics to the bot,
then the bot is still an idiot savant, but less so -- isn't it better
grounded?  A certain amount of grounding, I think, has occurred by
providing knowledge of related concepts.

If we repeat this process again, we may say the program is an idiot
savant in chemistry.

...

I will agree that today's bots are not grounded, because they are idiot
savants and lack the broad-based high-level knowledge with which to
ground any given fact or concept.  But if I am correct in my thinking,
this is the same problem that Helen Keller's teacher was faced with in
teaching Helen one concept at a time, until she had enough simple
information or knowledge to build more complex knowledge and concepts
upon.
***

What you're describing is the Expert System approach to AI, closely
related to the common sense approach to AI.

Cycorp takes this point of view, and so have a whole lot of other AI
projects in the last few decades...

I certainly believe there's some truth to it.  If you encoded a chemistry
textbook in formal logic, fed it into an AI system, and let the AI system do
a lot of probabilistic reasoning and associating on the information, then
you'd have a lot of speculative uncertain intuitive knowledge generated in
the system, complementing the hard knowledge that was explicitly encoded.
If you encoded a physics textbook and a bio textbook as well, you could have
the system generate uncertain, intuitive cross-domain knowledge in the same
way.
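To make that concrete, here is a minimal sketch of the general idea --
purely illustrative, and the fact encoding and the product-confidence rule
are my assumptions, not anything from Novamente:

    # Hypothetical sketch: "textbook" facts encoded as weighted triples,
    # plus one naive inference rule that generates speculative, uncertain
    # knowledge from the hard-coded knowledge.
    facts = {
        ("hydrogen", "is_a", "element"): 0.99,
        ("element", "is_a", "matter"): 0.99,
        ("hydrogen", "part_of", "water"): 0.95,
    }

    def speculate(kb):
        """Derive uncertain knowledge: if A is_a B and B is_a C, guess A is_a C."""
        derived = {}
        for (a, r1, b), c1 in kb.items():
            for (b2, r2, c), c2 in kb.items():
                if r1 == r2 == "is_a" and b == b2:
                    # product rule: derived confidence is lower than either source
                    derived[(a, "is_a", c)] = c1 * c2
        return derived

    print(speculate(facts))   # {('hydrogen', 'is_a', 'matter'): 0.9801}

Feeding in a second domain's facts would let the same rule generate
cross-domain links in exactly the way described above.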

In fact, we are doing something like this in Novamente now, for a
bioinformatics application.  We're feeding in information from a dozen
different bio databases and letting the system reason on the integrated
knowledge ... right now we're at the feeding-in stage.

Unlike some anti-symbolic-AI extremists, I think this sort of thing can be
*useful* for AGI.  But I think it can only be a part of the picture,
whereas I think experience-based learning is a lot more essential...

I don't think that a pragmatically-achievable amount of formally-encoded
knowledge is going to be enough to allow a computer system to think deeply
and creatively about any domain -- even a technical domain of science.
What's missing, among other things, is the intricate interlinking between
declarative and procedural knowledge.  When humans learn a domain, we learn
not only facts, we learn techniques for thinking and problem-solving and
experimenting and information-presentation ... and we learn these in such a
way that they're all mixed up with the facts.  In theory, I believe, all
this stuff could be formalized -- but the formalization isn't pragmatically
possible to do, because we humans don't explicitly know the techniques we
use for thinking, problem-solving, etc.  In large part, we do them
tacitly, and we learn them tacitly...

When we learn a new domain declaratively, we start off by transferring some
of our tacit knowledge from other domains to that new domain.  Then, we
gradually develop new tacit knowledge of that domain, based on experience
working in the domain...

I think that this tacit knowledge (lots of uncertain knowledge, mixing
declarative & procedural) has got to be there as a foundation, for a system
to really deploy factual knowledge in a creative & fluent way...


***
 I think we cut and paste what we are trying to
say into what we think is the correct template and then read it back to
ourselves to see if it sounds like other things we have heard and seems
to make sense.
***

I think this is a good description of one among many processes involved in
language generation...

I also think there's some more complex unconscious inference going on than
is implied by your statement.  It's not a matter of cutting and pasting
into a template; it's a matter of recursively applying a bunch of syntactic
rules that build up complex linguistic forms from simpler ones.  The
syntactic buildup process has parallels to the thought-buildup process, and
the two sometimes proceed in synchrony, which is one of the reasons
formulating thoughts in language can help clarify them.
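As a toy picture of that recursive buildup -- illustrative only, not the
grammar machinery actually intended here -- a handful of rules can expand
simple symbols into complex forms:

    import random

    # Hypothetical toy grammar: complex forms built recursively from simpler
    # ones.  The NP rule can recurse through VP, so arbitrarily nested
    # structures emerge from a few rules.
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"], ["the", "N", "that", "VP"]],
        "VP": [["V", "NP"], ["V"]],
        "N":  [["mind"], ["sentence"], ["thought"]],
        "V":  [["builds"], ["clarifies"]],
    }

    def expand(symbol):
        """Recursively apply syntactic rules, building up a linguistic form."""
        if symbol not in GRAMMAR:            # terminal word
            return [symbol]
        production = random.choice(GRAMMAR[symbol])
        return [w for part in production for w in expand(part)]

    print(" ".join(expand("S")))   # e.g. "the thought that clarifies builds"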

I dealt with some of these issues -- on a conceptual, not an
implementational, level -- in a chapter of my book From Complexity to
Creativity, entitled "Fractals and Sentence Production":

http://www.goertzel.org/books/complex/ch9.html

If I were to rewrite that chapter now, it would have a lot of stuff on
probabilistic inference & unification grammars -- richer and better details,
enhanced by the particular 

RE: [agi] Early Apps.

2002-12-28 Thread Gary Miller
Ben Goertzel wrote:

> I don't think that a pragmatically-achievable amount of formally-encoded
> knowledge is going to be enough to allow a computer system to think
> deeply and creatively about any domain -- even a technical domain of
> science.  What's missing, among other things, is the intricate
> interlinking between declarative and procedural knowledge.  When humans
> learn a domain, we learn not only facts, we learn techniques for
> thinking and problem-solving and experimenting and
> information-presentation ... and we learn these in such a way that
> they're all mixed up with the facts.
>
> What you're describing is the Expert System approach to AI, closely
> related to the "common sense" approach to AI.
>
> ...

I agree that as humans we bring a lot of general knowledge with us when
we learn a new domain.  That is why I started off with the general
conversational domain and am now branching into science, philosophy,
mathematics and history.  And of course the AI cannot make all the
connections without being extensively interviewed on a subject and
having a human help clarify its areas of confusion, just as a parent
answers questions for a child or a teacher for a student.  I am not in
fact trying to take the exhaustive, one-domain-at-a-time approach, but
rather to teach it the most commonly known and requested information
first.  My last email just used that description to identify my thoughts
on grounding.  I am hoping that by doing this and repeating the
interviewing process in an iterative development cycle, the bot will
eventually be able to discuss many different subjects at a somewhat
superficial level, much the same as most humans are capable of.  This is
a lot different from the exhaustive definition that Cyc provides for
each concept.

I view what I am doing as distinct from expert systems because I do not
yet use either a backward- or forward-chaining inference engine to
satisfy a limited number of goal states.  The knowledge base is not in
the form of rules, but rather many matched patterns and encoded factoids
of knowledge, many of which are transitory in nature and track the
context of the conversation.  Each pattern may trigger a request for
additional information, like an expert system.  But the bot does not
have a particular goal state in mind, other than learning new
information, unless a specific request of it is made by the user.

I also differ from Cyc in that, realizing the importance of English as a
user interface from the beginning, all internal thoughts and goal states
occur as an internal dialog in English.  This eliminates the requirement
to translate an internal knowledge representation to an external natural
language, other than providing one or more response patterns to specific
input patterns.  It also makes it easy to monitor what the bot is
learning and whether it is making proper inferences, because its
internal thought process is displayed in English while in debug mode.

The templates which generate the responses in some cases do have
conditional logic to determine which output template is the appropriate
response, based on the AI's personality variables and the context of the
current conversation.  Variables are also set conditionally to maintain
metadata for context.  If the AI references a male in its response, [He]
and [Him] get set, vs. [Her] and [She] if a female is referenced.
[CurrentTopic], [It], [There] and [They] are all set to maintain
backward contextual references.
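A minimal sketch of that template-plus-context-variables scheme -- the
pattern syntax, rules and variable handling here are assumptions for
illustration, not the actual bot's code:

    import re

    # Hypothetical sketch: pattern -> template matching, with context
    # variables like [CurrentTopic] maintained between turns.
    context = {"He": "", "She": "", "It": "", "CurrentTopic": ""}

    RULES = [
        # (regex pattern, response template using [Var] placeholders)
        (r"tell me about (?P<topic>\w+)",
         "What would you like to know about [CurrentTopic]?"),
        (r"who is (?P<person>\w+)",
         "I don't know [CurrentTopic] yet. Can you tell me about him or her?"),
    ]

    def respond(utterance):
        for pattern, template in RULES:
            m = re.search(pattern, utterance.lower())
            if m:
                topic = m.group(1)
                # update context metadata for backward references
                context["CurrentTopic"] = topic
                context["It"] = topic
                # fill the [Var] slots in the output template
                out = template
                for var, val in context.items():
                    out = out.replace(f"[{var}]", val)
                return out
        return "Tell me more."

    print(respond("Tell me about hydrogen"))
    # -> "What would you like to know about hydrogen?"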

I was able to find a few references to the "common sense" approach to AI
on Google, and some of the difficulties in achieving it.  And I must
admit I have not yet implemented non-monotonic or probabilistic
reasoning.  I am not under the illusion that I am necessarily inventing
or implementing anything that has not been conceived of before.  As
Newton said, if I achieve great heights it will be because I have stood
on the shoulders of giants.  I just see the current state of the art and
think that it can be made much better.  I do not actually know how far I
can take it while staying self-funded, but hopefully by the time my
money runs out it will demonstrate enough utility and potential to be of
value to someone.  I think I like the sound of the "common sense"
approach to AI, though.  I can't remember the last time anyone accused
me of having common sense, but I like the sound of it!

I don't think AI is absent sufficient theory, just sufficient execution.
I feel like the Cyc Project's heart was in the right place and the level
of effort was certainly great, but perhaps the purity of their vision
took priority over usability of the end result.  Is any company actually
using Cyc as anything other than a search engine yet?  

That being said, other than Cyc I am at a loss to name any serious AI
efforts which are over a few years in duration and have more than 5
man-years worth of effort (not counting promotional and fundraising).

The Open Source efforts are interesting and have some utility but are

Re: [agi] Early Apps.

2002-12-27 Thread Shane Legg
Alan Grimes wrote:

> According to my rule of thumb,
> If it has a natural language database it is wrong,

I more or less agree...

Currently I'm trying to learn Italian before I leave
New Zealand to start my PhD.  After a few months working
through books on Italian grammar and trying to learn lots
of words and verb forms and stuff and not really getting
very far, I've come to realise just how complex language is!

Many of you will have learnt a second language as an adult
yourselves and will know what I mean -- natural languages are
massively complex things.  I worked out that I know about
25,000 words in English, many with multiple meanings, many
having huge amounts of symbol grounding information and
complex relationships with other things I know; then there
is spelling information and grammar knowledge.  I'm
told that English grammar isn't too complex, but my Italian
grammar reference book is 250 pages of very dense information
on irregular verbs and tenses etc., and of course even that
is only a high-level rigid structure description, not how the
language is actually used.

Natural languages are hard - really hard.  Humans have special
brain areas that are set up to solve just this kind of problem
and even then it takes a really long time to get good at it,
perhaps ten years!  To work something that complex out using
a general intelligence rather than specialised systems would
require a computer that was amazingly smart in my opinion.

One other thing: if one really is focused on natural language
learning, why not make things a little easier and use an artificial
language like Esperanto?  Unlike highly artificial languages
(logic-based or maths-based languages, etc.), Esperanto is just
like a normal natural language in many ways.  You can get novels
written in it, you can speak it, and some children have even grown
up speaking it as one of their first languages alongside other
natural languages.  However, the language is extremely regular
compared to a real natural language.  For example, there are only
16 rules of grammar -- they can fit onto a single sheet of paper!
All the verbs and adverbs and pronouns and so on obey neat and tidy
patterns and rules.  I'm told that after two weeks somebody can
become comfortable enough with the grammar to be able to hold a
conversation, and then after a few months of learning more words
is able to communicate quite freely and read books and so on.
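As a small illustration of that regularity: in Esperanto the part of
speech can be read off the word ending alone, so a toy tagger -- just a
sketch, not from any real NLP package -- is only a few lines:

    # Hypothetical sketch: Esperanto part of speech from word endings alone.
    # -o noun, -a adjective, -e adverb, -j plural, -n accusative,
    # -as/-is/-os present/past/future verb.
    def tag(word):
        w = word.lower().rstrip("jn")        # strip plural -j and accusative -n
        if w.endswith(("as", "is", "os")):
            return "verb"
        if w.endswith("o"):
            return "noun"
        if w.endswith("a"):
            return "adjective"
        if w.endswith("e"):
            return "adverb"
        return "particle"

    for w in ["hundo", "hundojn", "bela", "rapide", "parolas", "parolis"]:
        print(w, "->", tag(w))
    # hundo -> noun, hundojn -> noun, bela -> adjective,
    # rapide -> adverb, parolas -> verb, parolis -> verb

Nothing like this works for English, where the exceptions overwhelm any
such rule set.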

Why not aim at this and make the job much easier?  If you ever
did build a computer that could hold a good conversation in
Esperanto I'm sure moving to a natural language would only be
a matter of taking what you already had and increasing the level
of complexity to deal with all the additional messiness required.

Enough ranting for today!  :)

Shane



RE: [agi] Early Apps.

2002-12-27 Thread Gary Miller
On Dec 26 Ben Goertzel said:

> One basic problem is what's known as "symbol grounding".  This
> means that an AI system can't handle semantics, language-based
> cognition or even advanced syntax if it doesn't understand the
> relationships between its linguistic tokens and patterns in the
> nonlinguistic world.

I guess I'm still having trouble with the concept of grounding.  If I
teach/encode a bot with 99% of the knowledge about hydrogen, using facts
and information available in books and on the web, it is now an idiot
savant: it knows all about hydrogen and nothing about anything else, and
it is not grounded.  But if I then examine the knowledge learned about
hydrogen for other mentioned topics like gases, elements, water, atoms,
etc., and teach/encode 99% of the knowledge on these topics to the bot,
then the bot is still an idiot savant, but less so -- isn't it better
grounded?  A certain amount of grounding, I think, has occurred by
providing knowledge of related concepts.

If we repeat this process again, we may say the program is an idiot
savant in chemistry.

Each time we repeat the process, are we not grounding the previous
knowledge further?  The bot can now reason and respond to questions not
just about hydrogen; it now has an English representation of the
relationship between hydrogen and other related concepts in the
physical world.

If we were to teach someone such as Helen Keller with very limited
sensory inputs would we not be attempting to do the same thing?

Humans of course do not learn in this exhaustive manner.  We get a
shotgun bombardment of knowledge from all types of media on all manner
of subjects.  The things that interest us, we pursue additional
knowledge about.  The more detailed our knowledge in any given area, the
greater we say our expertise is.  Initially we will be better grounded
than a bot, because as children we learn a little bit about a whole lot
of things.  So anything new we learn, we attempt to tie into our
existing semantic network.

When I think, I think in English.  Yes, at some level below my
conscious awareness these English thoughts are electrochemically
encoded, but consciously I reason and remember in my native tongue, or I
retrieve a sensory image -- multimedia, if you will.

If someone tells me that "a kinipsa is a terrible plorid", I attempt to
determine what a kinipsa and a plorid are, so that I may ground this
concept and interconnect it correctly within my existing semantic
network.  If a bot is taught to pursue new knowledge and ground the
unknown terms within its existing semantic net, by putting the goals
"Find out what a plorid is" and "Find out what a kinipsa is" on its list
of short-term goals, then it will ask questions and seek to ground
itself as a human would!
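A minimal sketch of that goal-queue idea -- the term detection and data
structures here are assumptions for illustration only:

    from collections import deque

    # Hypothetical sketch: queue "find out what X is" goals for unknown terms.
    semantic_net = {"hydrogen": {"is_a": "element"}}   # concepts already known
    goals = deque()

    def hear(sentence):
        """Add a find-out goal for each term missing from the semantic net."""
        stop_words = {"a", "is", "the", "terrible"}    # toy stop-word list
        for word in sentence.lower().rstrip(".!?").split():
            if word.isalpha() and word not in semantic_net \
                    and word not in stop_words:
                goals.append(f"Find out what a {word} is")

    hear("A kinipsa is a terrible plorid.")
    while goals:
        print(goals.popleft())   # the bot would pose these as questions
    # Find out what a kinipsa is
    # Find out what a plorid is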

I will agree that today's bots are not grounded, because they are idiot
savants and lack the broad-based high-level knowledge with which to
ground any given fact or concept.  But if I am correct in my thinking,
this is the same problem that Helen Keller's teacher was faced with in
teaching Helen one concept at a time, until she had enough simple
information or knowledge to build more complex knowledge and concepts
upon.

When a child learns to speak, he does not have a large dictionary to
draw on to tell him that "mice" is the plural of "mouse".  No rule will
tell him that.  He has to learn it.  He will say "mouses" and someone
will correct him.  It gets added to his NLP database as an exception to
the rule.  A human has limited storage, so a rule learned by
generalizing from experience is a shortcut to learning and remembering
all the plural forms for nouns.  In an AGI we can give the intelligence
certain learning advantages, such as these dictionaries and lists of
synonym sets, which do not take that much storage in the computer.
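For illustration, a sketch of the rule-plus-exceptions idea -- mine, not
any particular system's code:

    # Hypothetical sketch: a generalized rule plus a learned exception
    # dictionary; a correction simply adds an entry to the exceptions.
    exceptions = {}                      # filled in as the system is corrected

    def pluralize(noun):
        return exceptions.get(noun, noun + "s")   # default rule: add -s

    print(pluralize("mouse"))            # "mouses" -- the child's first guess
    exceptions["mouse"] = "mice"         # someone corrects him; store exception
    print(pluralize("mouse"))            # "mice"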

I also think that children do not deal with syntax.  They have heard a
statement similar to what they want to express and have this stored as a
template in their minds.  I think we cut and paste what we are trying to
say into what we think is the correct template, and then read it back to
ourselves to see if it sounds like other things we have heard and seems
to make sense.  For people who have to learn a foreign language as
adults this is difficult, because they tend to think in their first
language and commingle the templates from their original and the new
language.  But because we do not parse what we hear and read strictly by
the laws of syntax, we have little trouble understanding many of these
ungrammatical utterances.


RE: [agi] Early Apps.

2002-12-26 Thread Gary Miller
On Dec. 26 Alan Grimes said:

> According to my rule of thumb,
> If it has a natural language database it is wrong,
 
Alan, I can see, based on the current generation of bot technology, why
one would feel this way.

I can also see people having the view that biological systems learn from
scratch so that AI systems should be able to also.

Neither of these arguments is particularly persuasive, though, based on
what I've developed to date.

Do you have other arguments against an NLP knowledge-based approach that
you could share with me?

If you feel this is out of bounds for the list, please just email me
with your arguments.

I am involved in such a project and certainly don't wish to be
wasting my time!


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of Alan Grimes
Sent: Thursday, December 26, 2002 1:12 AM
To: [EMAIL PROTECTED]
Subject: [agi] Early Apps.



According to my rule of thumb,

If it has a natural language database it is wrong,

many of the proposed early AGI apps are rather unfeasible.

However, there is a very interesting application which goes straight to
the heart of the main AI problem and also provides a very valuable tool
for flexing the chips that we already have in our sweaty little hands.

The area is COMPILERS. 

Today's compilers are notoriously bad. The leading free compiler is
atrociously bad. 

Now, if there could be an AI-based compiler that could understand both
the source and the machine in a very human-like way, the output code
would be that much better.  This would also be valuable for a bootstrap
AI, though I strongly caution against such an AI until we have a _MUCH_
better understanding of what is going on.

I expect to be preparing a proposal in a few months that will outline a
complete strategy for an AI that should be both feasible and, through
inherent architectural constraints, reasonably safe.

-- 
pain (n): see Linux.
http://users.rcn.com/alangrimes/




Re: [agi] Early Apps.

2002-12-26 Thread Alan Grimes
> Neither of these arguments is particularly persuasive though based on
> what I've developed to date.

!+ d03$n'7 vv0rk b3cuz $uch 4 $!st3m c4n'+ r34d m! 31337 +3x+.


> I am involved in such a project and certainly don't wish to be
> wasting my time!

It would be out of place for me to say anything about your project, even
if I did know the specifics of your goals.  The only thing I'm saying
is: be realistic about the limitations of your approach.


-- 
pain (n): see Linux.
http://users.rcn.com/alangrimes/




Re: [agi] Early Apps.

2002-12-26 Thread Damien Sullivan
On Thu, Dec 26, 2002 at 01:44:25PM -0800, Alan Grimes wrote:

> A human level intelligence requires arbitrary access to
> visual/phonetic/other faculties in order to be intelligent.

I'm sure all those blind and deaf people appreciate being considered
unintelligent.

-xx- Damien X-) 




Re: [agi] Early Apps.

2002-12-26 Thread Alan Grimes
Damien Sullivan wrote:
>> A human level intelligence requires arbitrary access to
>> visual/phonetic/other faculties in order to be intelligent.
>
> I'm sure all those blind and deaf people appreciate being considered
> unintelligent.

It depends.

If their brains are intact, they are no less intelligent than their
peers.

However, there are some forms of blindness that involve cortical
lesions.  This form of blindness is accompanied by a loss of visual
faculties, and hence a partial loss of intelligence.

-- 
pain (n): see Linux.
http://users.rcn.com/alangrimes/




Re: [agi] Early Apps.

2002-12-26 Thread Alan Grimes
Gary Miller wrote:
> AG A human level intelligence requires arbitrary access to
> AG visual/phonetic/other faculties in order to be intelligent.
>
> By this definition of intelligence, then, we must conclude that Helen
> Keller was totally lacking in intelligence.

You are confusing the visual faculty (a region of cortex) with the sense
of sight (through the organ called the eye).

I believe that her actual faculties were intact but her senses were
damaged.

-- 
pain (n): see Linux.
http://users.rcn.com/alangrimes/




RE: [agi] Early Apps.

2002-12-26 Thread ben
On 26 Dec 2002 at 10:32, Gary Miller wrote:

> On Dec. 26 Alan Grimes said:
>
>> According to my rule of thumb,
>> If it has a natural language database it is wrong,
>
> Alan, I can see, based on the current generation of bot technology, why
> one would feel this way.
>
> I can also see people having the view that biological systems learn
> from scratch, so AI systems should be able to also.
>
> Neither of these arguments is particularly persuasive though based on
> what I've developed to date.
>
> Do you have other arguments against an NLP knowledge-based approach
> that you could share with me.

One basic problem is what's known as "symbol grounding".  This means that an
AI system can't handle semantics, language-based cognition or even advanced
syntax if it doesn't understand the relationships between its linguistic
tokens and patterns in the nonlinguistic world.

However, this problem doesn't totally rule out use of a linguistic DB.  One
could imagine supplying a system with a linguistic DB and having it learn
groundings for the words and structures in the DB...

Another problem is what I call the "knowledge richness" problem.

The basic idea here is that if a system learns something through experience,
it is then likely to know that something in an adaptable, adjustable way.
Because it knows not only the thing itself, but a bunch of other things in
the neighborhood of that thing, various useful components and superstructures
of the thing, etc. -- it knows these other related things as side-effects of
the learning process.

On the other hand, if a system learns something through reading out of a DB,
it doesn't have this surround of related things to draw on, so it will be far
less able to adapt and build on that thing it's learned...

My view is that a linguistic DB is not necessarily the kiss of death for an
AGI system -- but I don't think you can build an AGI system that has a DB as
its *primary source* of linguistic knowledge.  If an AGI system uses a
linguistic DB as one among many sources of linguistic information -- and the
others are mostly experience-based -- then it may still work, and the
linguistic DB may potentially accelerate aspects of its learning...

Ben G
