Re: [agi] Early Apps.

2002-12-27 Thread Shane Legg
Alan Grimes wrote:

According to my rule of thumb, 

If it has a natural language database it is wrong, 

I more or less agree...

Currently I'm trying to learn Italian before I leave
New Zealand to start my PhD.  After a few months working
through books on Italian grammar and trying to learn lots
of words and verb forms and stuff and not really getting
very far, I've come to realise just how complex language is!

Many of you will have learnt a second language as an adult
yourselves and will know what I mean - natural languages are
massively complex things.  I worked out that I know about
25,000 words in English, many with multiple meanings, many
having huge amounts of symbol grounding information and
complex relationships with other things I know; then there
is spelling information and grammar knowledge.  I'm
told that English grammar isn't too complex, but my Italian
grammar reference book is 250 pages of very dense information
on irregular verbs and tenses etc... and of course even that
is only a high-level, rigid structural description, not how the
language is actually used.

Natural languages are hard - really hard.  Humans have special
brain areas that are set up to solve just this kind of problem
and even then it takes a really long time to get good at it,
perhaps ten years!  To work something that complex out using
a general intelligence rather than specialised systems would
require a computer that was amazingly smart in my opinion.

One other thing; if one really is focused on natural language
learning, why not make things a little easier and use an artificial
language like Esperanto?  Unlike highly artificial languages
such as logic-based or maths-based languages, Esperanto is just
like a normal natural language in many ways.  You can get novels
written in it, you can speak it, some children have even grown
up speaking it as one of their first languages alongside other
natural languages.  However, the language is extremely regular
compared to a real natural language.  For example, there are only
16 rules of grammar - they can fit onto a single sheet of paper!
All the verbs and adverbs and pronouns and so on obey neat and tidy
patterns and rules.  I'm told that after two weeks somebody can
become comfortable enough with the grammar to be able to hold a
conversation, and then after a few months of learning more words
is able to communicate quite freely, read books, and so on.

Why not aim at this and make the job much easier?  If you ever
did build a computer that could hold a good conversation in
Esperanto I'm sure moving to a natural language would only be
a matter of taking what you already had and increasing the level
of complexity to deal with all the additional messiness required.

Enough ranting for today!  :)

Shane

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]


[agi] Language and AGI (was Re: Early Apps)

2002-12-27 Thread Cliff Stabbert
Friday, December 27, 2002, 5:15:40 AM, Shane Legg wrote:

SL One other thing; if one really is focused on natural language
SL learning, why not make things a little easier and use an artificial
SL language like Esperanto?  Unlike highly artificial languages
SL such as logic-based or maths-based languages, Esperanto is just
SL like a normal natural language in many ways.  You can get novels
SL written in it, you can speak it, some children have even grown
SL up speaking it as one of their first languages alongside other
SL natural languages.  However, the language is extremely regular
SL compared to a real natural language.

I suspect that Esperanto will not be much more difficult to tackle
than any currently existing language, or at best a *tiny* bit easier.
The greatest difficulty of language is not grammar, or spelling,
punctuation, etc.  To get an AGI to the point of using _any_ language
naturally on the level humans use it is the big challenge.  It can
be ancient Greek or Latin with all its declensions and exceptions; the
difficulty lies in the use of language per se.

But this does bring up a related point.

From a certain perspective, the development of abstraction is part 1),
and developing the ability to _communicate_ abstractions (whether merely
to oneself, as memory, or to others) through the medium of language is
part 2), of the recipe for the development of intelligence.  1) and
2) intermingle and/or are different aspects of a single process;
however conceived, there is a discontinuity -- a singularity, if you
will -- that has taken place between general primate thought and human
thought (and that is recapitulated in the development from baby thought
to child thought).

The linguistic step strikes me as what some have called a quantum
leap -- it is a qualitative jump, a meta jump, rather than an
incremental step upwards.

Do we expect this quantum leap to arise completely naturally
from an AGI, or do we build our AGIs with something of this nature?
How explicitly do we code for some ability to abstract?  How closely
does this correlate to the human use of language?

Note, I have no clue how one would go about building in such a
capability -- I'm just curious whether it's too unlikely a step to
hope to have occur randomly (naturally) on a realistic time scale.


--
Cliff




Re: [agi] Language and AGI

2002-12-27 Thread Alexander E. Richter
At 06:11 27.12.02 -0500, Cliff wrote:
...
I suspect that Esperanto will not be much more difficult to tackle
than any currently existing language, or at best a *tiny* bit easier.
The greatest difficulty of language is not grammar, or spelling,
punctuation, etc.  

Esperanto is still too complicated. I am tinkering with 3 languages:

human-readable:

http://www.lojban.org/
http://www.loglan.org/

machine-readable:

http://pi7.fernuni-hagen.de/helbig/multinet_en.html

To talk with a chatterbot in Lojban or Loglan is much better and easier
than in German or English (or Esperanto), but there are not enough texts in
these easier languages.

Natural languages are a mess, with many rules, exceptions and patches, but
imho only more work is needed to put this into a machine.

cu Alex




Re: [agi] Natural Language DB's and AI

2002-12-27 Thread ben

Kevin wrote:
 We often intelligently use things we do not understand.  Computers,
 automobiles, our brains, quarks, and so on.  Why can't an AGI use words it
 does not actually understand, so long as it uses the word properly and
 accomplishes the desired result?  

I think it's fine for an AGI to use *some* words it doesn't understand well.

However, my conjecture is that in order for it to use *any* words with true
fluency, it needs to have a solid core of words that it *does* understand
(based on grounding in experience).  Based on this core, it can then talk
through its digital butt about a lot of other stuff ;-)

ben g







[agi] Linguistic DB

2002-12-27 Thread SMcClenahan
I've always considered the whole world/universe as one big database.  A
system that narrows its focus to a partial set of knowledge contained in,
say, a computer database will be excellent when performing within the realm
for which that database was created.

Everyone needs to start wearing microphones and earpieces for the computer
to communicate with you.  What's the longest time it could take to develop a
human-computer communication protocol, 20 years?  I need to attach one to my
6-month-old daughter now, before it's too late!

Heard about this DB yet?
http://www.infoworld.com/articles/ap/xml/02/12/16/021216apfastalk.xml


cheers,
Simon
 On the other hand, if a system learns something through reading out of a DB,
 it doesn't have this surround of related things to draw on, so it will be
 far less able to adapt and build on that thing it's learned...

 My view is that a linguistic DB is not necessarily the kiss of death for an
 AGI system -- but I don't think you can build an AGI system that has a DB as
 its *primary source* of linguistic knowledge.  If an AGI system uses a
 linguistic DB as one among many sources of linguistic information -- and the
 others are mostly experience-based -- then it may still work, and the
 linguistic DB may potentially accelerate aspects of its learning..




[agi] FUNNY: Tenjewberrymud (fwd)

2002-12-27 Thread SMcClenahan
Seriously, how would current state-of-the-art voice recognition software grok
this conversation?

cheers,
Simon
--  Forwarded Message:  -

You must read this aloud (for the full effect).  Just say any unfamiliar 
words phonetically.  It's amazing, you will understand what 
'tendjewberrymud' means by the end of the conversation.  This has been 
nominated for best email of 1999.  The following is a telephone 
conversation between a hotel guest and a room-service operator at a hotel 
somewhere in Asia.  The call was recorded and later published in the Far 
East Economic Review.  Here goes


Room Service (RS): Morny.  Ruin sorbees.
Guest (G): Sorry, I thought I dialed room-service
RS: Rye..Ruin sorbees..morny!  Djewish to odor sunteen??
G: Uh..yes..I'd like some bacon and eggs
RS: Ow July den?
G: What??
RS: Ow July den?...pry, boy, pooch?
G : Oh, the eggs!  How do I like them?  Sorry, scrambled please.
RS: Ow July dee bayhcem...crease?
G: Crisp will be fine.
RS : Hokay.  An San tos?
G: What?
RS: San tos.  July San tos?
G: I don't think so
RS: No?  Judo one toes??
G: I feel really bad about this, but I don't know what 'judo one toes' means.
RS: Toes!  Toes!...why djew Don Juan toes?  Ow bow english mopping we bother?
G: English muffin!!  I've got it!  You were saying 'Toast.' Fine.  Yes, an 
english muffin will be fine.
RS: We bother?
G: No...just put the bother on the side.
RS: Wad?
G: I mean butter...just put it on the side.
RS: Copy?
G: Sorry?
RS: Copy...tea...mill?
G: Yes.  Coffee please, and that's all.
RS: One Minnie.  Ass ruin torino fee, strangle ache, crease baychem, tossy 
singlish mopping we bot her honey sigh, and copyrye??
G: Whatever you say
RS: Tendjewberrymud

G: You're welcome.





Re: [agi] Language and AGI (was Re: Early Apps)

2002-12-27 Thread Shane Legg


I suspect that Esperanto will not be much more difficult to tackle
than any currently existing language, or at best a *tiny* bit easier.
The greatest difficulty of language is not grammar, or spelling,
punctuation, etc.  To get an AGI to the point of using _any_ language
naturally on the level humans use it is the big challenge.  It can
be ancient Greek or Latin with all its declensions and exceptions; the
difficulty lies in the use of language per se.



In case my position isn't clear, I think that any language
will be too difficult to start with and development should
be focused on playing a wide range of simple games instead.

However, I have been really struck by the fact that Esperanto
(and no doubt many other artificial languages) can be equal
to a natural language in terms of the role it plays, and yet
is something like ten times less complex than a real natural
language in terms of language structure.

I'm sure a reasonably powerful AGI would be able to infer the
Esperanto rule for forming the plural of a noun (you add -j
to the end of the word), but I think it would struggle to work
out how to do it in Italian (it's about six pages of rules in
my Italian grammar book, and that doesn't cover all the weird
cases, like when a word conditionally changes gender when forming
a plural, depending on the context).
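To make the contrast concrete, the entire Esperanto noun-plural "grammar" is one line of code.  This is just a toy sketch of mine; the sample words are common Esperanto nouns used for illustration:

```python
# Toy sketch: the whole Esperanto noun-plural rule is one line.
def esperanto_plural(noun):
    """Esperanto nouns end in -o; the plural simply appends -j."""
    return noun + "j"

print(esperanto_plural("hundo"))  # hundoj ("dogs")
print(esperanto_plural("kato"))   # katoj ("cats")
```

The Italian equivalent would need pages of gender rules, irregular stems, and context-dependent exceptions before it produced anything useful.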

Sure, getting a computer to speak Esperanto would still be
*really* hard, but having hundreds of pages of grammar rules
that serve no real purpose other than to add a truckload of
complexity to an already difficult problem just seems absurd.

I guess people continue to do AI with languages like English
because that is what is of practical use and where more money
is likely to be.

Shane



RE: [agi] Language and AGI (was Re: Early Apps)

2002-12-27 Thread Ben Goertzel

Shane,

I agreed with the wording in your earlier post more ;)

It is true that learning Esperanto would be easier for an AI than learning
English or Italian.

However, I think that if you had an AI capable of mastering the
syntax-semantics-pragmatics interface [the really hard part of language, as
you point out], then learning the syntactic rules of any language would
probably be a piece of cake for the AI...

Once an AI understands the world and can communicate in rudimentary,
incorrect language, you can teach it correct grammar, and it will probably
learn the rules faster than most humans...

-- Ben




Re: [agi] Language and AGI (was Re: Early Apps)

2002-12-27 Thread Jonathan Standley

- Original Message -
From: Shane Legg [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, December 27, 2002 7:48 PM
Subject: Re: [agi] Language and AGI (was Re: Early Apps)



 I guess people continue to do AI with languages like English
 because that is what is of practical use and where more money
 is likely to be.

 Shane

A newspeak-style language might be useful for communicating with fairly
simple AIs.  An emerging mind would probably have no more use for 10
synonyms for 'have' than a baby learning to talk does.

But natural language may be one of the more 'difficult' approaches to AI.
The various experiments that have been conducted regarding the
Sapir-Whorf hypothesis
lead me to question the notion of language as the root of intelligence.  It
seems likely to me that a human stores and manipulates primarily conceptual
constructions, not linguistic ones.  The language one speaks certainly
influences thought processes, but few people think in sentences.

J Standley




RE: [agi] Early Apps.

2002-12-27 Thread Gary Miller
On Dec 26 Ben Goertzel said:

 One basic problem is what's known as symbol grounding.  This 
 means that an AI system can't handle semantics, language-based 
 cognition, or even advanced syntax if it doesn't understand the 
 relationships between its linguistic tokens and patterns in the 
 nonlinguistic world.

I guess I'm still having trouble with the concept of grounding.  If I
teach/encode a bot with 99% of the knowledge about hydrogen, using facts
and information available in books and on the web, it is now an idiot
savant in that it knows all about hydrogen and nothing about anything
else, and it is not grounded.  But if I then examine the knowledge learned
about hydrogen for other mentioned topics like gases, elements, water,
atoms, etc., and teach/encode 99% of the knowledge on these topics to the
bot, then the bot is still an idiot savant, but less so; isn't it better
grounded?  A certain amount of grounding, I think, has occurred by
providing knowledge of related concepts.

If we repeat this process again, we may say the program is an idiot
savant in chemistry.

Each time we repeat the process, are we not grounding the previous
knowledge further?  The bot can now reason and respond to questions
not just about hydrogen; it now has an English representation
of the relationship between hydrogen and other related concepts in the
physical world.
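That repeated teach/encode process can be pictured as expansion over a small concept graph, one pass per round of teaching.  A rough sketch (the graph contents and function names are invented for illustration):

```python
# Illustrative sketch: grounding as iterative expansion over a concept
# graph, one "teach/encode" pass per frontier.  The graph is invented.
related = {
    "hydrogen": ["gases", "elements", "water", "atoms"],
    "gases": ["pressure"],
    "elements": ["periodic table"],
    "water": ["liquids"],
    "atoms": ["electrons"],
}

def ground(start, passes):
    """Return every concept reached from `start` within `passes` hops."""
    known = {start}
    frontier = {start}
    for _ in range(passes):
        frontier = {n for c in frontier for n in related.get(c, [])} - known
        known |= frontier
    return known

print(sorted(ground("hydrogen", 1)))  # hydrogen plus its four neighbours
print(sorted(ground("hydrogen", 2)))  # one teaching pass further out
```

Each extra pass widens the circle of related concepts the bot can bring to bear, which is exactly the "less of an idiot savant" effect described above.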

If we were to teach someone such as Helen Keller with very limited
sensory inputs would we not be attempting to do the same thing?

Humans of course do not learn in this exhaustive manner.  We get a
shotgun bombardment of knowledge from all types of media on all manner
of subjects.  The things that interest us we pursue additional knowledge
about.  The more detailed our knowledge in any given area the greater we
say our expertise 
is.  Initially we will be better grounded than a bot, because as
children we learn a little bit about a whole lot of things.  So anything
new we learn we attempt to tie into our semantic network.  

When I think, I think in English.  Yes, at some level below my
conscious awareness these English thoughts are electrochemically
encoded, but consciously I reason and remember in my native tongue, or I
retrieve a sensory image -- multimedia, if you will.

If someone tells me that a kinipsa is a terrible plorid, I attempt to
determine what a kinipsa and a plorid are so that I may ground this
concept and interconnect it correctly within my existing semantic
network.  If a bot is taught to pursue new knowledge and ground the
unknown terms with its existing semantic net, by putting the goals "Find
out what a plorid is" and "Find out what a kinipsa is" on its list of
short-term goals, then it will ask questions and seek to ground itself as
a human would!
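A minimal sketch of that goal mechanism (all names here are invented for illustration, not taken from any actual bot):

```python
from collections import deque

# Hypothetical sketch of the goal-driven grounding described above;
# semantic_net and goal_queue are invented names, not a real system.
semantic_net = {}     # term -> grounding info (absent = ungrounded)
goal_queue = deque()  # short-term goals, oldest first

def encounter(terms):
    """Queue a 'find out' goal for every term the net can't ground yet."""
    for term in terms:
        if semantic_net.get(term) is None:
            goal_queue.append("Find out what a %s is" % term)

encounter(["kinipsa", "plorid"])
print(list(goal_queue))
```

The bot would then work through the queue by asking questions, grounding each unknown term into the net as answers arrive.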

I will agree that today's bots are not grounded because they are idiot
savants and lack the broad based high level knowledge with which to
ground any given fact or concept.  But if I am correct in my thinking
this is the same problem that Helen Keller's teacher was faced with in
teaching Helen one concept at a time until she had enough simple
information or knowledge to build more complex knowledge and concepts
upon.

When a child learns to speak, he does not have a large dictionary to draw
on to tell him that "mice" is the plural of "mouse".  No rule will tell
him that.  He has to learn it.  He will say "mouses" and someone will
correct him.  It gets added to his NLP database as an exception to the
rule.  A human has limited storage, so a rule learned by generalizing
from experience is a shortcut to learning and remembering all the plural
forms for nouns.  In an AGI we can give the intelligence certain learning
advantages, such as these dictionaries and lists of synonym sets, which do
not take that much storage in the computer.
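The exception-to-the-rule idea can be sketched in a few lines.  This is a toy illustration of mine, not a real NLP database:

```python
# Toy sketch: a regular rule plus a learned exception table, mirroring
# how "mouses" gets corrected to "mice" and then remembered.
exceptions = {}  # filled in as corrections arrive

def plural(noun):
    """Apply a stored exception if one exists, else the regular +s rule."""
    return exceptions.get(noun, noun + "s")

print(plural("mouse"))        # "mouses" -- the child's first, regular guess
exceptions["mouse"] = "mice"  # a correction is stored as an exception
print(plural("mouse"))        # "mice"
print(plural("house"))        # "houses" -- the regular rule still applies
```

The generalization (add "s") covers most nouns cheaply, and only the corrections need individual storage, which is the storage shortcut described above.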

I also think that children do not deal with syntax.  They have heard a
statement similar to what they want to express and have it stored as a
template in their minds.  I think we cut and paste what we are trying to
say into what we think is the correct template and then read it back to
ourselves to see if it sounds like other things we have heard and seems
to make sense.  For people who have to learn a foreign language as an
adult this is difficult, because they tend to think in their first
language and commingle the templates from their original and the new
language.  But because we do not parse what we hear and read strictly by
the laws of syntax, we have little trouble understanding many of these
ungrammatical utterances.
 
 


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
Behalf Of [EMAIL PROTECTED]
Sent: Thursday, December 26, 2002 11:03 PM
To: [EMAIL PROTECTED]
Subject: RE: [agi] Early Apps.



On 26 Dec 2002 at 10:32, Gary Miller wrote:

 On Dec. 26 Alan Grimes said:
 
  According to my rule of thumb,
  If it has a natural language database it is wrong, 
  
 Alan I can see based on the current generation of bot technology why 
 one