[agi] Emergent languages Org

2008-02-03 Thread Mike Tintner
Jeez, there's always something new. Anyone know about this (which seems at a 
glance loosely relevant to Ben's approach)?


http://www.emergent-languages.org/

Overview

This site provides an introduction to the research on emergent and 
evolutionary languages as conducted at the Sony Computer Science Laboratory 
in Paris and the AI-Lab at the VUB in Brussels. One of the principal 
objectives of this research is to identify the cognitive capabilities that 
artificial agents must possess to enable, in a population of such agents, the 
emergence and evolution of a language that exhibits characteristic features 
identified in natural languages.


Looks like it's Sony/Aibo-financed. Luc Steels seems to be a principal figure. 
This is quite fun:


http://www.csl.sony.fr/~py/clickerTraining.htm

Here he explains/justifies his approach:

http://www.csl.sony.fr/downloads/papers/2006/steels-06a.pdf

And how did I get to all this? From, tangentially, Construction Grammar, 
which is yet another interesting aspect of cognitive linguistics:


http://en.wikipedia.org/wiki/Construction_grammar 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=93139505-4aa549


Re: [agi] Emergent languages Org

2008-02-03 Thread Bob Mottram
I haven't read any of Luc Steels' stuff for a long time, but he has been
researching the evolution of language using robots or software agents
since the early 1990s.  This is really a "symbol grounding" problem,
where the communication in some way needs to represent things or
situations which the agent can perceive with its sensors.

Some years ago I tried to do something similar to Pierre Oudeyer's
video using a humanoid robot - presenting objects and saying "this is
a...", "what is this?", or "Is this a...?".  I didn't go very far
down this route because I found that visual recognition of objects
constitutes the major part of the problem.  It is possible to use SIFT
features and geometric hashes (which I think is what the AIBO robot is
doing in this demo) but these 2D methods just aren't very good on
objects with complicated 3D shapes.  Since I'm interested in making
machines which are genuinely intelligent, as opposed to appearing to
be intelligent in a five minute demo, I've spent most of my efforts on
the 3D object recognition problem.  It turns out that other things are
fundamentally related to this problem, such as mapping, navigation and
SLAM.
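
Bob's point about 2D feature matching can be illustrated with a toy version of the nearest-neighbour matching step used with SIFT descriptors (Lowe's ratio test). This is a from-scratch sketch over invented 4-D "descriptors", not real SIFT features or any library's API; `match_descriptors` and the toy data are illustrative assumptions.

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Match feature descriptors from image A to image B using
    nearest-neighbour search with Lowe's ratio test: accept a match
    only if the best candidate is clearly better than the runner-up."""
    def dist(u, v):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        if dist(d, desc_b[best]) < ratio * dist(d, desc_b[second]):
            matches.append((i, best))
    return matches

# Toy 4-D "descriptors"; real SIFT descriptors are 128-D.
img_a = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.5, 0.5, 0.5, 0.5]]
img_b = [[0.9, 0.1, 0.0, 0.0], [0.1, 0.9, 0.0, 0.0], [0.5, 0.5, 0.4, 0.6]]
print(match_descriptors(img_a, img_b))  # -> [(0, 0), (1, 1), (2, 2)]
```

The ratio test is exactly why such 2D methods degrade on complicated 3D shapes: under viewpoint change the descriptors deform, the best and second-best candidates become similar, and matches get rejected.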



On 03/02/2008, Mike Tintner [EMAIL PROTECTED] wrote:
 Jeez, there's always something new. Anyone know about this (which seems at a
 glance loosely relevant to Ben's approach)?
 [...]




Re: [agi] Emergent languages Org

2008-02-03 Thread Stephen Reed
I have been collaborating with this lab on their Fluid Construction Grammar 
system, as described briefly in this blog post: 
http://texai.org/blog/2007/10/24/fluid-construction-grammar

I downloaded their Common Lisp implementation, rewrote it in Java, and 
demonstrated that I could achieve the same results as their Lisp 
implementation.  Then I extended it to parse incrementally, i.e. word by word, 
strictly left to right, creating semantics at each step.
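
A minimal sketch of what strictly left-to-right parsing with semantics at each step might look like. The lexicon and frame slots below are invented toy data for illustration; this is not Fluid Construction Grammar and not Stephen's Java system.

```python
# Toy incremental parser: each word is consumed strictly left to right,
# and the partial semantic structure is updated at every step.
LEXICON = {
    "the": ("det", None),
    "dog": ("noun", "dog"),
    "ball": ("noun", "ball"),
    "chases": ("verb", "chase"),
}

def parse_incrementally(sentence):
    semantics = {"agent": None, "action": None, "patient": None}
    steps = []
    for word in sentence.split():
        pos, meaning = LEXICON[word]
        if pos == "noun" and semantics["action"] is None:
            semantics["agent"] = meaning      # a noun before the verb
        elif pos == "verb":
            semantics["action"] = meaning
        elif pos == "noun":
            semantics["patient"] = meaning    # a noun after the verb
        steps.append(dict(semantics))         # semantics at each step
    return steps

for step in parse_incrementally("the dog chases the ball"):
    print(step)
```

The point is that a partial (possibly wrong) semantic frame exists after every word, rather than only after a full parse.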

I have not studied the theory of emergent languages, as I am focused on what I 
think is their excellent production rule engine for bidirectional grammars.  I 
would be glad to provide an introduction on the LinkedIn network to Pieter 
Wellens, who is a PhD student at the associated VUB AI-Lab.

-Steve
 
Stephen L. Reed 
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, February 3, 2008 8:03:13 AM
Subject: Re: [agi] Emergent languages Org

Thanks for the references...

I found this paper

Kaplan, F., Oudeyer, P-Y., Kubinyi, E. and Miklosi, A. (2002) Robotic
clicker training, Robotics and Autonomous Systems, 38(3-4), pp.
197-206.

at (near the bottom)

http://www.csl.sony.fr/~py/clickerTraining.htm

interesting in terms of highlighting the difference between virtual-world
and physical-robotics teaching of agents, as well as the basic
difference between Novamente's Virtual Animal Brain system and real
dog brains...

They point out that imitation learning is rarely used for teaching
animals, both because animals are bad at imitation and because of
differences between human and animal anatomy.

However, Novamente is good at imitation, and in a virtual-world
context the differences between human and animal anatomy can be
finessed pretty easily (via simply supplying the virtual animal with
suggestions about how to map specific human-avatar animations onto
specific animal-avatar animations).

What they advocate in the paper, for teaching robots, is "clicker
training", which is basically Skinnerian reinforcement learning with a
judicious, time-variant sequence of partial rewards. At first you
reward the animal for doing 1/10 of the behavior right; then, after it
can do that, you reward it for doing 2/10 of the behavior right, etc.
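
That shaping schedule can be sketched as a toy simulation. The "agent" model below and its 10% exploration probability are invented assumptions for illustration, not anything from the Kaplan et al. paper.

```python
import random

def shape_behavior(target_steps=10, trials_per_stage=200, seed=0):
    """Skinnerian shaping: first reward the agent for getting 1/10 of
    the behaviour right, then 2/10, and so on.  The toy agent repeats
    whatever prefix of the behaviour was last rewarded, and randomly
    explores one step further with probability 0.1."""
    rng = random.Random(seed)
    learned = 0  # how many steps of the behaviour have been mastered
    for stage in range(1, target_steps + 1):   # criterion ramps up
        for _ in range(trials_per_stage):
            attempt = learned + (1 if rng.random() < 0.1 else 0)
            if attempt >= stage:               # meets current criterion
                learned = max(learned, attempt)  # click + treat
                break
    return learned

print(shape_behavior())
```

With a fixed full-behaviour criterion from the start, this agent would essentially never be rewarded; the time-variant partial criterion is what makes the learning tractable.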

In their work on language learning

http://www.csl.sony.fr/~py/languageAcquisition.htm

I see nothing coming remotely close to a discussion of the learning of
syntax or complex semantics ... what I see is some experiments in
which robots learned, through spontaneous exploration and
reinforcement, the simple fact that vocalizing toward other agents is
a useful thing to do. Which is certainly interesting ... but it's
really just a matter of learning THAT vocal communication exists, in
a setting where not that many other possibilities exist...

-- Ben G


On Feb 3, 2008 7:08 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 Jeez, there's always something new. Anyone know about this (which seems at a
 glance loosely relevant to Ben's approach)?
 [...]




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller


[agi] A little more technical information about OpenCog

2008-02-03 Thread Ben Goertzel
I got a free hour this afternoon, and posted a little more technical information
about our plans for OpenCog, here:

http://opencog.org/wiki/OpenCog_Technical_Information

Nothing surprising or dramatic, mostly just a clear online explanation of our
basic plans, as have already been discussed in various emails...

-- Ben G

p.s. for those who don't know what OpenCog is, see

http://opencog.org/wiki/Main_Page



--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller






Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-02-03 Thread Kaj Sotala
On 1/30/08, Richard Loosemore [EMAIL PROTECTED] wrote:
 Kaj,

 [This is just a preliminary answer:  I am composing a full essay now,
 which will appear in my blog.  This is such a complex debate that it
 needs to be unpacked in a lot more detail than is possible here.  Richard].

Richard,

[Where's your blog? Oh, and this is a very useful discussion, as it's
given me material for a possible essay of my own as well. :-)]

Thanks for the answer. Here's my commentary - I quote and respond to
parts of your message somewhat out of order, since there were some
issues about ethics scattered throughout your mail that I felt were
best answered with a single response.

 The most important reason that I think this type will win out over a
 goal-stack system is that I really think the latter cannot be made to
 work in a form that allows substantial learning.  A goal-stack control
 system relies on a two-step process:  build your stack using goals that
 are represented in some kind of propositional form, and then (when you
 are ready to pursue a goal) *interpret* the meaning of the proposition
 on the top of the stack so you can start breaking it up into subgoals.

 The problem with this two-step process is that the interpretation of
 each goal is only easy when you are down at the lower levels of the
 stack - "Pick up the red block" is easy to interpret, but "Make humans
 happy" is a profoundly abstract statement that has a million different
 interpretations.
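
 A toy rendering of that two-step process: the interpretation table below is
 entirely invented for illustration; the point is only the branching factor
 of the abstract goal compared with the concrete one.

```python
# Each entry maps a goal proposition to its possible interpretations,
# where each interpretation is a list of subgoals.  Invented toy data:
# the concrete goal has one reading, the abstract goal has many.
INTERPRETATIONS = {
    "pick up the red block": [
        ["locate red block", "grasp red block", "lift arm"],
    ],
    "make humans happy": [
        ["maximise self-reported happiness"],
        ["satisfy stated preferences"],
        ["stimulate pleasure centres directly"],  # an unintended reading
    ],
}

def interpret(goal):
    """Step 2 of the two-step process: turn the proposition on top of
    the stack into subgoals.  Goals with no entry are treated as
    primitive actions."""
    return INTERPRETATIONS.get(goal, [[goal]])

print(len(interpret("pick up the red block")))  # 1 reading
print(len(interpret("make humans happy")))      # 3 readings
```

 Nothing in the stack machinery itself says which reading of the abstract
 goal to pick, which is the ambiguity the paragraph above describes.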

 This is one reason why nobody has built an AGI.  To make a completely
 autonomous system that can do such things as learn by engaging in
 exploratory behavior, you have to be able to insert goals like "Do some
 playing", and there is no clear way to break that statement down into
 unambiguous subgoals.  The result is that if you really did try to build
 an AGI with a goal like that, the actual behavior of the system would be
 wildly unpredictable, and probably not good for the system itself.

 Further:  if the system is to acquire its own knowledge independently
 from a child-like state (something that, for separate reasons, I think
 is going to be another prerequisite for true AGI), then the child system
 cannot possibly have goals built into it that contain statements like
 "Engage in an empathic relationship with your parents", because it does
 not have the knowledge base built up yet and cannot understand such
 propositions!

I agree that it could very well be impossible to define explicit goals
for a child AGI, as it doesn't have enough built-up knowledge to
understand the propositions involved. I'm not entirely sure of how the
motivation approach avoids this problem, though - you speak of
setting up an AGI with motivations resembling the ones we'd call
"curiosity" or "empathy". How are these, then, defined? Wouldn't they run
into the same difficulties?

Humans have lots of desires - call them goals or motivations - that
manifest in differing degrees in different individuals, like wanting
to be respected or wanting to have offspring. Still, excluding the
most basic ones, they're all ones that a newborn child won't
understand or feel before (s)he gets older. You could argue that they
can't be inborn goals since the newborn mind doesn't have the concepts
to represent them and because they manifest variably with different
people (not everyone wants to have children, and there are probably
even people who don't care about the respect of others), but still,
wouldn't this imply that AGIs *can* be created with in-built goals? Or
if such behavior can only be implemented with a motivational-system
AI, how does that avoid the problem of some of the wanted final
motivations being impossible to define in the initial state?

 But beyond this technical reason, I also believe that when people start
 to make a serious effort to build AGI systems - i.e. when it is talked
 about in government budget speeches across the world - there will be
 questions about safety, and the safety features of the two types of AGI
 will be examined.  I believe that at that point there will be enormous
 pressure to go with the system that is safer.

This makes the assumption that the public will become aware of AGI
being near well ahead of time, and will take the possibility
seriously. If that assumption holds, then I agree with you. Still, the
general public seems to think that AGI will never be created, or at
least not in hundreds of years - and many of them remember the
overoptimistic promises of AI researchers in the past. If a sufficient
number of scientists thought that AGI was doable, the public might be
convinced - but most scientists want to avoid making radical-sounding
statements, so they won't appear as crackpots to the people reviewing
their research grant applications. Combine this with the fact that the
keys for developing AGI might be scattered across so many disciplines
that very few people have studied them all, or that sudden
breakthroughs may accelerate the research, and I don't think it's a 

Re: [agi] Emergent languages Org

2008-02-03 Thread Joseph Gentle
I doubt that 3D object recognition is integral to 'genuine
intelligence'. Theoretically, if we had an AGI we should be able to
put it in a simulated 2D world and it would still act intelligently.

IMO language is integral to strong AI in the same way that logic is
integral to mathematics. If you think about it, human languages are
basically higher-order logics with fuzzy expressions, probabilities
and context. That sounds like a fabulous description of the higher-order
logic our brains use internally to store thoughts.

I haven't read any of Steels' stuff lately, either. I'm not sure if any
of the language he's generating is higher-order, but I wouldn't be so
quick to dismiss emergent language generation as a trick for just
five-minute demos.

-J

On Feb 4, 2008 12:34 AM, Bob Mottram [EMAIL PROTECTED] wrote:
 I haven't read any of Luc Steels' stuff for a long time, but he has been
 researching the evolution of language using robots or software agents
 since the early 1990s.
 [...]




Re: [agi] Emergent languages Org

2008-02-03 Thread Ben Goertzel
 IMO language is integral to strong AI in the same way that logic is
 integral to mathematics.

The counterargument is that no one has yet made an AI virtual chimp ...
and nearly all of the human brain is the same as that of a chimp ...

I think that language-centric approaches are viable, but I wouldn't dismiss
sensorimotor-centric approaches to AGI either ... looking at evolutionary
history, it seems that ONE way to achieve linguistic functionality is via
some relatively minor tweaks on a prelinguistic mind tuned for flexible
sensorimotor learning... (though, unlike some, I don't believe this is the
only way)

-- Ben



Re: [agi] Emergent languages Org

2008-02-03 Thread Joseph Gentle
On Feb 4, 2008 11:27 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 The counterargument is that no one has yet made an AI virtual chimp ...
 and nearly all of the human brain is the same as that of a chimp ...
 [...]


Interesting! You make a very good point.

I'd be interested in what you see as the path from SLAM to AGI.

To me, language generation seems obvious:

1. Make a language and algorithms for generating stuff in that language.
2. Implement pattern recognition and abstraction (IMO not _that_ hard if
   you've designed your language well).
3. Ground the language through real-world sensorimotor experiences so the
   utterances mirror the agents' experiences.
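
Step 3, the grounding step, can be sketched as cross-situational learning over invented toy scenes: a word becomes grounded in whichever percept it co-occurs with most often across experiences. The data and scheme below are illustrative assumptions, not any published model.

```python
from collections import Counter, defaultdict

# Each experience pairs an utterance the agent hears with the set of
# percepts in the scene at the time.  Invented toy data.
EXPERIENCES = [
    ("red ball", {"red", "ball"}),
    ("blue ball", {"blue", "ball"}),
    ("red cube", {"red", "cube"}),
    ("blue cube", {"blue", "cube"}),
]

def ground_words(experiences):
    """Ground each word in the percept it co-occurs with most often."""
    cooc = defaultdict(Counter)
    for utterance, percepts in experiences:
        for word in utterance.split():
            cooc[word].update(percepts)
    return {w: c.most_common(1)[0][0] for w, c in cooc.items()}

print(ground_words(EXPERIENCES))
```

Across varied scenes the spurious co-occurrences cancel out, which is why the mapping disambiguates even though no single experience identifies what "red" refers to.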

What do you see as the equivalent path from mapping, navigation and SLAM?

-J



Re: [agi] Emergent languages Org

2008-02-03 Thread Ben Goertzel
Hi,

 I'd be interested in what you see as the path from SLAM to AGI.

 [...]

 What do you see as the equivalent path from mapping, navigation and SLAM?

Mapping, navigation and SLAM are not the key point -- embodied learning is
the point ... these are just prerequisites...

The robotics path to AI is a lot like the evolutionary path to natural
intelligence...

Create a system that learns to achieve simple sensorimotor goals in
its environment...
then move on to social goals... and language eventually emerges as an aspect of
social interaction...

Rather than language being a separate thing that is then grounded in experience,
make language **emerge** from nonlinguistic interactions ... as it
happened historically.
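
The kind of emergence Ben points to is what Luc Steels' "naming game" experiments demonstrate: agents with no shared vocabulary converge on common names purely through pairwise interactions. Below is a minimal from-scratch sketch of that game, a simplification rather than Steels' actual implementation.

```python
import random

def naming_game(n_agents=10, n_objects=3, rounds=3000, seed=1):
    """Minimal naming game: a speaker names an object (inventing a word
    if it has none); on success both parties drop their synonyms, on
    failure the hearer learns the speaker's word."""
    rng = random.Random(seed)
    # each agent's vocabulary: object -> list of known words
    vocab = [{o: [] for o in range(n_objects)} for _ in range(n_agents)]
    for _ in range(rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        obj = rng.randrange(n_objects)
        if not vocab[speaker][obj]:                 # invent a new word
            vocab[speaker][obj].append(f"w{rng.randrange(10**6)}")
        word = rng.choice(vocab[speaker][obj])
        if word in vocab[hearer][obj]:              # communicative success
            vocab[speaker][obj] = [word]            # both drop synonyms
            vocab[hearer][obj] = [word]
        else:                                       # failure: hearer learns
            vocab[hearer][obj].append(word)
    return vocab

vocab = naming_game()
for obj in range(3):
    names = {tuple(sorted(v[obj])) for v in vocab}
    print(f"object {obj}: {len(names)} distinct vocabularies")
```

No agent is given a lexicon; the shared names are purely a product of the interaction dynamics, which is the sense in which the language "emerges".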

See Mithen's "The Singing Neanderthals" for ideas about how language may
have emerged from prelinguistic sound-making ... and a host of researchers
for ideas about how language may have emerged from gesture (I have a paper
touching on the latter at novamente.net/papers).

-- Ben G



Re: [agi] Emergent languages Org

2008-02-03 Thread Joseph Gentle
On Feb 4, 2008 12:12 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 The robotics path to AI is a lot like the evolutionary path to natural
 intelligence...

 Create a system that learns to achieve simple sensorimotor goals in
 its environment...
 then move on to social goals... and language eventually emerges as an
 aspect of social interaction...

You might be right, but I'm very skeptical. I don't see how any
complex behaviour can simply 'emerge' from strict algorithms like
SLAM. SLAM as I know it doesn't allow for emergent behaviour at all
(except for its explicit mapping ability).

Eventually, you will have to write something which allows for emergent
behaviour and complex communication. To me, that stage of your project
is the interesting crux of AGI. It should have some very interesting
emergent behaviour with inputs other than the information SLAM
outputs.

Why not just work on that difficult part now?

-J
