RE: [agi] Embedding AI agents in simulated worlds

2003-08-20 Thread Ben Goertzel

hi,

 One is that I wonder whether it's worth building into Novamente a pre-
 set predisposition to distinguish between 'me' and 'not me'.

My guess is that this will emerge pretty simply and naturally.

Some external observations will correlate closely with internal sensations
(these are the external manifestations of me) and others will not.

And, if external observation Y correlates closely with internal sensation X,
and X correlates with internal sensation Z (that may not correlate with any
external observation), then Y will come to correlate with Z [through one of
a couple of cognitive mechanisms].

A cluster of tightly-linked nodes should then emerge, corresponding to
me-ish things (internal or external).

So, if the cognitive mechanisms work as expected (which may require some
parameter-tuning, as it's a new application context and the cognitive
mechanisms are currently tuned for data-mining applications), no pre-set
predisposition to distinguish me/not-me will be needed.  I think this is
a relatively simple unsupervised learning problem, given a reasonable volume
of data obtained from spontaneously acting and observing in a (real or
simulated) world.
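
To make this concrete, here is a rough Python sketch of the idea --
emphatically not Novamente code; the stream names, the synthetic data, and
the 0.5 threshold are all invented for illustration.  It links streams whose
correlation clears a threshold and then takes the transitive closure, so an
external stream that tracks an internal one gets pulled into the same 'me'
cluster:

import numpy as np

# Toy streams: two internal sensations and two external observations.
rng = np.random.default_rng(0)
T = 1000
internal_X = rng.standard_normal(T)                      # e.g. a motor command
internal_Z = internal_X + 0.3 * rng.standard_normal(T)   # co-varies with X
external_Y = internal_X + 0.3 * rng.standard_normal(T)   # my arm, seen moving
external_W = rng.standard_normal(T)                      # unrelated object

streams = {"X(int)": internal_X, "Z(int)": internal_Z,
           "Y(ext)": external_Y, "W(ext)": external_W}

# Link any two streams whose correlation clears the threshold.
names = list(streams)
links = {n: set() for n in names}
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r = np.corrcoef(streams[a], streams[b])[0, 1]
        if abs(r) > 0.5:
            links[a].add(b)
            links[b].add(a)

# Transitive closure: the connected component around an internal stream.
def component(start):
    seen, stack = set(), [start]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(links[n])
    return seen

print(sorted(component("X(int)")))  # X, Y, Z cluster together; W stays out

Of course Novamente's actual mechanisms are richer than a correlation
threshold, but the clustering dynamic is the same.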

 I would
 start by setting up an 'itch' to discover whether data flowing through
 Novamente is sourced from 'outside me' or from 'within me'.  The
 second 'itch' would be to label data generated outside of 'me' as being
 to do with 'me' or to do with 'other'.  I think a baby does the latter by
 noticing close correlations between internal feelings/intentions and
 seeing things happen 'outside' - I send messages to my arms/legs and I
 see objects move in a closely correlated way (turns out that as often as
 not I see my arms and legs move, etc.).

OK, so what I'm saying is that Novamente will do this in about the way you
hypothesize a baby does it...

 Another itch that I imagine you already have built into Novamente is to
 try to find closely correlated streams of data - this would tend to speed
 the process of creating 'objects' or standard actions.

The search for correlations is intrinsic to Novamente cognition, yeah.

 It might be worth running a few experiments to see if it significantly
 speeds up learning for a Novababy to have the 'itches' about 'me' / 'not
 me' built in at the start.

Well, if it sped things up, it would only be a short-term effect, because
after a certain point, this distinction is just gonna be DONE and LEARNED,
and then utilizing it will be basically independent of whether it was
acquired through generic cognition or through a specially-programmed itch ...

 Another thought is about the way you've split learning into direct
 environmental learning, learning to be taught, and then learning
 symbolic communication.

 I think learning symbolic communication is inseparable from learning to
 be taught.  And direct environmental learning is inseparable from the
 precursors to speech.

Let me clarify: these aspects of learning are not really separated inside
Novamente; internally they are carried out by basically the same
processes.  However, I have separated them from the point of view of
thinking about teaching and training Novamente, simply to jog my mind in a
productive direction when designing teaching exercises...

 A long time ago I picked up at second hand a rather crude notion of the
 Piagetian stages of learning.  What I absorbed was the notion that
 Piaget said that kids must first learn concretely before learning
 abstractly.

 This had a surface common sense ring to it, but I've now decided that I
 don't at all agree that concrete learning has to precede abstract
 learning.

I agree with Piaget, though I think his statement is only true in a
statistical sense.

Abstractions are generally built up hierarchically, so we have

-- concrete observations
-- level 1 patterns in observations
-- level 2 patterns, in level 1 patterns & concrete observations
-- level 3 patterns, ...
-- etc.

Learning level N patterns is only possible once a large body of level N-1
patterns has been mastered.  Ascending the hierarchy takes time, because the
more abstract the patterns get, the larger the space of possible patterns
that must be searched in order to find the correct patterns.  (Of course, a
mind is not using a dumb search algorithm; the search metaphor is just
being used for its evocative power.)
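
A toy Python sketch of that hierarchy (again hypothetical -- the pair-based
notion of 'pattern', the data, and the min_count threshold are mine, purely
for illustration): level-N patterns are mined from the output of compressing
the stream with level-(N-1) patterns, so each level can only be learned once
the one below it is in place.

from collections import Counter

def find_patterns(sequence, min_count=3):
    # A 'pattern' here is just a frequently repeated adjacent pair.
    pairs = Counter(zip(sequence, sequence[1:]))
    return {p for p, c in pairs.items() if c >= min_count}

def compress(sequence, patterns):
    # Rewrite the sequence, replacing each known pair with a single chunk.
    out, i = [], 0
    while i < len(sequence):
        pair = tuple(sequence[i:i + 2])
        if len(pair) == 2 and pair in patterns:
            out.append(pair)
            i += 2
        else:
            out.append(sequence[i])
            i += 1
    return out

stream = list("abcabcabcxyzxyzxyzabcabc")   # level 0: concrete observations
level1 = find_patterns(stream)              # e.g. ('a','b'), ('b','c'), ...
chunked = compress(stream, level1)
level2 = find_patterns(chunked)             # patterns among level-1 chunks
print(level1)
print(level2)

Note also that the candidate level-2 patterns range over the level-1
alphabet, which is exactly how the search space balloons as you ascend.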

 So, I think that the process of streaming data into objects and actions
 and relationships and characteristics is already a process of abstract
 learning.

Sure -- it's just on a lower level of the hierarchy of abstraction I've just
described...

 The second reason for thinking that abstract thinking starts very
 early in
 human babies is that their primary carers are talking to them all the
 time - often using highly abstract notions like 'I love you', 'you gorgeous
 little thing', 'how clever', 'oh, don't be messy', etc. etc.  The
 baby hears
 the words (jigsaw puzzle-like at first - i.e. a fuzzy set of sounds in
 amongst other words they know) and then 

Re: [agi] Embedding AI agents in simulated worlds

2003-08-19 Thread Philip Sutton
Hi Ben,

I've just read your paper (Goertzel & Pennachin) at:
http://www.goertzel.org/dynapsyc/2003/NovamenteSimulations.htm

I'm no expert in any of this - but I'm ten years and three years into 
raising two kids, so that gives me some experience that might or might 
not be useful ...

I thought what you said made good sense.

I've got two suggestions for modifications to your approach.

One is that I wonder whether it's worth building into Novamente a pre-
set predisposition to distinguish between 'me' and 'not me'.  I would 
start by setting up an 'itch' to discover whether data flowing through 
Novamente is sourced from 'outside me' or from 'within me'.  The 
second 'itch' would be to label data generated outside of 'me' as being 
to do with 'me' or to do with 'other'.  I think a baby does the latter by 
noticing close correlations between internal feelings/intentions and 
seeing things happen 'outside' - I send messages to my arms/legs and I 
see objects move in a closely correlated way (turns out that as often as 
not I see my arms and legs move, etc.). 

Another itch that I imagine you already have built into Novamente is to 
try to find closely correlated streams of data - this would tend to speed 
the process of creating 'objects' or standard actions.

It might be worth running a few experiments to see if it significantly 
speeds up learning for a Novababy to have the 'itches' about 'me' / 'not 
me' built in at the start.

Another thought is about the way you've split learning into direct 
environmental learning, learning to be taught, and then learning 
symbolic communication.

I think learning symbolic communication is inseparable from learning to 
be taught.  And direct environmental learning is inseparable from the 
precursors to speech.

I'll explain what I mean.

A long time ago I picked up at second hand a rather crude notion of the 
Piagetian stages of learning.  What I absorbed was the notion that 
Piaget said that kids must first learn concretely before learning 
abstractly.

This had a surface common sense ring to it, but I've now decided that I 
don't at all agree that concrete learning has to precede abstract 
learning.  I have two reasons for thinking this.  When babies are in the 
earliest stages of development I think they face the hardest possible 
learning tasks - they are getting a staggering stream of sensory input 
data - most of which is meaningless.  Out of this they have to sift 
signals that are meaningful - so from the minute their brains are able to 
process input data (while in utero) they engage in abstract learning - 
take the stream of raw data and abstract from it...so a tight 
coupling of environmental experience and abstract thinking is required 
from the first moment of mental capability.  Babies are undoubtedly 
pre-programmed to be alert to certain patterns of data.  This might be 
useful to get the baby responding in ways that help its immediate 
survival.  But it might be that the pre-programming sets up a process of 
awareness crystallisation - certain streams of data can be treated as 
meaningful - and then out of the soup of other non-meaningful data 
additional correlations to the currently meaningful data can be 
developed - a bit like the way we do jigsaw puzzles.

So, I think that the process of streaming data into objects and actions 
and relationships and characteristics is already a process of abstract 
learning.

The second reason for thinking that abstract thinking starts very early in 
human babies is that their primary carers are talking to them all the 
time - often using highly abstract notions like 'I love you', 'you gorgeous 
little thing', 'how clever', 'oh, don't be messy', etc. etc.  The baby hears 
the words (jigsaw puzzle-like at first - i.e. a fuzzy set of sounds in 
amongst other words they know) and then over time they associate 
behaviours, feelings, settings, other known words, etc. that invest these 
abstract terms with more and more meaning.  But the abstract symbol 
comes first and the meaning later.  The words are like pegs to hang 
meaning on.

Given these ways of seeing things, it's not hard to say that 'learning to 
learn from a teacher' is already a process of symbolic learning.  If a 
robot is circling another object and hoping the NovaBaby will realise 
that it wants the NovaBaby to go and get the object (or whatever) then 
it is teaching symbolic communication.  But it's just doing it in the way 
that a mute person would teach it, or the way it would have to be taught 
to a deaf child.  This form of teaching is no less abstract than the use 
of verbal symbols, and it is no easier to learn (it might even be harder, as 
the action might not correlate so uniquely with the symbolic meaning that 
the teacher is trying to convey).

My first child started speaking at 8 months and he was clearly 
understanding words long before that - so my guess is that symbolic 
reasoning starts very, very early - and that language take-off is more to 
do with 

Re: [agi] Embedding AI agents in simulated worlds

2003-08-18 Thread Kevin
Ben..

For the vision processing stuff, here are some resources that you may be
familiar with:

Intel's open source vision library offering most standard vision functions
in C:
http://www.intel.com/research/mrl/research/opencv/

Sourceforge open vision project:
http://sourceforge.net/projects/opencvlibrary/
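
And a minimal sketch of the sort of standard function these libraries cover
(this uses OpenCV's modern Python binding rather than the C API above, and
the file name and Canny thresholds are just placeholders):

import cv2  # Python binding for the OpenCV library linked above

# Load a grayscale image and run Canny edge detection on it.
img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise FileNotFoundError("scene.png not found")
edges = cv2.Canny(img, threshold1=100, threshold2=200)
cv2.imwrite("scene_edges.png", edges)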

--Kevin


- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, August 18, 2003 12:48 PM
Subject: [agi] Embedding AI agents in simulated worlds



 Hi all,

 Finally, I'm resuming a thread I started on this list a few weeks ago,
 before I left for the IJCAI conference...

 At the following URL, you will find some thoughts of mine & Cassio's on
 hooking Novamente up to a simulated (virtual) world and teaching it
 therein:

 http://www.goertzel.org/dynapsyc/2003/NovamenteSimulations.htm

 Probably the most interesting parts are at the end, where we touch on
 artificial developmental psychology.

 This is not work we're doing right now, but it's stuff we're gradually
 moving towards as we refine and test our collection of cognitive
 algorithms on various sorts of data.

 Comments desired!

 -- Ben G
