RE: [agi] Educating an AI in a simulated world

2003-07-19 Thread Philip Sutton
Hi Ben,

If Novababies are going to play and learn in a simulated world which is 
most likely based on an agent-based/object-oriented programming 
foundation, would it be useful for the basic Novamente to have prebuilt 
capacity for agent-based modelling? Would this still be necessary if a 
Novababy is to process objects in their native format as suggested by 
Brad Wyble (e.g. sprites in 3D coordinates)?

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


RE: [agi] Educating an AI in a simulated world

2003-07-19 Thread Ben Goertzel

Hi,

This kind of built-in capability certainly isn't *necessary* but it might be
useful.  This kind of issue is definitely worth exploring...

More thoughts later ;-)

ben



Re: [agi] Educating an AI in a simulated world

2003-07-14 Thread Brad Wyble
 
 
 
 It's an interesting idea, to raise Novababy knowing that it can adopt
 different bodies at will.  Clearly this will lead to a rather different
 psychology than we see among humans --- making the in-advance design of
 educational environments particularly tricky!!

First of all, please read Diaspora by Greg Egan.  As an SF author, he excels in his 
informed approach to AI design, philosophy, and neuroscience.  This book touches on 
this topic (AIs designed for multiple bodies) very directly.  


This VR training room initially seemed like a great idea to me, but on consideration, 
I'm not so certain it's worth the trouble.

First of all, you are reducing the complexity of the environment by orders of 
magnitude.  One could argue that a baby's physical interaction with the world is 
the cornerstone on which all future intelligence rests.  

Now, you've taken pains to point out that you're not trying to recreate people, but 
intelligence.  However, a Novamente grounded in a different reality will be difficult 
for people to interact with.  

So here are two possible issues: the VR world might actually slow down the 
intellectual growth of the Nova Baby.  And even when intelligent, it will be more 
alien from us than it needs to be.

A second point about this plan is that you are creating extra work for yourself both 
in designing a VR training paradigm, and then in bridging the gap from VR to the real 
world, which would be no picnic.  


So there are some possible negatives; the positives you've already listed.  


If this course is decided upon, consider giving the Novamente an ability to sense 
objects in their native format (sprites in 3d coordinates).  If your intent is to 
simplify the world, don't add in the fuzz of the artificial visual input, which is 
often flawed (e.g. clipping errors).  Give the Novababy access to the underlying 
framework of the world or it will be eternally confused as it tries to figure out why 
it can walk through trees, or why Mr. Smith's left toes are inside its own foot.  
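A minimal sketch of what such native-format access could look like, in Python, under my own assumptions -- `SimObject`, `objects_in_range`, and the example world are all illustrative names, not anything from an actual Novamente or game-engine codebase:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SimObject:
    """An object in the world's native format: a label, a 3D position,
    and a bounding radius -- no rendered pixels involved."""
    name: str
    position: Tuple[float, float, float]
    radius: float

def objects_in_range(world: List[SimObject],
                     observer: Tuple[float, float, float],
                     max_dist: float) -> List[SimObject]:
    """Return the objects the agent can 'sense' directly from the
    underlying framework, bypassing any simulated visual pipeline
    (and therefore its clipping errors)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [o for o in world if dist(o.position, observer) <= max_dist]

# The agent perceives a nearby tree as a located object, not as pixels;
# the distant rock simply falls outside its sensing range.
world = [SimObject("tree", (2.0, 0.0, 1.0), 0.5),
         SimObject("rock", (40.0, 0.0, 3.0), 0.3)]
nearby = objects_in_range(world, (0.0, 0.0, 0.0), 10.0)
```

The point of the sketch is that positions and extents come straight from the world model, so a tree can never be half-inside the agent's foot as far as its percepts are concerned.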


-Brad



RE: [agi] Educating an AI in a simulated world

2003-07-14 Thread Ben Goertzel


Brad wrote:
  It's an interesting idea, to raise Novababy knowing that it can adopt
  different bodies at will.  Clearly this will lead to a rather
 different
  psychology than we see among humans --- making the in-advance design of
  educational environments particularly tricky!!

 First of all, please read Diaspora by Greg Egan.  As a SF author,
 he excels in his informed approach to AI design, philosophy, and
 neuroscience.  This book touches on this topic(AI's designed for
 multiple bodies) very directly.

I read that book a few years ago, after Eliezer Y kindly sent me a copy
(along with Fire Upon the Deep by Vernor Vinge).  Those gifts from Eli
reawakened my interest in SF, which had dwindled to just about zero in the
early 90's, due to an apparent lack of decently interesting SF books
beyond those I'd already read.

I think Egan's speculative physics is pretty nifty [though not nearly crazy
enough to be true, to paraphrase one of the founders of quantum theory],
but, I wasn't so impressed by Egan's treatment of AI (though I agree it's
not *idiotic* (unlike plenty of SF)).  He has more uploaded humans than
AI's, and even his AI's are pretty much derivatives of uploaded humans.
He does play around with the difference between embodied and disembodied
minds -- BUT, even his disembodied uploads seem to have pretty much the same
old human-type sort of self that we have ... presumably because they're
uploaded humans (or derivatives thereof) that haven't drifted too far from
their roots...

 This VR training room initially seemed like a great idea to me,
 but on consideration, I'm not so certain it's worth the trouble.

 First of all, you are reducing the complexity of the environment
 by orders of magnitude.  One could argue that a baby's
 physical interaction with the world is the cornerstone on which
 all future intelligence rests.

 Now, you've taken pains to point out that you're not trying to
 recreate people, but intelligence.  However, a Novamente grounded
 in a different reality will be difficult for people to interact with.

 So here are two possible issues: the VR world might actually slow
 down the intellectual growth of the Nova Baby.  And even when
 intelligent, it will be more alien from us than it needs to be.

 A second point about this plan is that you are creating extra
 work for yourself both in designing a VR training paradigm, and
 then in bridging the gap from VR to the real world, which would
 be no picnic.


 So there are some possible negatives, the positives you've
 already listed.

Agree about the possible negatives.  However, I'm more worried about the "VR
world may be too simple to really be useful" problem than about the "too
alien to communicate with us" problem, because

a) If the VR world is one in which humans can play too, I imagine
communication will be possible in the context of that world.

b) Humans can interact in the context of virtual worlds dissimilar from
physical reality, and I suspect that similarly an AGI bred in a virtual
world will be able to interact in the physical world ... *if* it really
lives up to the G in AGI  Of course, there will be a significant
transition here, but I doubt it'll be an unbridgeable one.


 If this course is decided upon,

First, I stress this is not something the Novamente engineering team will be
working on right now.  We are not yet at the stage where building a great
training environment is our big problem.  However, it may be that
additional resources to build this environment will present themselves --
because volunteers who don't have the training to work on AGI research
proper, may possibly have the training to work on AI-training-world
construction...

Anyway, the course is not decided upon yet -- what's decided upon is merely
that I'm going to spend a bit of time fleshing out the possibility.   We
wouldn't decide upon a major course of action such as this without far more
careful evaluation...

 consider giving the Novamente an
 ability to sense objects in their native format (sprites in 3d
 coordinates).  If your intent is to simplify the world, don't add
 in the fuzz of the artificial visual input, which is often flawed
 (e.g. clipping errors).  Give the Novababy access to the
 underlying framework of the world or it will be eternally
 confused as it tries to figure out why it can walk through trees,
 or why Mr. Smith's left toes are inside its own foot.

Very interesting suggestion, thanks!!!  Why not, indeed, supply it with
native format as well as useful but more derived/processed formats..

I want to add that we've also evaluated the possibility of using actual
robotic systems for NM teaching/training, but have concluded that right now
this would require significant financial resources, plus the collaboration
of a dedicated robotics team.  The situation is not as bad as when I last
explored this possibility in the late 90's -- I built a small mobile robot
hoping to use it to play with AI, but damn if I 

Re: [agi] Educating an AI in a simulated world

2003-07-12 Thread Philip Sutton
Ben,

I think there's a question prior to a Novamente learning how to 
perceive/act through an agent in a simulated world.

I think the first issue is for Novamente to discover that, as an intrinsic 
part of its nature, it can interact with the world via more than one agent 
interface.

Biological intelligences are born with one body and a predetermined set 
of sensors and actuators.  Later humans learn that we can extend our 
powers via technologically added sensors and actuators.

But an AGI is a much more plastic beast at the outset - it can be 
hooked to any number of sensor/actuator sets/combinations and these 
can be in the real world or in virtual reality.

My guess is that it might be useful for an AGI to learn from the outset 
that it needs to make conscious choices about which sensor/actuator 
set to use when trying to interact with the world 'out there'.

Probably to reduce early learning confusion it might be useful initially to 
give the AGI only 2 choices - between an agent that is a fixed-location 
box and an agent that is mobile - but with similar sensor sets so that it 
can fairly quickly learn that there is a relationship between what it 
perceives/learns via each sensor/actuator set.  (Bilingual children often 
learn to speak quite a bit later than monolingual children - the young 
AGI doesn't want to have early learning hurdles set too high.)

What I've said above I guess only matters if you are going to let a 
Novamente persist for a long period of time, i.e. you don't just reset it to 
the factory settings every time you run a learning session.  If the 
Novamente persists as an entity for any length of time then its early 
learning is going to play a huge role in shaping its capabilities and 
personality.

On a different matter, I think that it would be good for the AGI to learn 
to live initially in a world that is governed by the laws of physics, 
chemistry, ecology, etc.  So, although the best initial learning 
environment might be a virtual world (mainly to reduce the need for 
massive sensory processing power), I think that world should simulate 
the bounded/non-magical nature of the real world we live in.

Even if an AGI chooses to live in a non-bounded/magical virtual world 
most of the time in later life it needs to know that its fundamental 
existence is tied to a real world - it's going to need non-magical 
computational power, and that's going to need real physical energy, and 
its dependence on the real world has consequences for the other entities 
that live in the real world, i.e. you and me and a few billion other people 
and some tens of millions of other forms of life.

Cheers, Philip



RE: [agi] Educating an AI in a simulated world

2003-07-12 Thread Ben Goertzel


It's an interesting idea, to raise Novababy knowing that it can adopt
different bodies at will.  Clearly this will lead to a rather different
psychology than we see among humans --- making the in-advance design of
educational environments particularly tricky!!

On the other hand, creating a virtual world that obeys the laws of physics,
chemistry and so forth is simply not very plausible given the current state
of physics, chemistry etc.  Of course, we can make better or worse
approximations, though.  And, just as with the multiple-bodies idea, I think
a multiple-virtual-worlds plan makes sense.  The AGI will then grow up
without identification with a specific body, and also without identification
with a particular type of world!

more later,
Ben





Re: [agi] Educating an AI in a simulated world

2003-07-11 Thread Philip Sutton
Hi Ben,

I think this is a great way to give one or more Novamentes the  
experience it/they need to develop mentally, in a controlled  
environment and in an environment where the need for massive  
computational power to handle sensory data is cut (I would imagine)  
hugely, thus leaving Novamente a fair bit of computational power to do  
the cognitive self-development/thinking work.  

You've probably thought of this already, but the simulated environment 
could be the way for a Novamente's carers and teachers to interface 
with the Novamente.  Rather than trying to bring Novamente into our 
world we could enter its world via virtual reality - strictly both we and 
the Novamente(s) would enter each other's experience via a shared 
virtual reality world.  So a Novamente would control the behavior of an 
agent in a simulated world and its carers/mentors would do likewise. 
The playpen that you've often talked about would be a simulated world 
and both Novamente(s) and humans could be in there together.  

I'd be very keen to collaborate on the design of the simulated world and  
on the roles/goals that Novamente might be set in such an  
environment.  I haven't got the skills to help with the development of  
Novamente internal architecture but I think I have something to offer in  
relationship to the project you are now contemplating.

Cheers, Philip




Re: [agi] Educating an AI in a simulated world

2003-07-11 Thread Kevin
For a pretty cool subscription-based world of the type you are describing,
check out www.there.com

--Kevin


- Original Message - 
From: Peter Voss [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, July 11, 2003 2:42 PM
Subject: RE: [agi] Educating an AI in a simulated world


 Hi Ben,

 For the past few months our project's focus has shifted to exactly that.
 Now that we have our basic framework and algorithms working, most of our
 effort goes into defining, setting up, and implementing various tasks in a
 (mainly) simulated environment. Our AGI engine interacts with the virtual
 world, learning stuff as it goes along. (It can also interact directly with
 the real world, or do both simultaneously.)

 Experience with this type of interactive learning to a large extent now
 drives our AGI's cognitive development.

 It is quite difficult coming up with a series of ever harder tasks that
 are in the correct sequence, are not too redundant, nor rely too much on
 hard-coded/instinctual abilities.

 We've ended up with a set of tests/tasks inspired by animal
 cognition/intelligence studies, Piagetian infant developmental stages, and
 of course, our own theoretical model of intelligence. Obviously, actually
 running the tests tells us a lot more about what to change or focus on.

 We see our project using mainly this approach to drive up levels of AGI.

 I'd be interested to hear your ideas. Attached is an internal document
 (cryptically) listing some current and near-term tasks framed in terms of
 dog-like abilities (call me AIGO).

 Peter





 -Original Message- Ben Goertzel

 ... One of the things I've been thinking about lately is the potential use
 of our (in development) Novamente AI system to control the behavior of an
 agent in a simulated world -- say, a very richly and effectively
constructed
 massively multiplayer video game, or even a more futuristic VR-based
 simulation game

  I have developed some ideas about such an educational program, but
I'd
 like to hear others' thoughts on this and related topics, if anyone should
 have any...






Re: [agi] Educating an AI in a simulated world

2003-07-11 Thread Kevin
..except your body is *not* supplied with sensors and actuators in There..
But if it's a virtual world, why do you need sensors and actuators?  There
is the presented visual display and control keys for moving around and
conversing..


- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, July 11, 2003 11:53 AM
Subject: [agi] Educating an AI in a simulated world



 Hi,

 One of the things I've been thinking about lately is the potential use of
 our (in development) Novamente AI system to control the behavior of an
agent
 in a simulated world -- say, a very richly and effectively constructed
 massively multiplayer video game, or even a more futuristic VR-based
 simulation game

 This is not something we're working on in practice right now, but it's
 something that could happen in the future, and it seems an interesting
 context in which to explore the experiential learning aspect of
Novamente.

 The reader of the email is assumed to have some familiarity with the
 Novamente software design, minimally at the level of the overview
 yakkity-yakk on www.agiri.org

 Let's assume that the simulation world consists of a simulated 3D physical
 environment, and that Novamente is given control of an agent that is
 localized in a particular body within this environment.  The body is
 supplied with sensors and actuators, and

 · Each sensor results in a stream of perceptual relationships being
 presented to Novamente, over time.  These relationships are primitive
 perceptual relationship types; for example, the output of a camera eye
 might be represented using a relationship (PixelAt n m c), where n and m
 are ints representing locations on the camera screen, and c is a list
 representing a perceived color.

 · Each actuator is represented by a function taking one or more arguments,
 e.g. move(v, a), where v is a float indicating speed and a is a list
 indicating direction.
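The two interface styles just described could be rendered speculatively in Python as below; the class and function bodies are my own illustration of the (PixelAt n m c) and move(v, a) forms, not actual Novamente interfaces:

```python
from typing import List, NamedTuple

class PixelAt(NamedTuple):
    """One primitive perceptual relationship, (PixelAt n m c):
    screen location (n, m) currently showing color c."""
    n: int
    m: int
    c: List[float]  # e.g. an [r, g, b] triple

def camera_frame(raw_image: List[List[List[float]]]) -> List[PixelAt]:
    """Present one raw camera frame as the stream of PixelAt
    relationships the sensor would feed to the system."""
    return [PixelAt(n, m, color)
            for n, row in enumerate(raw_image)
            for m, color in enumerate(row)]

def move(v: float, a: List[float]):
    """An actuator: move at speed v in direction a.  Here it simply
    builds the command that would be sent to the simulation world."""
    return ("move", v, a)

# A 1x2 'camera' frame and one movement command.
frame = camera_frame([[[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]])
command = move(0.5, [0.0, 1.0, 0.0])
```

Note how the sensor side is declarative (a stream of relationships) while the actuator side is procedural (a parameterized function call).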

 The particular set of sensors and actuators involved is very important for
 practical purposes, although the general approach described in the document
 works for essentially any set of sensors and actuators.  We are thinking
 in particular of

 · Sensors such as: simulated camera eyes, microphones,
 · Actuators such as: movement devices that can move in a specified
 direction with a specified speed, sensor control devices (e.g. pointing a
 camera in a certain direction)

 We are not concerning ourselves here with the details of robot control -
 for instance, with the mechanisms of controlling a robot arm.  This sort of
 thing can be handled in Novamente, but it is not required in gamelike sim
 worlds, and anyhow we feel it's a less interesting area of focus than
 higher-level control.

 Regarding sensory processing, we are willing to make use of existing
 sense-stream processing tools - for example, if camera-eye input is
 involved, we are quite willing to use existing vision processing software
 and feed Novamente its output.  We would also like Novamente to have
 access to the raw output of the camera eye, so that it can carry out
 subtler perception processing if it judges this appropriate.

 Next, we assume that there are particular goals one wants the
 Novamente-controlled agent to achieve in the simulated environment.  These
 goals may be defined abstractly, but they should be definable formally, in
 terms of an Evaluator software object that can look at the log of
 Novamente's behavior in the simulated world over a period of time and
 assess the extent to which Novamente has fulfilled its goals.  While the
 end goals for Novamente may be extremely sophisticated, we consider it
 important to define a series of progressively more difficult and complex
 goals, beginning with very simple ones.  The goal series must be defined
 so that, with each goal Novamente learns to achieve, its internal ontology
 of learned cognitive procedures is appropriately enlarged.
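One way to picture the Evaluator object and the graded goal series is sketched below, in Python; this is a guess at a plausible shape, with the goal names and log fields invented purely for illustration:

```python
from typing import Callable, Dict, List

class Evaluator:
    """Scores a log of the agent's behavior against one formally
    defined goal: the fraction of log entries in which the goal
    condition held."""
    def __init__(self, name: str, predicate: Callable[[Dict], bool]):
        self.name = name
        self.predicate = predicate

    def score(self, log: List[Dict]) -> float:
        if not log:
            return 0.0
        return sum(1 for entry in log if self.predicate(entry)) / len(log)

# A series of progressively harder goals, simplest first.
curriculum = [
    Evaluator("stay-upright", lambda e: e.get("upright", False)),
    Evaluator("reach-target", lambda e: e.get("dist_to_target", 1e9) < 1.0),
]

log = [{"upright": True, "dist_to_target": 5.0},
       {"upright": True, "dist_to_target": 0.5}]
scores = {ev.name: ev.score(log) for ev in curriculum}
# Full marks on the easy goal, partial credit on the harder one.
```

The design point is that each goal is formally checkable against the behavior log alone, so the teacher never has to inspect Novamente's internals to decide whether a curriculum stage has been passed.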

 Recall that the Novamente software design does not provide a full
 cognitive architecture, only a framework and a set of processes within
 which a cognitive architecture may emerge through experiential learning.
 The cognitive architecture itself then consists of a dual network
 (hierarchy/heterarchy) of learned procedures, appropriate for various
 sorts of activity in various sorts of context.  For the cognitive
 architecture to build up properly requires the right sort of experience.
 And so one needs an educational program ... a series of tasks to lead
 Novamente through ... to progressively let it build up the right internal
 declarative and procedural knowledge for surviving, flourishing and
 achieving its goals in the environment...

 OK -- I'll leave off the email here.  I have developed some ideas about
 such an educational program, but I'd like to hear others' thoughts on this
 and related topics, if anyone should have any...

 -- Ben G


RE: [agi] Educating an AI in a simulated world

2003-07-11 Thread Ben Goertzel

In the case you're talking about, it seems

-- the actuators are the control keys
-- the sensors are a function that simply returns the presented visual
display as an array of pixels

This is a very very simple set of sensors and actuators...

Some existing games provide richer sensors and actuators than that.  For
instance, the GameCube controller (or auxiliary steering wheel, etc.) is a
better actuator than control keys (as would be a simulation of these
things).  Some games provide multiple screens that you can scroll between,
multiple views, etc. -- chat boxes and meaningful sounds as well as the
visual display.

Richer sensors and actuators lead a system to have multiple views of the
same situation it's embedded in.  This may be important, because it may help
a system to learn to develop multiple views of situations in a more abstract
sense.  This is only one example of the potential value of a richer system
of sensors and actuators for experiential interactive learning.

-- Ben G
