RE: [agi] AGI morality

2003-02-11 Thread Ben Goertzel


Bill Hibbard wrote:
 On Mon, 10 Feb 2003, Ben Goertzel wrote:

A goal in Novamente is a kind of predicate, which is just a function that
assigns a value in [0,1] to each input situation it observes... i.e. it's a
'valuation' ;-)
  
   Interesting. Are these values used for reinforcing behaviors
   in a learning system? Or are they used in a continuous-valued
   reasoning system?
 
  They are used for those two purposes, AND others...

 Good. In that case the discussion about whether ethics
 should be built into Novamente from the start fails
 to recognize that it already is. Building ethics into
 reinforcement values is building them in from the start.

yes, I agree

 Solomonoff Induction (http://www.idsia.ch/~marcus/kolmo.htm)
 provides a good theoretical basis for intelligence, and
 in that context behavior is determined by only two things:

 1. The behavior of the external world.
 2. Reinforcement values.

 Real systems include lots of other stuff, but only to
 create a computationally efficient approximation to the
 behavior of Solomonoff Induction (which is basically
 uncomputable). You can try to build ethics into this
 other stuff, but then you aren't building them in
 from the start.

I also agree with this portrayal of AGI.  And I think that gradually, the AGI
community is moving toward building a bridge between the mathematical theory
of Solomonoff induction and the practice of AGI.
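
To make the "ethics built into reinforcement values" idea concrete, here is a
minimal sketch, assuming a toy tabular Q-learning loop.  The names and the
ethical_valuation function are illustrative inventions, not Novamente code and
not anything from Bill's book:

from collections import defaultdict

def ethical_valuation(situation):
    """Toy stand-in for a built-in ethical predicate: maps a situation to [0,1]."""
    return 0.0 if situation.get("others_harmed") else 1.0

def reinforcement(situation, task_reward, ethics_weight=1.0):
    """The reinforcement value itself carries the ethics.

    Because this function is fixed before any learning happens, every
    behavior the agent acquires is shaped by it "from the start".
    """
    return task_reward + ethics_weight * ethical_valuation(situation)

# Standard tabular Q-learning update, driven by that reinforcement value.
Q = defaultdict(float)
ALPHA, GAMMA = 0.1, 0.9

def q_update(state, action, next_state, actions, situation, task_reward):
    r = reinforcement(situation, task_reward)
    best_next = max((Q[(next_state, a)] for a in actions), default=0.0)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])

The only point of the sketch is structural: the ethical valuation sits inside
the reinforcement signal itself, rather than in a separate module added later.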

In the Artificial General Intelligence (formerly known as Real AI) edited
volume we're putting together, you can see these connections forming...

We have, for example,

* a paper by Marcus Hutter giving a Solomonoff induction based theory of
general intelligence

* a paper by Luke Kaiser giving a variant on Marcus's theory, introducing
directed acyclic function graphs as a specific computational model within
the Solomonoff induction framework

* Cassio's and my paper on Novamente, including mention of the Novamente
schema (procedure) module, which uses directed acyclic function graphs as
Luke describes.
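
For readers who haven't met the representation, here is a minimal sketch of a
directed acyclic function graph and its evaluation -- a toy illustration of
the general idea only, not the formalism from either paper:

# A procedure as a DAG of primitive functions: each node names the nodes
# (or external inputs) whose outputs it consumes.  Because the graph is
# acyclic, every node can be evaluated once and its result reused.
dag = {
    "sum": (lambda a, b: a + b, ["x", "y"]),
    "sq":  (lambda a: a * a,    ["sum"]),
    "out": (lambda a, b: a - b, ["sq", "x"]),
}

def evaluate(dag, inputs, target):
    cache = dict(inputs)          # memoized node values

    def value(name):
        if name not in cache:
            fn, args = dag[name]
            cache[name] = fn(*(value(a) for a in args))
        return cache[name]

    return value(target)

print(evaluate(dag, {"x": 2, "y": 3}, "out"))   # (2 + 3)**2 - 2 == 23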

-- Ben G








RE: [agi] AGI morality - goals and reinforcement values

2003-02-11 Thread Philip Sutton
Ben/Bill,

My feeling is that goals and ethics are not identical concepts.  And I 
would think that goals would only make an intentional ethical 
contribution if they related to the empathetic consideration of others.

So whether ethics are built in from the start in the Novamente 
architecture depends on whether there are goals *with ethical purposes* 
included from the start.

And whether the ethical system is *adequate* from the start would 
depend on the specific content of the ethically related goals and on the 
resourcing and sophistication of effort that the AGI architecture directs 
at understanding and acting on the implications of the goals vis-a-vis 
any other activity that the AGI engages in.  I think the adequacy of 
the ethics system also depends on how well the architecture helps the 
AGI to learn about ethics.  If it is a slow learner then the fact that it has 
machinery there to handle what it eventually learns is great but not 
sufficient.

Cheers, Philip




RE: [agi] AGI morality - goals and reinforcement values - plus early learning

2003-02-11 Thread Philip Sutton
Ben,

 Right from the start, even before there is an intelligent autonomous mind
 there, there will be goals that are of the basic structural character of
 ethical goals.  I.e. goals that involve the structure of compassion, of
 adjusting the system's actions to account for the well-being of others based
 on observation of and feedback from others.  These one might consider as
 the seeds of future ethical goals.  They will grow into real ethics only
 once the system has evolved a real reflective mind with a real
 understanding of others...

Sounds good to me!  It feels right.

At some stage when we've all got more time, I'd like to discuss how the 
system architecture might be structured to assist the ethical learning of 
baby AGIs.

Cheers, Philip




RE: [agi] AGI morality - goals and reinforcement values

2003-02-11 Thread Bill Hibbard
On Wed, 12 Feb 2003, Philip Sutton wrote:

 Ben/Bill,

 My feeling is that goals and ethics are not identical concepts.  And I
 would think that goals would only make an intentional ethical
 contribution if they related to the empathetic consideration of others.
 . . .

Absolutely goals (I prefer the word values) and ethics
are not identical. Values are a means to express ethics.

Cheers,
Bill




Re: [agi] AGI morality - goals and reinforcement values

2003-02-11 Thread Eliezer S. Yudkowsky
Bill Hibbard wrote:

On Wed, 12 Feb 2003, Philip Sutton wrote:


Ben/Bill,

My feeling is that goals and ethics are not identical concepts.  And I
would think that goals would only make an intentional ethical
contribution if they related to the empathetic consideration of others.


Absolutely goals (I prefer the word values) and ethics
are not identical. Values are a means to express ethics.


Words goin' in circles... in my account there's morality, metamorality, 
ethics, goals, subgoals, supergoals, child goals, parent goals, 
desirability, ethical heuristics, moral ethical heuristics, metamoral 
ethical heuristics, and honor.

Roughly speaking you could consider ethics as describing regularities in 
subgoals, morality as describing regularities in supergoals, and 
metamorality as defining the computational pattern to which the current 
goal system is a successive approximation and which the current philosophy 
is an interim step in computing.

In all these cases I am overriding existing terminology to serve as a term 
of art.  In discussions like these, common usage is simply not adequate to 
define what the words mean.  (Those who find my definitions inadequate can 
find substantially more thorough definitions in Creating Friendly AI.)

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



RE: [agi] AGI morality

2003-02-10 Thread Bill Hibbard
Hi Philip,

On Tue, 11 Feb 2003, Philip Sutton wrote:

 Ben,

 If in the Novamente configuration the dedicated Ethics Unit is focussed
 on GoalNode refinement, it might be worth using another term to
 describe the whole ethical architecture/machinery which would involve
 aspects of most/all (??) Units plus perhaps even the Mind Operating
 System (??).

 Maybe we need to think about an 'ethics system' that is woven into the
 whole Novamente architecture and processes.
 . . .

I think discussing ethics in terms of goals leads to confusion.
As I described in an earlier post at:

  http://www.mail-archive.com/agi@v2.listbox.com/msg00390.html

reasoning must be grounded in learning and goals must be grounded
in values (i.e., the values used to reinforce behaviors in
reinforcement learning).

Reinforcement learning is fundamental to the way brains work, so
expressing ethics in terms of learning values builds those ethics
into brain behavior in a fundamental way.

Because reasoning emerges from learning, expressing ethics in terms
of the goals of a reasoning system can lead to confusion, when the
goals derived from ethics turn out to be inconsistent with the goals
that emerge from learning values.

In my book I advocate using human happiness for learning values, where
behaviors are positively reinforced by human happiness and negatively
reinforced by human unhappiness. Of course there will be ambiguity
caused by conflicts between humans, and machine minds will learn
complex behaviors for dealing with such ambiguities (just as mothers
learn complex behaviors for dealing with conflicts among their
children). It is much more difficult to deal with conflict and
ambiguity in a purely reasoning based system.
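
As a deliberately oversimplified illustration of what "human happiness as
learning values" could mean operationally, assuming the machine receives a
per-person happiness estimate each step.  The averaging rule below is an
illustrative choice only, not a claim about the book's actual proposal:

def happiness_reinforcement(happiness_by_person):
    """Collapse per-person happiness estimates in [-1, 1] into one reinforcement value.

    Conflicts between humans show up as estimates with opposite signs; the
    learner must discover behaviors that trade these off, which is where the
    complex conflict-handling behaviors would have to emerge.
    """
    if not happiness_by_person:
        return 0.0
    return sum(happiness_by_person.values()) / len(happiness_by_person)

# A conflict case: pleasing one person displeases another.
print(happiness_reinforcement({"alice": 0.8, "bob": -0.6}))   # ~0.1 -- weakly positive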

Cheers,
Bill




RE: [agi] AGI morality

2003-02-10 Thread Ben Goertzel

  My idea is that action-framing and environment-monitoring are carried
  out in a unified way in Units assigned to these tasks generically.
  ...ethical thought gets to affect system behavior indirectly
  through a), via ethically-motivated GoalNodes, both general ones and
  context-specific ones.  Thus, the role of the ethics Unit I posited
  would be to create ethically-motivated GoalNodes, which would then be
  exported to the generic action-framing and environment-monitoring Units
  to live and work along with the other GoalNodes.
 
 OK - that makes sense.
 
 Presumably there would be a lot of feedback from the action-framing 
 and environment-monitoring Units to the Ethical Unit for it to create 
 additional or refined GoalNodes to help resolve previously unresolved 
 or ambiguous ethical issues?
 
 Cheers, Philip


Correct!

ben




RE: [agi] AGI morality

2003-02-10 Thread Michael Anissimov

Philip Sutton wrote:
Maybe we need to think about an 'ethics system' that is woven into the 
whole Novamente architecture and processes.

How about a benevolence-capped goal system where all the AI's actions 
flow from a single supergoal?  That way you aren't adding ethics into a 
fundamentally ethics-indifferent being, but creating a system that is 
ethical from the foundations upward.  Since humans aren't used to 
consciously thinking about our morality all day long and performing 
every action based on that morality, it's difficult to imagine a being 
that could.  But I believe that building an AI in that way would be 
much safer; as recursive self-improvement begins to take place (it 
could happen at any point, we don't really know), it would probably be a good 
thing for the AI's high-level goals to be maximally aligned with any 
preexisting complexity within the AI.  Letting the AI grow up with 
whichever goals look immediately useful ("regularly check and optimize 
chunk of code X", "win this training game", etc.) and then trying 
to weave in ethics works in humans because we already come pre-
equipped with cognitive machinery ready for behaving ethically; when 
we teach each other to be more good, we're only marginally tweaking 
the DNA-constructed cognitive architecture which is already there to 
begin with.  Weaving in ethics by creating a set of injunctions and 
encouraging an ethically nascent AI to extrapolate from those injunctions 
(analogous to humans giving one another ethical advice) isn't as robust 
a system as one which starts off early with the ability to perform fine-
grained tweaks of its own goals and methods within the context of its 
top-level goal (which has no analogy: it's better than anything 
evolution could have come up with).
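
As a purely structural sketch of the "flows from a single supergoal" idea --
a toy illustration only, not something taken from Creating Friendly AI or
Novamente -- one can picture subgoals being admissible only if their
justification chain reaches the supergoal:

class Goal:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent is not None:
            parent.children.append(self)

    def derives_from(self, supergoal):
        """A goal is admissible only if walking up its parents reaches the supergoal."""
        node = self
        while node is not None:
            if node is supergoal:
                return True
            node = node.parent
        return False

benevolence = Goal("benevolence")                          # the single supergoal
training = Goal("win this training game", parent=benevolence)
orphan = Goal("optimize chunk of code X")                  # no justification chain

print(training.derives_from(benevolence))   # True  -- flows from the supergoal
print(orphan.derives_from(benevolence))     # False -- would be rejected or re-derived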

I wonder if the top of the ethics hierarchy is the commitment of the 
AGI to act 'ethically' - ie. to have a commitment to modifying its own 
behaviour to benefit non-self (including life, people, other AGIs,  
community, etc.)

This means that an AGI has to be able to perceive self and non-self 
and to be able to subdivide non-self into elements or layers or 
whatever that deserve focussed empathetic or compassionate 
consideration.  

Why does the AGI need to create a boundary between itself and others in 
order to help others?  You seem to be writing under the implicit 
assumption that the AGI has a natural tendency to become selfish; where 
will this tendency come from?  An AGI might have a variety of layers 
of self for different purposes, but how would the self/non-self 
distinction be useful for an AGI engaging in compassionate or 
benevolent acts?  Instead of "be good to others", why not simply "be 
good in general"?

Maybe the experience of biological life, especially highly intelligent 
biological life, is useful here.  Young animals, including humans, 
seem to depend on hard wired instinct to see them through in relation 
to certain key issues before they have experienced enough to rely 
heavily or largely on learned and rational processes.

But the learned and rational processes are just the tip of the iceberg 
of underlying biological complexity, right?

Another key issue for the ethics system, but this time for more mature 
AGIs, is how the basic system architecture guides or restricts or 
facilitates the AGI's self modification process.  Maybe AGIs need to 
be designed to be social in that they have a really strong desire to: 

 (a) talk to other advanced sentient beings to kick around ideas for 
 self modification before they commit themselves to fundamental change.  

Probably a good idea just in case, but in a society of minds already 
independent from observer-biased moral reasoning, borrowing extra 
computing power for a tough decision is a more likely action 
than kicking around ideas in the way that humans do, right?  Or are 
we assuming a society of AIs with observer-biased moral reasoning?

This does not preclude changes that are not approved of by the 
collective but it might at least make an AGI give any changes careful 
consideration. If this is a good direction to go in it suggests that 
having more than one AGI around is a good thing.

What if the AGI could encapsulate the moral benefits of communal 
exchange through the introduction of a single cognitive module?  It 
could happen.  If we're building a bootstrapping AI, instead of 
building a bunch and launching them all at the same time, why not just 
build one we can trust to create buddies along the takeoff trajectory 
if circumstances warrant?  An AI that *really wanted* to be good from 
the start wouldn't need humans to create a society of AIs to keep their 
eyes on one another; it would do that on its own.

(c) maybe AGIs need to have reached a certain age or level of maturity 
before their machinery for fundamental self-modification is turned 
on...and maybe it gets turned on for different aspects of itself at 
different times in its process of maturation.

Of course, we'd have 

Re: [agi] AGI morality

2003-02-10 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote:


However, it's to be expected that an AGI's ethics will be different than any
human's ethics, even if closely related.


What do a Goertzelian AGI's ethics and a human's ethics have in common 
that makes it a humanly ethical act to construct a Goertzelian AGI?

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



RE: [agi] AGI morality

2003-02-10 Thread Ben Goertzel

I think we all agree that, loosely speaking, we want our AGIs to have a
goal of respecting and promoting the survival and happiness of humans and
all intelligent and living beings.

However, no two minds interpret these general goals in the same way.  You
and I don't interpret them exactly the same, and my children don't interpret
them exactly the same as me in spite of my explicit and implicit moral
instruction.  Similarly, an AGI will certainly have its own special twist on
the theme...

-- Ben G



 Ben Goertzel wrote:
 
  However, it's to be expected that an AGI's ethics will be different than
  any human's ethics, even if closely related.

 What do a Goertzelian AGI's ethics and a human's ethics have in common
 that makes it a humanly ethical act to construct a Goertzelian AGI?

 --
 Eliezer S. Yudkowsky  http://singinst.org/
 Research Fellow, Singularity Institute for Artificial Intelligence






RE: [agi] AGI morality

2003-02-10 Thread Michael Anissimov

Ben Goertzel writes:
This is a key aspect of Eliezer Yudkowsky's Friendly Goal 
Architecture

Yeah; too bad there isn't really anyone else to cite on this one.  It 
will be interesting to see what other AGI pursuers have to say about 
the hierarchical goal system issue, once they write up their thoughts.

The Novamente design does not lend itself naturally to a hierarchical 
goal structure in which all the AI's actions flow from a single 
supergoal.

Doesn't it depend pretty heavily on how you look at it?  If the 
supergoal is abstract enough and generates a diversity of subgoals, 
then many people wouldn't call it a supergoal at all.  I guess it 
ultimately boils down to how the AI designer looks at it.

GoalNodes are simply PredicateNodes that are specially labeled as 
GoalNodes; the special labeling indicates to other MindAgents that 
they are used to drive schema (procedure) learning.

Okay; got it.
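
A rough sketch of that labeling relationship, with the strong caveat that this
is illustrative pseudocode only, not actual Novamente source: a PredicateNode
carries a goal flag, and a MindAgent selects only flagged nodes to drive
procedure (schema) learning.

class PredicateNode:
    """Assigns a value in [0, 1] to each observed situation."""
    def __init__(self, name, fn, is_goal=False):
        self.name, self.fn, self.is_goal = name, fn, is_goal   # is_goal = the special label

    def evaluate(self, situation):
        return min(1.0, max(0.0, self.fn(situation)))

def schema_learning_targets(nodes):
    """What a schema-learning MindAgent might do: pick out the labeled GoalNodes."""
    return [n for n in nodes if n.is_goal]

nodes = [
    PredicateNode("is_red",      lambda s: s.get("red", 0.0)),
    PredicateNode("others_well", lambda s: s.get("wellbeing", 0.0), is_goal=True),
]
print([n.name for n in schema_learning_targets(nodes)])   # ['others_well']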

 Letting the AI grow up with whichever goals look immediately useful
 ("regularly check and optimize chunk of code X", "win this training
 game", etc.) and then trying to weave in ethics ...

That was not my suggestion at all, though.  The ethical goals can be there
from the beginning.  It's just that a purely hierarchical goal structure is
highly unlikely to emerge as a goal map, i.e. an attractor, of Novamente's
self-organizing goal-creating dynamics.

Right, that statement was directed towards Philip Sutton's mail, but I 
appreciate your stepping in to clarify.  Of course, whether AIs with 
substantially prehuman (low) intelligence can have goals that deserve 
being called ethical or unethical is a matter of word choice and 
definitions.  

Michael Anissimov





RE: [agi] AGI morality

2003-02-10 Thread Bill Hibbard
Hi Ben,

  I think discussing ethics in terms of goals leads to confusion.
  As I described in an earlier post at:
 
http://www.mail-archive.com/agi@v2.listbox.com/msg00390.html
 
  reasoning must be grounded in learning and goals must be grounded
  in values (i.e., the values used to reinforce behaviors in
  reinforcement learning).

 Bill, I think we differ mainly on semantics here.

 What you call values I'm just calling the highest-level goals in the goal
 hierarchy...

 A goal in Novamente is a kind of predicate, which is just a function that
 assigns a value in [0,1] to each input situation it observes... i.e. it's a
 'valuation' ;-)

Interesting. Are these values used for reinforcing behaviors
in a learning system? Or are they used in a continuous-valued
reasoning system?
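
One way to picture how a single [0,1] valuation could serve both purposes
asked about here -- a toy sketch only, not Novamente internals -- is that the
same output is fed to a reinforcement update and also treated as a fuzzy
truth value in an inference step:

def goal_valuation(situation):
    """The goal predicate: a score in [0, 1] for the observed situation."""
    return situation.get("wellbeing", 0.0)

# Use 1: as a reinforcement signal for a learned behavior.
strength = {"share_resources": 0.5}
def reinforce(behavior, situation, rate=0.1):
    strength[behavior] += rate * (goal_valuation(situation) - strength[behavior])

# Use 2: as a fuzzy truth value in continuous-valued reasoning,
# e.g. conjoined with another predicate's value.
def fuzzy_and(a, b):
    return a * b

situation = {"wellbeing": 0.9}
reinforce("share_resources", situation)
print(strength["share_resources"])                  # ~0.54
print(fuzzy_and(goal_valuation(situation), 0.7))    # ~0.63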

Cheers,
Bill
--
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI  53706
[EMAIL PROTECTED]  608-263-4427  fax: 608-263-6738
http://www.ssec.wisc.edu/~billh/vis.html




Re: [agi] AGI morality

2003-02-10 Thread Brad Wyble


 
 There might even be a benefit to trying to develop an ethical system for 
 the earliest possible AGIs - and that is that it forces everyone to strip 
 the concept of an ethical system down to its absolute basics so that it 
 can be made part of a not very intelligent system.  That will probably be 
 helpful in getting the clarity we need for any robust ethical system 
 (provided we also think about the upgrade path issues and any 
 evolutionary deadends we might need to avoid).
 
 Cheers, Philip

I'm sure this idea is nothing new to this group, but I'll mention it anyway out of 
curiosity.

A simple and implementable means of evaluating and training the ethics of an early AGI 
(one existing in a limited FileWorld type environment) would engage the AGI in 
variants of the prisoner's dilemma with either humans or a copy of itself.  The payoff 
matrix (CC, CD, DD) could be varied to provide a number of different ethical 
situations.

Another idea is that the prisoner's dilemma could then be internalized, and the AGI 
could play the game between internal actors, with the Self evaluating their actions 
and outcomes.
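
A minimal sketch of that setup, assuming standard payoff values and a made-up
blended reinforcement rule.  The blend is an illustration of one way the
payoffs could be turned into an ethics-relevant training signal, not part of
Brad's proposal:

# Payoffs to (row, column) players for Cooperate/Defect.  Varying these
# numbers is what would pose different ethical situations to the AGI.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play_round(agi_move, partner_move):
    return PAYOFFS[(agi_move, partner_move)]

def blended_reinforcement(agi_score, partner_score, selfishness=0.5):
    """One possible training signal: weight the partner's outcome as well as the AGI's."""
    return selfishness * agi_score + (1 - selfishness) * partner_score

print(blended_reinforcement(*play_round("C", "C")))   # 3.0 -- mutual cooperation
print(blended_reinforcement(*play_round("D", "C")))   # 2.5 -- exploiting a cooperator pays less here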


-Brad








RE: [agi] AGI morality

2003-02-09 Thread Ben Goertzel

Hi Philip,

I agree that a functionally-specialized Ethics Unit could make sense in an
advanced Novamente configuration.

Essentially, it would just be a unit concerned with GoalNode refinement --
creation of new GoalNodes embodying subgoals of the GoalNodes embodying
basic ethical principles.  GoalNode refinement, however, involves a lot of
Novamente processes, including first-order and higher-order inference,
predicate creation, association formation, etc.

The operations of this unit would not differ substantially from those of a
unit devoted to GoalNode refinement more generally.  However, devoting a
Unit to ethics goal-refinement on an architectural level would be a simple
way of ensuring resource allocation to ethics processing through
successive system revisions.  Of course, a system COULD revise itself so as
to create a mock ethics unit to fool human observers, and actually ignore
the output of this unit, but this is a low-probability scenario
(particularly if the ethics unit is working well ;)
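
As a very rough structural sketch of that arrangement (illustrative only --
the real refinement would go through inference, predicate creation,
association formation, and so on, not a string template):

# Toy picture of the Ethics Unit's job: specialize basic ethical GoalNodes
# into context-specific sub-GoalNodes and export them to the generic Units.
def refine(goal, context):
    """Stand-in for inference/predicate creation: derive a context-specific subgoal."""
    return f"{goal}::{context}"

basic_ethical_goals = ["promote_wellbeing", "avoid_harm"]
observed_contexts   = ["conversation", "resource_allocation"]

ethics_unit_output = [refine(g, c)
                      for g in basic_ethical_goals
                      for c in observed_contexts]

# The refined GoalNodes then live and work alongside the other GoalNodes
# in the action-framing and environment-monitoring Units.
action_framing_goal_pool = set(ethics_unit_output) | {"complete_current_task"}
print(sorted(action_framing_goal_pool))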

-- Ben


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
 Behalf Of Philip Sutton
 Sent: Sunday, February 09, 2003 7:58 PM
 To: [EMAIL PROTECTED]
 Subject: RE: [agi] AGI morality


 Ben,

 One issue you didn't respond to that I suggested was:

  I also think that AGIs need to have a built in commitment to devote an
  adequate amount of mind space to monitoring the external environment
  and internal thought processes to identify issues where ethical
  considerations should apply.  I think this resource allocation needs to
  be reinforced by some hard wiring.

 What's your feeling on this?  If I understand the Novamente system
 structure, wouldn't ethical competence warrant the inclusion of an
 ethics processing 'unit' in a Novamente AGI?

 The elements that I think are needed are: some goals (established in
 GoalNodes??) that conform to a dual structure (hierarchical/heterarchical);
 a firm and adequate commitment of resources to ethical perception and
 implications/action processing, with some tie-in to the 'emotional'
 motivation systems via FeelingNodes (?); some form of protection against
 frivolous reprogramming (i.e. maybe some aspects are quarantined from
 reprogramming and other aspects can only be rewired after a lot of very
 serious thought); and some form of structuring into the Mind Operating
 System.

 I think it might help the process of devising the ethical
 'machinery' of an
 AGI if we just agreed that it should have some (ethical 'machinery' )
 and then tried to figure out what the structure should be without getting
 bogged down in the specific ethical goals that should drive the system.

 Once we have a better feel for the ethics generation/processing
 architecture we could go back to the issue of what the ethical goals
 should be specifically.

 Cheers, Philip






RE: [agi] AGI morality

2003-02-09 Thread Philip Sutton
Ben,

 I agree that a functionally-specialized Ethics Unit could make sense in
 an advanced Novamente configuration. ...devoting a Unit to ethics
 goal-refinement on an architectural level would be a simple way of
 ensuring resource allocation to ethics processing through successive
 system revisions. 

OK.  That's good.

You've discussed this in terms of GoalNode refinement.  I probably don't 
understand the full range of what this means, but my understanding of 
how ethics works is that an ethical sentient being starts with some 
general ethical goals (some hardwired, some taught, and all blended!) 
and then the entity (a) frames action motivated by the ethics and (b) 
monitors the environment and internal processes to see if issues come 
up that call for an ethical response - then any or all of the following 
happen: the goals might be refined so that it's possible to apply the 
goals to the complex current context, and/or the entity goes on to 
formulate actions informed by the ethical cogitation.

So on the face of it an Ethics Unit of an AGI would need to do more 
than GoalNode refinement??  Or have I missed the point?

Cheers, Philip
