RE: [agi] AGI morality - goals and reinforcement values

2003-02-11 Thread Philip Sutton
Ben/Bill,

My feeling is that goals and ethics are not identical concepts.  And I 
would think that goals would only make an intentional ethical 
contribution if they related to the empathetic consideration of others.

So whether ethics are built in from the start in the Novamente 
architecture depends on whether there are goals *with ethical purposes* 
included from the start.

And whether the ethical system is *adequate* from the start would 
depend on the specific content of the ethically related goals, and on the 
resourcing and sophistication of effort that the AGI architecture directs 
at understanding and acting on the implications of those goals vis-a-vis 
any other activity that the AGI engages in.  I think the adequacy of 
the ethics system also depends on how well the architecture helps the 
AGI to learn about ethics.  If it is a slow learner, then the fact that it has 
machinery there to handle what it eventually learns is great but not 
sufficient.

Cheers, Philip




RE: [agi] AGI morality - goals and reinforcement values - plus early learning

2003-02-11 Thread Philip Sutton
Ben,

 Right from the start, even before there is an intelligent autonomous mind
 there, there will be goals that are of the basic structural character of
 ethical goals.  I.e. goals that involve the structure of compassion, of
 adjusting the system's actions to account for the well-being of others based
 on observation of and feedback from others.  These one might consider as
 the seeds of future ethical goals.  They will
 grow into real ethics only once the system has evolved a real reflective
 mind with a real understanding of others...

Sounds good to me!  It feels right.

At some stage when we've all got more time, I'd like to discuss how the 
system architecture might be structured to assist the ethical learning of 
baby AGIs.
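
To give that discussion something concrete to start from, here is a very 
rough sketch of what one "seed" ethical goal might look like in code.  All 
the names and numbers below are invented for illustration - this is not 
Novamente's actual goal machinery, just the bare structure Ben describes: 
score an action partly by an estimated effect on the well-being of others, 
and adjust that estimate from observation of and feedback from others.

# Minimal sketch (all identifiers hypothetical): a "seed" compassion goal
# that blends the system's own utility with a learned estimate of an
# action's effect on others' well-being.

class SeedCompassionGoal:
    def __init__(self, weight=0.5, learning_rate=0.1):
        self.weight = weight              # how much others' well-being counts
        self.learning_rate = learning_rate
        self.wellbeing_model = {}         # action -> estimated effect on others

    def score(self, action, self_utility):
        """Blend the system's own utility with the estimated effect on others."""
        others = self.wellbeing_model.get(action, 0.0)
        return (1 - self.weight) * self_utility + self.weight * others

    def observe_feedback(self, action, observed_change):
        """Nudge the well-being estimate toward what was actually observed."""
        old = self.wellbeing_model.get(action, 0.0)
        self.wellbeing_model[action] = old + self.learning_rate * (observed_change - old)

# Example: a baby AGI learns that grabbing the toy upsets its playmate.
goal = SeedCompassionGoal()
goal.observe_feedback("grab_toy", -1.0)   # playmate cries
goal.observe_feedback("share_toy", +1.0)  # playmate smiles
print(goal.score("grab_toy", self_utility=0.8))   # pulled down by the estimate
print(goal.score("share_toy", self_utility=0.3))  # pulled up by the estimate

The real architectural questions - how such a goal gets resourced, refined 
and generalised as the system matures - are exactly what I'd like us to 
discuss.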

Cheers, Philip




RE: [agi] AGI morality - goals and reinforcement values

2003-02-11 Thread Bill Hibbard
On Wed, 12 Feb 2003, Philip Sutton wrote:

 Ben/Bill,

 My feeling is that goals and ethics are not identical concepts.  And I
 would think that goals would only make an intentional ethical
 contribution if they related to the empathetic consideration of others.
 . . .

Absolutely, goals (I prefer the word values) and ethics
are not identical. Values are a means to express ethics.

Cheers,
Bill




Re: [agi] AGI morality - goals and reinforcement values

2003-02-11 Thread Eliezer S. Yudkowsky
Bill Hibbard wrote:

 On Wed, 12 Feb 2003, Philip Sutton wrote:

  Ben/Bill,

  My feeling is that goals and ethics are not identical concepts.  And I
  would think that goals would only make an intentional ethical
  contribution if they related to the empathetic consideration of others.

 Absolutely, goals (I prefer the word values) and ethics
 are not identical. Values are a means to express ethics.


Words goin' in circles... in my account there's morality, metamorality, 
ethics, goals, subgoals, supergoals, child goals, parent goals, 
desirability, ethical heuristics, moral ethical heuristics, metamoral 
ethical heuristics, and honor.

Roughly speaking you could consider ethics as describing regularities in 
subgoals, morality as describing regularities in supergoals, and 
metamorality as defining the computational pattern to which the current 
goal system is a successive approximation and which the current philosophy 
is an interim step in computing.

In all these cases I am overriding existing terminology to serve as a term 
of art.  In discussions like these, common usage is simply not adequate to 
define what the words mean.  (Those who find my definitions inadequate can 
find substantially more thorough definitions in Creating Friendly AI.)
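
For concreteness only - this is not the CFAI formalism, and every 
identifier below is invented for illustration - here is a toy goal tree in 
which supergoals carry intrinsic desirability and subgoals (child goals) 
inherit desirability from the parent goals they serve.  "Ethics" in the 
sense above would then name regularities across the subgoal layer, and 
"morality" regularities across the supergoals themselves.

# Toy sketch of a goal tree; purely illustrative, not a proposed design.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Goal:
    name: str
    desirability: float = 0.0          # intrinsic for supergoals, derived for subgoals
    parent: Optional["Goal"] = None    # the parent (super)goal this goal serves
    children: List["Goal"] = field(default_factory=list)

    def add_child(self, child, contribution):
        """A child goal borrows desirability from the parent it serves."""
        child.parent = self
        child.desirability = self.desirability * contribution
        self.children.append(child)
        return child

# A supergoal with intrinsic desirability; subgoals only borrow it.
protect = Goal("protect sentient beings", desirability=1.0)
honesty = protect.add_child(Goal("tell the truth"), contribution=0.8)
promises = protect.add_child(Goal("keep promises"), contribution=0.7)

# Regularities across many such subgoals (honesty reappearing under many
# different supergoals) are what I am calling ethics; regularities across
# the supergoals themselves are what I am calling morality.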

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
