[agi] Let's make the friendly AI debate relevant to AGI research/development

2003-03-03 Thread Philip Sutton
Pei,

 I also have a very low expectation on what the current Friendly AI
 discussion can contribute to the AGI research. 

OK - that's a good issue to focus on then.

In an earlier post, Ben described three ways that ethical systems could 
be facilitated:
A)  Explicit programming-in of ethical principles (EPIP) 
B)  Explicit programming-in of methods specially made for the learning
of ethics through experience and teaching 
C)  Acquisition of ethics through experience and teaching, through
generic AI methods

It seems to me that (A) and (B) have immediate relevance to the 
research needed for the development of a friendly AGI.

And Kevin has proposed the development of machinery for a big red 
button, which is another tangible issue.

So maybe we should take up your point and deliberately focus the ethics 
discussion on being relevant to the research and trial development 
issues.

Would you be prepared to help us focus the discussion in this way?

Cheers, Philip

---


RE: [agi] Let's make the friendly AI debate relevant to AGI research/development

2003-03-03 Thread Philip Sutton
Ben,

 I think Pei's point is related to the following point: We're now working 
 on aspects of 
 A) explicit programming-in of ideas and processes 
 B) Explicit programming-in of methods specially made for the learning 
 of ideas and processes through experience and teaching 
 and that until we understand these better, there's not that much useful 
 work to be done specifically pertaining to *ethical* ideas and processes.

OK. That makes sense. 
While there may be some interesting tweaks 
that arise from consideration of ethical issues, I can see the broad 
sense of this strategy.

Cheers, Philip