On 9/14/06, Anna Taylor <[EMAIL PROTECTED]> wrote:
Ben wrote:
I don't think that Friendliness, to be meaningful, needs to have a
compact definition.
Anna's question:
Then how will you build a "Friendly AI"?
If I wanted to build an AI embodying my own personal criterion of
Friendliness (or "benevolence", whatever), I would first build an AI
with the goal of figuring out a precise formalization of my own
personal criterion of Friendliness. (I am sure such a formalization
exists, but I'm also sure it's not compact and elegant.) Then, I
would build an AI with the goal of maximizing that formalized
criterion of Friendliness.
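
Just to make the two-stage idea concrete, here is a toy sketch in
Python (purely my illustration, not a real design; every name, number,
and the crude "formalization" method are made up). Stage one stands in
for "figuring out a precise formalization" by fitting a simple scoring
function to hypothetical example judgments of Friendliness; stage two
just picks the candidate outcome that maximizes that learned score.

from typing import Callable, Dict, List, Tuple

def formalize_criterion(
    judgments: List[Tuple[Dict[str, float], float]]
) -> Callable[[Dict[str, float]], float]:
    # Stage 1 stand-in: "formalize" my criterion by deriving crude
    # per-feature weights from how each feature co-occurs with my
    # (hypothetical) Friendliness ratings.
    weights: Dict[str, float] = {}
    counts: Dict[str, int] = {}
    for features, rating in judgments:
        for name, value in features.items():
            weights[name] = weights.get(name, 0.0) + rating * value
            counts[name] = counts.get(name, 0) + 1
    for name in weights:
        weights[name] /= counts[name]

    def criterion(outcome: Dict[str, float]) -> float:
        return sum(weights.get(n, 0.0) * v for n, v in outcome.items())

    return criterion

def maximize_criterion(
    criterion: Callable[[Dict[str, float]], float],
    candidate_outcomes: List[Dict[str, float]],
) -> Dict[str, float]:
    # Stage 2 stand-in: pick whichever candidate outcome scores
    # highest under the formalized criterion.
    return max(candidate_outcomes, key=criterion)

if __name__ == "__main__":
    # (outcome features, my Friendliness rating) -- invented examples
    my_judgments = [
        ({"wellbeing": 1.0, "autonomy": 0.5}, 0.9),
        ({"wellbeing": 0.2, "autonomy": 1.0}, 0.6),
    ]
    criterion = formalize_criterion(my_judgments)
    best = maximize_criterion(criterion, [
        {"wellbeing": 0.8, "autonomy": 0.9},
        {"wellbeing": 0.3, "autonomy": 0.2},
    ])
    print("Chosen outcome:", best)

Of course, a real version of stage one would be vastly harder than
fitting weights to a handful of labeled examples; the sketch only
shows the shape of the two-stage structure, not its substance.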
However, I am not so sure this is the most sensible approach to
take.... The details of my own personal Friendliness criterion are
not that important (nor are the details of *anyone*'s particular
Friendliness criterion). It may be more sensible to create an AI
whose top-level goal represents more abstract and general values....
-- Ben