Hi Vlad,

Thanks for the response. It seems you're advocating an incremental
approach *towards* FAI, the ultimate goal being full attainment of
Friendliness... something you describe as fraught with difficulty but not
insurmountable. As you know, I disagree that it is attainable, because it is
not possible in principle to know whether something that considers itself
Friendly actually is. You have to break a few eggs to make an omelet, as the
saying goes, and Friendliness depends on whether you're the egg or the cook.

Terren

--- On Sat, 8/30/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> From: Vladimir Nesov <[EMAIL PROTECTED]>
> Subject: [agi] What is Friendly AI?
> To: [email protected]
> Date: Saturday, August 30, 2008, 1:53 PM
> On Sat, Aug 30, 2008 at 8:54 PM, Terren Suydam
> <[EMAIL PROTECTED]> wrote:
> > --- On Sat, 8/30/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> >
> >> You start with "what is right?" and end with Friendly AI, you don't
> >> start with "Friendly AI" and close the circular argument. This
> >> doesn't answer the question, but it defines Friendly AI and thus
> >> "Friendly AI" (in terms of "right").
> >
> > In your view, then, the AI never answers the question "What is
> > right?". The question has already been answered in terms of the
> > algorithmic process that determines its subgoals in terms of
> > Friendliness.
> 
> There is a symbolic string "what is right?" and what it refers to, the
> thing that we are trying to instantiate in the world. The whole process
> of answering the question is the meaning of life; it is what we want to
> do for the rest of eternity (it is roughly a definition of "right"
> rather than an over-the-top extrapolation from it). It is an immensely
> huge object, and we know very little about it, just as we know very
> little about the form of the Mandelbrot set from the formula that
> defines it, even though it entirely unfolds from that little formula.
> What's worse, we don't know how to safely establish the dynamics for
> answering this question; we don't know the formula, we only know the
> symbolic string, "formula", that we assign some fuzzy meaning to.
> 
> There is no final answer, and no formal question, so I use
> question-answer pairs to describe the dynamics of the process, which
> flows from question to answer, where the answer becomes the next
> question, which then leads to the next answer, and so on.
> 
> With Friendly AI, the process begins with the question a human asks
> himself, "what is right?". From this question follows a technical
> solution, the initial dynamics of Friendly AI: a device to make the
> next step, to begin transferring the dynamics of "right" from humans
> into a more reliable and powerful form. In this sense, Friendly AI
> answers the question of "right" by being the next step in the process.
> But the initial FAI doesn't embody the whole dynamics; it only
> references it in the humans and learns to gradually transfer it, to
> embody it. The initial FAI doesn't contain the content of "right",
> only the structure needed to absorb it from humans.
> 
> Of course, this is a simplification; there are all kinds of
> difficulties. For example, this whole endeavor needs to be safeguarded
> against mistakes made along the way, including mistakes made before
> the idea of implementing FAI appeared, mistakes in the everyday design
> that went into FAI, mistakes in the initial stages of training, and
> mistakes in moral decisions about what "right" means. The initial FAI,
> when it grows up sufficiently, needs to be able to look back and see
> why it turned out the way it did: was it because it was intended to
> have a property X, or because of some kind of arbitrary coincidence?
> Was property X intended for valid reasons, or because programmer Z had
> a bad mood that morning? And so on. Unfortunately, there is no
> objective morality, so FAI needs to be made good enough from the start
> to eventually be able to recognize what is valid and what is not,
> reflectively looking back at its origin, with all the depth of factual
> information and optimization power to run whatever factual queries it
> needs.
> 
> I (vainly) hope this answered (at least some of the) other questions
> as well.
> 
> -- 
> Vladimir Nesov
> [EMAIL PROTECTED]
> http://causalityrelay.wordpress.com/
> 
> 
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
