Bill Hibbard via AGI wrote:
> It is lamentable that there is so little
> support for research on ethical AI. The last
> section of my book describes this situation.
> Outreach to the AI mainstream by MIRI will
> hopefully help generate more funding for
> research on ethical AI. (Note some MIRI folks
> don't like the term 'ethical AI' - one of
> many issues on which I respectfully disagree.)

I skimmed parts of your book. It looks like you have a severely
constrained conceptualization of how a motivational system will work.
What if the AI uses a motivational system closely modeled on the human
system? The types of simplifications that math people like to make do not
translate well into practical systems. While having a theoretical basis
under your system can be a tremendous boon, trying to simplify things
down to utility = x is not helping anyone. The 300-page rant against
uploading in the story that I'm writing tries to make the point that, at the
very least, a proper motivational system is no simpler than a fairly
long vector of values and that there is no meaningful way to derive a
concept of utility. Importantly, all of the information you need to
actually understand the system is in the vector and is thrown away if
you try to reduce it to a single scalar that could be crowned with the
name utility.
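To make that point concrete, here's a minimal sketch (my own toy example, not anything from the book; the drive names and equal weighting are illustrative assumptions) of how collapsing a motivation vector to one scalar throws away exactly the information an agent needs to act:

```python
# Hypothetical sketch: two very different internal states that a scalar
# "utility" cannot distinguish. Drive names and weights are assumptions.

state_a = {"hunger": 0.9, "curiosity": 0.1, "safety": 0.5}
state_b = {"hunger": 0.1, "curiosity": 0.9, "safety": 0.5}

def utility(state):
    """Reduce the motivation vector to a single scalar (equal weights)."""
    return sum(state.values()) / len(state)

# Both states collapse to the same scalar, so "utility" alone cannot
# tell the agent whether it should seek food or go exploring.
print(utility(state_a) == utility(state_b))  # True

# The vector itself still preserves the distinction that matters:
print(state_a != state_b)  # True
```

Any fixed weighting gives the same failure mode: distinct vectors land on the same scalar, and the direction you'd need to move in is gone.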

In terms of smileys, utility is like reducing everything to:

:)    and    :(  

With a vector motivational system you can have:

=)  :)  8)  %P  >= )   =\  =(   =~(   8P   =o)   X)    < ^o^ >   =P  =0
  and any of dozens of others...

-- 
IQ is a measure of how stupid you feel.

Powers are not rights.


