Hi Philip,

> If the next big thing (advanced AGI) were to treat us like we treat the
> species we've advanced over, then I'd say humans have good reason
> to be nervous.
>
> But I think the solution is for humans and AGIs to grow up together and
> for AGIs to have to develop with well developed ethical
> capabilities/standards.
>
> Is anybody working on building ethical capacity into AGI from the
> ground up?

In my view, reinforcement learning is fundamental to the
way brains work, and it requires values that define
positive and negative reinforcement of behaviors.

Human and animal brains have mostly selfish values, but
there is no good reason to design artificial brains with
selfish values. I'd like to see values based on human
happiness, as recognized in human faces, voices and body
language.
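
To make that concrete, here is a minimal sketch in Python
of how such a reinforcement value might be computed. The
AffectEstimate inputs are hypothetical placeholders for
real face, voice, and body-language recognizers, which a
real system would have to learn:

    from dataclasses import dataclass

    @dataclass
    class AffectEstimate:
        """Hypothetical per-person affect readings, each in [-1, 1]."""
        face: float   # estimated happiness from facial expression
        voice: float  # estimated happiness from vocal tone
        body: float   # estimated happiness from body language

    def happiness_reward(people: list[AffectEstimate]) -> float:
        """Combine everyone's affect estimates into one scalar
        reinforcement value. Averaging over all observed people
        keeps the value unselfish: the agent is rewarded for
        human happiness, not for any state of its own."""
        if not people:
            return 0.0
        per_person = [(p.face + p.voice + p.body) / 3.0 for p in people]
        return sum(per_person) / len(per_person)

    # Example: one pleased person, one roughly neutral person.
    reward = happiness_reward([
        AffectEstimate(face=0.8, voice=0.6, body=0.7),
        AffectEstimate(face=0.0, voice=0.1, body=-0.1),
    ])
    print(f"reinforcement value: {reward:+.2f}")

How the separate signals should be weighted and combined is
of course an open design question; the point is only that
the values driving reinforcement are a design choice.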

The danger is that reinforcement values will be based
on some corporation's profits and losses, or even some
sort of military values.

> As I mentioned to Ben yesterday, AGIs without ethics could end up
> being the next decade's e-viruses (on steroids).

I agree that AGI ethics will become a big public policy
issue, though I'm sorry to say probably not in the next
decade (I'd love to be wrong about that).

Cheers,
Bill
