Bill Hibbard wrote:
> Human and animal brains have mostly selfish values, but
> there is no good reason to design artificial brains with
> selfish values. I'd like to see values based on human
> happiness, as recognized in human faces, voices and body
> language.
>
> The danger is that reinforcement ...
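(A minimal sketch of what Hibbard is proposing, in Python: a
reinforcement learner whose reward is not an internal, selfish
quantity but an estimate of observed human happiness. The names
estimate_happiness and run_episode, the toy policy, and the dummy
environment are all hypothetical stand-ins for illustration, not
anyone's actual system.)

    import random

    # Hypothetical stand-in for a perception model that scores human
    # happiness from faces, voices and body language. A real system
    # would be a trained recognizer; this stub just returns a number
    # in [0, 1] so the sketch runs.
    def estimate_happiness(observation):
        return random.random()

    # The agent's reward at each step is the estimated happiness of
    # the humans it observes, rather than any built-in selfish value.
    def run_episode(policy, env_step, initial_observation, steps=10):
        obs = initial_observation
        total_reward = 0.0
        for _ in range(steps):
            action = policy(obs)         # agent chooses an action
            obs = env_step(obs, action)  # world responds
            total_reward += estimate_happiness(obs)
        return total_reward

    # Toy usage: a random policy in a dummy, unchanging environment.
    reward = run_episode(
        policy=lambda obs: random.choice(["speak", "wait"]),
        env_step=lambda obs, act: obs,
        initial_observation="humans_present",
    )
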
Hi Philip,
> If the next big thing (advanced AGI) were to treat us like we treat the
> species we've advanced over, then I'd say humans have good reason
> to be nervous.
>
> But I think the solution is for humans and AGIs to grow up together and
> for AGIs to have to develop with well-developed ethics ...
>
> Is anybody working on building ethical capacity into AGI from the
> ground up?
>
> As I mentioned to Ben yesterday, AGIs without ethics could end up
> being the next decade's e-viruses (on steroids).
>
> Cheers, Philip
My thoughts on this are at www.goertzel.org/dynapsyc/2002/AIMorality.htm

Philip,
IRT:
>
> Is anybody working on building ethical capacity into AGI from
> the ground up?
>
Yes. Check out www.singinst.org, particularly the Friendly AI
initiative.
>
> As I mentioned to Ben yesterday, AGIs without ethics could end
> up being the next decade's e-viruses (on steroids).

Hi David,
> What of the possibility, Ben, of an Asimov-like reaction to the
> possibility of thinking machines that compete with humans? It's the
> kind of dumb, Man-Was-Not-Meant-to-Go-There, scenario we see all the
> time on Sci-Fi Channel productions, but it is plausible, especially in
> a world ...