Andrew,

> As for non-attachment being a path to "friendly" AI.  I'm going to have to
> say, that is wrong and dangerous.  Sociopath wrong.  It would make a system
> even less controllable as well as very hard to understand.  Now, sure, you
> might hope they are not such starving, greedy, self-absorbed creatures like
> most people out there, but with no needs, they'd be kind of useless.
>

Natural languages are rather ambiguous, and English is especially ambiguous
about internal states...

I don't think that non-attachment, in the sense that I considered it in my
blog post, would be dangerous for an AGI... quite the contrary

On the other hand, there are certainly interpretations of the English term
"non-attachment" under which non-attachment would be dangerous for an AGI...

There is non-attachment in the sense of not retaining dependencies,
associations, or subgoals beyond the point where there is reason to believe
they are valuable to you...

And then there is non-attachment in the sense of not caring about anything
at all...

As I thought I made clear, I meant the former sense...

-- Ben



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424