> If humans, who have the benefit of massive evolutionary debugging, are so
> prone to meta-level errors, it seems unwise to assume that intelligence
> alone will automatically solve the problem. At a minimum, we should look
> for a coherent theory as to why humans make these kinds of mistakes, and
> why an AI would be unlikely to do so.
>
> Billy Brown

This is the sort of thing I was talking about in the language of "giving an
AGI the right attitude."

We humans have all sorts of emotional complexes that prevent us from being
objective about ourselves.

One should not anthropomorphically assume that AGIs will have similar
complexes!

Yet one should not glibly assume that they will automatically emerge as
paragons of rationality and mental health, either...

In my view, what we're talking about here is partly a matter of "AGI
personality psychology" ...

-- Ben G
