Ben Goertzel wrote:
> In my view, what we're talking about here is partly a matter of "AGI
> personality psychology" ...
Exactly. My point is that there is no particular reason to assume that "AGI personality psychology" will be any easier than, say, computer vision or natural language processing. In fact, the history of AI to date makes it safer to assume the opposite: just about every other interesting problem anyone has tried to solve has turned out to require all sorts of specialized code and novel theoretical insights, so we ought to assume this one will too.

Now, that doesn't mean all AI work should focus on this topic, of course. But it does mean that no serious AGI project can expect sane, ethical behavior to emerge naturally once the basic problem of making the system think at all is solved. It would be more realistic to expect a whole new level of difficult problems that are poorly studied today, simply because no existing AI system is complex enough to produce them.

Billy Brown
