Terren,

>is not embodied at all, in which case it is a mindless automaton

Researchers and philosophers define mind and intelligence in many
different ways, so their classifications of particular AI systems
differ. What really counts, though, are the problem-solving abilities
of the system, not how it's labeled according to a particular
definition of mind.

> So much talk about Friendliness implies that the AGI will have no ability to 
> choose its own goals.

Developer's choice. My approach:
Main goals - definitely not;
Sub-goals - sure, though with restrictions.
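For illustration, here's a minimal sketch of that split (all names and the restriction check are my own assumptions for the example, not a real design): the developer fixes the main goal, while the system may pick its own sub-goals only if they pass the developer's restrictions.

```python
# Hypothetical sketch: fixed main goal, restricted self-chosen sub-goals.
class GoalSystem:
    def __init__(self, main_goal, restrictions):
        self._main_goal = main_goal        # set by the developer, not the system
        self._restrictions = restrictions  # predicates every sub-goal must satisfy
        self.sub_goals = []

    @property
    def main_goal(self):
        # Read-only: the system has no way to replace its main goal.
        return self._main_goal

    def propose_sub_goal(self, goal):
        # The system chooses sub-goals itself, but only within the restrictions.
        if all(check(goal) for check in self._restrictions):
            self.sub_goals.append(goal)
            return True
        return False


gs = GoalSystem("assist users", [lambda g: "harm" not in g])
gs.propose_sub_goal("learn user preferences")  # accepted
gs.propose_sub_goal("harm competitors")        # rejected by the restriction
```

The point of the sketch is only the asymmetry: main goal immutable, sub-goal selection free within developer-imposed limits.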

>It seems that AGI researchers are usually looking to create clever slaves.

We are talking about our machines.
What else are they supposed to be?

>clever slaves. That may fit your notion of general intelligence, but not mine.

To me, general intelligence is a cross-domain ability to gain
knowledge in one context and correctly apply it in another [in terms
of problem solving]. The source of the primary goal(s) (/problem(s) to
solve) has (from my perspective) nothing to do with the system's level
of intelligence. It doesn't make the system more or less intelligent;
it's simply a separate matter. The system gets the initial goal [from
whatever source] and *then* it's time to apply its intelligence.

Regards,
Jiri Jelinek


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Powered by Listbox: http://www.listbox.com