YKY wrote:
> In case you cannot specify a utopian goal for the AI,
> then you'll have to create AIs that cater to specific
> interests of specific groups, ie those with relatively
> rigid goal structures. My prediction is that this goal
> specification problem would become so complicated that
> people would be better off simply building simple utility
> AIs that integrate with the economy. Notice the similarity
> between this goal specification problem and the problem of
> command economies.

Hmm... I'm not sure if you're arguing that

a) engineering a truly flexible AGI is impossible because the problem of
specifying goals for the AI is too hard

or

b) engineering an AGI that truly serves the long-term interests of humans,
in general, is impossible because the problem of specifying the associated
goals is too hard

If the latter, you might be right; if the former, I strongly disagree...

You could be right that "people would be better off", in the long term, if
we only engineered narrow-AI programs serving special interest groups.  This
embodies a lot of assumptions, however, including the assumption that, absent
AI, people won't exterminate themselves...  It also seems to come from an
assumption that "people being better off" should be the goal of our work,
which isn't universally accepted.  If the goal is "Minds being better off,"
for example, then things may look quite different.

> Fact: You do not know how to specify utopia. Whereas
> individuals know what makes *themselves* happy.

Well, the latter statement clearly contradicts a lot of historical facts
about human psychology, but I guess it's not necessary or appropriate to go
into such things here ;-)  What we get into here is the peculiar and
multidimensional nature of the notion of "happiness."

-- Ben G
