> >Do you believe that there's some amount of computing power, beyond which
> >increases in computing power no longer lead to commensurate increases in
> >intelligence?  If so, why?
> >
> >Do you believe that, at some amount of computing power, the amount of
> >intelligence achievable using this amount of computing power will not be
> >adequate to figure out how to gather more computing power?  If so, why?
>
> I agree with you that there will be no limits to the
> above two processes. What I'm skeptical about is how
> we exploit this possibility. I cannot imagine how an
> AI could "impose" (I can't think of a better word) a
> morality on all human beings on earth, even given
> intergalactic computing resources. If this cannot be
> done, then we *must* default to the self-organization
> of the free-market economy. That means you have to
> specify what your AI will do, instead of relying on
> idealistic descriptions that have no bearing on reality.
>
> YKY

Hmmm....

I'm not sure why the notion of "free market" necessarily applies here.  The
market economy is a manifestation of a particular phase of human history,
not obviously relevant to future AI systems.

Also, once AIs are truly intelligent and autonomous, they won't be relying
on human specifications anymore, at least not directly....  I agree, however,
that our idealistic descriptions will not be very relevant to their behaviors.

-- Ben G
