Mike,

Please check out an essay I wrote a couple years ago,

http://www.goertzel.org/papers/LimitationsOnFriendliness.pdf

which is related to the issues you mention.  As I note there:

"
My goal in this essay is to explore some particular aspects of the
difficulty of
creating Friendly AI, which ensue not from the subtleties of AI design but
rather from the
complexity of the notion of Friendliness itself, and the complexity of the
world in which
both humans and AI's are embedded.

...

... the basic arguments I present here regarding Friendliness are as
follows:

• Creating accurate formalizations of current human notions of action-based
Friendliness, while perhaps possible in the future with very significant
effort, is
unlikely to lead to notions of action-based Friendliness that will be robust
with
respect to future developments in the world and in humanity itself
• The world appears to be sufficiently complex that it is essentially
impossible for
seriously resource-bounded systems like humans to guarantee that any
system's
actions are going to have beneficent outcomes.  I.e., guaranteeing (or
coming
anywhere near to guaranteeing) outcome-based Friendliness is effectively
impossible.  And this conclusion holds for basically any highly specific
property,
not just for Friendliness as conventionally defined.  (What is meant by a
"highly
specific property" will be defined below.)

"

I don't conclude that the complexity of the world means AGI is impossible, though. I just conclude that creating very powerful AGIs with predictable effects is quite possibly not possible ;-)
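To make the second bullet point above concrete, here's a toy illustration (just a sketch I'm putting together for this list, not something from the essay; the function name and numbers are arbitrary): the logistic map, the standard textbook example of deterministic chaos. Two initial conditions that agree to within any fixed measurement precision diverge exponentially fast, so each extra digit of precision buys a bounded predictor only a constant number of additional predictable steps. In Python:

# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n), with r = 4 (the fully chaotic regime).
# Two trajectories that start within "measurement error" of each other
# become effectively uncorrelated after a few dozen steps.

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

true_state = 0.123456789   # the world's actual state (unknowable in full)
measured   = 0.123456      # the predictor's best finite-precision estimate

a = logistic_trajectory(true_state)
b = logistic_trajectory(measured)

for n in range(0, 61, 10):
    print("step %2d: true=%.6f  predicted=%.6f  error=%.6f"
          % (n, a[n], b[n], abs(a[n] - b[n])))

The initial error here is about 8e-7, and since errors roughly double with each step in this regime, the prediction is worthless after about 20 iterations; an extra decimal digit of precision buys only 3 or 4 more steps. The point isn't that a superior intelligence couldn't predict better than we do, but that for dynamics like this the prediction horizon grows only logarithmically with the predictor's resources, which is exactly why guaranteeing specific long-run outcomes is out of reach for any resource-bounded system.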

-- Ben G


On 10/29/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
>  Check out
>
>
> http://environment.newscientist.com/article/dn12833-climate-is-too-complex-for-accurate-predictions.html
>
> which argues:
>
> "Climate change models, no matter how powerful, can never give a precise
> prediction of how greenhouse gases will warm the Earth, according to a new
> study."
>
> What's that got to do with superAGIs? This: the whole idea of a superAGI
> "taking off" rests on the assumption that the problems we face in life are
> soluble if only we, or superAGIs, have more brainpower.
>
> The reality is that the problems we face are actually infinite or
> "practically endless."  Problems like predicting the weather, working out
> what to do in Iraq, how to seduce or persuade another person, working out
> what career path to follow, deciding how to invest on the stock market, etc.
> You can think about them forever and screw up just as badly or worse than if
> you think about them for a minute. And a superAGI may be just as capable of
> losing a bundle on the market as we are, or producing a product that no one
> wants.
>
> That doesn't mean that a superior brain wouldn't have advantages, but
> rather that there would be considerable limits to its powers. Even a vast
> brain will have problems dealing with problematic, infinite problems. (And
> even mighty America with all its collective natural and artificial
> brainpower still has problems dealing with dumb peasants).
>
> What is rather disappointing to me, given that there is an awful lot of
> mathematical brainpower around here, is that there seems to be no interest
> in giving mathematical expression to the ideas I have just expressed.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=58639965-c45f46
