Colin said:

> I find the friendliness issue fairly infertile ground tackled way too
> soon.
> Go back to where we are: the beginning. I'm far more interested in the
> conferring of a will to live. Our natural tendency is to ascribe this will
> to live to our intelligent artifacts.
<snip>
> My feeling at the moment is that far from having a friendliness problem
> we're more likely to need a cattle prod to keep the thing interested in
> staying awake, let alone getting it to take the trouble to formulate any
> form of friendliness or malevolence or even indifference.
<snip>
> If our artifact is not a zombie (i.e. has a real subjective experience)
> then what motivates _real_ friendliness or malevolence or even indifference?
<snip>
> Whatever the outcome, at its root is the will to even start learning that
> outcome. You have to be awake to have a free will.
> What gets our AGI progeny up in the morning?

The problem I have with this viewpoint is that it assumes the mechanisms
that make an AGI system capable of "thinking" or "ruminating" will be
insufficient for it to comprehend subjective experience. For any given
system, that may or may not be true; it ultimately depends on the system's
design. If the design allows the AGI to learn how to improve upon itself,
and it does so successfully, it is more likely than not to develop that
capacity.

As for the point "...we're more likely to need a cattle prod to keep the
thing interested in staying awake...", you're correct *if* a given AGI
design falls short of its intended goal. But I think it's short-sighted to
assume that will be the norm. To date it has been; will it remain so in the
near future? I wouldn't bet that way :-)

Tim

