On 10/23/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> To make a system do something organized, you would have to give it goals
> and motivations.  These would have to be designed:  you could not build
> a "thinking part" and then leave it to come up with motivations of its
> own.  This is a common science fiction error:  it is always assumed that
> the thinking part would develop its own motivations.  Not so:  it has to
> have some motivations built into it.  What happens when we imagine
> science fiction robots is that we automatically insert the same
> motivation set as is found in human beings, without realising that this
> is a choice, not something that comes as part and parcel, along with
> pure intelligence.


It can always pick something at random, can't it? Of course you can say that
to do so, it must already have a motivation to do so; it all comes down to
the presence of a design choice that makes speaking about motivations (as
extracted from behavior as a whole) meaningful.

-- 
Vladimir Nesov                            mailto:[EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email