On 10/22/07, Charles D Hixson <[EMAIL PROTECTED]> wrote:
> Aleksei Riikonen wrote:
>> I think most of us would prefer potentially superhuman
>> systems to not have goals/etc of their own.
>
> To me this sounds like wishing for a square circle.  What we really want
> is that the goals, etc. of the AI facilitate our own, or at minimum not
> come into conflict with them.  (Few would object if the AI wanted to
> grant our every wish.)

An AI that only carries out our every wish would fall under what I
described as "not having goals of its own". Its goals are ours, not
something that emerges from an anthropomorphic ego of the AI.

My terminology was imprecise, yes, but I found it the best option when
talking to someone just getting acquainted with these topics. I didn't
want to get bogged down in technicalities, and I think I got the point
across.

>> (1) don't have instincts, goals or values of their own
>
> Do you have any specific reason to believe that such a thing is
> possible?  I don't believe it is, though I'll admit that the goal-set of
> an artificial intelligence might be very strange to a human.

Essentially I'm saying that the AI wouldn't have an ego in the sense
that humans do (self-centered goals, subconscious instincts, etc.),
and that certainly seems possible.

>> (2) may not even be conscious, even though they carry out superhuman
>> cognitive processing (whether this is possible is not yet known, at
>> least to me)
>
> I think you need to define your terms here.  It isn't clear to me what
> you are talking about when you talk about something which is engaging in
> superhuman cognitive processing not being conscious.  Having a non-local
> consciousness I could ... not understand, but accept.  I suspect that a
> non-centralized consciousness may be necessary for an intelligence to be
> very much superior to humans in cognitive processing.  Non-local is
> harder to understand, but may be necessary.  But it's not clear to me
> what you could mean when you talk about "not even be conscious, even
> though they carry out superhuman cognitive processing". Consciousness
> is one component of intelligence; it seems to occur at the interface
> between mind and language, so I suspect that it's related to
> serialization and, perhaps, to the logging of memories.

I know of no proof that a non-conscious system could not perform
high-level cognitive processing. Neither consciousness nor intelligence
is yet understood well enough to rule out philosophical zombies of
arbitrarily high intelligence. So I am allowing for both possibilities:
either every sufficiently intelligent system will be conscious, or some
will not be.

-- 
Aleksei Riikonen - http://www.iki.fi/aleksei
