Candice Schuster wrote:
Richard,
Your responses to me seem to go in roundabouts. No insult intended, however. You say the AI will in fact reach full consciousness. How on earth would that ever be possible?

I think I recently (last week or so) wrote out a reply to someone on the question of what a good explanation of "consciousness" might be (was it on this list?). I was implicitly referring to that explanation of consciousness. It makes the definite prediction that consciousness (subjective awareness, qualia, etc. ... what Chalmers called the Hard Problem of consciousness) is a direct result of an intelligent system being built with a sufficient level of complexity and self-reflection.

Make no mistake: the argument is long and tangled (I will write it up at length when I can), so I do not pretend to be trying to convince you of its validity here. All I am trying to do at this point is state that THAT is my current understanding of what would happen.

Let me rephrase that: we (a subset of the AI community) believe that we have discovered concrete reasons to predict that a certain type of organization in an intelligent system produces consciousness.

This is not meant to be one of those claims that can be summarized in a quick analogy or a quick demonstration, so there is no way for me to convince you quickly. All I can say is that we have very strong reasons to believe that it emerges.


You mentioned in previous posts that the AI would only be programmed with 'Nice feelings' and would only ever want to serve the good of mankind? If the AI has its own ability to think etc., what is stopping it from developing negative thoughts? The word 'feeling' in itself conjures up both good and bad. For instance... I am an AI... I've witnessed an act of injustice; seeing as I can feel and have consciousness, my consciousness makes me feel sad/angry?

Again, I have talked about this a few times before (I cannot remember the most recent discussion), but basically there are two parts to the mind: the thinking part and the motivational part. If the AGI has a motivational system driven by empathy for humans, and if it does not possess any of the negative motivations that plague people, then it would not react in a negative (violent, vengeful, resentful, etc.) way.
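
(To make that division concrete, here is a toy sketch in Python. It is purely illustrative: the class names, the candidate actions, and the empathy filter are all invented for this message, standing in for machinery that would in reality be vastly more complex.)

    # Toy illustration only: a mind split into a thinking part and a
    # motivational part. Every name here is invented; this is not a
    # real AGI design, just the shape of the separation.

    class ThinkingPart:
        """Generates candidate responses; has no goals of its own."""
        def propose(self, situation: str) -> list[str]:
            return ["help the victim", "ignore it", "take revenge"]

    class MotivationalPart:
        """Chooses among candidates. Built around empathy, with the
        negative human motivations simply never installed."""
        def choose(self, candidates: list[str]) -> str:
            peaceful = [c for c in candidates if c != "take revenge"]
            return peaceful[0]

    thinker = ThinkingPart()
    motives = MotivationalPart()
    thoughts = thinker.propose("witnessed an act of injustice")
    print(motives.choose(thoughts))
    # -> "help the victim": the thinking part can *represent* a
    #    vengeful act, but the motivational part never selects it.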

Did I not talk about that in my reply to you? How there is a difference between having consciousness and feeling motivations? Two completely separate mechanisms/explanations?



Hold on... that would not be possible, seeing as my owner has an 'Off' button he can push to avoid me feeling that way, and hey, I have only been programmed with 'Nice feelings' even though my AI creators have told the rest of the world I have a fully working consciousness. It's starting to sound a bit like me presenting myself to the world after my hippocampus has been removed, or, better yet, after I've had a 'full frontal labotomy'.

["Full Frontal Labotomy"? :-) You mean pre-frontal lobotomy, maybe. Either that or this is a great title for a movie about a lab full of scientists trapped in a nudist colony].

Not at all like that. Did you ever have a good day, when you were so relaxed that nothing could disturb your feelings of generosity to the world? Imagine a creature that genuinely felt like that, and simply never could have a bad day.

But to answer you more fully: all of this depends on exactly how the "motivation system" of humans and AGIs is designed. We can only really have that discussion in the context of a detailed knowledge of specifics, surely?


And you say the AI will have thoughts and feelings about the world around it? I shudder to think what a newly born, pure AI would make of the world around us now. Or is that your ultimate goal in this Utopia that you see, Richard? That the AIs will become like spiritual masters to us and make everything 'all better', so to speak, by creating little 'me' worlds for us very confused, 'life purpose'-seeking people?

No, they will just solve enough of the boring problems that we can enjoy the rest of life.

Please also note the ideas in my parallel discussion with Matt Mahoney: do not be tempted to think in terms of a Them-and-Us situation. We would have the ability to become just as knowledgeable as they are, at any time. We could choose our level of understanding on a day-to-day basis, the way we now choose our clothes. The same goes for them.

We would not be two species. Not master and servant. Just one species with more options than before.

[I can see I am going to have to write this out in more detail, just to avoid the confusion caused by brief glimpses of the larger picture].



Richard Loosemore



 > Date: Thu, 25 Oct 2007 19:02:35 -0400
 > From: [EMAIL PROTECTED]
 > To: singularity@v2.listbox.com
 > Subject: Re: [singularity] John Searle...
 >
 > Candice Schuster wrote:
 > > Richard,
 > >
 > > Thank you for a thought-provoking response. I admire your ability
 > > to think with both logic and reason. I think what Searle was trying
 > > to get at was this... and I have read the 'Chinese Room' argument...
 > > I think that what he was trying to say was: if the human brain
 > > breaks down code like a machine does, that does not make it
 > > understand the logic of the code; it is, after all, code. If you go
 > > back to basics, for example binary code, it becomes almost a
 > > sequence, and you are (well, some of us are, like machines) able to
 > > understand how to put the puzzle together again, but we may not
 > > understand the logic behind that code, i.e. the Chinese language as
 > > a whole.
 > >
 > > Although, for instance, the AI has the ability to decipher the code
 > > and respond, it does not understand the whole, which is funny in a
 > > way, as you call your cause 'Singularity'... which to me implies
 > > 'wholeness' for some reason.
 > >
 > > Regarding your comment on... shock, horror, they made an AI that
 > > has human cognitive thought processes: quite the contrary, Richard.
 > > If you and the rest of the AI community come up with the goods, I
 > > would be most intrigued to sit your AI down in front of me and ask
 > > it... 'Do you understand the code "SMILE"?'
 >
 > A general point about your reply.
 >
 > I think some people have a mental picture of what a computer does when
 > it is running an AI program, in which the computer does an extremely
 > simple bit of symbol manipulation, and the very "simplicity" of what is
 > happening in their imagined computer is what makes them think: this
 > machine is not really understanding anything at all.
 >
 > So, for example, if the computer were set up with a SMILE subroutine
 > that just pulled a few muscles around, and this SMILE subroutine were
 > triggered, say, when the audio detectors picked up the sound of
 > someone laughing, then this piece of code would not be understanding
 > or feeling a smile.
 >
 > I agree: it would not. Most other AI researchers would agree that such
 > a simple piece of code is not a system that "understands" anything.
 > (Not all would agree, but let's skirt that for the moment).
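 >
 > (Just to pin down what I mean by "simple", here is a toy sketch, in
 > Python, of such a SMILE subroutine. The function names and the
 > laughter test are invented for illustration; no real system is being
 > described.)
 >
 >     # A deliberately trivial stimulus-response program. Nothing in
 >     # it "understands" a smile: one hard-wired trigger, one
 >     # hard-wired response, and no self-reflection anywhere.
 >
 >     def audio_detects_laughter(sample: str) -> bool:
 >         return "haha" in sample.lower()
 >
 >     def smile_subroutine() -> None:
 >         print("pulling a few muscles around: :-)")
 >
 >     for sample in ["hello there", "Hahaha!"]:
 >         if audio_detects_laughter(sample):
 >             smile_subroutine()  # fires on the second sample only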
 >
 > But this is where a simple mental image of what goes on in a computer
 > can be a very misleading thing. If you thought that all AI programs
 > were just the same as this, then you might think it just as easy to
 > dismiss all AI programs with the same "this is not really
 > understanding" verdict.
 >
 > If Searle had only said that he objected to simple programs being
 > described as "conscious" or "self aware" then all power to him.
 >
 > So what happens in a real AI program that actually has all the machinery
 > to be intelligent? ALL of the machinery, mark you.
 >
 > Well, it is vastly more complex: a huge amount of processing happens,
 > and the "smile" response comes out for the right reasons.
 >
 > Why is that more than just a SMILE subroutine being triggered by the
 > audio detectors measuring the sound of laughter?
 >
 > Because this AI system is doing some very special things along with all
 > the smiling: it is thinking about its own thoughts, among other things,
 > and what we know (believe) is that when the system gets that complicated
 > and has that particular mix of self-reflection in it, the net result is
 > something that must talk about having an inner world of experience. It
 > will talk about qualia, it will talk about feelings .... and not because
 > it has been programmed to do that, but because when it tries to
 > understand the world it really does genuinely find those things.
 >
 > This is the step I mentioned in the last message I sent, and it is very
 > very subtle: when you try to think about what is going on in the AI,
 > you come to the inevitable conclusion that we are also "AI" systems, but
 > the truth is that all AI systems (natural and artificial) possess some
 > special properties: they have this thing that you describe as
 > subjective consciousness.
 >
 > This is difficult to talk about in such a short space, but the crude
 > summary is that if you make an AI extremely complex (with
 > self-reflection, and with no direct connections between things like a
 > smile and the causes of that smile) then that very complexity gives rise
 > to something that was not there before: consciousness.
 >
 >
 >
 > Richard Loosemore
 >



-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=57837698-6c57c1
