On 04.02.2012 01:10 meekerdb said the following:
On 2/3/2012 1:50 PM, Evgenii Rudnyi wrote:
On 03.02.2012 22:07 meekerdb said the following:
On 2/3/2012 12:23 PM, Evgenii Rudnyi wrote:
On 02.02.2012 21:49 meekerdb said the following:
On 2/2/2012 12:38 PM, Craig Weinberg wrote:
On Jan 30, 6:54 pm, meekerdb <meeke...@verizon.net> wrote:
On 1/30/2012 3:14 PM, Craig Weinberg wrote:

On Jan 30, 6:08 pm, meekerdb <meeke...@verizon.net> wrote:
On 1/30/2012 2:52 PM, Craig Weinberg wrote:
So kind of you to inform us of your unsupported opinion.
I was commenting on your unsupported opinion.
Except that my opinion is supported by the fact that
within the context of chess the machine acts just like a
person who has those emotions. So it has at least the
functional equivalent of those emotions. Whereas your
opinion is simple prejudice.
I agree my opinion would be simple prejudice had we not
already been over this issue a dozen times. My view is that
the whole idea that there can be a 'functional equivalent
of emotions' is completely unsupported. I give examples of
puppets, movies, trashcans that say THANK YOU,
voicemail...all of these things demonstrate that there need
not be any connection at all between function and interior
experience.

Except that in every case there is an emotion in your
examples... it's just the emotion of the puppeteer, the
screenwriter, the trashcan painter. But in the case of the
chess-playing computer, there is no person providing the
'emotion', because the 'emotion' depends on complex and
unforeseeable events. Hence it is appropriate to attribute
the 'emotion' to the computer/program.

Brent

Craig's position that computers in their present form do
not have emotions is not unique, as emotions belong to
consciousness. A quote from my favorite book:

Jeffrey A. Gray, Consciousness: Creeping up on the Hard
Problem.

The last sentence from the chapter "10.2 Conscious computers?"

p. 128 "Our further discussion here, however, will take it as
established that his can never happen."

Now the last paragraph from the chapter "10.3 Conscious
robots?"

p. 130. "So, while we may grant robots the power to form
meaningful categorical representations at a level reached by
the unconscious brain and by the behaviour controlled by the
unconscious brain, we should remain doubtful whether they are
likely to experience conscious percepts. This conclusion should
not, however, be over-interpreted. It does not necessarily
imply that human beings will never be able to build artefacts
with conscious experience. That will depend on how the trick of
consciousness is done. If and when we know the trick, it may be
possible to duplicate it. But the mere provision of behavioural
dispositions is unlikely to be up to the mark."

If we say that computers right now have emotions, then we
must be able to define exactly the difference between
unconscious and conscious experience in the computer (for
example, in the computer that beat Kasparov). Can you do it?

Can you do it for people? For yourself? No. Experiments show
that people confuse the source of their own emotions. So your
requirement that we be able to "exactly define" is just something
you've invented.

Brent

I believe that there is at least a small difference. Presumably we
know everything about the computer that has played chess. Then it
seems that a hypothesis about emotions in that computer could be
verified without a problem - hence my notion of "exactly define".
On the other hand, consciousness remains a hard problem, and
here "exactly define" does not work.

However, the latter does not mean that consciousness does not exist
as a phenomenon. Take life, for example. I would say that
there is no good definition of what life is ("exactly define" does
not work), yet this does not prevent science from researching it.
The same should hold for conscious experience.

Evgenii

So you've reversed your theory? If computers have emotions like
people, we must *not* be able to exactly define them. And if we can
exactly define them, that must prove they are not like people?

No, I have not. My point was that we can check the statement "a computer has emotions" exactly. Then it would be possible to check whether such a definition applies to people. I have nothing against such an approach: make a hypothesis about what an emotion in a computer is, research it, and then try to apply this concept to people.

Yet if we go in the other direction, from people to computers, then first we should research what an emotion in a human being is. Here is the difference with the computer: we cannot right now give a strict definition. We can, though, still research emotions in people.

Actually, if we've made an intelligent chess-playing computer, one
that learns from experience, we probably don't know everything about
it. We might be able to find out - but only in the sense that in
principle we could find out all the neural connections and functions
in a human brain. It's probably easier and more certain to just watch
behavior.

Brent

In the case of a computer, we have everything we need to state our hypothesis about its behavior in a precise language and then to test it. Why do you suggest not doing it?

Evgenii

P.S. For those who would like to learn Artificial Intelligence: I have just received an announcement that there will be a new free course:

AI for Robotics

http://www.udacity.com

"Our goal is to teach you to program a self-driving car in 7 weeks."

