On 03 Feb 2012, at 21:23, Evgenii Rudnyi wrote:

On 02.02.2012 21:49 meekerdb said the following:
On 2/2/2012 12:38 PM, Craig Weinberg wrote:
On Jan 30, 6:54 pm, meekerdb<meeke...@verizon.net> wrote:
On 1/30/2012 3:14 PM, Craig Weinberg wrote:

On Jan 30, 6:08 pm, meekerdb<meeke...@verizon.net> wrote:
On 1/30/2012 2:52 PM, Craig Weinberg wrote:
So kind of you to inform us of your unsupported opinion.
I was commenting on your unsupported opinion.
Except that my opinion is supported by the fact that within the
context of chess the machine acts just like a person who had
those emotions. So it had at least the functional equivalent of
those emotions. Whereas your opinion is simple prejudice.
I agree my opinion would be simple prejudice had we not already
been over this issue a dozen times. My view is that the whole idea
that there can be a 'functional equivalent of emotions' is
completely unsupported. I give examples of puppets, movies,
trashcans that say THANK YOU, voicemail...all of these things
demonstrate that there need not be any connection at all between
function and interior experience.

Except that in every case there is an emotion in your examples...it's
just the emotion of the puppeteer, the screenwriter, the trashcan
painter. But in the case of the chess playing computer, there is no
person providing the 'emotion' because the 'emotion' depends on
complex and unforeseeable events. Hence it is appropriate to
attribute the 'emotion' to the computer/program.

Brent

Craig's position that computers in their present form do not have emotions is not unique, as emotions belong to consciousness. A quote from my favorite book:

Jeffrey A. Gray, Consciousness: Creeping up on the Hard Problem.

The last sentence from the chapter "10.2 Conscious computers?":

p. 128 "Our further discussion here, however, will take it as established that this can never happen."

Now the last paragraph from the chapter "10.3 Conscious robots?":

p. 130 "So, while we may grant robots the power to form meaningful categorical representations at a level reached by the unconscious brain and by the behaviour controlled by the unconscious brain, we should remain doubtful whether they are likely to experience conscious percepts. This conclusion should not, however, be over-interpreted. It does not necessarily imply that human beings will never be able to build artefacts with conscious experience. That will depend on how the trick of consciousness is done. If and when we know the trick, it may be possible to duplicate it. But the mere provision of behavioural dispositions is unlikely to be up to the mark."

If we say that computers right now have emotions, then we must be able to define exactly the difference between unconscious and conscious experience in a computer (for example, in the computer that beat Kasparov). Can you do it?

Yes, that is the point of AUDA. We can do it within a theoretical framework, once we accept some axiomatic theory of knowledge. Also, if your theory is that we (in the third-person sense) are not Turing emulable, you have to explain to us why, and what that adds to the explanation. With comp, the trick of both consciousness and matter is not entirely computable. You have to resist a reductionist conception of numbers and machines.
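
To give a bare sketch of how, simplifying a lot: write Bp for "the machine proves p" (Gödel's beweisbar) and Dp for ~B~p (consistency). The Theaetetical variants then define the hypostases, roughly:

  truth:       p
  belief:      Bp
  knowledge:   Bp & p
  observation: Bp & Dp
  sensation:   Bp & Dp & p

By incompleteness these give genuinely different logics for the machine, and the last ones are not communicable by the machine about herself; that is where the sensible, non-communicable modes come from.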

No computer ever has emotions "right now"; they *always* have "right now" emotions. With comp, the mind-body link is a bit tricky. Real consciousness is better seen as associated with an infinity of computations rather than with the single one that years of local evolution have programmed us to assume.



Hence I personally find this particular position of Craig's to be supported.

You might have missed the discovery of the universal machine and its self-reference logic.

Clark is right on this: emotions are easy, despite being able to run very deep, and to govern us. Easy, but not so easy: you need the sensible-matter non-communicable hypostases.

The emotion of your laptop is unknown, and unmanifested, because your laptop has no deep persistent self-reference ability to share with you. We want a slave, and would be anxious in front of a machine taking too much independence.

Bruno


http://iridia.ulb.ac.be/~marchal/


