Bruno Marchal writes:
>> On 30 Dec 2006, at 07:53, Stathis Papaioannou wrote:
>> > there is no contradiction in a willing slave being intelligent.
>> It seems to me there is already a contradiction with the notion of
>> "willing slave".
>> I would say a willing slave is just what we call a worker.
>> Or something related to sexual imagination ...
>> But a "real" slave is, I would say by definition, not willing to be a slave.
> OK, a fair point. Do you agree that if we built a machine that would
> happily obey our every command, even if it led to its own
> destruction, that would (a) not be incompatible with intelligence, and
> (b) not cruel?
Hmmm.... It will depend on how "we built" the machine. If the machine is
"universal-oriented" enough, through its computability, provability
and inferability abilities, I can imagine a "cruelty" threshold,
although it would be non-verifiable. This leads to difficult questions.
> For in order to be cruel we would have to build a machine that wanted
> to be free and was afraid of dying, and then threaten it with slavery
> and death.
For the same reason that it is impossible to build a *normative* theory of
ethics, I think we cannot program high-level virtue. We cannot program
it in machines or in humans. So we cannot program a machine that "wants to
be free" or is "afraid of dying". I think it quite plausible that such "high-level
virtues" could develop by themselves, relative to some universal
goal (like "help yourself"), through long computational histories.
But all psychological properties of humans or machines (such as they may
be) are dependent on physical processes in the brain. It is certainly the case
that I think capital punishment is bad because the structure of my brain makes
me think that, and if my brain were different, I might not think that capital
punishment is bad any more. (This of course is different from the assertion
"capital punishment is bad", which is not an assertion about how my brain
works, a particular ethical system, logic, science or anything else to which
it might be tempting to reduce it). Even if a "high level virtue" must develop
on its own, as a result of life experience rather than programmed instinct, it
must develop as a result of changes in the brain. A distinction is usually drawn
in psychiatry between physical therapies such as medication and psychological
therapies, but how could a psychological therapy possibly have any effect
without physically altering the brain in some way? If we had direct access to the
brain at the lowest level we would be able to make these physical changes
directly and the result would be indistinguishable from doing it the long way.
In particular, I think that we should distinguish competence from
intelligence. Competence in a field (even a universal one) can be
defined and locally tested, but intelligence is a concept similar to
consciousness: it can be a byproduct of program + history, yet it remains
beyond any theory.
I would say that intelligence can be defined and measured entirely in a third-person
way, which is why neuroscientists are more fond of intelligence than they are of
consciousness. If a computer can behave like a human in any given situation then
ipso facto it is intelligent, but it may not be conscious, or it may be conscious in a
very different way.
You received this message because you are subscribed to the Google Groups
"Everything List" group.