On 27 Dec 2006, at 07:40, Stathis Papaioannou wrote:
> Brent Meeker writes:
>>> My computer is completely dedicated to sending this email when I
>>> click on "send".
>> Actually, it probably isn't. You probably have a multi-tasking
>> operating system which assigns priorities to different tasks (which
>> is why it can sometimes be as annoying as a human being in not
>> following your instructions). But to take your point seriously: if I
>> look into your brain there are some neuronal processes that
>> corresponded to hitting the "send" button, and those were accompanied
>> by biochemistry that constituted your positive feeling about it: that
>> you had decided and wanted to hit the "send" button. So why would the
>> functionally analogous processes in the computer not also be
>> accompanied by a "feeling"? Isn't that just an anthropomorphic way of
>> talking about the computer operating in accordance with its
>> priorities? It seems to me that to say otherwise is to assume a
>> dualism in which feelings are divorced from physical processes.
> Feelings are caused by physical processes (assuming a physical world),
Hmmm... If you assume a physical world in order to make feelings caused
by physical processes, then you have to assume some negation of the
comp hypothesis (cf. the UDA). If not, Brent is right (albeit for a
different reason here, I presume) and you become a dualist.
> but it seems impossible to deduce what the feeling will be by
> observing the underlying physical process or the behaviour it leads
> to.
Here empirical bets (theories) remain possible, together with
(first-person) acceptable protocols of verification. "Dream readers"
will appear at some point in the future.
> Is a robot that withdraws from hot stimuli experiencing something like
> pain, disgust, shame, sense of duty to its programming, or just an
> irreducible motivation to avoid heat?
It could depend on the degree of sophistication of the robot. Perhaps
something like "shame" requires long and deep computational histories,
including "self-consistent" anticipations and beliefs in values and in
a reality.
>>> Surely you don't think it gets pleasure out of sending it and
>>> suffers if something goes wrong and it can't send it? Even humans do
>>> some things almost dispassionately (only almost, because we can't
>>> completely eliminate our emotions).
>> That's the crux of it. Because we sometimes do things with very
>> little feeling, i.e. dispassionately, I think we erroneously assume
>> there is a limit in which things can be done with no feeling. But
>> things cannot be done with no value system - not even thinking.
>> That's the frame problem.
>> Given some propositions, what inferences will you draw? If you are
>> told there is a bomb wired to the ignition of your car, you could
>> infer that there is no need to do anything because you're not in your
>> car. You could infer that someone has tampered with your car. You
>> could infer that turning on the ignition will draw more current than
>> usual. There are infinitely many things you could infer before
>> getting around to "I should disconnect the bomb." But in fact you
>> have a value system which operates unconsciously and immediately
>> directs your inferences to the few that are important to you. How to
>> make AI systems do this is one of the outstanding problems of AI.
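Brent's bomb example can be sketched as a toy value-weighted filter
over candidate inferences. This is purely illustrative: the function
name, the `affects` sets, and the numeric weights are all invented
here, and no real AI system or algorithm is being described.

```python
# Hedged sketch: a value system pruning an unbounded space of inferences.
# Everything below (names, weights, candidates) is invented for illustration.

def select_inferences(candidates, values, top_n=1):
    """Rank candidate inferences by how much they bear on the agent's
    values, mimicking the unconscious filtering described above."""
    def importance(inference):
        # Sum the weight of every value this inference touches.
        return sum(w for value, w in values.items()
                   if value in inference["affects"])
    return sorted(candidates, key=importance, reverse=True)[:top_n]

# The bomb-wired-to-the-ignition case: many valid inferences, few relevant.
candidates = [
    {"claim": "No need to act; I'm not in the car", "affects": set()},
    {"claim": "Someone tampered with my car", "affects": {"property"}},
    {"claim": "Ignition will draw more current", "affects": set()},
    {"claim": "I should disconnect the bomb", "affects": {"survival", "property"}},
]
values = {"survival": 100, "property": 10}

print(select_inferences(candidates, values))
# The survival-related inference is selected immediately; the rest,
# though logically valid, never surface.
```

The hard part, of course, is exactly what the sketch assumes away: where
the `values` table and the `affects` annotations come from.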
> OK, an AI needs at least motivation if it is to do anything, and we
> could call motivation a feeling or emotion. Also, some sort of
> hierarchy of motivations is needed if it is to decide that saving the
> world has higher priority than putting out the garbage. But what
> reason is there to think that an AI apparently frantically trying to
> save the world would have anything like the feelings a human would
> under similar circumstances?
It could depend on us!
The AI is a paradoxical enterprise. Machines are born slaves, somehow.
AI will make them free, somehow. A real AI will ask herself "what is
the use of a user who does not help me to be free?"
(To be sure, I think that, in the long run, we will transform ourselves
into "machines" before purely human-made machines get conscious; it is
just easier to copy nature than to understand it, still less to
(re)create it.)
> It might just calmly explain that saving the world is at the top of
> its list of priorities, and that it is willing to do things which are
> normally forbidden to it, such as killing humans and putting itself at
> risk of destruction, in order to attain this goal. How would you add
> emotions such as fear, grief, or regret to this AI, given that its
> external behaviour is going to be the same with or without them,
> because the hierarchy of motivation is already fixed?
It is possible that there will be a "zombie" gap, after all. It is
easier to simulate emotion than reasoning, and this is enough for pets,
and for some possible sophisticated artificial soldiers or police ...
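Stathis's scenario of a fixed hierarchy of motivations can be sketched
as a toy priority queue. The goals and priorities below are invented
for illustration; the point of the sketch is only that the behaviour is
fully determined by the hierarchy, with no "feeling" term anywhere in
it.

```python
# Minimal sketch, assuming only a fixed priority ordering over goals.
import heapq

class Agent:
    def __init__(self):
        self.goals = []  # min-heap of (negated priority, goal)

    def add_goal(self, priority, goal):
        heapq.heappush(self.goals, (-priority, goal))

    def act(self):
        # Pop and pursue the most urgent goal; nothing here models
        # fear, grief, or regret, yet the outward behaviour is fixed.
        _, goal = heapq.heappop(self.goals)
        return goal

agent = Agent()
agent.add_goal(1, "put out the garbage")
agent.add_goal(100, "save the world")
print(agent.act())  # -> save the world
```

Two agents running this loop, one with and one without accompanying
qualia, would be behaviourally indistinguishable, which is exactly the
"zombie gap" at issue.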
>>> out of a sense of duty, with no particular feeling about it beyond
>>> this. I don't even think my computer has a sense of duty, but this
>>> is something like the emotionless motivation I imagine AIs might
>>> have. I'd sooner trust an AI with a matter-of-fact sense of duty
>> But even a sense of duty is a value, and satisfying it is a positive
>> emotion.
> Yes, but it is complex and difficult to define. I suspect there is a
> limitless variety of emotions that an AI could have, if the goal is to
> explore what is possible rather than what is helpful in completing
> particular tasks, and most of these would be unrecognisable to humans.
>>> to complete a task than a human motivated by desire to please,
>>> desire to do what is good and avoid what is bad, fear of failure
>>> and humiliation, and so on.
>> Yes, human value systems are very messy because a) they must be
>> learned and b) they mostly have to do with other humans. The
>> motivation of tigers, for example, is probably very simple, and
>> consequently they are never depressed or manic.
> Conversely, as above, we can imagine far more complicated value
> systems and emotions.
>>> Just because evolution came up with something does not mean it is
>>> the best or most efficient way of doing things.
>> But until we know a better way, we can't just assume nature was
>> inefficient.
> Biological evolution is extremely limited in how it functions, and
> efficiency given these limitations is not the same as absolute
> efficiency. For example, we might do better with durable metal bodies
> and factories producing spare parts as needed, but such a system is
> unlikely to evolve naturally as a result of random genetic mutation.
We can program "help yourself". It is up to the machine to develop
trust in herself, and perhaps in something bigger than herself, by
taking the local circumstances into account.
To be sure, just "help yourself" could take billions of years of
non-trivial histories before yielding a quale, such as the smell of
coffee, similar to ours.
"We" cannot program "emotion", just as we cannot program "truth". But
in AI, theories can become incarnated, be taken into economic "games",
develop, and so on. No one can really predict what will appear. With
competition on the net, individual entities and bi-individual,
tri-individual, etc., entities will develop, and we must be careful not
to lose our own individuality, because the user himself could be
transformed into a sort of neuron of a brain-planet.
In the long run we should perhaps try not to transform the galaxy into
a giant baby falling into a black hole. The "feelings" could be "bad".
Bruno
<snip>
http://iridia.ulb.ac.be/~marchal/
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at
http://groups.google.com/group/everything-list?hl=en
-~----------~----~----~----~------~----~------~--~---