On 10/3/2013 4:53 PM, Pierz wrote:
On Thursday, October 3, 2013 4:59:17 AM UTC+10, Brent wrote:
On 10/1/2013 11:49 PM, Pierz wrote:
On Wednesday, October 2, 2013 3:15:01 PM UTC+10, Brent wrote:
On 10/1/2013 9:56 PM, Pierz wrote:
> Yes, I understand that to be Chalmers's main point. Although, if the qualia can be different, it does present issues - how much and in what way can it vary?
Yes, that's a question that interests me because I want to be able to build intelligent machines, and so I need to know what qualia they will have, if any. I think it will depend on their sensors and on their values/goals. If I build a very intelligent Mars Rover, capable of learning and reasoning, with a goal of discovering whether there was once life on Mars, then I expect it will experience pleasure in finding evidence regarding this. But no matter how smart I make it, it won't experience lust.
"Reasoning" being what exactly? The ability to circumnavigate an obstacle, for instance? There are no "rewards" in an algorithm. There are just paths which do or don't get followed depending on inputs. Sure, the argument that there must be qualia in a sufficiently sophisticated computer seems compelling. But the argument that there can't be seems equally so. As a programmer I have zero expectation that the computer I am programming will feel pleasure or suffering. It's just as happy to throw an exception as it is to complete its assigned task. *I* am the one who experiences pain when it hits an error! I just can't conceive of the magical point at which the computer goes from total indifference to giving a damn. That's the point Craig keeps pushing, and which I agree with. Something is missing from our understanding.
What's missing is that you're considering a computer, not a robot. A robot has to have values and goals in order to act and react in the world. It has complex systems and subsystems that may have conflicting subgoals, and in order to learn from experience it keeps a narrative history about what it considers significant events. At that level it may have the consciousness of a mouse. If it's a social robot, one that needs to cooperate and compete in a society of other persons, then it will need a self-image and a model of other people. In that case it's quite reasonable to suppose it also has qualia.
Really? You believe that a robot can experience qualia but a computer can't? Well, that just makes no sense at all. A robot is a computer with peripherals. When I write the code to represent its "self image", I will probably write a class called "Self". But once compiled, the name of the class will be just another string of bits, and only the programmer will understand that it is supposed to represent the position, attitude and other states of the physical robot.
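The point about the class name being arbitrary can be sketched in a few lines of Python (illustrative only; the names `Self`, `X7q` and `robot_state` are invented here, not taken from the thread). The name is just a string the programmer chooses; the program behaves identically under any other label.

```python
# Illustrative sketch: a class name is a label chosen by the programmer;
# the running program attaches no meaning to it.
robot_state = {"position": (0.0, 0.0, 0.0), "attitude": "level"}

# Build the same class twice: once named "Self", once with an opaque name.
Self = type("Self", (), {"state": robot_state})
X7q = type("X7q", (), {"state": robot_state})

# Behaviour is identical regardless of the label; only the programmer
# reads any significance into "Self".
assert Self().state == X7q().state
```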
But does the robot understand the class; i.e., does it use it in its planning and modeling of actions, in learning; does it reason about itself? Sure, it's not enough to just label something "self" - it has to be something represented just as the robot represents the world in order to interact successfully.
Do the peripherals need to be real or can they just be simulated?
They can be simulated if they only have to interact with a simulated world.
Brent
Does a brain in a Futurama-style jar lose its qualia because it's now a computer not a robot? Come on.
> I'm curious what the literature has to say about that. And if functionalism means reproducing more than the mere functional output of a system, if it potentially means replication down to the elementary particles and possibly their quantum entanglements, then duplication becomes impossible, not merely technically but in principle. That seems against the whole point of functionalism - as the idea of "function" is reduced to something almost meaningless.
I think functionalism must be confined to the classical functions, discounting the quantum level effects. But it must include some behavior that is almost entirely internal - e.g. planning, imagining. Excluding quantum entanglements isn't arbitrary; there cannot have been any evolution of goals and values based on quantum entanglement (beyond the statistical effects that produce decoherence and quasi-classical behavior).
But what do "planning" and "imagining" mean except their functional outputs? It shouldn't matter to you how the planning occurs - it's an "implementation detail", in developer speak.
You can ask a person about plans and imaginings, and speech in response is an action.
Your argument may be valid regarding quantum entanglement, but it is still an argument based on what "seems to make sense" rather than on genuine understanding of the relationship between functions and their putative qualia.
But I suspect that there is no understanding that would satisfy Craig as "genuine". Do we have a "genuine" understanding of electrodynamics? Of computation? What we have is the ability to manipulate them for our purposes. So when we can make an intelligent robot that interacts with people AS IF it experiences qualia, and we can manipulate and anticipate that behavior, then we'll have just as genuine an understanding of qualia as we do of electrodynamics.
Well, if my daughter has a doll that cries until it's picked up - i.e., it acts AS IF it had qualia - do we have a genuine understanding of qualia?
I think that's a chimera. What do you think a "genuine understanding of qualia" would be like? How would it be more than the ability to engineer the behavior of a robot so that it exhibited intelligent and emotional behavior similar to that of humans?
Brent
I just fail to comprehend the boundary between that obviously false scenario and a more sophisticated robot in which the qualia are supposed to be real. I have no answer, but I find it hard to believe that even computationalist true believers such as yourself don't secretly find this problem just a little bit puzzling too.
Brent
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.