On 12/31/2011 3:35 PM, Pierz wrote:
When you write things like that I'm left with the impression that you think consciousness is a thing, a soul, that moves to different bundles of computation, so there are some bundles that don't have any consciousness but could have if you "jumped to

Not wishing to pre-empt Bruno's reply, but I think you're mixing up 1-p
and 3-p. From the 3-p view, all branches are conscious, but I only
experience myself on one branch at a time, probabilistically according
to the measure of the computations. There's no individual soul; just,
in one sense, a single consciousness that experiences every possible state.

As for the Mars Rover, I'm curious to know this: if we programmed it
to avoid danger, would it experience fear?

If we programmed it to sacrifice other important values (like conserving power, or keeping all its parts) in order to avoid danger, I'd speculate that it, in some sense, "felt fear".
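To make that speculation concrete, here is a minimal sketch of what "sacrificing other values to avoid danger" could look like in a planner. All the names, weights, and scores are illustrative assumptions, not any real rover's software: the point is only that a heavy weight on danger avoidance makes the agent give up power and self-preservation when they conflict, which is the behavioral analogue of "fear" being discussed.

```python
# Hypothetical value-weighted planner; names and weights are invented.

# Danger avoidance is weighted far above the other values, so the
# planner will sacrifice them whenever the two conflict.
WEIGHTS = {"avoid_danger": 10.0, "conserve_power": 1.0, "preserve_parts": 2.0}

def choose_action(actions):
    """Pick the action whose value scores have the highest weighted sum."""
    def score(action):
        return sum(WEIGHTS[k] * v for k, v in action["values"].items())
    return max(actions, key=score)

# Two options: stay put (saves power, keeps parts intact) versus a
# costly retreat from a detected cliff edge.
stay = {"name": "stay",
        "values": {"avoid_danger": 0.1, "conserve_power": 0.9, "preserve_parts": 0.9}}
retreat = {"name": "retreat",
           "values": {"avoid_danger": 0.9, "conserve_power": 0.2, "preserve_parts": 0.5}}

print(choose_action([stay, retreat])["name"])  # -> retreat
```

Under these made-up weights the retreat scores 10.2 against 3.7 for staying put, so the planner "flees" even though fleeing costs power and risks its parts. Whether that trade-off amounts to felt fear is, of course, exactly the question at issue.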

Until we understand qualia, you're as much in the dark as we are on
this question. You assume the affirmative; we assume the negative.
That's why I sigh. Such arguments go nowhere but to a reassertion of
our biases and intuitions, and the result is unedifying.

And that's why I think questions of consciousness will ultimately be overtaken by events. The interesting questions will be how danger is recognized and avoided, how relations to others are managed, etc. And we will probably talk about them as if the AI is conscious, just by analogy to ourselves, while at a lower level we know which module is doing what and how changing it will change behavior. But nobody will ask "where's the consciousness?" any more than they ask "where's the vis viva?" of their automobile.

