On 29.09.2012 21:28 meekerdb said the following:
> On 9/29/2012 5:43 AM, Evgenii Rudnyi wrote:
>>>> I have understood Brent in such a way that when engineers
>>>> develop a robot, they need only care about the functionality to
>>>> be achieved and can ignore consciousness altogether. Whether it
>>>> appears in the robot or not is not the business of the
>>>> engineers. Do you agree with such a statement or not?
>>> In my defense, I only said that the engineers could develop
>>> artificial intelligences without considering consciousness. I
>>> didn't say they *must* do so, and in fact I think they are
>>> ethically bound to consider it. John McCarthy already wrote
>>> on this years ago. And it has nothing to do with whether
>>> supervenience or comp is true. In either case an intelligent
>>> robot is likely to be a conscious being, and ethical
>>> considerations apply.
>> Dear Bruno and Brent,
>>
>> Frankly speaking, I do not quite understand your answers. When I
>> try to convert your thoughts into guidelines for engineers
>> developing robots, I get only something like the following.
>>
>> 1) When you make your design, do not care about consciousness;
>> just implement the required functions.
>
> Where did I say that? Don't paraphrase; quote.
It might well be that I have interpreted your words incorrectly. Sorry
if this is the case.
P.S. I would say that the text above

>>> In my defense, I only said that the engineers could develop
>>> artificial intelligences without considering consciousness.

belongs to you.
>> 2) When a robot is ready, it may have consciousness. We have no
>> clue how to check whether it has it, but you must consider the
>> ethical implications (say, shutting the robot down might be
>> equivalent to killing a conscious being).
>>
>> P.S. In my view, 1) and 2) imply epiphenomenalism.
You received this message because you are subscribed to the Google Groups
"Everything List" group.