On 1/13/2014 11:16 PM, LizR wrote:
On 14 January 2014 19:42, meekerdb <meeke...@verizon.net> wrote:

    On 1/13/2014 10:18 PM, LizR wrote:

So you don't think a discussion of what counts as an AI is a good idea? OK, that's fine by me (you're the one who wants to discuss it, after all!)

No, I meant I don't think it's a good idea to restrict "AI" to mean a *conscious* computer. I'm certainly willing to assume a conscious computer is possible. I don't think a philosopher's zombie is possible.

So let's just assume it's possible that a computer could be conscious, since that's presumably a consequence of Edgar's theory,

I don't know that Edgar agrees with that, although it would seem to be a consequence of his theory. He hasn't said what it would take for a robot to be conscious. Would intelligent behavior be enough?


and go back to the original discussion.


We were talking about whether a person can always know if they are in a simulated reality. Suppose the person is an AI inside a simulation. Would they necessarily know it was a simulation?

You received this message because you are subscribed to the Google Groups 
"Everything List" group.
Visit this group at http://groups.google.com/group/everything-list.
