2009/4/29 Stathis Papaioannou <stath...@gmail.com>:
> 2009/4/29 Quentin Anciaux <allco...@gmail.com>:
>>>> In Bruno's Washington/Moscow thought experiment that information isn't
>>>> in your consciousness, although it's available to third-person
>>>> observers. My view of the experiment is that you would lose a bit of
>>>> consciousness, and that you can't slice consciousness arbitrarily
>>>> finely in time.
>>> Could the question be settled by actual experiment, i.e. asking the
>>> subject if they noticed anything unusual?
>>> Stathis Papaioannou
>> For this you would need an actual AI, and everybody would also have to
>> agree that this AI is conscious and not a zombie.
>> If you can settle that, then an interview should count as proof.
>> But I'm not sure you can prove the AI is conscious; by the same
>> argument, I'm not sure I could prove to you that I am.
> Well, you could just ask the teleported human.
Yes... but that needs working teleportation ;) I don't know if that is
easier to build than an AI :D
But as you say below, the zombie argument still stands. So beforehand
we would need an accepted "test" that tells whether an entity is
conscious or not. And I have the feeling that is the most difficult part
(besides constructing a teleportation device or an AI).
> If he says he feels
> fine and didn't notice anything other than the scenery changing, would
> that count for anything? I suppose you could argue that of course he
> would say that, since a gap in consciousness is by definition not
> noticeable, but then you end up with a variant of the zombie argument:
> he says everything feels OK, but in actual fact he experiences
> Stathis Papaioannou
All those moments will be lost in time, like tears in rain.
You received this message because you are subscribed to the Google Groups
"Everything List" group.