Quentin Anciaux wrote:
> 2008/8/13 1Z <[EMAIL PROTECTED]>:

...
>>>>>>>>> No you devise this in 2 parts, I think only the abstract world is
>>>>>>>>> ontologically primary.
>>>>>>>> That is your conclusion. You cannot assume it in order to
>>>>>>>> argue for it.
>>>>>>> I do not assume them.
>>>>>> Then you need some other way of getting your multiple instantiations.
>>>>> Well, I believe (note the word) that we (the mind) are a computation,
>>>>> and as such I believe in strong AI, so that we will create conscious
>>>>> digital entities... Either these entities will be truly conscious (and
>>>>> it is possible for them to be conscious, as we have assumed that
>>>>> consciousness is a computational process) or they won't be. If they
>>>>> won't and never will be conscious, that is possible only if, contrary
>>>>> to the assumption, consciousness is not (only) a computational process.
>>>>> Now if consciousness is a computational process and we build an AI (I
>>>>> don't see how we couldn't if consciousness is a computation; what
>>>>> could prevent it?), then here you are with multiple implementations.
>>>> And if we don't build an AI, here you are without them. (And with
>>>> computationalism still true, and without any subjective
>>>> indeterminacy).
>>> If it is a computation, explain with a logical argument why we
>>> wouldn't... If the world is not destroyed tomorrow and consciousness
>>> is a computational process, then we'll build AI....
>>
>> There is no reason to build an AI duplicate of everybody,
>> and there is no reason to single out me. So this is another
>> appeal to coincidence.
> 
> I've never said that, and that's not the point. This AI could be
> duplicated and run in multiple instances. When does it die? When you
> pull the last plug of the last computer running it? By pulling out
> all devices capable of running it? By destroying the whole everything?

I've been following this back-and-forth with interest.  The above leads to an 
interesting question, which I will raise after a couple of background points. 
First, I don't think a conscious AI can exist independent of some environment 
of which it is conscious.  Of course this doesn't mean you can't create an AI 
which, like us, is conscious of this particular world.  Second, I think a 
conscious AI must necessarily remember and learn.  A consequence of these two 
is that if you copy an AI, the two copies will immediately start to diverge 
due to different experiences.  So the indeterminacy will immediately vanish. 
There will be two different consciousnesses, which is perfectly ordinary 
except that they will share a lot of memories.  So what does this have to do 
with MMW?

Brent Meeker


--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~----------~----~----~----~------~----~------~--~---