On 9 May 2015 at 11:24, meekerdb <[email protected]> wrote:

>  On 5/8/2015 2:58 PM, LizR wrote:
>
>  On 9 May 2015 at 09:02, meekerdb <[email protected]> wrote:
>
>>   On 5/8/2015 1:33 AM, LizR wrote:
>>
>>  On 8 May 2015 at 18:37, meekerdb <[email protected]> wrote:
>>
>>> On 07 May 2015, at 14:45, Bruce Kellett wrote:
>>>
>>>>
>>>>> We can use an original biological brain, or an equivalent digital
>>>>> replacement -- it does not make any significant difference to the 
>>>>> argument.
>>>>> The first point is that in some conscious experience, be it a dream or
>>>>> anything else, there might be a portion of the 'brain' (in quotes because
>>>>> it can be biological or digital) that is not activated, so this can be
>>>>> removed without affecting the conscious experience.
>>>>>
>>>>
>>>  This idea of removing unused parts of brain so only "active" elements
>>> remain, seems problematic to me and not just because of counterfactual
>>> correctness.  The ability to do this is implicit in the assumption that the
>>> physics of the brain is classical.
>>
>>
>>  But comp is based on the assumption that consciousness is the result of
>> classical computation. If that assumption's wrong then comp fails, of
>> course, from step 0 - no need to worry about the MGA.
>>
>>  Bruno points out that a classical computer can compute anything that a
>> quantum computer can, so it doesn't exactly fail; what I think it implies
>> is that the classical computation must include the "environment", i.e. all
>> the extra physical degrees of freedom and entanglement that make the brain
>> computation (approximately) classical.
>>
>
>  That sounds like putting the cart before the horse. The question is, can
> the brain and environment be extracted from the assumption that
> consciousness is classical computation? Which is, of course, still an open
> question.
>
>  True, it's a problem from either end.  If you just assume computation is
> fundamental then you have to get QM out of it and ALSO the approximate
> classicality of the physically realized computation.
>

Exactly. That is Bruno's problem.

>   Plus, assuming no quantum entanglement with the environment is involved
> in consciousness (as seems likely given the decoherence times of neurons
> etc)
>
>  That's not taking the QM seriously.  QM says that it's the decoherence
> due to entanglement with the environment that produces the classical
> behavior.
>

OK, I think even with a little brain I'm beginning to see the point here.
I'm not yet sure if it's relevant, however. I think Max Tegmark's point was
that the environment of a neuron is other neurons (and the surrounding
material - glia, blood, etc) and that everything in a brain is decohering
far faster than the timescales of consciousness. Why would taking the
QM seriously prevent the brain behaving as a classical computer on those
timescales? Or to move the question into a (perhaps) better known realm,
why would it stop a computer running an AI programme behaving as a
classical computer?
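For what it's worth, the gap Tegmark is pointing at is enormous, and a back-of-the-envelope comparison makes the argument concrete. The figures below are order-of-magnitude estimates taken from his 2000 paper on decoherence in brain processes (treat the exact exponents as approximate):

```python
# Rough comparison of the timescales in Tegmark's argument.
# Order-of-magnitude estimates from Tegmark (2000), "Importance of
# quantum decoherence in brain processes" -- approximate, not exact.

neuron_decoherence_s = 1e-13   # estimated decoherence time for neuron-scale superpositions
neural_dynamics_s = 1e-3       # fastest timescale of neural firing / cognition

# How many times over does a superposition decohere before a single
# neural event completes?
ratio = neural_dynamics_s / neuron_decoherence_s
print(f"decoherence is ~{ratio:.0e} times faster than neural dynamics")
```

A ratio of around ten billion is why, on this view, nothing quantum survives long enough to matter on the timescales relevant to cognition, and the brain can be treated as a classical computer at that level of description.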

If the brain is fundamentally different to an AI due to quantum effects,
that invalidates comp at step 0 (and possibly invalidates strong AI as
well).

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
Visit this group at http://groups.google.com/group/everything-list.