Brent,

I think that the only way out of these dilemmas is to accept the brain as a self-organizing neural network, and consciousness as an emergent phenomenon that we could survive without. In Marvin Chester's Primer of Quantum Mechanics, he states, "The most important dictum of quantum mechanics is this one: WHAT YOU CAN MEASURE IS WHAT YOU CAN KNOW." Does that apply to the macroscopic world? When we have built a conscious neural network with memristor technology, we may get some real insight into these issues, assuming we will be able to tell that it is conscious. Reference: Christof Koch and Giulio Tononi, "Can a machine be conscious? Yes, and a new Turing test might prove it," IEEE Spectrum, June 2008. Also of interest regarding the neurobiology of consciousness: Christof Koch, The Quest for Consciousness (Roberts, 2004), with a foreword by Francis Crick.


William


On Mar 17, 2010, at 10:36 PM, Brent Meeker wrote:

On 3/17/2010 9:28 PM, Stathis Papaioannou wrote:

On 18 March 2010 04:34, Brent Meeker <meeke...@dslextreme.com> wrote:


However, I think there is something in the above that creates the "just a recording" problem. It's the hypothesis that the black box reproduces the I/O behavior. This implies the black box realizes a function, not a recording. But then the argument slips over to replacing the black box with a recording which just happens to produce the same I/O, and we're led to the absurdum that a recording is conscious. So which step of the argument should we reject? The plausible candidate is the different response to counterfactuals that the functional box and the recording realize. That would seem like magic - a different response depending on all the things that don't happen - except that in the MWI of QM all those counterfactuals are available to make a difference.

I think that was Jack's problem with the fading qualia argument: it
would imply that a recording or random process could be conscious,
which is a no-no. He therefore contrives to explain how fading qualia
(with identical behaviour) could in fact happen. But I don't buy it: I
still think the idea of the partial zombie is incoherent.

If a chunk were removed from my computer's CPU and replaced with a
black box which accidentally reproduced the I/O behaviour of the
missing part, the computer would function perfectly normally. We would
not say that it isn't "really" running Windows and Firefox. Why do we
say this about consciousness?


Is it coherent to say a black box "accidentally" reproduces the I/O? It is over some relatively small number of I/Os, but over a large enough number and range to sustain human behavior - that seems very doubtful. One would be tempted to say the black box was obeying a "natural law". It would be the same as the problem of induction. How do we know natural laws are consistent? Because we define them to be so.

Brent

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-l...@googlegroups.com.
To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com . For more options, visit this group at http://groups.google.com/group/everything-list?hl=en .

