On 3/9/2010 2:38 AM, Stathis Papaioannou wrote:
On 9 March 2010 09:06, Jack Mallah <jackmal...@yahoo.com> wrote:

If consciousness supervenes on the physical realization of a computation, 
including the inactive part, it means you attach consciousness to an unknown 
physical phenomenon. It is a magical move that blurs the difficulty.
There is no new physics or magic involved in taking laws and counterfactuals 
into account, obviously.  So you seem to be just talking nonsense.

The only charitable interpretation of what you are saying that I can think of 
is that, like Jesse Mazer, you don't think that details of situations that 
don't occur could have any effect on consciousness.  Did you follow the 
'Factual Implications Conjecture' (FIC)?  I do find it basically plausible, and 
it's no problem for physicalism.

For example, suppose we have a pair of black boxes, A and B.  The external 
functioning of each box is simple: it takes a single bit as input and outputs 
a single bit with the same value.  So they are trivial identity gates.  We can 
insert them into our computer with no problem.
Suppose that in the actual run, A comes into play, while B does not.

The thing about these boxes is that, while their input-output relations are 
simple, inside they are very complex Rube Goldberg devices.  If you studied 
the schematics of these devices, it would be very hard to predict their 
functioning without actually doing the experiments.
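
As a rough sketch (the Python below is purely illustrative; the class name 
and internals are invented, since the actual schematics are never given), a 
box of this kind might look like:

    import random

    class RubeGoldbergBox:
        """Externally a trivial gate: the output bit always equals the input.
        Internally, a convoluted mechanism happens to produce that behaviour."""
        def __init__(self, seed):
            self.rng = random.Random(seed)  # stands in for the complex internals

        def output(self, bit):
            r = self.rng.getrandbits(1)  # internal activity obscures the signal...
            scrambled = bit ^ r
            return scrambled ^ r         # ...yet the output always equals the input

    # Predicting the box's behaviour from its internals is hard in general;
    # running it ("doing the experiments") shows it is an identity gate.
    box_a = RubeGoldbergBox(seed=1)
    assert all(box_a.output(b) == b for _ in range(100) for b in (0, 1))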

Now, if box A had functioned differently, the physical activity in our 
computer would have been different.  But there is a chain of causality that 
makes it work.  If you reject the idea that such a system could play a role in 
consciousness, I would characterize that as a variant of the well-known Chinese 
Room argument.  I don't agree that it's a problem.

It's harder to believe that the way in which box B functions could matter.  
Since it didn't come into play, perhaps no one knows what it would have done.  
That's why I agree that the FIC is plausible.  However, in principle, there 
would be no 'magic' involved even if the functioning of B did matter.  It's a 
part of the overall system, and the overall system implements the computation.
But the consciousness of the system would be the same *whatever* the
mechanism inside box A, wouldn't it? Suppose box A contains a
probabilistic mechanism that displays the right I/O behaviour 99% of
the time. Would the consciousness of the system be perfectly normal
until the box misbehaved, or would the consciousness of the system be
(somehow) 1% diminished even while the box was functioning
appropriately? The latter idea seems to me to invoke magic, as if the
system "knows" there is a dodgy box in there even if there is no
evidence of it.
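
For concreteness, a minimal sketch of such a box (hypothetical code; the 99% 
figure comes from the paragraph above, everything else is invented):

    import random

    class DodgyBox:
        """Acts as an identity gate 99% of the time; flips the bit otherwise."""
        def __init__(self, error_rate=0.01, seed=None):
            self.error_rate = error_rate
            self.rng = random.Random(seed)

        def output(self, bit):
            if self.rng.random() < self.error_rate:
                return bit ^ 1  # misbehaves: flips the bit
            return bit          # behaves correctly: passes the bit through

On any run where the error branch never fires, the physical activity is 
identical to that of a fully reliable box, which is what makes the "1% 
diminished" option look like magic.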



That assumes that consciousness is localized, i.e. not in the box. But consciousness may be a more systemic, holistic phenomenon.

Here's an interesting theory of consciousness in which counterfactuals would make a difference.

http://ntp.neuroscience.wisc.edu/faculty/fac-art/tononiconsciousness.pdf

Brent

