On Thu, Dec 22, 2011 at 11:45 PM, Joseph Knight <joseph.9...@gmail.com> wrote:

> On Thu, Dec 22, 2011 at 10:34 PM, Jason Resch <jasonre...@gmail.com> wrote:
>> On Thu, Dec 22, 2011 at 11:21 PM, Joseph Knight <joseph.9...@gmail.com> wrote:
>>> On Thu, Dec 22, 2011 at 6:13 PM, Jason Resch <jasonre...@gmail.com> wrote:
>>>> Joseph,
>>>> I found your post very interesting.  While I agree with your
>>>> conclusion, how I get there is a little different.
>>>> I think that while all of Alice's neuronal firings are being triggered
>>>> by random particles, she is a zombie.  It is less clear in the case of a
>>>> single malfunctioning neuron.  This is because of the modularity of our
>>>> brains: Different sections of the brain perform specific functions.  Some
>>>> neurons may serve only as communication links between different regions in
>>>> the brain, while others may be involved in processing.  I think that the
>>>> malfunction and correction of a "communication neuron" might not alter
>>>> Alice's experience, in the same way we could correct a faulty signal in her
>>>> optic nerve and not expect her experience to be affected.  I am less sure,
>>>> however, that a neuron involved in processing could have its function
>>>> replaced by a randomly received particle, as this changes the definition of
>>>> the machine.
>>>> Think of a register containing a bit '1'.  If the bit is '1' because
>>>> two inputs were received and the logical AND operation is applied, this is
>>>> an entirely different computation from two bits being ANDed, the result
>>>> placed in that register, then (regardless of the result) the bit '1' is set
>>>> in that register.  This erases any effect of the two input bits, and
>>>> redefines the computation altogether.  This 'set 1' instruction is much
>>>> like the particles received from the supernova causing neurons to fire.
>>>>  It is a very shallow computation, and in my opinion, not likely to lead to
>>>> any consciousness.
>>> I see what you are saying here, but I don't think this counterargument
>>> works, because the wiring (i.e., the logical rules) of Alice's neural
>>> network has not itself been changed by her malfunction -- only the
>>> individual inputs have.  The way those inputs are processed has not
>>> changed.
>> If every neuron is processing only inputs generated by the exploded star,
>> then the neurons might as well be completely isolated.  If the logical
>> rules are processing these star-generated inputs (equivalent to input
>> generated by a "set 1" instruction), then there would be no deep
>> computations, no recursion, etc.  Would you argue that Alice's neurons,
>> firing in complete physical isolation from each other, could create
>> consciousness?  Assume each neuron fires at the same times it would if it
>> were still in Alice's functioning mind.
> I am truly agnostic. I really have no earthly idea. But assuming
> computationalism, as in the MGA, I have to say yes. With this assumption,
> the particular physical implementation of a program, however bizarre, is
> not relevant -- only the execution of the algorithm matters.

I agree that only the algorithm matters, but my contention is that in this
case, the isolated neurons operating on input from the star do not
implement the same algorithm.  A bunch of logic gates from a CPU, physically
separated (with no intercommunication), has no causal interdependence; the
bits those gates contain bear no relation to one another.
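
To make that contrast concrete, here is a minimal sketch (in Python; the
function names and the tiny two-gate circuit are purely illustrative, not
anyone's actual model).  The first pair of functions is the "set 1" register
from my earlier message; the second pair is a gate that reads another gate's
live output versus one that is merely driven to a pre-recorded value.

# Purely illustrative sketch; the names are mine and stand in for nothing
# in particular.

def live_and(a, b):
    """Register holds AND(a, b): its value depends on both inputs."""
    return a & b

def overridden_and(a, b):
    """AND is computed, but the register is then unconditionally set to 1
    (the 'set 1' instruction / supernova particle): the inputs no longer
    make any counterfactual difference."""
    _ = a & b          # result computed...
    return 1           # ...then overwritten

def connected_chain(x):
    """Gate 2 reads gate 1's live output: genuine causal interdependence."""
    g1 = x & 1
    g2 = 1 - g1
    return g1, g2

def replayed_chain(recorded_g1, recorded_g2):
    """Each gate is simply driven to a pre-recorded value; the trace can be
    identical, but neither value depends on the other."""
    return recorded_g1, recorded_g2

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, live_and(a, b), overridden_and(a, b))
    print(connected_chain(1))        # (1, 0), computed
    print(replayed_chain(1, 0))      # (1, 0), merely replayed

The second function of each pair can reproduce exactly the same trace of
values, yet nothing about it depends on the inputs or on the other gate --
that is the sense in which I call the computation shallow.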

> As Bruno has emphasized again and again, if you reject this, then you
> reject comp as well.

I accept comp, but I reject the MGA (as I currently understand it).  It may
be that I do not understand the MGA.  My objection is that I find it less
than clear whether Alice remains conscious when her neurons are processing
shallow input.

> What you say is interesting though, because I think it bears on the issue
> of the unity of consciousness
> <http://plato.stanford.edu/entries/consciousness-unity/>.  Perhaps such a
> completely disconnected brain would be incapable of experiencing what we
> humans call the "unity of consciousness".

If the neurons in this scenario really do implement the same algorithm,
then they would necessarily experience the same unity of consciousness.

