Joseph,

I found your post very interesting.  While I agree with your conclusion,
how I get there is a little different.

I think that once all of Alice's neuronal firings are being triggered by
random particles, she is a zombie.  The case of a single malfunctioning
neuron is less clear, because of the modularity of our brains: different
sections of the brain perform specific functions.  Some neurons
may serve only as communication links between different regions in the
brain, while others may be involved in processing.  I think that the
malfunction and correction of a "communication neuron" might not alter
Alice's experience, in the same way we could correct a faulty signal in her
optic nerve and not expect her experience to be affected.  I am less sure,
however, that a neuron involved in processing could have its function
replaced by a randomly received particle, as this changes the definition of
the machine.

Think of a register containing the bit '1'.  If the bit is '1' because two
inputs were received and their logical AND was stored, that is an entirely
different computation from one in which two bits are ANDed, the result is
placed in the register, and then (regardless of that result) the bit '1' is
written to the register.  The overwrite erases any effect of the two input
bits and redefines the computation altogether.  This 'set 1' instruction is
much like the particles from the supernova causing neurons to fire: it is a
very shallow computation and, in my opinion, not likely to give rise to any
consciousness.
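
To make this concrete, here is a minimal Python sketch (the names and the
two-input setup are only illustrative, not anyone's actual model):

    # Genuine computation: the register's final state depends on the inputs.
    def and_computation(a, b):
        register = a and b   # the AND result is what ends up stored
        return register

    # 'Set 1' variant: the AND is still performed, but the register is then
    # overwritten with 1 regardless, erasing any dependence on the inputs,
    # much like the lucky particle forcing the neuron to fire anyway.
    def overwritten_computation(a, b):
        register = a and b   # this result is immediately discarded
        register = 1
        return register

    assert and_computation(1, 0) == 0            # distinguishes its inputs
    assert overwritten_computation(1, 0) == 1    # constant, input-independent

The two functions agree whenever both inputs happen to be 1, but only the
first one is actually computing AND.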

Jason

On Thu, Dec 22, 2011 at 5:27 PM, Joseph Knight <joseph.9...@gmail.com> wrote:

> Hello everyone and everything,
>
>
> I have pompously made my own thread for this, even though we have another
> MGA thread going, because the other one (sigh, I created that one too)
> seems to have split into at least two different discussions, both of which
> are largely different from what I have to say, so I want to avoid confusion.
>
>
> Here, I will explain why I believe the Movie Graph Argument (MGA) is
> invalid. I will start with an exegesis of my understanding of the MGA, so
> that Bruno or others can point out if I have failed to understand some
> important aspect of the argument. Then I will explain what is wrong. I
> believe confusion regarding the concept of supervenience has been
> responsible for some invalid reasoning. (At the end I will also explain why
> I find Maudlin’s thought experiment to be inconclusive.)
>
>
> As it is explained here
> (http://groups.google.com/group/everything-list/browse_thread/thread/201ce36c784b2795/aa1e30fe5b731a40),
> here
> (http://groups.google.com/group/everything-list/browse_thread/thread/18539e96f75bb740/b748e386a6795f3c),
> and here
> (http://groups.google.com/group/everything-list/browse_thread/thread/a0e1758bf03bc080/6f9f14d6fb505261),
> the MGA consists of three parts. Throughout the argument we are assuming
> comp and materialism to be true.
>
>
> *The MGA*
>
>
> In *Part 1*, Bruno asks us to consider Alice. Alice is a conscious being.
> Alice already has an artificial brain, to make the reasoning easier. We are
> assuming here (with no loss of generality) that, under normal
> circumstances, Alice’s consciousness supervenes on this artificial brain.
> Alice is taking a math exam, when at a certain moment one of the logic
> gates A fails to signal logic gate B. At this precise moment, however, a
> particle arrives from some far-away cosmic explosion and triggers gate B
> anyway. Assuming comp we (pretty safely) conclude that Alice’s
> consciousness is unaffected by this change in causation – after all, the
> computation has been performed. Moreover, we can assume any number –
> thousands, say – of such failures in Alice’s brain, with lucky cosmic rays
> arriving to save the day. Indeed, *all* of Alice’s neurons could be
> disabled, with cosmic rays triggering each one in just the right way so as
> to maintain her consciousness. Bruno (wisely, in my opinion) likes to end
> the steps of his argument with questions. At the end of MGA 1, he asks, is
> Alice a zombie during the exam? We are really forced to say that she isn’t,
> because of our comp assumption. So Alice is just as conscious as she was
> before her brain started short-circuiting.
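>
> (A toy sketch of the substitution, purely illustrative and not part of
> Bruno's text: a broken gate whose missing signal happens to be supplied
> from outside produces the same downstream output, which is why comp then
> forces us to say the consciousness is preserved.)
>
>     def gate_b(signal_from_a, gate_works, cosmic_ray_hits):
>         # Normally gate B fires iff it receives A's signal; when broken,
>         # it fires only if a well-timed cosmic ray happens to trigger it.
>         if gate_works:
>             return signal_from_a
>         return cosmic_ray_hits
>
>     # With a lucky ray, the broken circuit yields the same output as the
>     # healthy one, so the overall computation is preserved.
>     assert gate_b(True, gate_works=True, cosmic_ray_hits=False) == \
>            gate_b(True, gate_works=False, cosmic_ray_hits=True)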
>
>
> In *Part 2*, we build on the ideas of Part 1, but without cosmic rays.
> Bruno assumes for the sake of argument, again with no loss of generality,
> that Alice is dreaming and that her brain has no inputs or outputs. Now,
> Alice’s (artificial) brain is a 3D Boolean graph (network being the more
> common term), which, with a few wiring changes, can be deformed into a 2D
> Boolean graph and thus laid out on a plane. Next Bruno asks us to imagine
> instantiating Alice’s 2D graph-brain as a system of laser beams
> connecting nodes (instead of wires, and with destructive interference
> helping out with NOR, etc.), all in some special material. The graph is
> placed between two glass plates, and a special crystalline material is
> sandwiched between the plates which has the property that if a beam of
> light connects two nodes, the “right” laser is triggered to signal the
> right node at that location. (Unlikely, but conceivable and valid, which is
> all we intrepid philosophers need anyway!)
>
>
> So Alice is dreaming (conscious), with her dream supervening on the 2D
> optical graph, and with no malfunctions. Suppose we film these computations
> with a video camera. Now suppose Alice begins to dream the same dream again
> but after a while, Alice’s 2D graph begins making mistakes, i.e. not
> sending signals where signals should be sent. But if we, in all our
> humanitarian goodwill, project the (perfectly aligned) film onto the
> optical material/graph, we can preserve Alice’s consciousness completely.
> If it worked with the cosmic rays from part 1, it works here too, by comp.
> Alice remains conscious.
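>
> (To illustrate the role of the film, a toy sketch of my own, not from the
> argument itself: record the signals of one correct run, then replay that
> recording to stand in for nodes that have started failing.)
>
>     def run_graph(start, broken=frozenset(), recording=None):
>         # A toy two-node graph; a failed node's output is patched in
>         # from the recording (the projected film) of an earlier run.
>         trace, signal = [], start
>         for node in (0, 1):
>             if node in broken:
>                 signal = recording[node]   # the film supplies the signal
>             else:
>                 signal = not signal        # the node's own (trivial) step
>             trace.append(signal)
>         return signal, trace
>
>     out_good, film = run_graph(True)                  # film the good run
>     out_patched, _ = run_graph(True, {0, 1}, film)    # replay over failures
>     assert out_good == out_patched                    # same activity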
>
>
> Finally, in *Part 3*, we reach some apparent contradictions. Bruno
> introduces a (safe) principle at the beginning, namely that if some part of
> a system is not used for the functioning of that system in some given task,
> then it can be removed and the system will still complete that task. If
> Alice doesn’t use
> neuron X to complete her math exam, we can remove neuron X during the exam
> and she will perform the same way. I will call this the principle of
> irrelevant subsystems.
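>
> (In code terms the principle says roughly this, in a toy sketch of my own:
> a subsystem that is never exercised for a given input can be deleted
> without changing the result for that input.)
>
>     def take_exam(question, has_neuron_x=True):
>         # Neuron X only matters for questions Alice never gets this time.
>         if question == "question that needs neuron X" and has_neuron_x:
>             return "answer using neuron X"
>         return "answer from the rest of the brain"
>
>     # Removing the unused neuron X leaves this exam's outcome unchanged.
>     assert take_exam("easy sum", has_neuron_x=True) == \
>            take_exam("easy sum", has_neuron_x=False)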
>
>
> So, back to Alice and the filmed 2D optical graph. We are apparently
> forced, at this point, to conclude that Alice’s consciousness supervenes on
> the projection of the movie. In Bruno’s words:
>
> Is it necessary that someone look at that movie? Certainly not. No more
> than it is needed that someone look at your reconstitution in Moscow for
> you to be conscious in Moscow after a teleportation. All right? (with MEC
> [comp] assumed of course). Is it necessary to have a screen? Well, the
> range of activity here is just one dynamical description of one
> computation. Suppose we make a hole in the screen. What goes in and out of
> that hole is exactly the same, with the hole and without the hole. For that
> unique activity, the hole in the screen is functionally equivalent to the
> subgraph which the hole removed. Clearly we can make a hole as large as the
> screen, so no need for a screen. But this reasoning goes through if we make
> the hole in the film itself. Reconsider the image on the screen: with a
> hole in the film itself, you get a "hole" in the movie, but everything
> which enters and goes out of the hole remains the same, for that (unique)
> range of activity.  The "hole" trivially has the same functionality as the
> subgraph whose special behavior was described by the
> film. And this is true for any subparts, so we can remove the entire film
> itself.
>
> In short, we are forced to accept that Alice’s consciousness supervenes on
> a vacuum. Of course, we don’t really have to go this far, because already
> we have Alice’s consciousness supervening on a film which performs no
> meaningful computations, i.e., is “inert” – an absurdity. There are several
> other ways of stating this part of the argument, none of which changes the
> result, and if anyone is confused I recommend reading the links above. But
> otherwise this concludes the MGA. It has apparently been shown that there
> is a contradiction between computationalism and materialism.
>
>
> *The Problem*
>
>
> I said initially that my concern was with the treatment of the
> supervenience concept. This term is often thrown around on the Everything
> list. It is an important concept. What is supervenience? If system X
> supervenes on system Y, then there cannot be a change in X without a change
> in Y. Note that there can be a change in Y without a change in X. In other
> words, IF change in X, THEN change in Y. (Supervenience is silent on issues
> of entailment/causation, etc.)
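>
> (A toy illustration of the definition: if X is fixed entirely by Y, say X
> is a function of Y, then X supervenes on Y. Any difference in X forces a
> difference in Y, but Y can differ without X differing.)
>
>     def X(Y):
>         return Y % 2        # X is determined entirely by Y
>
>     assert X(3) != X(4)     # X changed, and only because Y changed
>     assert X(3) == X(5)     # Y changed with no change in X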
>
>
> Bruno’s (and Maudlin’s, for that matter) argument hinges on the issue of
> supervenience, specifically: on what does consciousness supervene? If it
> can be shown, assuming computationalism and materialism, that
> consciousness supervenes on a vacuum, or on a “causally inert” object, then
> we have shown something important. But it matters how we get there.
>
>
> In *Part 1*, when the neurons (nodes) in Alice’s brain are all
> malfunctioning, she is saved by the cosmic rays. The rays trigger the
> neurons precisely when and where they must be in order to instantiate
> Alice’s consciousness. But Alice’s consciousness *does not* supervene on
> the cosmic rays. Nor does her consciousness supervene on her damaged brain.
> Her consciousness supervenes on the system (brain + cosmic rays). There *can*
> be a change in consciousness without a change in the cosmic ray pattern:
> Alice’s consciousness might change, say, if a neuron (node) from the brain
> is removed, preventing the corresponding cosmic ray from triggering it and
> leading to an alteration in her consciousness. Likewise, there *can* be a
> change in consciousness without a change in her (damaged) brain, say, if
> the cosmic shower had occurred in a slightly different way.
>
>
> Bruno’s argument is a conflation of necessary and sufficient conditions,
> as well as a conflation of supervenience and entailment. The cosmic rays
> are necessary to execute Alice’s consciousness, but not sufficient. It
> would be an invalid move to remove her brain (however faulty it may be) and
> focus exclusively on the cosmic particles as a cause of her consciousness.
> By this fallacious reasoning, we might conclude that because the left
> hemisphere of someone’s brain is necessary for their consciousness, it is
> also sufficient. It is the confusion of “IF change in (relevant) cosmic
> rays, THEN change in consciousness” with “IF change in consciousness, THEN
> change in (relevant) cosmic rays”.
>
>
> The same problem arises in *Part 2*. Bruno claims that we are forced to
> accept that Alice’s consciousness supervenes on the film. But this is not
> correct – Alice’s consciousness supervenes on (film + optical graph), for
> the same reasons as above. There *can* be a change in consciousness
> without a change in the film: suppose I destroy a portion of the
> glass/crystal medium, hence some of the nodes in the graph. The film is
> unchanged, but (film + optical graph) is certainly changed, and Alice’s
> dream turns out differently (if it occurs at all).
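>
> (Schematically, and only as a toy sketch: the dream is a function of the
> pair (film, graph), so the graph can change, changing the dream, while the
> film stays fixed, which is exactly what supervenience on the film alone
> would forbid.)
>
>     def dream(film, graph):
>         # Toy stand-in: the resulting "dream" depends on both components.
>         return (film, graph)
>
>     film = "recorded run of the working graph"
>     assert dream(film, "intact graph") != dream(film, "graph with smashed nodes")
>     # The dream changed with no change in the film, so the dream does not
>     # supervene on the film alone.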
>
>
> Bruno isolates the film and thus reaches his apparent contradictions. But
> this is not a permissible move. Not only is the definition of supervenience
> violated, but his principle of irrelevant subparts is violated as well –
> for the optical graph is *not* irrelevant for the execution of Alice’s
> consciousness. We certainly cannot remove it and expect Alice to remain
> conscious, any more than we can remove the artificial brain of *Part 1* and
> expect Alice to pass her exam – in both cases, we are left merely with an
> interesting light show. In conclusion, we are *not* forced to conclude
> that Alice’s consciousness supervenes on a vacuum, or on an inert film reel.
>
>
>
> Please discuss, and tell me if I myself have made any errors.
>
>
> Regarding Maudlin’s argument: Russell has recently stated that Maudlin’s
> argument doesn’t work in a multiverse, and that consciousness is thus a
> multiverse phenomenon. I disagree for the same reason that Bruno disagrees:
> the region of the multiverse on which consciousness supervenes can just be
> Turing emulated in a huge water/trough/block computer, and Maudlin’s
> argument can be reapplied. I realize that this could lead to an infinite
> regress…hmm…
>
>
> My real reason for not finding Tim Maudlin’s argument convincing comes
> largely from recent comments made by Brent. It is not patently absurd that a
> constant program/algorithm could be conscious – it would be for intelligence,
> however. For all I know, this has not been decided either way. Maybe in the
> future a “consciousness theorem” will decide the matter one way or another,
> but until then I don’t think that Maudlin has demonstrated a contradiction,
> just an irritating fact. (It seems to me, and is worth noting, that if the
> principle of irrelevant subparts is true, then we are forced to conclude
> that a constant program/algorithm can be conscious, rendering Maudlin’s
> paradox, well, not a paradox.) Intelligence is tricky, as it has the notion
> of counterfactual bound up within its definition. But there is no *a
> priori* reason to assume this to be the case for consciousness.
>
> --
> Joseph Knight
>
