On Fri, Dec 23, 2011 at 4:13 AM, Bruno Marchal <marc...@ulb.ac.be> wrote:

>
> On 22 Dec 2011, at 23:27, Joseph Knight wrote:
>
> Hello everyone and everything,
>
>
> I have pompously made my own thread for this, even though we have another
> MGA thread going, because the other one (sigh, I created that one too)
> seems to have split into at least two different discussions, both of which
> are largely different from what I have to say, so I want to avoid confusion.
>
>
> Here, I will explain why I believe the Movie Graph Argument (MGA) is
> invalid. I will start with an exegesis of my understanding of the MGA, so
> that Bruno or others can point out if I have failed to understand some
> important aspect of the argument. Then I will explain what is wrong. I
> believe confusion regarding the concept of supervenience has been
> responsible for some invalid reasoning. (At the end I will also explain why
> I find Maudlin’s thought experiment to be inconclusive.)
>
>
> As it is explained here
> <http://groups.google.com/group/everything-list/browse_thread/thread/201ce36c784b2795/aa1e30fe5b731a40>,
> here
> <http://groups.google.com/group/everything-list/browse_thread/thread/18539e96f75bb740/b748e386a6795f3c>,
> and here
> <http://groups.google.com/group/everything-list/browse_thread/thread/a0e1758bf03bc080/6f9f14d6fb505261>,
> the MGA consists of three parts. Throughout the argument we are assuming
> comp and materialism to be true.
>
>
> *The MGA*
>
>
> In *Part 1*, Bruno asks us to consider Alice. Alice is a conscious being.
> Alice already has an artificial brain, to make the reasoning easier. We are
> assuming here (with no loss of generality) that, under normal
> circumstances, Alice’s consciousness supervenes on this artificial brain.
> Alice is taking a math exam when, at a certain moment, logic gate A
> fails to signal logic gate B. At this precise moment, however, a
> particle arrives from some far-away cosmic explosion and triggers gate B
> anyway. Assuming comp, we (pretty safely) conclude that Alice’s
> consciousness is unaffected by this change in causation – after all, the
> computation has been performed. Moreover, we can assume any number –
> thousands, say – of such failures in Alice’s brain, with lucky cosmic rays
> arriving to save the day. Indeed, *all* of Alice’s neurons could be
> disabled, with cosmic rays triggering each one in just the right way so as
> to maintain her consciousness. Bruno (wisely, in my opinion) likes to end
> the steps of his argument with questions. At the end of MGA 1, he asks, is
> Alice a zombie during the exam? We are really forced to say that she isn’t,
> because of our comp assumption. So Alice is just as conscious as she was
> before her brain started short-circuiting.
>
>
> In *Part 2*, we build on the ideas of Part 1 but without cosmic rays.
> Bruno assumes for the sake of argument, again with no loss of generality,
> that Alice is dreaming and that her brain has no inputs or outputs. Now,
> Alice’s (artificial) brain is a 3D Boolean graph (network being the more
> common term), which, with a few wiring changes, can be deformed into a 2D
> Boolean graph and thus laid out on a plane. Next Bruno asks us to imagine
> instantiating Alice’s 2D graph-brain as a system of laser beams
> connecting nodes (instead of wires, and with destructive interference
> helping out with NOR, etc.), all in some special material. The graph is
> placed between two glass plates, and a special crystalline material is
> sandwiched between the plates which has the property that if a beam of
> light connects two nodes, the “right” laser is triggered to signal the
> right node at that location. (Unlikely, but conceivable and valid, which is
> all we intrepid philosophers need anyway!)
>
>
> So Alice is dreaming (conscious), with her dream supervening on the 2D
> optical graph, and with no malfunctions. Suppose we film these computations
> with a video camera. Now suppose Alice begins to dream the same dream
> again, but after a while her 2D graph begins making mistakes, i.e. not
> sending signals where signals should be sent. But if we, in all our
> humanitarian goodwill, project the (perfectly aligned) film onto the
> optical material/graph, we can preserve Alice’s consciousness completely.
> If it worked with the cosmic rays from part 1, it works here too, by comp.
> Alice remains conscious.
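>
> A toy rendering of the "filmed graph" (again my own sketch, with made-up
> update rules): first run the Boolean network and record ("film") every
> node's state at every step; then "project" the film onto a graph whose
> own update rules have failed. The sequence of states is identical, frame
> for frame.
>
> RULES = {
>     "a": lambda s: s["b"] ^ s["c"],      # XOR node
>     "b": lambda s: not s["a"],           # NOT node
>     "c": lambda s: s["a"] and s["b"],    # AND node
> }
>
> def run_and_film(state, steps):
>     film = [dict(state)]
>     for _ in range(steps):
>         state = {n: RULES[n](state) for n in state}  # the graph works
>         film.append(dict(state))
>     return film
>
> def project(film):
>     # the broken graph: nodes no longer compute; the projector forces
>     # each node into the state the film says it had at that step
>     return [dict(frame) for frame in film]
>
> film = run_and_film({"a": True, "b": False, "c": True}, steps=5)
> assert project(film) == film   # same sequence of states, frame by frame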
>
>
> Finally, in *Part 3*, we reach some apparent contradictions. Bruno
> introduces a (safe) principle at the beginning, namely that if some part of
> a system is not used for the functioning of that system in some given task,
> then it can be removed and the system will still complete that task. If
> Alice doesn’t use neuron X to complete her math exam, we can remove
> neuron X during the exam
> and she will perform the same way. I will call this the principle of
> irrelevant subsystems.
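>
> A minimal illustration of the principle (my own toy example): if a run
> never consults node X, deleting X cannot change the run's result.
>
> def exam(nodes):
>     # only "y" and "z" are consulted; "x" sits idle during this exam
>     return int("y" in nodes) + 2 * int("z" in nodes)
>
> assert exam({"x", "y", "z"}) == exam({"y", "z"})  # remove idle neuron X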
>
>
> So, back to Alice and the filmed 2D optical graph. We are apparently
> forced, at this point, to conclude that Alice’s consciousness supervenes on
> the projection of the movie. In Bruno’s words:
>
> Is it necessary that someone look at that movie? Certainly not. No more
> than it is needed that someone looks at your reconstitution in Moscow for
> you to be conscious in Moscow after a teleportation. All right? (with MEC
> [comp] assumed of course). Is it necessary to have a screen? Well, the
> range of activity here is just one dynamical description of one
> computation. Suppose we make a hole in the screen. What goes in and out of
> that hole is exactly the same, with the hole and without the hole. For that
> unique activity, the hole in the screen is functionally equivalent to the
> subgraph which the hole removed. Clearly we can make a hole as large as the
> screen, so no need for a screen. But this reasoning goes through if we make
> the hole in the film itself. Reconsider the image on the screen: with a
> hole in the film itself, you get a "hole" in the movie, but everything
> which enters and goes out of the hole remains the same, for that (unique)
> range of activity. The "hole" trivially has the same functionality as the
> subgraph whose special behavior was described by the film. And this is
> true for any subpart, so we can remove the entire film
> itself.
>
> In short, we are forced to accept that Alice’s consciousness supervenes on
> a vacuum. Of course, we don’t really have to go this far, because already
> we have Alice’s consciousness supervening on a film which performs no
> meaningful computations, i.e., is “inert” – an absurdity. There are several
> other ways of stating this part of the argument, none of which changes the
> result, and if anyone is confused I recommend reading the links above. But
> otherwise this concludes the MGA. It has apparently been shown that there
> is a contradiction between computationalism and materialism.
>
>
> *The Problem*
>
>
> I said initially that my concern was with the treatment of the
> supervenience concept. This term is often thrown around on the Everything
> list. It is an important concept. What is supervenience? If system X
> supervenes on system Y, then there cannot be a change in X without a change
> in Y. Note that there can be a change in Y without a change in X. In other
> words, IF change in X, THEN change in Y. (Supervenience is silent on issues
> of entailment/causation, etc.)
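>
> A quick formal rendering of that definition (my own sketch): X supervenes
> on Y exactly when no two situations agree on the Y-state while differing
> on the X-state.
>
> def supervenes(situations):
>     """situations: iterable of (y_state, x_state) pairs."""
>     seen = {}
>     for y, x in situations:
>         if y in seen and seen[y] != x:
>             return False   # same Y, different X: supervenience fails
>         seen[y] = x
>     return True
>
> # Y may change without X changing ...
> assert supervenes([("y1", "x1"), ("y2", "x1")])
> # ... but X may not change without Y changing:
> assert not supervenes([("y1", "x1"), ("y1", "x2")])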
>
>
> OK.
>
>
>
> Bruno’s (and Maudlin’s, for that matter) argument hinges on the issue of
> supervenience, specifically: on what does consciousness supervene? If it
> can be shown, assuming computationalism and materialism, that
> consciousness supervenes on a vacuum, or on a “causally inert” object, then
> we have shown something important. But it matters how we get there.
>
>
> In *Part 1*, when the neurons (nodes) in Alice’s brain are all
> malfunctioning, she is saved by the cosmic rays. The rays trigger the
> neurons precisely when and where they must be in order to instantiate
> Alice’s consciousness. But Alice’s consciousness *does not* supervene on
> the cosmic rays. Nor does her consciousness supervene on her damaged brain.
> Her consciousness supervenes on the system (brain + cosmic rays).
>
> Correct.
>
>
> There *can* be a change in consciousness without a change in the cosmic
> ray pattern: Alice’s consciousness might change, say, if a neuron (node)
> from the brain is removed, preventing the corresponding cosmic ray from
> triggering it and leading to an alteration in her consciousness. Likewise,
> there *can* be a change in consciousness without a change in her
> (damaged) brain, say, if the cosmic shower had occurred in a slightly
> different way.
>
>
> OK.
>
>
> Bruno’s argument is a conflation of necessary and sufficient conditions,
> as well as a conflation of supervenience and entailment.
>
> Where?
>
>
> The cosmic rays are necessary to execute Alice’s consciousness, but not
> sufficient. It would be an invalid move to remove her brain (however faulty
> it may be) and focus exclusively on the cosmic particles as a cause of her
> consciousness. By this fallacious reasoning, we might conclude that because
> the left hemisphere of someone’s brain is necessary for their
> consciousness, it is also sufficient. It is the confusion of “IF change in
> (relevant) cosmic rays, THEN change in consciousness” with “IF change in
> consciousness, THEN change in (relevant) cosmic rays”.
>
> OK. I use this.
>
>
> The same problem arises in *Part 2*. Bruno claims that we are forced to
> accept that Alice’s consciousness supervenes on the film.
>
> No. On the projection of the pellicle on the Boolean graph, and then on
> the Boolean graph’s missing part. The idea is that we build again the
> right physical activity, with the projection of the film playing the role
> of the cosmic rays.
>

What is a pellicle? (Sorry) I understand this part, however. My objections
arise later.


>
>
> But this is not correct – Alice’s consciousness supervenes on (film +
> optical graph),
>
> Sure.
>
>
> for the same reasons as above. There *can* be a change in consciousness
> without a change in the film: suppose I destroy a portion of the
> glass/crystal medium, hence some of the nodes in the graph. The film is
> unchanged,
>
> ? The film is changed, in this case.
>
>
By film I mean the movie, the recording. If you change the
glass/crystal/node "screen", the recording is unchanged. We should indeed
fix any terminological misunderstandings asap.


>
> but (film + optical graph) is certainly changed, and Alice’s dream turns
> out differently (if it occurs at all).
>
> With comp + sup-phys, it can't.
>

Why? If we assume sup-phys, then some changes in the physical system on
which the dream supervenes certainly will lead to changes in the dream.


>
>
>
> Bruno isolates the film and thus reaches his apparent contradictions. But
> this is not a permissible move.
>
> I think that the term "film" could have different meanings in French and
> English. But the film here means the projection of the pellicle on the
> glass/crystal medium. This one is never broken. It is a process which
> takes time and occurs in some place.
>
>
>
> Not only is the definition of supervenience violated, but his principle of
> irrelevant subparts is violated as well – for the optical graph is *not*
> irrelevant
> for the execution of Alice’s consciousness.
>
>
> Of course, but once we put away the nodes, the physical activity
> corresponding to the computation is not changed. The optical graph becomes
> irrelevant for the physical activity on which Alice's consciousness is
> supposed to supervene, by comp+sup-phys.
>

This is where my problem lies. Of course the physical activity of the
system is changed when you (invalidly) remove the optical graph from the
system. It is far from irrelevant. For example, what mechanism causes the
light to trigger the lasers? There must be some "internal" mechanisms at
work as well. The nodes aren't "connected" to one another, but it matters
whether or not the recording is being projected on an optical graph, vs. a
concrete wall, vs. a movie screen....

Let me restate my concern: Consciousness supervenes on the optical
graph + the recording, *even when the nodes are completely disconnected.* It
is true that "most of the work" is being done by the recording, but not all
of the work. The optical graph still matters, and the "physical activity"
of the system is not solely provided by the recording, as it still depends
on how the projected light interacts (physically) with the glass/crystal
surface. There is a point in the argument at which you ignore the
glass/crystal system and focus solely on the movie/recording, claiming that
Alice's consciousness supervenes on the movie/recording. But this is false.
*At no point* does Alice's consciousness supervene on the recording, *not
even* when the nodes are completely disconnected.


>
>
> We certainly cannot remove it and expect Alice to remain conscious, any
> more than we can remove the artificial brain of *Part 1* and expect Alice
> to pass her exam – in both cases, we are left merely with an interesting
> light show. In conclusion, we are *not* forced to conclude that Alice’s
> consciousness supervenes on a vacuum, or on an inert film reel.
>
> The film is never inert. That is why the stroboscopic argument works.
>
>
>
>
> Please discuss, and tell me if I myself have made any errors.
>
>
> I think you have interpreted the word "film" differently. A film is a
> dynamical event taking some time and involving projection. It is never
> inert.
>

?


> Regarding Maudlin’s argument: Russell has recently stated that Maudlin’s
> argument doesn’t work in a multiverse, and that consciousness is thus a
> multiverse phenomenon. I disagree for the same reason that Bruno disagrees:
> the region of the multiverse on which consciousness supervenes can just be
> Turing-emulated in a huge water-trough/block computer, and Maudlin’s
> argument can be reapplied. I realize that this could lead to an infinite
> regress…hmm…
>
> Yes, that is why, in the end, Russell's move leads to a multi-multi- ...
> multiverse. To keep the existence of primary physical stuff necessary,
> with his last addition of "multi" playing its role, he has to make it non
> Turing-emulable. The UD dovetails on all multi-^alpha universes with alpha
> a constructive ordinal. Russell would have to impose a multi-^alpha
> universe with alpha not constructive; then the MGA can no longer be
> applied, but then comp is also false.
>
>
>
> The real reason I don’t find Tim Maudlin’s argument convincing is largely
> due to recent comments made by Brent. It is not patently absurd that a
> constant program/algorithm cannot be conscious – it is for intelligence,
> however.
>
> I guess you meant "It is not patently absurd that a constant
> program/algorithm can be conscious"?
>

Yes, sorry.


> Well, I find this patently absurd. To have consciousness, qua computatio,
> you need complex self-referential computing.
>

Assume a constant input for a conscious machine. Say one removes, one by
one, the parts of the machine that would allow it to handle counterfactuals,
until all that remains are the parts needed for that constant input. (This
is Maudlin in reverse.) Would you say that there is some point at which
this machine stops being conscious? I find *that* hard to believe. And I
don't see why computationalism says this must be so. Moreover, this seems
to contradict the principle of irrelevant subparts which you introduce for
part 3 of the MGA.
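
To fix ideas, here is a toy rendering of that removal (my own sketch, with
made-up names): a machine that handles every input is stripped down to the
single path its one constant input ever exercises, and on that input
nothing observable distinguishes the two machines.

def full_machine(stimulus):
    # handles counterfactuals: other inputs would take other paths
    if stimulus == "constant":
        return ["step1", "step2", "step3"]
    return ["other", "path"]

# record the one trace that the constant input actually exercises ...
TRACE = full_machine("constant")

# ... and let that recorded trace BE the stripped-down machine:
def stripped_machine(stimulus):
    assert stimulus == "constant"   # counterfactual handling removed
    return TRACE

# on the constant input the two machines are behaviorally identical:
assert stripped_machine("constant") == full_machine("constant")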


>
> For all I know, this has not been decided either way.
>
> But it follows from the computationalist hypothesis.
>
>
 How?

> Maybe in the future a “consciousness theorem” will decide the matter one
> way or another, but until then I don’t think that Maudlin has demonstrated
> a contradiction, just an irritating fact. (It seems to me, and is worth
> noting, that if the principle of irrelevant subparts is true, then we are
> forced to conclude that a constant program/algorithm can be conscious,
> rendering Maudlin’s paradox, well, not a paradox.) Intelligence is tricky,
> as it has the notion of counterfactual bound up within its definition. But
> there is no *a priori* reason to assume this to be the case for
> consciousness.
>
> Even when you accept an artificial digital brain, and believe that you
> survive because your computation is run correctly by that digital
> machine?
>
> Bruno
>
> http://iridia.ulb.ac.be/~marchal/
>
>



-- 
Joseph Knight
