I find that discussions around the comp thesis keep coming back to the 'Movie Graph Argument' (MGA). Each time I read one of the accounts in Bruno's SANE04 or COMP(2013) papers, or Russell's 'MGA Revisited', I get the feeling that something crucial to the argument is missing.

The account in COMP(2013) is probably the most detailed, so I will focus on it here. The MGA (FGA) is introduced as "a direct Mechanistic argument showing that consciousness ... cannot possibly supervene on physical activity of the brain." This is supposed to be shown by deriving a contradiction from the assumption of physical supervenience.

We can use an original biological brain or an equivalent digital replacement -- it makes no significant difference to the argument. The first point is that in some conscious experience, be it a dream or anything else, there might be a portion of the 'brain' (in quotes because it can be biological or digital) that is not activated, so this portion can be removed without affecting the conscious experience. More generally, we suppose that there is some part of the 'brain' that is required but is defective for some reason. Serendipitously, however, whenever that part is needed, some cosmic event happens to stimulate the required activity, so that the physical activity of the 'brain' is maintained and corresponds to the actual physical activity relevant for that computation. This breaks counterfactual correctness, but if the 'dream' and the processing are properly determined, the physical activity corresponding to the computation, and relevant to it, is maintained. Counterfactual correctness is thus shown not to be relevant to *that* conscious experience. If the lack of counterfactual correctness were able to change the personal experience, then the brain could recognize where its internal inputs came from, which is held to be absurd, since the assumption gives the subject no cognition of the elementary parts of the computing machinery.

So far, so good. The MGA seeks to extend this line of reasoning to show that the physical activity is not relevant at all. We imagine that we are able to make a film of all the internal activity of the 'brain' during some conscious experience (a dream, if we wish to reduce dependence on physical inputs and outputs). We now run the computation again after breaking some or all of the original physical connections in the 'brain', but this time we also project the film directly onto the machinery. Now, when the broken connections are needed, even though they cannot give the relevant outputs themselves, the machine will still perform the original physical activity relevant for that computation: the projected film, being a record of the correct activity, supplies whatever is missing. The movie plays the role of the lucky cosmic event in the previous example.

Now the first-person experience will be absolutely identical to the one that would have been produced by the unbroken machinery. We can eliminate more of the machinery, indeed we can eliminate /all/ of it, without changing an active consciousness into a fading consciousness, because such fading would be experienced by the subject, contrary to the assumption that the person is never conscious of any part of the internal machinery.

Counterfactual correctness might be restored at any stage by adding extra counterfactual machinery, but, as before, this makes no real difference. Counterfactual correctness matters only to the general requirement that different inputs can give different experiences, whereas here we are always using the same inputs, repeatedly running the same program with less and less of the original machinery intact, and always supplying the missing parts from the film of the original conscious computation.
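To make that distinction concrete, here is a minimal sketch in Python (my own illustration, with made-up names; nothing like this appears in Bruno's text): a working component answers correctly for any input, while the film can only replay the outputs recorded during one particular run. On the recorded run the two are indistinguishable; on any other input the replay has nothing to say.

    # A working ('counterfactually correct') component: right answer for any input.
    def live_gate(x):
        return 2 * x

    # The 'film': a record of the outputs produced during one particular run.
    recorded_run = {3: live_gate(3)}

    def replay_gate(x):
        # Replays the recorded output; it has no answer for inputs that never occurred.
        return recorded_run[x]

    assert live_gate(3) == replay_gate(3)   # identical on the recorded run
    # replay_gate(4) would raise a KeyError: the film cannot handle a different
    # input, yet for the actual run the relevant physical activity is all present.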

Again, this all seems reasonably clear, but then the argument becomes very clouded, and it is hard to see what conclusions are being drawn directly from this thought experiment. Bruno talks about the possibility of lowering the level of substitution for the digital 'brain' replacement. He also mentions the MWI of quantum mechanics, suggested by Russell as a way to overcome the "non-robust universe" objection to the dovetailer. The relevance of these comments is quite opaque to me, because Bruno then simply says: "Then, as an applied science in the fundamental realm, we can use Occam's razor to eliminate the 'material principle'."

The conclusion is that "the FGA (MGA) shows that any universal machine is unable to distinguish a real physical realm from an arithmetical one, or a combinatorial one, or whatever initial notion of Turing universality is chosen as initial basic ontology." Hence consciousness is not a physical phenomenon, nor can it be related to observed matter at all; and no subjective appearance of matter can be based on a notion of primitive matter.

I hope I have summarized the argument of COMP(2013) accurately enough for present purposes. Most of the above is direct quotation from Bruno's text, with paraphrase in some less significant places to shorten the presentation.

Now, having read this many times, and having looked at the other summaries of the MGA, I still feel that something crucial is missing. We go from a situation in which we remove more and more of the original 'brain', replacing the removed functionality with projections from the movie (which, it is agreed, does not alter the conscious experience of the first person involved), to the conclusion that the physical brain is entirely unnecessary; indeed, irrelevant.

I am sorry, but this just does not follow. The original physical functionality is admittedly still intact -- provided now by the projected movie, but that is still a physical device, operating with a physical film in a physical projector and projecting onto the original (albeit damaged) physical machinery. How has the physical element in all of this been rendered redundant? The original functionality of the 'brain' has been preserved by the movie; the conscious experience is still intact even though much of the original functionality has been provided by another, external physical device. How does this differ from the original "Yes Doctor" scenario, in which the subject agrees to have his brain replaced by a physical device that simulates (emulates) his original brain functionality? I submit that it does not.

The only difference between having the functionality of the original experience replaced by the movie and having it replaced by a computer would seem to be that the computer can emulate a wider range of conscious experiences -- it is 'counterfactually correct' in that it can respond appropriately to different external inputs. The film, being a static record of one conscious experience, cannot do this. But it has been admitted that the film can reproduce the original conscious experience with perfect fidelity. And the film is every bit as physical as the original 'brain'. So the physical has not been shown to be redundant, and it cannot be cut away with Occam's razor after all. If it were, there would be no conscious experience remaining.

I conclude that the MGA fails to establish the conclusions that it purports to establish.

Bruce
