I would like to go along with Maudlin's point as emphasized in Bruno's text
below, adding that "causal" structure is restricted to the limited model
within which we *CAN* choose likely 'causes' in our perceived reality, while
the unlimited possibilities include wider 'intrusions' from domains 'beyond
our present epistemic cognitive inventory'. So the "most likely" *cause* -
although applicable to a 'physical role' (which is itself a figment) - is
limited. In congruence - I think - with Bruno's words below.
Bruno wrote: "...the description, although containing the genuine information
is just not a computation at all..." (AMEN!)
He continued, however, with: "...It misses the logical relation between the
steps, made possible by the universal machine..." - which still does not *DO*
those 'steps', nor *OPERATE* the machine.
Looks like we want to 'assume' that if there is a possibility, it is also
realized.
I am looking at the "physical creator" (haha) keeping the contraption moving
and us in it. Not to speak about 'making it'. (Deus ex machina?)
Once all is there and moving, everything is fine.
I salute the "...infinitely many such relations, ..." which gives me the idea
of a 'physical' supervenience in terms of a restrictive Occam, cutting off
everything that does not fit into our goals.
*"States"* seem to be identified by our limited views. I feel that both the
referenced Maudlin text and Jesse's comment are on the static side, as
'descriptive', while I can read into Bruno's "relations" some sort of
functional (operative) relation that would lend some dynamism (action?) to
the descriptive stagnancy. I still have not detected: *HOW?*
On Wed, Apr 29, 2009 at 4:19 PM, Bruno Marchal <marc...@ulb.ac.be> wrote:
> Maudlin's point is that the causal structure has no physical role, so if
> you maintain the association of consciousness with the causal, actually
> computational structure, you have to abandon the physical supervenience. Or
> you reintroduce some magic, as if neurons had some knowledge of the
> absence of some other neurons, to which they are not related, during some
> But read the movie graph argument, which shows the same thing without going through
> the question of the counterfactuals. If you believe that consciousness
> supervenes on the physical implementation, or even just on one universal machine
> computation, then you will associate consciousness to a description of that
> computation. But the description, although containing the genuine
> information, is just not a computation at all. It misses the logical relation
> between the steps, made possible by the universal machine. So you can keep
> on with mechanism only by associating consciousness with the logical,
> immaterial, relation between the states. From inside there are infinitely
> many such relations, and this means the physical has to supervene on the sum
> of those relations "as seen from inside". By Church thesis and
> self-reference logic, they have a non trivial, redundant, structure.
> On 29 Apr 2009, at 21:16, Jesse Mazer wrote:
> Bruno wrote:
> On 29 Apr 2009, at 00:25, Jesse Mazer wrote:
> and I think it's also the idea behind Maudlin's Olympia thought
> experiment as well.
> Maudlin's Olympia and the Movie Graph Argument are completely different.
> Those are arguments showing that computationalism is incompatible with the
> physical supervenience thesis. They show that consciousness is not related
> to any physical activity at all. Together with UDA1-7, they show that physics
> has to be reduced to a theory of consciousness based on a purely
> mathematical (even arithmetical) theory of computation, which exists by
> Church Thesis.
> The movie graph argument was originally only a tool for explaining how
> difficult the mind-body problem is, once we assume mechanism.
> OK, I hadn't been able to find Maudlin's paper online, but I finally
> located a pdf copy in a post from this list at
> ...now that I've read it, I see the argument is distinct from Chalmers' "Does
> a Rock Implement Every Finite-State Automaton", although they are
> thematically similar in that they both deal with difficulties in defining
> what it means for a given physical system to "implement" a given
> computation. Chalmers' idea was that the idea of a rock implementing every
> possible computer program could be avoided if we defined an "implementation"
> in terms of counterfactuals, but Maudlin argues that this contradicts the
> "supervenience thesis" which says that "the presence or absence of inert,
> causally isolated objects cannot effect the presence or absence of
> phenomenal states associated with a system", since two systems may have
> different counterfactual structures merely by virtue of an inert subsystem
> in one which *would have* become active if the initial state of the system
> had been slightly different.
> It seems to me that there might be ways of defining "causal structure"
> which don't depend on counterfactuals, though. One idea I had is that for
> any system which changes state in a lawlike way over time, all facts about
> events in the system's history can be represented as a collection of
> propositions, and then causal structure might be understood in terms of
> logical relations between propositions, given knowledge of the laws
> governing the system. As an example, if the system was a cellular automaton,
> one might have a collection of propositions like "cell 156 is colored black
> at time-step 36", and if you know the rules for how the cells are updated on
> each time-step, then knowing some subsets of propositions would allow you to
> deduce others (for example, if you have a set of propositions that tell you
> the states of all the cells surrounding cell 71 at time-step 106, in most
> cellular automata that would allow you to figure out the state of cell 71 at
> the subsequent time-step 107). If the laws of physics in our universe are
> deterministic, then you should in principle be able to represent all facts
> about the state of the universe at all times as a giant (probably infinite)
> set of propositions as well, and given knowledge of the laws, knowing
> certain subsets of these propositions would allow you to deduce others.
> "Causal structure" could then be defined in terms of what logical relations
> hold between the propositions, given knowledge of the laws governing the
> system. Perhaps in one system you might find a set of four propositions A,
> B, C, D such that if you know the system's laws, you can see that A&B imply
> C, and D implies A, but no other proposition or group of propositions in
> this set of four are sufficient to deduce any of the others in this set.
> Then in another system you might find a set of four propositions X, Y, Z and
> W such that W&Z imply Y, and X implies W, but those are the only deductions
> you can make from within this set. In this case you can say these two
> different sets of four propositions represent instantiations of the same
> causal structure, since if you map W to A, Z to B, Y to C, and X to D then
> you can see an isomorphism in the logical relations. That's obviously a very
> simple causal structure involving only 4 events, but one might define much
> more complex causal structures and then check if there was any subset of
> events in a system's history that matched that structure. And the
> propositions could be restricted to ones concerning events that actually did
> occur in the system's history, with no counterfactual propositions about
> what would have happened if the system's initial state had been different.
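
[Editorial sketch of Jesse's proposal above, not part of the original thread's code: the snippet picks elementary Rule 110 as a stand-in update rule (the paragraph leaves the rule unspecified), and the `next_state` and `find_isomorphism` helpers, along with the cell/step numbers, are hypothetical illustrations of the cell-71 deduction and the A/B/C/D vs. W/X/Y/Z implication structures described above.]

```python
from itertools import permutations

RULE = 110  # elementary cellular automaton rule; an illustrative choice


def next_state(left, center, right):
    """Deduce a cell's next state from the propositions giving its
    neighborhood's current states (standard Wolfram rule encoding)."""
    return (RULE >> ((left << 2) | (center << 1) | right)) & 1


# Three propositions about time-step 106 (states of cells 70, 71, 72)
# jointly entail a fourth proposition about cell 71 at time-step 107:
cell_71_at_107 = next_state(1, 0, 1)

# Causal structure as implication relations among named events.
# Each implication is a pair: (frozenset of premises, conclusion).
system1 = {(frozenset({"A", "B"}), "C"), (frozenset({"D"}), "A")}
system2 = {(frozenset({"W", "Z"}), "Y"), (frozenset({"X"}), "W")}


def find_isomorphism(impls1, names1, impls2, names2):
    """Brute-force search for a bijection between event names that maps
    one implication structure exactly onto the other."""
    for perm in permutations(names2):
        mapping = dict(zip(names1, perm))
        image = {(frozenset(mapping[p] for p in prem), mapping[concl])
                 for prem, concl in impls1}
        if image == impls2:
            return mapping
    return None


mapping = find_isomorphism(system1, "ABCD", system2, "WXYZ")
# mapping == {'A': 'W', 'B': 'Z', 'C': 'Y', 'D': 'X'}
```

Note that the mapping found is the inverse direction of the one in the text (system-1 events onto system-2 events), and that exhaustive permutation search is only feasible for tiny event sets like this four-event example.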
> Thinking in this way, it's not obvious that Maudlin is right when he
> assumes that the original "Olympia" defined on p. 418-419 of the paper
> cannot be implementing a unique computation that gives rise to complex
> conscious experiences. It's true that the armature itself is not responding
> in any way to the states of successive troughs it passes over, but there is
> an aspect of the setup that might give the system a nontrivial causal
> structure, namely the fact that certain troughs may be connected by pipes
> to other troughs in the sequence, so that as the armature empties or
> fills one it is also emptying or filling the one it's connected to (this is
> done to emulate the idea of a Turing machine's read/write head returning to
> the same memory address multiple times, even though Olympia's armature just
> steadily progresses down the line of troughs in sequence--troughs connected
> by pipes are supposed to represent a single memory address). If we
> represented the Olympia system as a set of propositions about the state of
> each trough and the position of the armature at each time-step, then the
> fact that the armature's interaction with one trough changes the state of
> another trough the armature won't visit until a later step may be enough to
> give different programs markedly different causal structures, in spite of
> the fact that the armature itself is just dumbly moving from one trough to
> the next.