Hi Hal,

Thank you very much for your work in writing this review and commentary on the Maudlin paper. I have not read it yet, but would like to ask some questions and interject some comments, even if I end up looking like a fool. ;-)


----- Original Message ----- From: "Hal Finney" <[EMAIL PROTECTED]>
To: <everything-list@eskimo.com>
Sent: Sunday, August 07, 2005 4:20 PM
Subject: Maudlin's Machine and the UDist

Rutgers philosopher Tim Maudlin has a paper intended to challenge certain
views about consciousness and computation, which we have discussed
occasionally on this list.  It is called "Computation and Consciousness",
Journal of Philosophy v86, pp. 407-432.  I have temporarily put a copy
online at http://www.finney.org/~hal/maudlin.pdf .  This is a personal
copy and I would ask you not to redistribute it.


It is sad what copyrights have become; the free flow and accessibility of papers is coming to resemble the closed guilds of yore more and more. :_(

The background question is when a given physical system can be said
to implement a given computation (especially, a conscious computation).
We imagine that the computation is specified in abstract terms, perhaps as
a program in a conventional computer language, or perhaps in a lower-level
form as a program for a Turing Machine or some other model of computation.
But it is abstract.  We then ask, given a certain physical system P,
is it implementing computation C?


   This seems to touch on the question that Chalmers asked here:


In practice, it seems that this is an easy question to answer.
Our computers implement the programs which are fed into them.  No one
denies that.

But philosophers have argued that there is a way of viewing the activity
of a physical system that can make it appear to be implementing *any*
computation C.  We can think of C as passing through a series of states:
C1, C2, C3, and so on.  And we can think of the physical system P as
passing through a series of states: P1, P2, P3.  So if we map P1 to C1,
P2 to C2, and so on, we can argue that P is implementing C, for any C
and for pretty much any P.
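The triviality of such a mapping can be made concrete with a toy sketch (my own illustration, not from Maudlin or Hal's post): a bare lookup table suffices to "map" any run of P onto any run of C.

```python
# Hypothetical state sequences; any P and any C of equal length will do.
P = ["p1", "p2", "p3", "p4"]   # successive states of some physical system
C = ["c1", "c2", "c3", "c4"]   # successive states of an arbitrary computation

# The entire "implementation" is nothing but a lookup table P_i -> C_i.
mapping = dict(zip(P, C))

# Reading off P's states through the table reproduces C exactly.
assert [mapping[p] for p in P] == C
```

The philosophers' objection, of course, is that this table is constructed after the fact and says nothing about what P would have done under different inputs.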


It seems to me that there should be nothing special about the ordering of the P_i for the COMP assumptions to hold. OTOH, there seems to be some requirement that some aspect of P_n be relatable to P_n-1 in a way that is independent of how the particular P_i are extant, no?

The philosophers' response to this is that it is not enough to be able to
set up such a mapping.  What is also necessary, to claim that P implements
(or "instantiates") C, is that the *counterfactuals* would have to
exist in correspondence as well.  That is, not only the states C1, C2,
C3 but also other states of C that would have been taken if the inputs
had been different, have to have correspondences in P.  It is claimed
that the kind of arbitrary mapping described above will not work once
counterfactuals are taken into account.  (I'm not sure I fully understand
or agree with this rebuttal but it is well accepted in the field.)


Counterfactuals have been shown, at least in QM experiments, to be just as *real* as the factuals when it comes to the notion of causation. So if we are to take empirical evidence as a guide, there is some reason to expect that counterfactuals cannot be dismissed out of hand.


The principle that whether P implements C depends on these counterfactuals
is one of the issues that Maudlin addresses.  When referring to conscious
computations, this principle is generally considered part of the
"computationalist" hypothesis, that the instantiation of an appropriate
computation C in a physical system P actually produces a corresponding
sensation of consciousness.  Implicit in this hypothesis is that to be
said to be instantiating C, P must have had enough structure to also
produce counterfactuals, if they had occurred.

Well, that's a lot of background!  And there's more.  The other thesis
that Maudlin considers is called supervenience, a philosophical word for
what is fortunately a pretty straightforward concept.  It is that whether
a physical system implements a given computation depends solely on the
activity of that physical system.  No mystical or non-physical elements
or attributes need to be considered in judging whether P implements C.
All that matters is P's physical activity.


   We can hope that no "obscurum per occultum" is involved! ;-)

In a nutshell, Maudlin argues that these two common views on the
matter are actually in contradiction.  But frankly, although Maudlin's
argument is complicated and involves all kinds of thought experiments and
elaborate, imaginary machines, I think it is actually quite an obvious
point.  Supervenience means that implementation depends on what P does,
while support for counterfactuals means that implementation depends on
what P doesn't do.  Q.E.D.!  Maudlin merely takes great time and care
to illustrate the contradiction in detail.

Another place that counterfactuals come into play is when considering
whether replays are conscious.  If we re-run a calculation, but this
time we don't include the logic aspects but merely passively replay a
recording of each component's activity, is this conscious?  Does this
P instantiate C?  The answer, according to the counterfactual view, is
no, because a mere replay of a recorded set of events does not include
counterfactuals and will not work right if a different input is supplied.
On the other hand if the re-run is a genuine computation which merely
happens to be given the same input and hence to follow the same sequence
of operations, then that re-run would in fact be considered to generate
the consciousness.


Is it live, or is it Memorex? What is the difference between an object and its representation?

Now to bring in the multiverse perspective, in the flavor I have been
pursuing.  From the viewpoint that says that all information objects
exist and are distributed according to the Universal Distribution (UDist),
what can we say about these questions?

First, the question of whether a physical system P implements a
computation C is seen to be the wrong question to ask.  C is an
information object and has a measure.  Likewise, although we think
of P as physical, in the UDist model the universe containing P is
merely one information object, and P is a subset of that universe.
The question is therefore, how much measure does P contribute to C?
That will depend on the measure of P and on how much of P's measure can
be seen as contributing to C.


I am still worried about how a measure can exist over a set, collection, class, or whatever of computations! Does not the notion of a measure require the existence of a space where each point is an object of the class, and where the measure itself defines the similarity/difference between one object, here a computation, and some given other?
   What ontological status does "Computation Space" have?

Now, here I need to address an ambiguity in some of the philosophical
discussion of this issue, which shows up in Maudlin's paper among other
places.  What do we mean by an abstract computation?  Is it a program,
or is it a run of a program (i.e. the sequence of states it goes through,
also called the "trace")?


I personally make a big deal about the difference between a program and the running of a program! The former is merely an ontic question, while the latter involves some supervenient "process" that implements the program, no?

Well, I suppose it could mean either one; from the UDist perspective,
both views can be supported.  As information objects, we can distinguish
programs and program traces, just as we could distinguish theorems and
proofs, or numbers and sequences.  They are related but not the same.
We could ask for the measure of a program, and define it by representing
the program in some canonical form like Java source code, then finding
the shortest program that would output that program.  Or we could ask
for the measure of a program trace, define it as a sequence of states
again in some canonical form, and ask for the shortest program that
would output that sequence.
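A rough numerical sketch of this "shortest program" measure (my own toy proxy, not anything from the UDist literature): compressed length gives a computable upper bound on the length of the shortest program printing an object, so a highly regular object gets a short description, and hence a higher measure, than an irregular one.

```python
import zlib

def description_length(obj: bytes) -> int:
    """Crude, computable proxy for the length of the shortest
    program that outputs obj (an upper bound, not the true
    Kolmogorov complexity)."""
    return len(zlib.compress(obj, 9))

# A highly regular "program in canonical form" (hypothetical stand-in).
program = b"print('hello')\n" * 100

# Its description is far shorter than the object itself,
# i.e. it would receive relatively high measure under UDist.
assert description_length(program) < len(program) // 10
```

Real Kolmogorov complexity is uncomputable; a general-purpose compressor is just the cheapest honest stand-in for "shortest program" intuitions.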

Maudlin talks about programs as sequences of states, which provides
for the most direct way of thinking of them as causing or creating
consciousness.  I'll do it that way, too.  So when I speak of P or C
here I am talking about sequences of states, not static structures
like machine descriptions or source code listings.

The specific mechanism in the UDist framework for calculating the
contribution of P to C's measure is to imagine a program which takes
P as input and produces C as output.  Both P and C are considered as
information objects for this purpose.  Such a program can be quite small
for the case where our common sense would say that P does implement C.
We can find a mapping of P into C that is simple and straightforward,
and therefore corresponds to a small program.

On the other hand in cases where common sense would say that P does
nothing to implement C, such as where P is a clock and C is a conscious
program, any such mapping from P to C is going to have to be about as
big as C itself.  P is of no help in calculating such a C and hence
there is no short program that produces C given P.
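A toy way to see this numerically (my own sketch; zlib's preset-dictionary feature stands in for "a program that is given P as input"): compress C on its own, then compress C with P supplied as a dictionary. When P shares C's structure the conditional description shrinks dramatically; when P is clock-like and unrelated, having P available buys essentially nothing. All the data below are hypothetical stand-ins.

```python
import zlib

def conditional_size(c: bytes, p: bytes = b"") -> int:
    """Crude proxy for 'length of the shortest program producing C
    given P': c deflate-compressed with p as a preset dictionary."""
    comp = zlib.compressobj(level=9, zdict=p) if p else zlib.compressobj(level=9)
    return len(comp.compress(c) + comp.flush())

# C: the state trace of some computation (invented illustrative data).
C = (b"Neuron 0417 fires, edge weights update, a memory of red is "
     b"retrieved, the motor plan shifts left, attention narrows, "
     b"a word is selected, the larynx is primed, the word is spoken. ")

P_brainlike = C                    # a system whose activity mirrors C's structure
P_clock = b"tick tock " * 20       # a clock: regular, but unrelated to C

# P_brainlike makes C cheap to describe; P_clock gives no real help.
assert conditional_size(C, P_brainlike) < conditional_size(C, P_clock)
assert conditional_size(C, P_clock) + 8 > conditional_size(C)
```

This is the sense in which no short program can use the structure of a clock to output an arbitrary computation: conditioning on the clock leaves the description of C essentially as long as it was unconditionally.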


Could it be that a "clock" (the physical object) IS, in fact, the shortest program that implements "what it is like to be a clock" - the program?

There is no need to consider counterfactuals in this framework.  What
keeps a clock from contributing measure to arbitrary programs is not its
failure to implement counterfactuals, it is the fact that no short program
can use the structure of a clock to output an arbitrary computation C.

This analysis also produces a different answer than the traditional
approach when considering the impact of replays.  "Passive" replays
(which just play back a recorded sequence of states) and "active" ones
(which re-do a computational sequence) would add roughly equal
amounts to the measure, because both allow for similarly short programs
which turn the sequence of P states into the sequence of C states.

Turning to Maudlin's paper, his main effort is to construct a machine
(named Olympia) which essentially does a passive replay but which contains
triggers which will turn on an "active" machine if the inputs are not
what were recorded during the replay.  In this way he argues that the
machine does implement counterfactuals.  However when run in replay mode,
none of the counterfactual machinery gets activated, because in practice
the inputs are all given the correct values.  And his machine is so
simple in how it does the replay that the actual physical activity is
almost non-existent.  In this way he argues that the supervenience thesis
would find insufficient activity for consciousness, in contradiction to
the computationalist principle.


This seems to support my ill-formed argument that there is a huge and important difference between the existence of some integer N and the computation of N; again, the former merely "exists" while the latter requires a process... Additionally, I hope that Maudlin assumes that some kind of "activity", i.e. a process, is involved in consciousness. From your depiction I assume that he does, but I will have to hold back until I read the paper. This is a very important point, IMHO, because it distinguishes between the actual implementation of a conscious mind and a sequence of "snapshots" of the brain assumed to exist a priori.

I'm not sure I find Maudlin's argument persuasive even within the standard
framework, for a variety of reasons, but I won't go into that here.


   I would really like to read your thoughts on this!

My goal is to analyze it within the UDist framework.  In that case the
paradox or contradiction doesn't arise, because the only question is
the degree to which the structure of Maudlin's machine Olympia allows
for a short program that maps to the computational pattern C.  We can
first ignore the extra machinery for handling counterfactuals since
that is not an issue in the UDist approach.  For the rest, Maudlin
rearranges the program states and includes in Olympia a map from every
program state to some other program state, which will be an enormous map
for any actual conscious experience.  I suspect that the size of this
additional structure, and the way the states have been scrambled, will
complicate the program that has to do the translation somewhat, but that
it will still be possible to output C using a reasonably short program.
Therefore I would expect that his machine might not contribute quite
as much to the measure of C as a conventional machine would, but still
would contribute an appreciable measure.

In any case, his machine certainly does not pose a challenge or paradox
for the UDist framework, since UDist is merely a recipe for how to
calculate a number, the measure of C.  All kinds of bizarre machines
might be imagined which would in some way relate to C, and still in
principle we could estimate how much measure they would each add to C.
It seems that no paradox can arise from this type of analysis.

Hal Finney


   I do hope that there is more to UD than Scholastic speculations. ;-)


