On 29 Mar 2015, at 07:24, meekerdb wrote:
On 3/28/2015 11:02 PM, Bruce Kellett wrote:
meekerdb wrote:
On 3/28/2015 12:33 AM, Bruce Kellett wrote:
No, as I said, I do not think it is helpful to describe the
sequence of brain states as a calculation. If you simulate the
actual brain states by doing a lot of calculations on a computer,
then you will reproduce the original conscious moment. But the
conscious moment itself does not calculate anything. The
simulation of brain states could be written out on paper, or use
any number of look-up tables (as efficient programs tend to do).
It is still a simulation of the original brain states, and if
accurate, the conscious experience will be recreated.
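[The point about look-up tables can be sketched in code. This is an illustrative toy, not from the thread: `step` is a hypothetical transition rule standing in for neural dynamics, and the claim shown is only that a live computation and a precomputed table yield the identical state sequence.]

```python
# Two ways to produce the same sequence of "brain states": a live
# computation derives each state from the previous one; a look-up table
# simply replays precomputed results. Both yield the same sequence.

def step(state):
    """Hypothetical transition rule standing in for neural dynamics."""
    return (state * 31 + 7) % 1000

def compute_sequence(initial, n):
    """Derive n states by actually running the dynamics."""
    states = [initial]
    for _ in range(n - 1):
        states.append(step(states[-1]))
    return states

# Precompute once, then "simulate" by look-up alone.
TABLE = {i: s for i, s in enumerate(compute_sequence(42, 10))}

def lookup_sequence(n):
    """Replay n states from the table without computing anything."""
    return [TABLE[i] for i in range(n)]

assert compute_sequence(42, 10) == lookup_sequence(10)
```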
Ok, I was using the term "calculation" to distinguish the static
thing, as written out on paper, from the dynamic process,
"computation", because I thought it was a distinction you were
making so that the latter was conscious but not the former. Did I
misinterpret you?
I wasn't really making a distinction between 'calculation' and
'computation'. According to the OED, 'computation' is a result got
by calculation, though I see it can also mean the act of
calculation. Wikipedia says: "Calculation is a term for the
computation of numbers, while computation is a wider reaching term
for information processing in general." I don't think this latter
distinction has much traction outside the computer science community.
The point I was making was that I see a calculation as the evaluation of
a function over numbers: in this context, taking some input and
producing an output. The conscious state does not really produce an
output. The calculations (computations) involved take input action
potentials (or whatever) and respond to these via a sequence of
neuron firings and signal transmissions. Is the output the result
of computing a function? I suppose in the most general sense of
'computing' you might say so, but consciousness supervenes on these
neural processes: it is not actually the calculation itself, so
simulating the results of the original computations can still
produce consciousness.
The calculation written out on paper is a static thing, but the
result of that calculation might still be part of a simulation that
produces consciousness. Though, unless Barbour is right and the
actuality of time can be statically encoded in his 'time capsules
(current memories of past instances)', I was thinking in terms of a
sequence of these states (however calculated).
Yes, I agree that the computation should not have to halt (compute a
function) in order to instantiate consciousness; it can just be a
sequence of states.
Almost OK. It can be just a sequence of states ordered by some
universal machine/number.
Written out on paper it can be a sequence of states ordered by
position on the paper. But that seems absurd, unless you think of
it as consciousness in the context of a world
I begin to think that your world plays the role of the universal
number. It is what does the computation. But we need only a universal
number or a universal system to do that. Fixing one particular
universal number (the world) does not work, because below our
substitution level, infinitely many other universal numbers do the
job, and some make your state differentiate.
that is also written out on the paper, such that the writing that is
conscious is conscious of this written out world.
OK, but writing even the second system on paper will only lead to
descriptions of computations, which are not computations. Eventually, as
you say, we need a real "universal number" (a real world) doing the job.
Well, they do that in the "real" arithmetical reality, so universal
numbers are not lacking. In fact they are too numerous a priori,
which leads to the global indeterminacy, and we must solve that
problem. It happens that actual machines like PA and ZF have already
solved the "propositional part of the problem", with promising hints
for the existence of the measure, which defines the physical reality
internally.
But in the MGA (or Olympia) we are asked to consider a device which
is a conscious AI and then we are led to suppose a radically broken
version of it works even though it is reduced to playing back a
record of its processes. I think the playback of the record fails
to produce consciousness
OK. Nice. In the 1988 paper (the first one with the FPI, the white
rabbit problem (the measure problem), the MGA, etc.), when I come to
the fact that physical supervenience entails the supervenience of
consciousness on a movie, I stop the reductio there by saying that
confusing a movie and reality is the biggest error a philosopher can
make. But other people came up with that idea, and that is why I refer
to Maudlin, or use the stroboscope argument to make clear that this is
absurd.
To say, like Quentin, and myself sometimes, that we can stop because
computationalism associates consciousness with a computation, and that
there is no computation in the movie, might not be enough (for those
who like to split hairs), because comp associates consciousness with
a computation, but not necessarily with only a computation.
because it is not counterfactually correct and hence is not actually
realizing the states of the AI -
Indeed. A state is always relative to a universal system/number/
machine. In fact, this is what makes the global FPI a problem.
those states essentially include that some branches were not taken.
Maudlin's invention of Klara is intended to overcome this objection
OK.
and provide a counterfactually correct but physically inert sequence
of states.
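[Counterfactual correctness can be made concrete with a toy sketch. This is illustrative only, not from the thread: `live_machine` and `recorded_run` are hypothetical names, and the point shown is just that a replayed recording matches the live computation only on the one history that was taped.]

```python
# Why a replayed recording is not counterfactually correct: the live
# machine computes an answer for any input, while the recording only
# plays back the single run that was taped, regardless of the input.

def live_machine(x):
    """Responds correctly to every input."""
    return x * x

# A "movie" of one execution: the recorded run on input 3.
recorded_run = {"input": 3, "output": live_machine(3)}

def replay(x):
    """Ignores what actually happens and plays back the tape."""
    return recorded_run["output"]

assert replay(3) == live_machine(3)   # matches on the recorded branch
assert replay(5) != live_machine(5)   # fails on a counterfactual branch
```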
Yes.
But I think Maudlin underestimates the problem of context, and the
additions necessary for counterfactual correctness will extend far
beyond "the brain" and entail a "world".
Not really. It entails only an "environment", and with comp, that has to
be Turing emulable, so the problem comes back for the "you +
environment" description, even when you take the whole physical
world. Unless it is not Turing emulable, but then "you" are no longer
Turing emulable either, and we are abandoning comp.
These additions come for free when we say "Yes" to the doctor
replacing part of our brain because the rest of the world that gave
us context is still there. The doctor doesn't remove it.
OK, but this works only if you put some magic in the world. If the
world is needed and if it is Turing emulable, then the problem crops up
again. Of course, taking the world, or a big environment, makes the
movie graph very big, and the thought experiment looks "unreasonable
in practice", but the conceptual difficulty remains.
Bruno
Brent
--
You received this message because you are subscribed to the Google
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it,
send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
http://iridia.ulb.ac.be/~marchal/