LizR wrote:
On 28 March 2015 at 00:06, Quentin Anciaux <[email protected]> wrote:
1- It is assumed you have a machine/program that is conscious (a
real conscious AI).
2- You have (for example) a conversation with it.
3- During that conversation, you record all inputs fed to the
machine.
4- You replay those inputs to the machine.
To make sure I have this right - you reboot it, or whatever - this is a
machine that starts from the same starting state as the one you talked
to originally. It doesn't remember the first conversation, and hence by
hypothesis goes through the same states as before.
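A toy sketch of steps 1-4, assuming a purely deterministic machine with a made-up transition rule (the names and numbers here are invented only for illustration): replaying the recorded inputs from the same initial state walks the machine through exactly the same sequence of states.

class Machine:
    def __init__(self, initial_state=0):
        self.state = initial_state

    def step(self, inp):
        # stand-in for the machine's deterministic transition function
        self.state = (31 * self.state + inp) % 1000003
        return self.state

def run(machine, inputs):
    return [machine.step(i) for i in inputs]

recorded_inputs = [3, 1, 4, 1, 5, 9]               # step 3: record all inputs
original_trace = run(Machine(), recorded_inputs)   # the "live" conversation
replayed_trace = run(Machine(), recorded_inputs)   # step 4: reboot and replay
assert original_trace == replayed_trace            # same states, by determinism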
5- Assuming the machine was conscious in 3, then when replaying the
same inputs the machine should still be conscious.
6- You remove from the machine all the transistors not in use during
that particular run (given the recorded inputs).
7- You replay those inputs to the ("crippled") machine.
8- Assuming the machine was conscious in 3 and 5, then when replaying
the same inputs the machine should still be conscious, as in 5 (because
what you removed wasn't in use anyway).
OK
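A similar toy sketch of steps 6-8, with the "transistors" modelled as entries in a transition table (again, everything here is invented purely for illustration): deleting every entry the recorded run never consulted leaves the replay unchanged, because the crippled machine is only ever asked about entries it still has.

def build_table():
    # toy transition table: (state, input) -> next_state
    return {(s, i): (3 * s + i) % 7 for s in range(7) for i in range(4)}

def run(table, inputs, state=0):
    trace, used = [], set()
    for inp in inputs:
        used.add((state, inp))          # note which "transistor" fired
        state = table[(state, inp)]
        trace.append(state)
    return trace, used

recorded_inputs = [3, 1, 0, 2, 1, 3]
full_trace, used = run(build_table(), recorded_inputs)

crippled = {k: v for k, v in build_table().items() if k in used}   # step 6
crippled_trace, _ = run(crippled, recorded_inputs)                 # step 7
assert crippled_trace == full_trace                                # step 8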
9- You break one transistor, but you make a device (in the MGA it's
the projection of the record on the graph) that permits (even though
the transistor is broken) mimicking the output at the exact moment it
should have happened if the transistor weren't broken (like the lucky
cosmic ray replacing the firing of a neuron).
OK
10- Assuming the machine was conscious in 3, 5 and 8, then when
replaying the same inputs the machine should still be conscious, as
the broken transistor, while not working, nonetheless gave the correct
output thanks to the lucky ray/device/movie projection.
11- You do 9 for all the transistors, so as to leave only the mimicking devices...
Aha. Yes that makes sense. It's a slippery logical slope ...
12- Assuming the machine was conscious in 3, 5, 8 and 10, then the
machine is still conscious while no computation occurs anymore...
contradicting computationalism.
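A last toy sketch for steps 9-12, under the same kind of invented assumptions: break a "component" at one time step and substitute the recorded output, then do it at every time step. The trace never changes, but in the final run nothing is computed at all; the machine is doing pure playback, which is exactly the tension with computationalism that step 12 points at.

def transition(state, inp):
    return (3 * state + inp) % 7          # toy deterministic transition

inputs = [3, 1, 0, 2, 1, 3]

# the recording made during the original, fully working run:
# the output of every step, in order
recording, state = [], 0
for inp in inputs:
    state = transition(state, inp)
    recording.append(state)

def run_with_broken(broken_steps):
    # At every step listed in broken_steps the component does not compute;
    # the recorded output is supplied at exactly the right moment
    # (the "lucky ray" / movie projection).
    trace, state = [], 0
    for t, inp in enumerate(inputs):
        state = recording[t] if t in broken_steps else transition(state, inp)
        trace.append(state)
    return trace

assert run_with_broken({2}) == recording                       # steps 9-10
assert run_with_broken(set(range(len(inputs)))) == recording   # steps 11-12
# The second call computes nothing: it only plays the recording back.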
Yes, so in the end you are just playing back a recording, because for every
component you have to know exactly what its outputs were; you have to
record everything, not just the inputs. At this point you have shown
that either consciousness can supervene on playing back a recording OR
that consciousness doesn't supervene on the original physical substrate
that was supposed to be performing the computation.
From that, either computationalism is false or physical
supervenience is false.
Hmmm....I'm not sure where I sit on that. I do feel like some sleight of
hand has been pulled - not intentionally, of course. Perhaps the broken
version might still be conscious, which means that ... eek. That's like
saying Klara's conscious despite being inert, isn't it?
I think it's the "thinking about what it all means afterwards" part that
ties my brain in knots. I want to just throw my hands up and say "well
of course physical supervenience doesn't work! How can a bunch of atoms
do that, anyway?" But then they do seem to ...
Computers are just bunches of atoms......
I think it is the all or nothing aspect of computationalism that is a
problem. I have no difficulty in accepting that a simulation run on a
computer can be conscious -- silicon brain rather than wetware. But I
have difficulty in accepting that a computation can be run in the
required way without a physical substrate of some sort (computer or
whatever). A computation is just calculating a function over the reals.
Consciousness involves memory and I do not think that a memory can be
formed in the abstract. Memory is remembered -- laying down memories
increases entropy, and abstract ideas do not have entropy. (Although
thinking about abstract ideas does increase the entropy associated with your
brain!) I don't think any of this works without a physical substrate,
so, in the end, all consciousness supervenes on the physical, no matter
how indirectly.
Bruce