On 3/28/2015 11:54 PM, Bruce Kellett wrote:
> meekerdb wrote:
>> On 3/28/2015 11:02 PM, Bruce Kellett wrote:
>>> meekerdb wrote:
>>>> The calculation written out on paper is a static thing, but the result
>>>> of that calculation might still be part of a simulation that produces
>>>> consciousness. Though, unless Barbour is right and the actuality of
>>>> time can be statically encoded in his 'time capsules' (current memories
>>>> of past instances), I was thinking in terms of a sequence of these
>>>> states (however calculated).
>>>
>>> Yes, I agree that the computation should not have to halt (compute a
>>> function) in order to instantiate consciousness; it can just be a
>>> sequence of states.
>>
>> Written out on paper it can be a sequence of states ordered by position
>> on the paper. But that seems absurd, unless you think of it as
>> consciousness in the context of a world that is also written out on the
>> paper, such that the writing that is conscious is /*conscious of*/ this
>> written-out world.
>
> My present conscious state includes visual, auditory and tactile inputs --
> these are part of the simulation. But they need simulate only the effect
> on my brain states during that moment -- they do not have to simulate the
> entire world that gave rise to these inputs. The recreated conscious state
> is not counterfactually accurate in this respect, but so what? I am
> reproducing a few conscious moments, not a fully functional person.

But isn't it the case that your brain evolved/learned to interpret and be conscious of
these stimuli only because it exists in the context of this world?

>> But in the MGA (or Olympia) we are asked to consider a device which is a
>> conscious AI, and then we are led to suppose that a radically broken
>> version of it works even though it is reduced to playing back a record of
>> its processes. I think the playback of the record fails to produce
>> consciousness because it is not counterfactually correct and hence does
>> not actually realize the states of the AI -- those states essentially
>> include the fact that some branches were not taken. Maudlin's invention
>> of Klara is intended to overcome this objection and provide a
>> counterfactually correct but physically inert sequence of states. But I
>> think Maudlin underestimates the problem of context: the additions
>> necessary for counterfactual correctness will extend far beyond "the
>> brain" and entail a "world". These additions come for free when we say
>> "Yes" to the doctor replacing part of our brain, because the rest of the
>> world that gave us context is still there. The doctor doesn't remove it.
In the "yes doctor" scenario as reported by Russell, it talks only about replacing your
brain with an AI program on a computer. It does not mention connecting this to sense
organs capable of reproducing all the inputs one normally gets from the world. If this
is not clearly specified, I would certainly say 'No' to the doctor. There is little
point or future in being a functioning brain without external inputs. As I recall
sensory deprivation experiments, subjects rapidly subside into a meaningless cycle of
states -- or go mad -- in the absence of sensory stimulation.

The question, as posed by Bruno, is whether you will say yes to the doctor
replacing part of your brain with a digital device that has the connections
to the rest of your brain/body and which implements the same input/output
function for those connections. Would that leave your consciousness
unchanged?
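
To make the replay-versus-function distinction concrete, here is a toy
sketch (Python; every name and the trivial doubling rule are invented for
illustration, not anyone's actual proposal). A device that implements the
input/output function answers correctly even for stimuli that never occur,
whereas an Olympia-style playback device only reproduces the one run that
was recorded:

def functional_device(stimulus):
    # Implements the input/output function itself: it yields the right
    # response for *any* stimulus, including ones that never actually occur.
    return stimulus * 2

# One actual run, captured in advance.
ACTUAL_STIMULI = [1, 2, 3]
RECORDED_TRACE = [functional_device(s) for s in ACTUAL_STIMULI]

def playback_device(step, stimulus):
    # Olympia-style replay: the stimulus is ignored, so the branches not
    # taken on the recorded run are simply absent.
    return RECORDED_TRACE[step]

assert playback_device(0, 1) == functional_device(1)  # same on the actual run
assert functional_device(7) == 14  # a counterfactual input is still handled
# playback_device has no answer for a stimulus that was never recorded.

On the recorded run the two are behaviourally identical; they differ only
in what they would have done, which is exactly the counterfactual
difference at issue.
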
Brent