Brent Meeker writes:
> > The brain-with-wires-attached cannot interact with the environment, because
> > all its sense organs have been removed and the stimulation is just coming
> > from
> > a recording. Instead of the wires + recording we could say that there is a
> > special
> > group of neurons with spontaneous activity that stimulates the rest of the
> > brain
> > just as if it were receiving input from the environment. Such a brain would
> > have
> > no ability to interact with the environment, unless the effort were made to
> > figure out its internal code and then manufacture sense organs for it - but
> > I
> > think that would be stretching the definition of "potential interaction".
> > In any
> > case, I don't see how "potential interaction" could make a difference.
> Yet you had to refer to "stimulate...as if it were receiving input from the
> environment" to create an example. If there were no potential interaction
> there could be no "as if". So istm that the potential interaction can be an
> essential part of the definition. That's not to say that such a definition
> is right - definitions aren't right or wrong - but it's a definition that
> makes a useful distinction that comports with our common sense.
It's very difficult to define "potential interaction". With even a completely
isolated computer we could imagine taking readings at various points in the circuit
with an oscilloscope and/or changing circuit voltages, capacitance, resistance etc.
Is the fact that we *could* do this enough to make the computer conscious? Or would
it only be conscious if we had access to its design specifications, so that we could
in principle communicate with it meaningfully rather than just making random
changes? What if the human race died out but the computer continued to function,
with no hope that anyone might ever talk to it? What if the computer had very
complex (putatively) conscious thoughts, but rather simple input and output, e.g. it
beeps when the count from a connected geiger counter matches the number it happens
to be thinking of at the time: would that be enough to make it conscious, or would
the environmental interaction have to match or reflect (or potentially so) the
complexity of its internal thoughts?
> >If you had
> > two brains sitting in the dark, identical in anatomy and electrical
> > activity except
> > that one has its optic nerves cut, will one brain be conscious and the
> > other not?
> Where did the brains come from? Since they had optic nerves can we suppose
> that they had the potential to see photons and they still have this
> potential given replacement optic nerves? Not necessarily. Suppose one
> came from a cat that was raised in complete darkness. We know
> experimentally that this cat can't see...even when there is light. The lack
> of stimulus results in the brain not forming the necessary structures for
> interpreting signals from the retina. Now suppose it were raised with no
> stimulus whatever, even in utero. I conjecture that it would not "think" at
> all - although there would be "computation", i.e. neurons firing in some
> order. But it would no longer have the potential for interaction, even with
> its own body.
Yes, the cat would be missing essential brain structures, so it would not be
conscious of light even if you somehow gave it eyes and optic nerves. But I think
this makes the point that perception/consciousness does not occur in the sense
organs but in the brain. If you have the right environmental inputs but the wrong
brain, there is no perception, whereas if you have the right brain with the neurons
firing in the right way, but in the absence of the right environmental inputs, the
result is a hallucination indistinguishable from reality.
You received this message because you are subscribed to the Google Groups
"Everything List" group.