On 25 Jul 2011, at 15:23, Craig Weinberg wrote:
On Jul 25, 8:32 am, Stathis Papaioannou <[email protected]> wrote:
The replacement neurons are integrated so that they interact with the
rest of the brain just as normal brain tissue would. An example is the
one you came up with, neurons without their nucleus, which would
function normally at least for a few minutes.
If they can only function for a few minutes, then that function may
not be 'normal' to anything except us as distantly removed observers.
This is like saying that a plane which crashes at 10 pm was not really
flying before, or that a car which would break down at 120 km/h is not
really driving at 100 km/h.
(That error is often made by critics of step 8 of the UDA (which leads
to the "immateriality" of both mind and matter when digital mechanism
is assumed)).
If any of those things happened you would say, "Hey, things look
strange!" But you can't say this, because the normal brain tissue,
including the neurons that enable speech, receive normal input from
the replacement neurons. So either everything looks just the same, or
everything looks different but you can't be aware of any difference.
(Please don't say that they *don't* receive normal input, because that
is the entire point of the thought experiment establishing
functionalism!)
They may receive some normal input, but there may be a lot more input
which we have no way to understand from our perceptual distance which
gets amputated.
That would just mean that the replacement has not been done at the
right level. With mechanism, we cannot know *for sure* what our level
is, but we can reason by assuming that there is a level and study the
consequences. One consequence is that we cannot, indeed, know what our
level is. But *that* we can prove, assuming the existence of the level.
You assume, or talk as if you were assuming, that the level is
infinitely low. Indeed, only in that case can you affirm systematically
(to block the consequences of the thought experiment) that "there
may be a lot more input which we have no way to understand from our
perceptual distance which gets amputated".
To make the level infinitely low is a way to introduce an infinite
complexity, which, if well chosen, can contradict the "natural"
infinities we get from the computationalist assumption. The "well
chosen" can be very complex. You might need to diagonalize against the
whole of computer science.
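(The diagonalization Bruno alludes to is the classic Cantor/Turing move: given any enumeration of total functions f_0, f_1, ..., define a new function that differs from each f_n at input n, so it cannot appear anywhere in the enumeration. A minimal sketch, using a small hypothetical list of functions as a stand-in for the full enumeration:

```python
# Diagonalization sketch: given an enumeration f_0, f_1, ... of total
# functions, define g(n) = f_n(n) + 1. Then g differs from every f_n
# at argument n, so g lies outside the enumeration.
# The list below is a hypothetical stand-in for the (infinite) enumeration.

fs = [
    lambda n: 0,      # f_0: constant zero
    lambda n: n,      # f_1: identity
    lambda n: n * n,  # f_2: squaring
]

def g(n):
    """The diagonal function: disagrees with f_n at argument n."""
    return fs[n](n) + 1

# g disagrees with each listed function on the diagonal:
for n, f in enumerate(fs):
    assert g(n) != f(n)
```

Applied to an effective enumeration of all total computable functions, the same construction shows no such enumeration can be computable, which is why diagonalizing "against the whole of computer science" is such a strong requirement.)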
All that for not bringing a steak to my son-in-law, who survived an
otherwise fatal brain cancer thanks to a computer?
Bruno
--
You received this message because you are subscribed to the Google
Groups "Everything List" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.
http://iridia.ulb.ac.be/~marchal/