On Wednesday, April 3, 2013 5:53:40 PM UTC-4, jessem wrote:
> On Wed, Apr 3, 2013 at 2:42 PM, Craig Weinberg wrote:
>> In a universe of functionalism or comp, I would expect that this would
>> never happen, as my brain should always prioritize the information made
>> available by any eye that is open over that of an eye which is closed.
> I don't think the "function" in functionalism is supposed to refer to
> utility or purpose. Functionalism as I understand it just refers to the
> idea that if you replaced each part of the brain with a "functionally
> identical" part, meaning that its input/output relationship is the same as
> the original part, then this will result in no change in conscious
> experience, regardless of the material details of how the part produces
> this input/output relation (a miniature version of the "Chinese room"
> thought experiment could work, for example).
Right, but in the nervous system, the "input/output relationship" is the
same as utility or purpose. Think of it this way. If I make a cymatic
pattern in some sand spread out on top of a drum head by vibrating it with
a certain frequency of sound, then functionalism says that whatever I do to
make that pattern must equal a sound. We know that isn't true though. I
could make that cymatic pattern simply by making a mold of it and filling
that mold with sand. I could stamp out necklaces with miniature versions of
that pattern in bronze. I could design a device which optically records the
motion of the sand as the pattern forms and then reproduces the same motion
and the same pattern in some other medium, like a TV screen. All of these
methods reproduce the "input/output relationship" which creates the
pattern, yet none of them involve carrying over the sound which I initially
used to make the pattern.
The brain is a little different, because we can change our conscious
experience by changing the pattern of our brain activity, and that activity
can be changed in the same way by different means, so functionalist
assumptions can legitimately be used to understand brain physiology - but
that does not mean that those assumptions automatically tell the whole
story. If they did, then we would not need subjective reports to correlate
with brain activity; we would be able to simply detect subjective qualities
as functions, which of course we cannot do in any way. Just as
there is more than one way to make a pattern in sand, there is more than
one expression of any given experience. On one level it is hundreds of
billions of molecules reconfiguring each other, and on another it is a single
experience which contains within it a billion times that number of
experiences on different levels.
> It's also self-evident that there should be no behavioral change, *if* we
> assume the reductionist idea that the large-scale behavior of any physical
> system is determined by the rules governing the behavior and interactions
> of each of its component parts (you would probably dispute this, but the
> point is just that this seems to be one of the assumptions of
> 'functionalism', and of course almost all modern scientific theories of
> systems composed of multiple parts work with this assumption).
Look at how freeway traffic works. We can statistically analyze the
positions and actions of the cars and, with a few simple rules, build a
model of general traffic flow. Such a model is very effective for
predicting and controlling traffic, but it does not have access to the
meaning of the traffic - which is in fact the narrative agendas of each
individual driver trying to leave one location and get to another. That is
the reason the traffic exists; because drivers are using vehicles to
realize their motives. We could instead model traffic as a torrent of
automotive particles which automatically attract drivers inside of them
through a wave-like field that happens to be synchronized with rush hour
and lunch hour, and our model would not be incorrect in its predictions,
but of course, it would lead us to a completely false conclusion about the
nature of cars.
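To make the "few simple rules" point concrete: a toy model along the lines
of the Nagel-Schreckenberg cellular automaton (my choice of example, not
something from the thread) reproduces realistic phantom traffic jams from
just four update rules, while representing nothing at all about where any
driver is trying to go. A minimal sketch:

```python
import random

def step(road, vmax=5, p_slow=0.3, rng=random):
    """One Nagel-Schreckenberg update on a circular road.

    road[i] is -1 for an empty cell, otherwise the speed of the car
    occupying cell i. Returns the road after one time step.
    """
    n = len(road)
    new_road = [-1] * n
    for i, v in enumerate(road):
        if v < 0:
            continue  # empty cell
        # Rule 1: accelerate toward the speed limit.
        v = min(v + 1, vmax)
        # Rule 2: brake so as not to hit the car ahead.
        gap = 1
        while gap <= v and road[(i + gap) % n] < 0:
            gap += 1
        v = min(v, gap - 1)
        # Rule 3: random slowdown - the model's only nod to driver behavior.
        if v > 0 and rng.random() < p_slow:
            v -= 1
        # Rule 4: move forward v cells.
        new_road[(i + v) % n] = v
    return new_road

# Usage: a 100-cell ring road with 30 initially stopped cars.
random.seed(1)
road = [-1] * 100
for i in range(30):
    road[i * 3] = 0
for _ in range(50):
    road = step(road)
```

After a few dozen steps, stop-and-go waves emerge and travel backward
through the ring, much like real congestion - and yet nothing in the model
knows that cars exist to take people somewhere, which is exactly the gap
between predictive success and understanding the nature of the thing
modeled.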
> For example, if you have a tumor which is altering your consciousness and
> disrupting some other abilities like speech, that is obviously not serving
> any useful function, but "functionalism" wouldn't claim it should, it would
> just say that if you replaced the tumor with an artificial device that
> affected the surrounding neurons in exactly the same way, the affected
> patient wouldn't notice any subjective difference (likewise with more
> useful parts of the brain, of course).
> There may of course be different meanings that philosophers have assigned
> to the term "functionalism", but I think this is one, and I'm pretty sure
> it's part of what "COMP" is taken to mean on this list.
Point taken. I was referring more to the 'ontological implications of
functionalism' rather than functionalism itself. It's important to follow
the implications through all the way, especially since this list is
supposed to be the Everything List.