Russell Standish <[EMAIL PROTECTED]> writes:
> On Tue, Jun 20, 2006 at 09:35:12AM -0700, "Hal Finney" wrote:
> > I think that one of the fundamental principles of your COMP hypothesis
> > is the functionalist notion, that it does not matter what kind of system
> > instantiates a computation. However I think this founders on the familiar
> > paradoxes over what counts as an instantiation. In principle we can
> > come up with a continuous range of devices which span the alternatives
> > from non-instantiation to full instantiation of a given computation.
> > Without some way to distinguish these, there is no meaning to the question
> > of when a computation is instantiated; hence functionalism fails.
>
> I don't follow your argument here, but it sounds interesting. Could you
> expand on this more fully? My guess is that ultimately it will depend
> on an assumption like the ASSA.
I am mostly referring to the philosophical literature on the problem of what counts as an instantiation, as well as responses considered here and elsewhere. One online paper is Chalmers' "Does a Rock Implement Every Finite-State Automaton?", http://consc.net/papers/rock.html. Jacques Mallah (who seems to have disappeared from the net) discussed the issue on this list several years ago.

Now, Chalmers and Mallah each claimed to have a solution that decides when a physical system implements a computation. But I don't think their solutions work; at the least, they admit gray areas. In fact, I think Mallah came up with the same basic idea I am advocating: that there is a degree of instantiation, and that it is based on the Kolmogorov complexity of a program that maps between physical states and the corresponding computational states.

For functionalism to work, though, it seems to me that you really need to be able to give a yes-or-no answer to whether something implements a given computation. Fuzziness will not do, given that changing the system may kill a conscious being! It doesn't make sense to say that someone is "sort of" there, at least not on the conventional functionalist view.

A fertile source of problems for functionalism is the question of whether playbacks of passive recordings of brain states would be conscious. If not (as Chalmers and many others would say, since such playbacks lack the proper counterfactual behavior), then consider a machine with a dial which controls the percentage of time its elements behave according to a passive playback versus according to active computational rules. Now we can turn the knob and have the machine move gradually from unconsciousness to full consciousness, without changing its behavior in any way as we twiddle it. This invokes Chalmers' "fading qualia" paradox and is again fatal for functionalism.
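The degree-of-instantiation idea can be made concrete with a toy sketch. Kolmogorov complexity is uncomputable, so the illustration below (my own construction, not from the thread; every name and rule in it is an assumption) approximates the description length of the physical-to-computational mapping by zlib-compressing an encoding of it: a lawful, regular correspondence compresses to a short description, while a gerrymandered rock-style correspondence does not.

```python
import random
import zlib

def mapping_cost(physical_trace, computational_trace):
    # Encode the physical->computational correspondence as text and
    # approximate its Kolmogorov complexity by its compressed length.
    pairs = "".join(f"{p}->{c};" for p, c in zip(physical_trace, computational_trace))
    return len(zlib.compress(pairs.encode()))

# A regular correspondence (each physical state maps to the matching
# computational state) has a short description...
regular = mapping_cost(range(1000), range(1000))

# ...while an arbitrary, gerrymandered "rock" correspondence must be
# spelled out state by state and compresses poorly.
rng = random.Random(0)
gerrymandered = mapping_cost(range(1000), [rng.randrange(1000) for _ in range(1000)])

assert regular < gerrymandered
```

On this toy measure, the "rock implements everything" mapping carries a much higher cost than the lawful one, which is the intuition behind grading instantiations by mapping complexity rather than answering yes or no.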
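The dial machine can likewise be simulated. In this toy sketch (again my own construction; the four-state transition rule and the use of a seeded random choice per step are arbitrary assumptions), the `dial` parameter sets the probability that each step is replayed from a passive recording rather than computed by the live rule, and the externally visible state sequence is identical at every dial setting.

```python
import random

def run(tape_in, dial, recording, seed=0):
    """Step a toy machine over an input tape. At each step, with
    probability `dial`, replay the recorded state (passive playback);
    otherwise compute the state by the live rule (active computation)."""
    rng = random.Random(seed)
    state = 0
    trace = []
    for i, bit in enumerate(tape_in):
        if rng.random() < dial:
            state = recording[i]       # passive playback of recorded state
        else:
            state = (state + bit) % 4  # active computation (toy rule)
        trace.append(state)
    return trace

tape = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

# Produce the recording by running the machine fully live (dial = 0).
recording = run(tape, 0.0, [None] * len(tape))

# Every dial setting, from fully live to fully replayed, yields
# exactly the same behavior.
assert all(run(tape, d, recording) == recording for d in (0.0, 0.25, 0.5, 1.0))
```

Turning the dial changes nothing observable, yet on the counterfactual view it is supposed to move the machine between consciousness and unconsciousness, which is the paradox.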
Maudlin's machines, which we have also mentioned on this list from time to time, further illustrate the problem of drawing a bright line between implementations and clever non-implementations of computations. In short, I view functionalism as fundamentally broken unless there is a much better solution to the implementation problem than I am aware of. We therefore cannot assume a priori that a brain implementation and a computational implementation of mental states are inherently the same; in fact, I have argued that they could have different properties.

Hal Finney

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/everything-list
-~----------~----~----~----~------~----~------~--~---