Stathis Papaioannou wrote:
> Brent meeker writes:
>>Stathis Papaioannou wrote:
>>>Peter Jones writes:
>>>>Stathis Papaioannou wrote:
>>>>>Like Bruno, I am not claiming that this is definitely the case, just that
>>>>>it is the case if
>>>>>computationalism is true. Several philosophers (eg. Searle) have used the
>>>>>absurdity of the idea as an argument demonstrating that computationalism
>>>>>is false -
>>>>>that there is something non-computational about brains and consciousness.
>>>>>I have not
>>>>>yet heard an argument that rejects this idea and saves computationalism.
>>>>[ rolls up sleeves ]
>>>>The idea is easily refuted if it can be shown that computation doesn't
>>>>depend on interpretation at all. It can also be refuted more
>>>>circuitously by showing that computation is not entirely a matter of
>>>>interpretation. In everythingism, everything is equal. If some
>>>>computations (the ones that don't depend on interpretation) are
>>>>"more equal than others", the way is still open for the Somethingist
>>>>to claim that interpretation-independent computations are really real,
>>>>and the others are not.
>>>>The claim has been made that computation is "not much use" without an
>>>>interpretation.
>>>>Well, if you define a computer as something that is used by a human,
>>>>that is true.
>>>>It is also very problematic for the computationalist claim that the
>>>>human mind is a computer.
>>>>Is the human mind of use to a human? Well, yes, it helps us stay alive
>>>>in various ways.
>>>>But that is more to do with reacting to a real-time environment than
>>>>performing abstract symbolic manipulations or elaborate
>>>>re-interpretations. (Computationalists need to be careful about how
>>>>they define "computer". Under
>>>>some perfectly reasonable definitions -- for instance, defining a
>>>>computer as a human invention -- computationalism is trivially false.)
>>>I don't mean anything controversial (I think) when I refer to the
>>>interpretation of a computation. Take a mercury thermometer: it would
>>>still do its thing if all sentient life in the universe died out, or
>>>even if there had been no sentient life to build it in the first place
>>>and by amazing luck mercury and glass had come together in just the
>>>right configuration. But if there were someone around to observe and
>>>understand it, or if it were attached to a thermostat and heater, the
>>>thermometer would have extra meaning - the same thermometer, doing the
>>>same thermometer stuff. Now, if thermometers were conscious, then part
>>>of their "thermometer stuff" might include "knowing" what the
>>>temperature was - all by themselves, without the benefit of an external
>>>observer.
>>We should ask ourselves how we know the thermometer isn't conscious of
>>the temperature. It seems that the answer has been that its state or
>>activity can be interpreted in many ways other than indicating the
>>temperature; therefore either it must be said to be unconscious of the
>>temperature or we must allow that it implements every conscious thought
>>(or at least all those for which there is a possible mapping). But I see
>>its state and activity as relative to our shared environment, and this
>>greatly constrains what it can be said to "compute", e.g. the
>>temperature, or the expansion coefficient of Hg... With this constraint,
>>then, I think there is no problem in saying the thermometer is conscious
>>at the extremely low level of being aware of the temperature or the
>>expansion coefficient of Hg or whatever else falls within the constraint.
> I would basically agree with that. Consciousness would probably have to
> be a continuum if computationalism is true. Even if computationalism were
> false and only those machines specially blessed by God were conscious,
> there would have to be a continuum, both across different species and
> within the lifespan of an individual from birth to death. The possibility
> that consciousness comes on like a light at some point in your life, or
> at some point in the evolution of a species, seems unlikely to me.
>>>Furthermore, if thermometers were conscious, they
>>>might be dreaming of temperatures, or contemplating the meaning of
>>>temperature - again in the absence of external observers, and this time
>>>in the absence of any interaction with the real world.
>>>This, then, is the difference between a computation and a conscious
>>>computation: if a computation is unconscious, it can only have
>>>meaning/use/interpretation in the eye of a beholder or in its
>>>interaction with the environment.
>>But this is a useless definition of the difference. To apply it we have
>>to know whether some putative conscious computation has meaning to
>>itself, which we can only do by knowing whether it is conscious or not.
>>It makes consciousness ineffable, and it makes the question of whether
>>computationalism is true an insoluble mystery.
> That's what I have in mind.
>>Even worse, it makes it impossible for us to know whether we're talking
>>about the same thing when we use the word "consciousness".
> I know what I mean, and you probably know what I mean, because despite
> our differences our minds are not all that different from each other. Of
> course, I can't be absolutely sure that you are conscious, but it's one
> of those things even philosophers only worry about when at work. On the
> other hand, there is serious debate about whether dogs are conscious, or
> whether fish feel pain.
>>>If a computation is conscious,
>>>it may have meaning/use/interpretation in interacting with its
>>>environment, including other conscious beings, and for obvious reasons
>>>all the conscious computations we encounter will fall into that
>>>category; but a conscious computation can also have meaning all by
>>>itself, to itself.
>>I think this is implicitly circular. Consciousness supplies meaning
>>through interpretation. But meaning is defined only as what consciousness
>>supplies. It is to break this circularity that I invoke the role of the
>>environment. As for language, it is our shared environment that makes it
>>possible to assign meanings to words.
> I don't see a fundamental problem with using this sort of definition. A
> self-extracting archive is an archive which opens itself without need of
> external programs. "Interpretation" and "awareness" are two possible
> functions that programs can have, and they can operate on either
> themselves or aspects of their environment.
>>>You might argue, as Brent Meeker has, that a conscious being would
>>>quickly lose consciousness if environmental interaction were cut off, but I
>>>think that is just
>>>a contingent fact about brains, and in any case, as Bruno Marchal has
>>>pointed out, you
>>>only need a nanosecond of consciousness to prove the point.
>>I don't think any human can experience consciousness for a nanosecond. I
>>take that as part of the lesson of Libet's and Grey Walter's experiments.
>>Consciousness has a time-scale on the order of tenths of a second. But
>>that's only a quibble.
>>The real importance of the environment is not that it keeps our brains
>>from falling into "do loops", but that it makes interpretation or meaning
>>possible.
> Why can't you have meaning during a loop? We might be doomed to repeat
> this conversation forever as part of some Nietzschean eternal return, but
> it will be just as meaningful to us (or not) every time.
>>>>It is of course true that the output of a programme intended to do one
>>>>thing ("system S", say) could be re-interpreted as something else. But
>>>>what does it *mean*?
>>>>If computationalism is true, whoever or whatever is doing the
>>>>interpreting is another
>>>>computational process. So the ultimate result is formed by system S in
>>>>combination with another system. System S is merely acting as a
>>>>subroutine. The intended conclusion is that every physical system
>>>>implements every possible computation.
>>>That's what I'm saying, but I certainly don't think everyone agrees
>>>with me on the list, and
>>>I'm not completely decided as to which of the three is most absurd:
>>>every physical system implements every conscious computation, no
>>>physical system implements any conscious computation (they are all
>>>implemented non-physically in Platonia), or the idea that a computation
>>>can be conscious in the first place.
>>Why not reject the first two and accept that computations, to the degree
>>they implement certain structures, are conscious, i.e. self-interpreting
>>relative to their environment?
>>>>But the evidence -- the re-interpretation scenario -- only supports
>>>>the conclusion that any computational system could become part of a
>>>>larger system that is doing something else. System S cannot be said to
>>>>be simultaneously performing
>>>>every possible computation *itself*. The multiple-computation -- i.e.
>>>>re-interpretation -- scenario is dependent on an interpreter. Having
>>>>made computation dependent
>>>>on interpretation, we cannot then regard the interpreter as redundant,
>>>>so that it
>>>>is all being done by the system itself. (Of course, to fulfil the
>>>>requirement of "every possible interpretation" you need not just
>>>>interpreters but every possible interpreter, but that is another
>>>>problem for another time.)
>>>Only the unconscious thermometers require external thermometer-readers.
>>Require for what?
> They don't actually *require* them; they are what they are regardless of
> the rest of the world. However, something can only have meaning, or
> taste, or beauty in the eye of the
> beholder, and sometimes the beholder is the object as well as the subject.
> Stathis Papaioannou
Aye, there's the rub. What's a beholder? The thermometer doesn't behold
itself beholding - it doesn't behold much of anything - but I might say it
does "behold" the temperature via the expansion of Hg or some such.
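The re-interpretation point being argued over can be made concrete with a
toy sketch. (The state values and relabellings below are invented purely
for illustration; they aren't anyone's actual example from the thread.)

```python
# Toy illustration of the re-interpretation argument: one and the same
# "physical" state trajectory counts as two different computations,
# depending entirely on the mapping an interpreter applies to its states.

trajectory = [0, 1, 2, 3, 0, 1, 2, 3]  # states of some system over time

# Interpretation A: read each state directly as a 2-bit counter value.
as_counter = list(trajectory)

# Interpretation B: a different mapping turns the very same trajectory
# into a run of a flag-toggling machine.
relabel = {0: False, 1: True, 2: False, 3: True}
as_toggler = [relabel[s] for s in trajectory]

print(as_counter)  # the "counter" computation
print(as_toggler)  # the "toggler" computation, from identical states
```

Note that the dictionary `relabel` lives in the interpreter, not in the
system: the second computation is performed by system-plus-interpreter,
not by the trajectory alone, which is exactly the point at issue.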
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to firstname.lastname@example.org
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at