On 26 March 2018 at 15:20, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 3/25/2018 7:14 PM, Stathis Papaioannou wrote:
>
>
>
> On 26 March 2018 at 04:57, Brent Meeker <meeke...@verizon.net> wrote:
>
>>
>>
>> On 3/25/2018 2:15 AM, Bruno Marchal wrote:
>>
>>
>> On 21 Mar 2018, at 22:56, Brent Meeker <meeke...@verizon.net> wrote:
>>
>>
>>
>> On 3/21/2018 2:27 PM, Stathis Papaioannou wrote:
>>
>>
>> On Thu, 22 Mar 2018 at 5:45 am, Brent Meeker <meeke...@verizon.net>
>> wrote:
>>
>>>
>>>
>>> On 3/20/2018 11:29 PM, Stathis Papaioannou wrote:
>>>
>>> On Wed, 21 Mar 2018 at 9:03 am, Brent Meeker <meeke...@verizon.net>
>>> wrote:
>>>
>>>>
>>>>
>>>> On 3/20/2018 1:14 PM, Stathis Papaioannou wrote:
>>>>
>>>>
>>>> On Wed, 21 Mar 2018 at 6:34 am, Brent Meeker <meeke...@verizon.net>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 3/20/2018 3:58 AM, Telmo Menezes wrote:
>>>>>
>>>>> The interesting thing is that you can draw conclusions about consciousness
>>>>> without being able to define it or detect it.
>>>>>
>>>>> I agree.
>>>>>
>>>>>
>>>>> The claim is that IF an entity
>>>>> is conscious THEN its consciousness will be preserved if brain function is
>>>>> preserved despite changing the brain substrate.
>>>>>
>>>>> Ok, this is computationalism. I also bet on computationalism, but I
>>>>> think we must proceed with caution and not forget that we are just
>>>>> assuming this to be true. Your thought experiment is convincing, but it
>>>>> is not a proof. You do expose something that I agree with: that
>>>>> non-computationalism sounds silly.
>>>>>
>>>>> But does it sound so silly if we propose substituting a completely
>>>>> different kind of computer for the brain, e.g. a von Neumann architecture
>>>>> or one that just records everything instead of an episodic associative
>>>>> memory?  The Church-Turing conjecture says it can compute the same
>>>>> functions.  But does it instantiate the same consciousness?  My intuition
>>>>> is that it would be "conscious", but in some different way; for example, by
>>>>> having the kind of memory you would have if you could review a movie of
>>>>> any interval in your past.
>>>>>
>>>>
>>>> I think it would be conscious in the same way if you replaced neural
>>>> tissue with a black box that interacted with the surrounding tissue in the
>>>> same way. It doesn’t matter what is in the black box; it could even work by
>>>> magic.
>>>>
>>>>
>>>> Then why draw the line at "surrounding tissue"?  Why not the external
>>>> environment?
>>>>
>>>
>>> Keep expanding the part that is replaced and you replace the whole brain
>>> and the whole organism.
>>>
>>>> Are you saying you can't imagine being "conscious" but in a different
>>>> way?
>>>>
>>>
>>> I think it is possible, but I don’t think it could happen if my neurones
>>> were replaced by functionally equivalent components. If they are
>>> functionally equivalent, my behaviour would be unchanged,
>>>
>>>
>>> I agree with that.  But you've already supposed that functional
>>> equivalence at the behavior level implies preservation of consciousness.
>>> So what I'm considering is replacements in the brain far above the neuron
>>> level, say at the level of whole functional groups of the brain, e.g. the
>>> visual system, the auditory system, the memory, and so on.  Would functional
>>> equivalence at the body/brain interface then still imply consciousness
>>> equivalence?
>>>
>>
>> I think it would, because I don’t think there are isolated consciousness
>> modules in the brain. A large enough change in visual experience will be
>> noticed by the subject, who will report that things look different. This
>> could only happen if there is a change in the input to the language system
>> from the visual system; but we have assumed that the output from the visual
>> system is the same, and only the consciousness has changed, leading to a
>> contradiction.
>>
>>
>> But what about internal systems which are independent of perception... the
>> very reason Bruno wants to talk about dream states.  And I'm not
>> necessarily asking that behavior be identical... just that the body/brain
>> interface be the same.  The "brain" may be different in how it processes
>> input from the eyeballs and hence verbally report different perceptions.
>> In other words, I'm wondering how much computationalism constrains
>> consciousness.  My intuition is that there could be a lot of difference in
>> consciousness depending on how different perceptual inputs are processed
>> and/or merged and how internal simulations are handled.  To take a crude
>> example, would it matter if the computer-brain were programmed in a
>> functional language like LISP, an object-oriented language like Ruby, or as
>> a neural network?  Of course Church-Turing says they all compute the same
>> set of functions, but they don't do it the same way.
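To make the extensional/intensional distinction here concrete, a toy sketch
(Python for brevity; the function names are mine, purely illustrative):
three procedures with identical input/output behaviour but quite different
internal processes.

    # Three procedures computing the same function (same extension)
    # by different processes (different intension).

    def fact_recursive(n):
        # functional style: a stack of deferred multiplications
        return 1 if n == 0 else n * fact_recursive(n - 1)

    def fact_iterative(n):
        # imperative style: a loop with an accumulator
        acc = 1
        for k in range(2, n + 1):
            acc *= k
        return acc

    # "recording" style: all answers precomputed, mere recall at call time
    FACT_TABLE = {n: fact_iterative(n) for n in range(21)}

    def fact_lookup(n):
        return FACT_TABLE[n]

    assert all(fact_recursive(n) == fact_iterative(n) == fact_lookup(n)
               for n in range(21))

Church-Turing only guarantees the first kind of sameness (the same
function); the open question is whether consciousness depends on the second
(the process).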
>>
>>
>> They can do it in the same way. They will not do it in the same way with
>> a compiler, but they will do it in the same way when you implement an
>> interpreter in another interpreter. The extensional CT (in terms of which
>> functions are calculated) entails the intensional CT (in terms of which
>> computations can be processed). Babbage's machine could emulate a quantum
>> brain. It involves a relative slow-down, but the subject will not notice
>> without external clues. In arithmetic, all possible sorts of computers are
>> implemented infinitely often, with some special redundancy which plays a
>> role in the statistics of computations, the first-person statistics and the
>> origin of the physical appearances.
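Bruno's interpreter point can be sketched in the same toy style (again my
own schematic code, not his construction): an emulator that reproduces
every intermediate state of the emulated machine, in order, at the cost of
several of its own steps per emulated step, which is the slow-down he
mentions.

    # A toy machine: a program is a list of (op, arg); a state is (pc, acc).
    def step(prog, state):
        pc, acc = state
        op, arg = prog[pc]
        if op == "add":
            return (pc + 1, acc + arg)
        if op == "mul":
            return (pc + 1, acc * arg)
        return None  # "halt"

    def run_native(prog):
        trace, state = [], (0, 0)
        while state is not None:
            trace.append(state)
            state = step(prog, state)
        return trace

    def run_emulated(prog):
        # The emulator spends extra steps of its own (fetch, decode,
        # dispatch) per emulated step, but the emulated computation
        # passes through exactly the same states in the same order.
        trace, state, overhead = [], (0, 0), 0
        while state is not None:
            overhead += 3  # fetch + decode + dispatch
            trace.append(state)
            state = step(prog, state)
        return trace, overhead

    prog = [("add", 3), ("mul", 4), ("halt", 0)]
    assert run_native(prog) == run_emulated(prog)[0]

A compiler, by contrast, is free to reorder or collapse the intermediate
states, which is why it need not preserve the computation in this
step-by-step sense.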
>>
>>
>> You retreat into what is possible.  My question is much more directly
>> pragmatic.  If I actually made a silicon-based replacement for your brain
>> that had the same input/output, would your consciousness be different if
>> the replacement processed the information differently... and how could you
>> or we know?
>>
>
> I have made the argument that your consciousness would not be different if
> the replacement processed the information differently, provided that the
> I/O behaviour at the interface with the neural tissue was the same. This
> follows from consideration of what it means to be conscious, which we can
> do even if we can't verify or even define consciousness. If consciousness
> were changed but the outputs to the rest of the brain unchanged, the
> subject would either not notice any change (and what significance could
> consciousness have if an arbitrarily large change in it went unnoticed?)
> or, if he did notice a change, it could not be communicated and there
> would be a decoupling between consciousness and behaviour from that
> point on.
>
>
> Well, he might not notice the change because he can't remember how it was
> before the substitution (maybe it's the memory and recall processes that
> were changed), and so the change *could be* communicated... if it were
> noticed, but it isn't.
>

There would be no point in the experiment if the change in consciousness
could not be noticed anyway. People lose neurones every day as part of
ageing and don't notice anything, because the change is too subtle. The
experiment would have to be something very obvious, such as swapping out
the subject's entire visual cortex and switching between the original, the
new, and nothing, asking the subject: "I am now switching between A, B and
C, tell me if you notice any difference as you look at the pictures on the
screen". The subject would answer, “With B, I am blind. With A and C, I see
exactly the same thing, and all the associated thoughts, emotions and so on
are the same. If one is an electronic implant, I can only conclude that it
works just as well as the original in generating visual experiences.”
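Schematically, the protocol might run as in the sketch below (a toy stub:
the class and the canned reports are hypothetical stand-ins, not a real
experiment or API):

    import random

    class VisualSystem:
        def __init__(self, name):
            self.name = name

    def subject_report(active):
        # stand-in for the subject's verbal report at each switch
        if active is None:
            return "I am blind."
        return "I see exactly the same thing."

    original = VisualSystem("original cortex")
    implant = VisualSystem("electronic implant")
    options = {"A": implant, "B": None, "C": original}

    # switch between A, B and C in a random order, asking for a report
    for label in random.sample(sorted(options), 3):
        print(label, "->", subject_report(options[label]))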


> It seems to me there's something fishy about making behavior and conscious
> thought functionally equivalent, so that neither can change without a
> corresponding change in the other.  My intuition is that there is a lot of
> my thinking that doesn't show up as observable behavior.  No doubt it's
> observable at the micro-level in my brain, but not at the external level.
>

There are thoughts that may not affect external behaviour in most
circumstances, but they could affect it given the appropriate input: for
example, a question about what the subject is thinking. If there are
thoughts that could never, under any circumstances, affect behaviour, then
they cannot be conscious thoughts, and they can’t be what is called “the
subconscious” either, since that bubbles up occasionally and affects
consciousness.


-- 
Stathis Papaioannou
