On Monday, May 25, 2015 at 4:58:53 AM UTC+10, Brent wrote:
>
> On 5/24/2015 4:09 AM, Pierz wrote:
>
> On Sunday, May 24, 2015 at 4:47:12 PM UTC+10, Jason wrote: 
>>
>> On Sun, May 24, 2015 at 12:40 AM, Pierz <pie...@gmail.com> wrote:
>>>
>>> On Sunday, May 24, 2015 at 1:07:15 AM UTC+10, Jason wrote: 
>>>>
>>>> On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal <mar...@ulb.ac.be> wrote:
>>>>>
>>>>> On 19 May 2015, at 15:53, Jason Resch wrote:
>>>>>
>>>>> On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou <stat...@gmail.com> wrote:
>>>>>
>>>>>> On 19 May 2015 at 14:45, Jason Resch <jason...@gmail.com> wrote:
>>>>>> >
>>>>>> > On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou <stat...@gmail.com> wrote:
>>>>>>
>>>>>> >>
>>>>>> >> On 19 May 2015 at 11:02, Jason Resch <jason...@gmail.com> wrote:
>>>>>> >>
>>>>>> >> > I think you're not taking into account the level of the
>>>>>> >> > functional substitution. Of course functionally equivalent
>>>>>> >> > silicon and functionally equivalent neurons can (under
>>>>>> >> > functionalism) both instantiate the same consciousness. But a
>>>>>> >> > calculator computing 2+3 cannot substitute for a human brain
>>>>>> >> > computing 2+3 and produce the same consciousness.
>>>>>> >>
>>>>>> >> In a gradual replacement the substitution must obviously be at a
>>>>>> >> level sufficient to maintain the function of the whole brain.
>>>>>> >> Sticking a calculator in it won't work.
>>>>>> >>
>>>>>> >> > Do you think a "Blockhead" that was functionally equivalent to
>>>>>> >> > you (it could fool all your friends and family in a Turing test
>>>>>> >> > scenario into thinking it was in fact you) would be conscious in
>>>>>> >> > the same way as you?
>>>>>> >>
>>>>>> >> Not necessarily, just as an actor may not be conscious in the same
>>>>>> >> way as me. But I suspect the Blockhead would be conscious; the
>>>>>> >> intuition that a lookup table can't be conscious is like the
>>>>>> >> intuition that an electric circuit can't be conscious.
>>>>>> >>
>>>>>> >
>>>>>> > I don't see an equivalency between those intuitions. A lookup table
>>>>>> > has a bounded and very low degree of computational complexity: all
>>>>>> > answers to all queries are answered in constant time.
>>>>>> >
>>>>>> > While the table itself may have an arbitrarily high information
>>>>>> > content, what in the software of the lookup table program is there
>>>>>> > to appreciate/understand/know that information?
>>>>>>
>>>>>> Understanding emerges from the fact that the lookup table is
>>>>>> immensely large. It could be wrong, but I don't think it is obviously
>>>>>> less plausible than understanding emerging from a Turing machine made
>>>>>> of tin cans.
>>>>>>
>>>>> The lookup table is intelligent, or at least offers the appearance of
>>>>> intelligence, but it takes maximum possible advantage of the
>>>>> space-time trade-off: http://en.wikipedia.org/wiki/Space–time_tradeoff
>>>>>
>>>>> The tin-can Turing machine is unbounded in its potential computational
>>>>> complexity; there's no reason to be a bio- or silico-chauvinist
>>>>> against it. However, by definition, a lookup table has near zero
>>>>> computational complexity and no retained state.
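
[For concreteness, a minimal Python sketch of that space-time trade-off -
my own illustration with invented names, not anything from the thread. One
function computes its answer at query time; the other spends space up
front so that every query is a constant-time retrieval:

    # Compute at query time: negligible space, the work happens on demand.
    def add_computed(a, b):
        return a + b

    # Precomputed lookup table over a bounded domain: large space,
    # but each query is a single constant-time retrieval.
    TABLE = {(a, b): a + b for a in range(100) for b in range(100)}

    def add_looked_up(a, b):
        return TABLE[(a, b)]

A Blockhead is the second function scaled up until the table covers every
possible conversational input.]
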
>>>>>    
>>>>>
>>>>> But it is counterfactually correct over a large range of inputs. Of
>>>>> course, it has to be infinite to be genuinely counterfactually
>>>>> correct.
>>>>>  
>>>>>     
>>>> But the structure of the counterfactuals is identical regardless of
>>>> the inputs and outputs in its lookup table. If you replaced all of its
>>>> outputs with random strings, would that change its consciousness? What
>>>> if there existed a special decoding book, a one-time pad that could
>>>> decode its random answers? Would the existence of this book make it
>>>> more conscious than if this book did not exist? If there is zero
>>>> information content in the outputs returned by the lookup table, it
>>>> might as well return all "X" characters as its response to any query;
>>>> but then would any program that just returns a string of "X"s be
>>>> conscious?
>>>>
>>> I really like this argument, even though I once came up with a (bad)
>>> attempt to refute it. I wish it received more attention, because it
>>> does cast quite a penetrating light on the issue. What you're
>>> suggesting is effectively the cache pattern in computer programming,
>>> where we trade memory resources for computational resources. Instead of
>>> repeating a resource-intensive computation, we store the inputs and
>>> outputs for later regurgitation.
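
[A minimal sketch of that cache pattern in Python, for concreteness - the
expensive() function is a stand-in I've invented, not anything from the
thread:

    import functools

    @functools.lru_cache(maxsize=None)
    def expensive(n):
        # Stand-in for a resource-intensive computation.
        return sum(i * i for i in range(n))

    expensive(10**6)  # first call: actually computed
    expensive(10**6)  # repeat call: regurgitated from the cache

Push the pattern to its limit - precompute every possible input up front -
and query-time "computation" becomes pure retrieval, which is exactly
Jason's lookup table.]
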
>>>  
>>
>> How is this different from a movie recording of brain activity (which
>> most on the list seem to agree is not conscious)? The lookup table is
>> just a really long recording, only we use the input to determine which
>> section of the recording to fast-forward or rewind to.
>>
> It isn't different to a recording. But here's the thing: when we ask if
> the lookup machine is conscious, we are implicitly asking: is it having
> an experience *now*, while I ask the question and see a response? But
> what does such a question actually even mean? If a computation is
> underway in time when the machine responds, then I assume it is having a
> co-temporal experience. But the lookup machine idea forces us to the
> realization that different observers' subjective experiences (the pure
> qualia) can't be mapped to one another in objective time. The experiences
> themselves are pure abstractions and don't occur in time and space. How
> could we ever measure the time at which a quale occurs?
>  
>
> By having the quale of "looking at my watch" before and after the quale in 
> question.
>
Yes, but if qualia are number relations, or abstractions, those relations
don't exist in time, though they can be instantiated in time. Jason's
lookup table thought experiment points out that a highly co-ordinated and
enormous recording could look indistinguishable from consciousness - so we
either have to believe in zombies (because we're committed to the idea
that real-time computations are required in order to instantiate
consciousness), or we have to believe that recordings are conscious. Now
my point is that if qualia are based on number relations, those qualia do
not exist in time and space anyway, but purely in Platonia. Therefore it
is meaningless to ask "when" the consciousness occurred. Perhaps the
consciousness was only "there" when the recording was made, and now it
isn't. Perhaps it is "there" (again) when the recording is replayed -
though given that it would be completely indistinguishable from the
original experience, it would be difficult not to conclude it was one and
the same experience. Given that it is impossible to measure the absence or
presence of qualia, and that the subjective is incommensurable with the
objective, the question falls into the moot category. We should also
recall that subjective time has really nothing to do with the time of the
computing device. Our virtual Einstein's experiences could have been
pieced together in parallel or in any sequence whatsoever - it doesn't
matter at all how or when the computation occurs. Your remark above is, to
me, frankly naive and question-begging.
 

>  Sure, we could measure brain waves and map them to reported experiences
> and so conclude that the brain waves and experiences occurred "at the
> same time", but the experience itself might have occurred at any time and
> just happen to correlate with those neuronal firing patterns.
>  
>
> Isn't this another one of those "suppose the extremely improbable"
> arguments? I'd say the way you relate these things - time, qualia, brain
> activity - is by a theory, the same way you relate other things. One such
> theory is that the quale is part of the brain's physical activity.
> Another is Bruno's: that qualia are a proof relation between numbers.
>

>  Perhaps I experience the moment I think of as "now" exactly 100 years
> after it actually happened - except of course such an assertion is
> meaningless, because the subjective and the objective can't be mapped to
> one another at all. I've said before that a recording *is* conscious to
> the extent that it is a representation of a conscious moment, just like
> the original "event" was (as seen perhaps by those who were there). I
> mean to say, how is a recording different from an observation? It's just
> a delayed or echoed observation. Again, *when* is an experience? Is it
> happening as the neurones fire? Even Dennett - hardly a Platonist - has
> critiqued this naive idea, pointing out how the sequence and timing of
> experience are really a construction. Qualia are not *in* time and space.
>  
>
> Time and space are constructions too.  We use "constructions" to remind
> ourselves that they are theory-laden and might be different under another
> theory.  But that doesn't necessarily mean it is wrong, that it is *only*
> a construction.  Science generally advances by taking its best theories
> seriously and pushing them to find their limit.
>
> Brent
>  
