On Tuesday, May 26, 2015 at 6:49:51 AM UTC+10, Brent wrote:
>
> On 5/25/2015 5:16 AM, Pierz wrote:
>
> On Monday, May 25, 2015 at 4:58:53 AM UTC+10, Brent wrote:
>>
>> On 5/24/2015 4:09 AM, Pierz wrote:
>>
>> On Sunday, May 24, 2015 at 4:47:12 PM UTC+10, Jason wrote:
>>>
>>> On Sun, May 24, 2015 at 12:40 AM, Pierz <[email protected]> wrote:
>>>>
>>>> On Sunday, May 24, 2015 at 1:07:15 AM UTC+10, Jason wrote:
>>>>>
>>>>> On Tue, May 19, 2015 at 12:44 PM, Bruno Marchal <[email protected]> wrote:
>>>>>>
>>>>>> On 19 May 2015, at 15:53, Jason Resch wrote:
>>>>>>
>>>>>> On Tue, May 19, 2015 at 12:06 AM, Stathis Papaioannou <[email protected]> wrote:
>>>>>>> On 19 May 2015 at 14:45, Jason Resch <[email protected]> wrote:
>>>>>>> >
>>>>>>> > On Mon, May 18, 2015 at 9:21 PM, Stathis Papaioannou <[email protected]> wrote:
>>>>>>> >>
>>>>>>> >> On 19 May 2015 at 11:02, Jason Resch <[email protected]> wrote:
>>>>>>> >>
>>>>>>> >> > I think you're not taking into account the level of the functional substitution. Of course functionally equivalent silicon and functionally equivalent neurons can (under functionalism) both instantiate the same consciousness. But a calculator computing 2+3 cannot substitute for a human brain computing 2+3 and produce the same consciousness.
>>>>>>> >>
>>>>>>> >> In a gradual replacement the substitution must obviously be at a level sufficient to maintain the function of the whole brain. Sticking a calculator in it won't work.
>>>>>>> >>
>>>>>>> >> > Do you think a "Blockhead" that was functionally equivalent to you (it could fool all your friends and family in a Turing test scenario into thinking it was in fact you) would be conscious in the same way as you?
>>>>>>> >>
>>>>>>> >> Not necessarily, just as an actor may not be conscious in the same way as me. But I suspect the Blockhead would be conscious; the intuition that a lookup table can't be conscious is like the intuition that an electric circuit can't be conscious.
>>>>>>> >>
>>>>>>> >
>>>>>>> > I don't see an equivalence between those intuitions. A lookup table has a bounded and very low degree of computational complexity: all answers to all queries are returned in constant time.
>>>>>>> >
>>>>>>> > While the table itself may have an arbitrarily high information content, what in the software of the lookup table program is there to appreciate/understand/know that information?
>>>>>>>
>>>>>>> Understanding emerges from the fact that the lookup table is immensely large. It could be wrong, but I don't think it is obviously less plausible than understanding emerging from a Turing machine made of tin cans.
>>>>>>>
>>>>>> The lookup table is intelligent, or at least offers the appearance of intelligence, but it takes the maximum possible advantage of the space-time tradeoff: http://en.wikipedia.org/wiki/Space–time_tradeoff
>>>>>>
>>>>>> The tin-can Turing machine is unbounded in its potential computational complexity; there's no reason to be a bio- or silico-chauvinist against it. However, by definition, a lookup table has near zero computational complexity and no retained state.
>>>>>>
>>>>>> But it is counterfactually correct on a large range of the spectrum.
>>>>>> Of course, it has to be infinite to be genuinely counterfactually correct.
>>>>>>
>>>>> But the structure of the counterfactuals is identical regardless of the inputs and outputs in its lookup table. If you replaced all of its outputs with random strings, would that change its consciousness? What if there existed a special decoding book, a one-time pad that could decode its random answers? Would the existence of this book make it more conscious than if the book did not exist? If there is zero information content in the outputs returned by the lookup table, it might as well return all "X" characters as its response to any query; but then would any program that just returns a string of "X"s be conscious?
>>>>>
>>>> I really like this argument, even though I once came up with a (bad) attempt to refute it. I wish it received more attention, because it casts quite a penetrating light on the issue. What you're suggesting is effectively the cache pattern in computer programming, where we trade memory resources for computational resources. Instead of repeating a resource-intensive computation, we store the inputs and outputs for later regurgitation.
>>>>
>>> How is this different from a movie recording of brain activity (which most on the list seem to agree is not conscious)? The lookup table is just a really long recording, only we use the input to determine which section of the recording to fast-forward or rewind to.
>>>
>> It isn't different to a recording. But here's the thing: when we ask if the lookup machine is conscious, we are implicitly asking: is it having an experience *now*, while I ask the question and see a response? But what does such a question actually even mean? If a computation is underway in time when the machine responds, then I assume it is having a co-temporal experience.
>> But the lookup machine idea forces us to the realization that different observers' subjective experiences (the pure qualia) can't be mapped to one another in objective time. The experiences themselves are pure abstractions and don't occur in time and space. How could we ever measure the time at which a quale occurs?
>>
>>
>> By having the quale of "looking at my watch" before and after the quale in question.
>>
> Yes, but if qualia are number relations, or abstractions, those relations don't exist in time, though they can be instantiated in time. Jason's lookup table thought experiment points out that a highly co-ordinated and enormous recording could look indistinguishable from consciousness - so we either have to believe in zombies (because we're committed to the idea that real-time computations are required in order to instantiate consciousness), or we have to believe that recordings are conscious. Now my point is that if qualia are based on number relations, those qualia do not exist in time and space anyway, but purely in Platonia. Therefore it is meaningless to ask "when" the consciousness occurred. Perhaps the consciousness was only "there" when the recording was made, and now it isn't. Perhaps it is "there" (again) when the recording is replayed - though given that it would be completely indistinguishable from the original experience, it would be difficult not to conclude it was one and the same experience. Given that it is impossible to measure the absence or presence of qualia, and that the subjective is incommensurable with the objective, the question falls into the moot category. We should also recall that subjective time has really nothing to do with the time of the computing device. Our virtual Einstein's experiences could have been pieced together in parallel or in any sequence whatsoever - it doesn't matter at all how or when the computation occurs.
> Your remark above to me is frankly naive and question-begging.
>
>
> My remark points out that "time" is a construct from the order of experience. You note that if the world exists as a computation in Platonia it's meaningless to ask when something happens.
> But I'm saying that's wrong. It's meaningless to ask when something is in Platonia. But it's meaningful to ask when things happen relative to one another as computed. Not that the order of computation is the same as the time order, but the computation must include the same relations as my looking at my watch and noting what the watch says. There has to be something in those experiences that produces the perceived time order.
>
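As an aside, the cache pattern and space-time tradeoff referred to above can be sketched in a few lines of code. This is only an illustrative sketch: the function and the bounded domain are hypothetical stand-ins, and, as Bruno notes, a genuinely counterfactually correct table would have to cover every possible input, i.e. be infinite.

```python
def compute(n: int) -> int:
    """Do the work on every query: spend time, save space."""
    # An arbitrary stand-in computation (sum of squares up to n).
    return sum(i * i for i in range(n + 1))

# Precompute every answer over a bounded, hypothetical domain:
# spend space, save time. This is the lookup-table "Blockhead".
DOMAIN = range(1000)
LOOKUP_TABLE = {n: compute(n) for n in DOMAIN}

def lookup(n: int) -> int:
    """Answer in constant time by regurgitating the stored result."""
    return LOOKUP_TABLE[n]

# From the outside, the two are behaviourally indistinguishable
# on the covered domain - which is exactly the point at issue.
assert all(lookup(n) == compute(n) for n in DOMAIN)
```

Whether the constant-time regurgitator and the on-demand computer differ in anything beyond their resource profile is, of course, the question the thread is debating.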
Yes - but go back to the example of a recorded Einstein: not just the recorded historical Einstein, but the recording of all possible Einsteins responding to all possible inputs. Absurd, yes, but the point is about pre-recorded responses, not the feasibility of the scenario. If I expose the recorded Einstein machine to some stimulus, I will observe a response that appears entirely conscious and thoughtful, and indeed identical to the one I would have seen if he'd "computed" his response in real time. Say at 1pm I start my Turing test interview with him, and at 1:05 I poke him painfully with a stick. I then ask him, "Say Albert, that hurt, right?" Yes, he admits it. "So at what time did you experience the pain?" "At 1:05, when you poked me with the damn stick!" There is absolutely no way for me to convince him that he "actually" experienced this event at some time in the past, when his original response to this prod was recorded, and that therefore he is not having any experience right now at all. A conscious moment (assuming the Platonic, Bruno-flavoured version of computationalism) is an abstract computation that might be represented at more than one point in time and space, but there is only one underlying abstraction. So your example of having a quale of "my watch says 1:05" begs the question. I might tell you that your experience is actually a recording and that you are a zombie. You, like Einstein, would have no idea whether you're "really" experiencing that quale at 1:05 or not. It can never be determined even in principle, so I think one has to conclude it is not a meaningful question.

> Bruno's theory is that consciousness and the physical world are all just relations between numbers, and things like brains, computers, recordings, and lookup tables are just ways of manifesting consciousness, as 2 and II are ways of manifesting the number two.
> However, I think I'm agreeing with you when I say that this "manifesting" is just the flip side of Chalmers' hard problem. Chalmers sees that it's hard to get consciousness out of a theory of physics. Bruno thinks he has consciousness explained in terms of modal logic (though I'm not so sure), so his problem is to explain its "manifestation" as always physical.
>
> Brent
>
>
>> Sure, we could measure brain waves and map them to reported experiences, and so conclude that the brain waves and experiences occurred "at the same time", but the experience itself might have occurred at any time and just happen to correlate with those neuronal firing patterns.
>>
>>
>> Isn't this another one of those "suppose the extremely improbable"? I'd say the way you relate these things - time, quale, brain activity - is by a theory, the same way you relate other things. One such theory is that the quale is part of the brain's physical activity. Another is Bruno's: that qualia are a proof relation between numbers.
>>
>
>> Perhaps I experience the moment I think of as "now" exactly 100 years after it actually happened - except of course such an assertion is meaningless, because the subjective and the objective can't be mapped to one another at all. I've said before that a recording *is* conscious to the extent that it is a representation of a conscious moment, just like the original "event" was (as seen perhaps by those who were there). I mean to say, how is a recording different from an observation? It's just a delayed or echoed observation. Again, *when* is an experience? Is it happening as the neurones fire? Even Dennett - hardly a Platonist - has critiqued this naive idea, pointing out how the sequence and timing of experience are really a construction. Qualia are not *in* time and space.
>>
>>
>> Time and space are constructions too.
>> We use "constructions" to remind ourselves that they are theory-laden and might be different under another theory. But that doesn't necessarily mean a construction is wrong, that it is *only* a construction. Science generally advances by taking its best theories seriously and pushing them to find their limits.
>>
>> Brent
>>
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.

