On Wednesday, February 12, 2014 9:30:25 PM UTC-5, stathisp wrote:
> On 12 February 2014 23:47, Craig Weinberg <[email protected]<javascript:>> wrote:
>
> >> > I don't think that my experience can be replaced with a copy though.
> >>
> >> So how would you know you were a copy?
> >
> > It has nothing to do with whether or not I would know, it's because in my
> > understanding, copying is not primitively real, but rather is a consequence
> > of low level insensitivity. As awareness approaches the limits of its
> > sensitivity, everything seems more and more the same. From an absolute
> > perspective, awareness cannot be substituted, because substitution is the
> > antithesis of awareness.
>
> That's your theory of why you don't think your experience could be
> replaced with a copy, but you haven't explained what you think would
> happen.
It depends on what method was being used to try to copy my experience. The
common theme would be that the copy would fall short aesthetically and
functionally from the outside view, and that it would have no inside view.

> >> Here you are today, incredulous
> >> about the story of your destruction last night, but we produce
> >> witnesses and videotapes and whatever other proof you need. What are
> >> you going to say to that?
> >
> > Your question is "If you were wrong about awareness being non-transferable,
> > would you still think you were right?". I'm not even sure what that fallacy
> > is called... a loaded non-question?
>
> No, it's a simple question. You could answer something like, "If I
> were replaced by a copy last night then my copy would tell you today
> that he is not Craig Weinberg".

I don't have a problem with the logic that once you accept the false
premise of copyable experience, the copy would be unable to detect that
they were a copy (although even that makes unscientific assumptions about
the limits of sense). The problem is that being replaced by a copy is like
a circle and a square becoming the same thing.

> >> >> If it were possible to have a change in mental state without a change
> >> >> in brain state that would be evidence that we don't think with our
> >> >> brain.
> >> >
> >> > Some claim that NDEs are such changes, and that their experiences have
> >> > occurred during periods without brain activity. Certainly there is
> >> > evidence that correlates decreased brain activity with increased
> >> > perception with psilocybin use, which would suggest at the very least
> >> > that a one-to-one correspondence of mental to neurological activity is
> >> > an oversimplification.
> >>
> >> Obviously, since maximal brain activity occurs during an epileptic
> >> fit, during which there may be no consciousness.
> >> > I would not deny that we think with our brain, in the sense that the
> >> > human experience of thought corresponds with the appearance of human
> >> > brain activity, but that doesn't mean that our consciousness and
> >> > experience of living is part of our brain or can be located through
> >> > our brain.
> >>
> >> No, I would not use those terms. But I don't believe that an
> >> experience can occur in the absence of all brain activity, for example
> >> if the brain is frozen in liquid nitrogen.
> >
> > I don't believe that either, but that doesn't mean that thought and
> > feeling can be frozen.
>
> They wouldn't be frozen, they would just stop, at least temporarily if
> there were no permanent damage to the brain.

They would stop locally to the person, but whatever holds the brain
together undamaged on the microphysical level does so because it
supervenes on microphenomenal aesthetic experiences to do so. If the
person's life did not end, then their super-personal and sub-personal
levels of experience did not stop. They include the sense of the material
and circumstantial interactions on every level. It's fully integrated.

> >> The software differences are still encoded as
> >> physical differences in the computer, for example different electrical
> >> charges at different physical locations on a memory chip. Similarly,
> >> language is encoded differently in the fine structure of the synaptic
> >> connections even if the brains belong to identical twins raised in
> >> different countries.
> >
> > The physical differences are only encoded as software if there is a human
> > user who is interpreting it as meaningful. Without the user who cares about
> > the difference, and for whom the software is designed to interface with,
> > there are only unencoded physical differences in the computer. The same
> > goes for the brain.
> > Without us, the brain is just a complex piece of coral,
> > storing and repeating meaningless configurations of electrical,
> > molecular, and cellular interactions that have nothing to do with human
> > consciousness.
>
> If the "meaningless configurations of electrical, molecular and
> cellular interactions" occur then consciousness also occurs, and they
> aren't meaningless any more. That is, we know that these physical
> processes are *sufficient* for consciousness, since we know that (a)
> we are conscious, and (b) as far as we know there is no additional
> ingredient other than these physical processes.

<http://multisenserealism.files.wordpress.com/2013/08/telicdynamic.jpg>

That's circular. We do not know that the physical processes are
sufficient, because we can describe them exhaustively without having to
describe consciousness at all. Consciousness, therefore, to paraphrase
Chalmers, must be a further fact about the world. You are only looking at
it from the retrospective view of consciousness, the modus tollens view
where consciousness is assumed to be attached to physics instead of the
other way around. When we use the modus ponens view instead, we recognize
that just because our personal phenomenal access is limited (just as our
body's physical access is limited), it does not follow that phenomenology
itself is limited; phenomenology could, in fact, contain physics as
phenomenal local appearances instead. We are the additional ingredient. It
is the physical processes which rely on deeper phenomenal self-evident
affective texts to generate local participatory effects.
As organisms, and vertebrates, and Homo sapiens, we are nested again and
again, so that the alphabets of our affective texts, our personal-level
phenomenology, build on both the sub-personal (microphenomenology) and the
super-personal (metaphenomenology), so that our embeddedness within is
mirrored in an anomalous and unexpected way by the sub-impersonal
(microphysical) and the super-impersonal (metaphysical-theoretical).

> This does not mean
> that these physical processes are *necessary* for consciousness, and I
> believe that consciousness can occur in different substrates. These
> processes are meaningful to external observers and they are also
> meaningful to the internal observer, the conscious self, to whom they
> give rise.

> >> >> There are drugs
> >> >> which have the same effect on species as far apart as humans and
> >> >> bacteria.
> >> >
> >> > Which is why I say that it should be the same case for language if it
> >> > was a product of brain change. There should be words which mean the
> >> > same thing on species as far apart as humans and bacteria, or at least
> >> > as far apart as humans on the other side of the continent.
> >>
> >> Not at all.
> >
> > Because?
>
> Because it's a non sequitur.

Why would it be a non sequitur? We embed firmware in hardware all the
time. We could have an OS or whatever software we want hard-coded. Why not
language also?

Craig

> --
> Stathis Papaioannou

--
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.

