Bruno Marchal wrote:
>
> On 08 Oct 2011, at 21:00, benjayk wrote:
>
>>>> I'm not saying that arithmetic isn't an internally consistent logic
>>>> with unexpected depths and qualities, I'm just saying it can't turn
>>>> blue or taste like broccoli.
>>>
>>> Assuming non-comp.
>>
>> There is no assumption needed for that. It is a category error to say
>> that arithmetic turns into a taste. It is also a category error to say
>> that arithmetic has an internal view.
>
> If by arithmetic you mean some theory/machine like PA, you *are* using
> non-comp.

The point is that we don't need any assumptions for that. It is just an
observation. There is only the internal view viewing into itself, and it
belongs to no one. It is simply not possible to find an owner, because
only objects can be owned. It is a category error to say that
subjectivity (consciousness) can be owned, just as, for example, numbers
cannot be owned.
Bruno Marchal wrote:
>
> If by arithmetic you mean arithmetical truth, then I can see some sense
> in which it is a category error.

I think what you call arithmetical truth has nothing to do with
arithmetical truth in particular, and thus doesn't deserve its name. You
can use arithmetic to point towards truth, but you can use anything for
that. So it doesn't really make sense to call it arithmetical truth,
except if you mean only the part that is provably true within
arithmetic. As soon as you use Gödel, you go beyond arithmetic, making
the label "arithmetical truth" close to meaningless.

Bruno Marchal wrote:
>
>> It makes as much sense to say that a concept has an internal view.
>> Internal view just applies to the only thing that can have/is a view,
>> namely consciousness.
>
> It applies to a person.

No. There is no person to be found that has consciousness. That is just
a belief that is not validated by experience. The experience of a person
having consciousness is just the experience of consciousness trying to
make itself an object that belongs to someone (because consciousness
first learns to be conscious in terms of objects, as this seemingly
requires less introspective ability). Actually, consciousness just is
(aware of itself), and objects appear in it, including the object "the
person as relative subject". Treating the relative subject, the person,
as having the absolute subject (consciousness) is the illusion of ego,
which creates samsara, suffering. The absolute subject can't suffer, as
it has nothing to suffer from, nor any notion of difference, which is
required for suffering (suffering vs. suffering ceasing).

Bruno Marchal wrote:
>
> It might be a category error to say that consciousness has
> consciousness. Consciousness is not a person, even cosmic
> consciousness.
Right, consciousness doesn't really "have" consciousness; this is just a
manner of speaking that I borrowed from "a person having consciousness".
I think the former is more accurate than the latter. Actually,
consciousness just is (and through that it knows itself).

Bruno Marchal wrote:
>
>> This is not a belief, this is just the obvious reality right now.
>
> Obvious for you.

Obvious for anyone (as there is only one that can be conscious of
obviousness, namely consciousness). Right now the only absolute thing
you find in your experience is consciousness, without any owner. Only
the intellect makes it possible for anything to "have" consciousness. In
actuality there is no such thing to be found. It can be non-obvious to a
person, but not to consciousness. Consciousness can't even conceive of
an owner of itself; actually, it can't directly conceive of anything.
Conceiving of something appears in it (and as it).

Bruno Marchal wrote:
>
> But is it obvious that PA is conscious? I don't think so. Nevertheless,
> in case it is conscious, it is obvious from her point of view. It is
> that obviousness we are looking for a theory of.

PA is just an object within consciousness. It can't have a point of
view. Nothing has a point of view in the sense you mean it. Points of
view are just relative manifestations inside/of consciousness. PA could
have a point of view in a relative sense, if you choose to identify with
PA and then defend its position. But one could as well say that a
triangle has a point of view, if I identify with it and defend its
"position" (imagining it has any).

Bruno Marchal wrote:
>
>> Can you find any number(s) flying around that has any claim to an
>> internal view right now?
>
> Yes. Although the number per se, like programs and brains, will refer
> only to the relations that the 1-person associated with that number can
> have.
Or, to put it another way, the 1-person will not feel like a number at
all, and thus will not be a number, for all intents and purposes,
contradicting the very premise (maybe not logically, but it doesn't
really make sense to bet on being a machine if the conclusion says that
for all intents and purposes you are not a machine at all). Anyway, I
doubt that you can find any number having a claim to an internal view
other than in your imagination.

Bruno Marchal wrote:
>
>> The only thing that you can find is consciousness being conscious of
>> itself (even a person that consciousness belongs to is absent; the
>> person is just an object in consciousness).
>
> Here you present a theory as if it were a fact.

This is not a theory. It is not even a fact; it is just observation.
There is consciousness, that is it. There is no person to be found here,
except as certain forms in consciousness (feeling separate, thinking of
"I", feeling to be in control, thinking of past and future, etc.).

Bruno Marchal wrote:
>
> If that was obvious, we would not even discuss it.

Even though it is obvious, it can be overlooked. Obviousness is
relative.

Bruno Marchal wrote:
>
> Consciousness, despite being an obvious fact for a conscious person, is
> a concept. As you say, a concept does not think.

Consciousness is as much of a concept as everything we can talk about,
so this doesn't say much. You can form any sentence with "... is a
concept" and it will be true. Of course I am talking not of
consciousness as a concept, but of consciousness itself, which is just
the obviousness of experiencing. Indeed, consciousness does not think.
It doesn't do anything, really. Thinking is witnessed within
consciousness.

Bruno Marchal wrote:
>
>> You abstract so much that you miss the obvious.
>
> In interdisciplinary research it is better to avoid the term "obvious".

Why? If nothing is obvious, we really have no point of reference at all.
At least it is obvious that anything at all is obvious.
We can agree that it is obvious that what is obvious is obvious. That is
what I am talking about. Why shouldn't we talk about that?

Bruno Marchal wrote:
>
> I do agree that consciousness is obvious from the first person point of
> view of a conscious person, but do you agree that a silicon machine can
> emulate a conscious person, indeed yourself (little ego)?

I think a person has no first person point of view that could perceive
consciousness. Its point of view consists of relative perceptions and
emotions, etc., but it is within consciousness and thus can't be aware
of consciousness as an object. Yes, a silicon machine can (in principle
at least) emulate this person; I have little doubt about that. Yet this
emulation will not be accurate, as this person itself cannot be divorced
from its transcendent source. That is, it will miss the part that is
transcendent of emulability (yet still within the realm of what one
could call "matter"). I don't know what will come out of this emulation,
if it is allowed to express itself. Funnily enough, I dreamt I was being
emulated, and my immediate response was "I have to get out of here", and
so my soul left the (supposed) emulation. Maybe this is how it will be:
the emulation will be completely dysfunctional, because consciousness
immediately realizes it is not a suitable vessel. Maybe it will act like
a human, but without emotional capability. Maybe an emulation will never
be possible for reasons of self-consistency (if the emulation were
possible, it might infer a world in which there isn't an emulation,
making the emulation not an emulation at all, but just an unrealizable
theoretical possibility). It may be possible that the emulation works,
if consciousness creates the necessary transcendent interpretative
intelligence around the emulation device, but I see that as unlikely. It
doesn't sound plausible to me that this comes out of nowhere.
It may be possible if the brain is partly replaced by digital devices
and the rest of the brain accommodates by learning to interpret the
output and give the right input to the device. But most probably there
is going to be a point where this stops working, for example because
there is no space for the further neurons (or no way of further
enhancing their efficiency) that would be needed for interpretation.

Bruno Marchal wrote:
>
> I don't know the answer to that question, but I can show that if that
> is the case (that you can survive without any conscious change with
> such a silicon prosthesis), then we have to come back to the
> Platonician theologies, and naturalism and weak materialism, despite
> being a fertile simplifying assumption (already done by nature), is
> wrong.

I don't buy your argument, even though I agree with part of the
conclusion. (Better read the rest before responding to this; it may be
unnecessary.)

[Why don't I buy your argument? It is a thought experiment that can't be
carried out in practice, and the implications of thought experiments
don't necessarily apply in the real world, so none of the conclusions
are necessarily valid. For example, a substitution level is a
theoretical construct. In reality all substitution levels blur into each
other via quantum interference. Also, there is no such thing as a
perfect digital machine, again due to quantum mechanics. It might be the
case that some digital machines work, and some don't.]

Actually, if you are strict in the interpretation of COMP, as you want
it (so what I said above doesn't apply, because you assume the quantum
effects don't matter), your whole reasoning is tautological. The "yes"
you speak of is really a yes towards being an immaterial machine,
because you assume that only the digital functioning of the actual
device matters (and digital functioning is not something that can be
defined in terms of matter).
And if you (and everybody else) are *only* an immaterial machine, and
thus have no world to be in, physical reality necessarily has to come
from that and can't be primary. How could it be primary, if you assume
that you are an *immaterial* machine? You just say "yes" if you buy your
reasoning, because if the reasoning is wrong you can't be an immaterial
machine, contradicting your "yes". So in this case, you really just
prove that if you say "yes", you say "yes", which, well, is sort of
obvious in the first place.

The problem is that no materialist is going to say yes in the precise
way you want it. They will have to argue that the particular
instantiation of the digital machine matters, making them say "NO", as
they don't agree with a digital substitution in the way you mean it. For
them a digital substitution means a particular digital machine, which is
actually not *purely* digital, making them say "NO".

benjayk
--
View this message in context: http://old.nabble.com/COMP-is-empty%28-%29-tp32569717p32619924.html
Sent from the Everything List mailing list archive at Nabble.com.

