Bruno - once again I find myself awaiting your response... let me know if you are uninterested in continuing this line of discussion. Otherwise, I look forward to what you have to say.
Thanks,
Terren

terren wrote:
>
> Hi Bruno, thanks for your comments... see below.
>
> On Tue, Jun 21, 2011 at 11:17 AM, Bruno Marchal <[email protected]> wrote:
>
>> Comp requires only that you can imagine surviving with an artificial digital brain. Then a reasoning shows that your consciousness is "more attached" to all the possible 'implementations' of that digital brain in the arithmetical truth (or just the sigma_1 tiny part; from inside this changes nothing). Then, if you allow thought experiments with amnesia, you can understand that a non-trivial form of consciousness can be attached to the universal machine or relatively universal number.
>
> Isn't it reasonable that only certain kinds of 'programs' have a 1st person consciousness? That it depends on the details of how the 'program' is constructed? I mean, the UD executes an infinity of nonsensical algorithms that might correspond metaphorically to "rocks" and other inanimate phenomena. Again, the idea would be that it is a particular organization (or class of organizations), realized by a particular universal number (or class of universal numbers), that gives rise to the 1st person experience. If this is the case, I'm not sure you need the (virgin) universal machine to be conscious.
>
>> That is why the machine should not be just a virgin universal machine, but a Löbian machine. Both are virtually in all possible environments/computational histories. Both are conscious (I currently think), but only the Löbian one has the cognitive ability to introspect and to give sense to other machines/environments. So, as examples, the Robinson arithmetic theory (basically logic + the laws of addition and multiplication) is a Turing universal machine, and thus is conscious, but not self-conscious. The Peano arithmetic theory, which is the same as Robinson + the axioms of induction (which are very powerful), is self-conscious. But, without further programs/instructions, their first person indeterminacy bears on all states of consciousness. Our own consciousness is their consciousness, somehow. Reasonably, self-consciousness grows a lot and gets much more intricate when meeting other selves.
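
[For reference, the gap Bruno points to between the two theories can be written out. The block below is a standard LaTeX presentation of Robinson arithmetic Q ("logic + the laws of addition and multiplication") and of the induction schema whose addition gives Peano arithmetic PA; it is an illustrative sketch, not text from the thread.]

% Robinson arithmetic Q: first-order logic with equality, the constant 0,
% the successor function S, and the operations + and *, plus these axioms
% (needs amsmath for align*):
\begin{align*}
  &\text{(Q1)}\quad S(x) \neq 0\\
  &\text{(Q2)}\quad S(x) = S(y) \rightarrow x = y\\
  &\text{(Q3)}\quad x \neq 0 \rightarrow \exists y\,(x = S(y))\\
  &\text{(Q4)}\quad x + 0 = x\\
  &\text{(Q5)}\quad x + S(y) = S(x + y)\\
  &\text{(Q6)}\quad x \cdot 0 = 0\\
  &\text{(Q7)}\quad x \cdot S(y) = (x \cdot y) + x
\end{align*}
% PA = Q + the induction schema, one instance for every formula \varphi:
\[
  \bigl(\varphi(0) \land \forall x\,(\varphi(x) \rightarrow \varphi(S(x)))\bigr)
  \rightarrow \forall x\,\varphi(x)
\]

[Q is Sigma_1-complete, i.e. it proves every true Sigma_1 sentence, which is the sense in which it already embodies Turing universality; the induction schema is what underwrites the introspective, Löbian abilities Bruno attributes to PA.]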
> To me, the abstraction implied by "without further programs/instructions" renders the notion of self-consciousness obsolete. What I can accept is that the Löbian machine represents the minimum logical framework to *support* self-consciousness as "embodied" by the relations of *particular* universal numbers... otherwise we dilute the meaning of the term "self-conscious", which at a bare minimum requires some kind of distinction between an embodied self and the 'other' in which it is situated. What would that 'other' be? How would it interact with it?
>
>> A rock is not a person. In fact a rock, or any piece of matter, is a pattern *we* make from an infinite sum of computational histories. It exists only as a stable appearance. It might eventually "contain" universal dovetailing, and thus, trivially, all consciousness of all persons. But the rock is none of those persons, so it makes no sense to say that a rock is conscious. The same for the whole physical universe: it is a projection that *we*, or all Löbian machines, are making. Thus, comp is quite the opposite of panpsychism. Only persons, incarnated by relations among natural numbers (or combinators, Java programs, etc.), can be conscious or self-conscious.
>
> But couldn't you make the same argument to say that the 'virgin' universal machine is not conscious, because it is none of those persons in particular?
>
>>>> In your case, we are left wondering how the consciousness of the virgin universal machine "interfaces" with specific universal numbers, and what would explain the differences in consciousness among them.
>>
>> The difference will come from their different experiences relative to the different computational histories which support them. This will entail different memories, personalities, characters, etc.
>
> Sure, but I was talking less about the content of individual consciousnesses, and more about the quality of such... e.g. what it's like to be a bat. How would you distinguish between a creature that (most of us believe) is conscious, like a cat, and a creature most of us believe is not, like a bacterium? It seems to me that if you have an answer to that question, you have the makings of a theory of consciousness that does not depend on the attribution of some "source consciousness" to the virgin universal machine.
>
>>>> That's why I favor the idea that consciousness arises from certain kinds of cybernetic (autopoietic) organization (which is consistent with comp).
>>
>> Sure. Given that everything is defined through self-reference, comp should have a friendly relationship with autopoiesis. Self-reference and self-organization are crucial for the development of consciousness and self-consciousness. I talked to Varela and he was aware of and interested in the work of Judson Webb on mechanism, and very open to comp and comp's consequences.
>
> Cool!
>
>>>> In fact I think it is still consistent with much of what you're saying... but it is your assertion that comp denies strong AI that implies you would find fault with that idea.
>>
>> The only fault is related to the idea that we can build an AI, *AND* give some proof that it is an AI. The same for an artificial brain. You need to do some act of faith. Most plausibly, we and nature do such acts of faith instinctively or automatically, for example in believing in other people. The real question is not "can a machine think?"; the real question is "are you OK if your son or daughter decides to marry a machine?".
>
> haha, well said... so far as that goes. But the real issue here is your original assertion - the one I responded to initially - where you said "Actually, comp prevents 'artificial intelligence'".
>
> But it sounds like what you really meant to say is "Actually, comp prevents us from proving AI", which is a very different statement.
>
>>>> I think I understand your point here with regard to consciousness - given that you're saying it's a property of the platonic 'virgin' universal machine. But if you assert that about intelligence, aren't you saying that intelligence isn't computable (i.e. comp denies strong AI)?
>>
>> Comp implies strong AI (but not vice versa: "machines can think" does not entail that only machines can think).
>> Comp => STRONG AI: if I am a machine, then some machine can think (assuming that I can think).
>> But comp denies that "we can prove that a machine can think". Of course we can prove that some machine has this or that competence. But for intelligence/consciousness, this is not possible. (Unless we are not machines.
>> Some non-machine can prove that some machines are intelligent, but this is purely academic until we find something which is both a person and a non-machine.)
>
> With you here...
>
>> I use "intelligence" in the broad sense (it is close to being conscious). So it is related to the first person indeterminacy, which is infinite. You need this to stabilize consciousness, and to attach it to a notion of normal computational history. You don't need this for one instant of intelligence, but you need it for two instants, so to speak.
>
> but lost me here.
>
>>>> That creativity is sourced in subjective indeterminacy?
>>
>> I don't think so. The universal machine is already creative, but its creativity needs some histories to bring stable results. Note that the machine can lose its creativity in some histories, just as bad education can discourage students. But at the start, both consciousness and creativity are "maximal" in some way. The more we are aware of our universality (as Löbian machines/numbers already are), the more we can use our initial creativity (if society and contingencies allow it). Creativity might be encouraged, and some heuristics can be taught (as with de Bono), but creativity per se is at the heart of universality. I think that the Mandelbrot set is creative, and that Emil Post's "creative sets" are too. That is why he called those sets creative, and it has been proved that creativity in the sense of Post is just a set-theoretical characterization of (Turing) universality, or sigma_1 completeness.
>
> To the extent that I buy into your mathematical formulations of such heavy concepts as consciousness, intelligence, and creativity, that makes sense to me. But I am left wondering if your logic-based definitions are the best way to make sense of those concepts, assuming comp of course. But I don't want to give you the wrong impression here either, because I am deeply impressed by your thoughts on this forum... thanks for taking the time to articulate them and to respond to folks like myself.
>
> Terren
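
[An aside on the universal dovetailer (UD) that Terren invokes above: the dovetailing trick itself is easy to show in code. The Python below is only an illustrative sketch under simplifying assumptions (toy_program is a made-up stand-in for "the n-th program", not a real enumeration of the programs of a universal machine). What it shows is the interleaving: keep starting new programs while giving every already-started program further steps, so that no non-halting program blocks the rest and every program eventually receives arbitrarily many steps.]

# Minimal dovetailer sketch (an illustration, not Bruno's actual UD).
# Assumption: toy_program(n) stands in for "the n-th program"; a real UD
# would enumerate and step all programs of some universal machine.

def toy_program(n):
    """Stand-in for the n-th program: yields its successive 'states' forever."""
    state = n
    while True:                  # programs may never halt; the dovetailer does not care
        yield state
        state = (state * state + 1) % (n + 7)   # arbitrary toy dynamics

def dovetail(stages=5):
    """At stage k, start program k and then step programs 0..k once each."""
    running = []                 # (index, generator) pairs started so far
    for k in range(stages):      # replace with an endless loop for the real thing
        running.append((k, toy_program(k)))
        for idx, prog in running:
            print(f"stage {k}: program {idx} -> state {next(prog)}")

if __name__ == "__main__":
    dovetail()

[Replacing toy_program with an interpreter that steps the n-th program of an actual universal language, and the bounded loop with an endless one, gives a UD proper, whose trace then passes through every computation, including all the "nonsensical" ones Terren mentions.]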

