On 4 July 2014 19:21, meekerdb <[email protected]> wrote:

>>> If the latter, simple reductive analogies like house-bricks, or
>>> society-people, can sometimes help to convey the idea that any
>>> exhaustively reductive material schema necessarily *eliminates* its
>>> ontological composites
>
> That's just your definition of eliminates. Mountains are made of rocks,
> therefore mountains don't exist.
I can't help feeling that you're leaning rather too heavily on "just"
here. A contradiction is not an argument (at least according to Monty
Python). However, you've said nothing so far to make me relinquish this
definition, in the *ontological* sense. For some reason you ignore the
distinction I've repeatedly drawn between the ontological and
epistemological aspects of a theory. Do you wish to say that mountains
have *ontological* significance *in addition* to the rocks that comprise
them? We accept of course that they exist *epistemologically* (i.e. as
objects of knowledge from the point of view of a knower), but we can't
adduce that fact, a posteriori, in support of their having any
*ontological* purchase independent of their components.

To remind you why I suppose this to be of interest: what is true for
mountains must hold for any other derivative of "physically-primitive"
entities and relations. Hence it must hold for any physical "computer",
whether that be a PC or (putatively) a brain. On this analysis, a PC or
a brain is *ontologically* (i.e. in terms of the target theory) nothing
more than physically-primitive entities in primary relation. We have
already agreed that, ex hypothesi, nothing further is required (or could
be allowed) in accounting for their physical evolution. Physical systems
of any description are hypothesised to transition from state to state
entirely in terms of the relations of their physical primitives.

What then is "physical computation" in this schema? It can only be a
second-order relational concept involving what are already composites of
the physical primitives in which such putative relata are grounded.
Hence, a fortiori, it can have no claim to independent ontological (i.e.
"physical") significance. It merely degenerates to the self-sufficient
micro-evolution of some aggregation of physical primitives; whatever is
not entirely "micro-physical" is a further attribution *from the
perspective of some implicit theory of knowledge*.
To put it baldly, computation, in terms of any theory grounded in
physically-primitive relations, isn't a "further physical fact"; it just
*looks* as if it is. Consequently it can hardly be a viable candidate
for a "physical correlate" of consciousness, since such correlation can
be defined only in terms of what is to be explained.

> And isn't that just a confirmation of my point that engineering
> consciousness is possible, but the "hard problem" is asking a question
> such that the asker will never be satisfied with any answer.

You were responding to Bruno rather than me here, but I must say I can't
see that you've really said anything to justify this assertion. ISTM at
least as much a case of your own distaste for certain kinds of question.

David

> On 7/4/2014 8:56 AM, Bruno Marchal wrote:
>>
>> On 03 Jul 2014, at 18:39, David Nyman wrote:
>>
>>> On 3 July 2014 14:22, Bruno Marchal <[email protected]> wrote:
>>>
>>>> And perhaps most interestingly, its central motivation originates
>>>> in, and simultaneously strikes at the heart of, the tacit assumption
>>>> of its rivals that perception and cognition are (somehow)
>>>> second-order relational phenomena attached to some putative "virtual
>>>> level" of an exhaustively "material" reduction.
>>>>
>>>> The problem of the exhaustively material reduction is that it does
>>>> use comp, more or less explicitly, without being aware that it does
>>>> not work when put together with materialism.
>>>
>>> Yes, and I was roused from my customary torpor specifically to have
>>> another stab at a thoroughgoing reductio of this position (or else,
>>> of course, learn where I am in error). But, frustratingly, it does
>>> seem to be extraordinarily hard to get across for the first time,
>>> because of the tacit question-begging almost unavoidably consequent
>>> on the difficulty of vacating the very perceptual position whose all
>>> too manifest "entities" are undergoing ontological deconstruction.
>>> Once seen, however, the error may then strike one as having been
>>> obvious.
>>>
>>> The commonest response, in my experience, after describing the
>>> mind-body problem to someone for the first time, is "I don't see the
>>> problem". On further probing, the default assumptions usually turn
>>> out to be either straightforward mind-brain "identity", or "mind =
>>> simulation, brain = computer". If the former, I point, in the first
>>> place, to the completely non-standard and unjustified use of the
>>> identity relation that this entails. If the latter, simple reductive
>>> analogies like house-bricks, or society-people, can sometimes help to
>>> convey the idea that any exhaustively reductive material schema
>>> necessarily *eliminates* its ontological composites
>
> That's just your definition of eliminates. Mountains are made of rocks,
> therefore mountains don't exist.
>
>>> (difficult to see precisely because *epistemological* composition
>>> manifestly remains and the distinction is thereby elusive). Anyway,
>>> if the point is grasped it becomes possible to see the disturbing
>>> consequences that such a reduction has for the standard conjunction
>>> of "material computation" and consciousness.
>>
>> I think so. Both the MGA and UDA1-7 were developed with the goal of
>> explaining a *part* of the mind-body problem in a way such that a
>> rationalist can say "OK, I see a problem".
>>
>> That worked well, but I did not expect that *some* scientists
>> ("diplomed as such") would ask a Romance philologist (a branch of
>> literature) to say "I am not convinced", justifying a non-dialogue,
>> not even a debate.
>>
>> It is not the whole problem. It is the fact that if we believe in
>> consciousness, and if we believe that the brain works like a digital
>> machine, eventually, with or without a primitively existing physical
>> universe, we have to justify the appearances of matter entirely from
>> computer science, indeed from arithmetic (or any Turing-complete
>> theory).
>>
>> As such, the hard problem of consciousness is not yet approached, nor
>> used. Even if we eliminate consciousness, matter must be explained
>> from a statistics on machines' discourses.
>>
>> At that point, the mind-body problem is only shown to be twice as
>> difficult as usual, as we have both the hard problem of consciousness
>> together with a new, conceptually less hard but technically very hard,
>> problem of matter.
>>
>> Now animals are programmed to take matter for granted, as it is easier
>> to eat and to avoid being eaten. That's why I think "modern science"
>> was really born with the Platonists, which is notably the idea that
>> what we see might result from simpler general relations, and that
>> maybe we might find first principles.
>>
>> Now, computer science provides the tools and, in some sense, offers
>> the solution of the "hard problem" of consciousness on a plate.
>> Indeed, it provides the non-trivial mathematics of what ideally
>> correct machines can prove, bet, infer, conceive, measure, observe,
>> know, and believe about themselves. Accepting definitions of those, in
>> the Arithmetical FPI contexts, and translating the definitions into
>> arithmetic, gives constraints which *all* make sense due to the real
>> bomb: Gödel's second incompleteness theorem, and the fact that
>> (Löbian) machines prove their own incompleteness theorem.
>>
>> Then the solution of the hard problem is given by a disambiguation
>> between []p (the 3p virtual body, or its "Gödel number", or its "Gödel
>> biochemical relation"; that's not important) and []p & p, which is the
>> knower, the first person, the soul if you want, and which is NOT a
>> machine; and no machine can correctly justify a "[]" such that
>> []p <-> []p & p, despite the fact that we, the theoreticians of the
>> correct machine, know that their G* proves it.
>>
>> To bet that we are machines, in the "yes doctor" quasi-operational
>> sense, means that we bet on some identification between []p and
>> []p & p at some level (defining the "[]"), but only "a God" (here the
>> arithmetical Noûs, the G* of that "[]p") can know that the "[]" is
>> correct (in case it is correct).
>
> And isn't that just a confirmation of my point that engineering
> consciousness is possible, but the "hard problem" is asking a question
> such that the asker will never be satisfied with any answer.
>
> Brent

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
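[Editor's footnote, not part of the original exchange: for readers unfamiliar with the modal notation in the quoted passage, the sketch below gives the standard provability-logic reading that the []p / []p & p distinction appears to assume; the gloss is mine.]

```latex
% []p, written \Box p, reads "the machine proves p" (Gödel's arithmetical
% provability predicate). Its logic is G (also called GL), whose
% characteristic axiom is Löb's theorem:
\Box(\Box p \rightarrow p) \rightarrow \Box p

% Gödel's second incompleteness theorem, as the (Löbian) machine itself
% can prove it: if I am consistent, I cannot prove my own consistency.
\neg\Box\bot \rightarrow \neg\Box\neg\Box\bot

% The "knower" (first person) is the Theaetetus variant: provable AND true.
\Box p \land p

% For a correct machine, G* (what is true, rather than merely provable,
% about its provability) validates the identification
\Box p \leftrightarrow (\Box p \land p)
% but the machine itself cannot prove this equivalence, which is why
% \Box p and \Box p \land p obey distinct logics (G vs. S4Grz).
```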

