On 7/4/2014 8:56 AM, Bruno Marchal wrote:

On 03 Jul 2014, at 18:39, David Nyman wrote:

On 3 July 2014 14:22, Bruno Marchal <[email protected]> wrote:

And perhaps most interestingly,
its central motivation originates in, and simultaneously strikes at
the heart of, the tacit assumption of its rivals that perception and
cognition are (somehow) second-order relational phenomena attached to
some putative "virtual level" of an exhaustively "material" reduction.

The problem with the exhaustively material reduction is that it uses comp, more or less explicitly, without being aware that comp does not work when put together with materialism.

Yes, and I was roused from my customary torpor specifically to have
another stab at a thoroughgoing reductio of this position (or else, of
course, learn where I am in error). But, frustratingly, it does seem
to be extraordinarily hard to get across for the first time, because
of the tacit question-begging almost unavoidably consequent on the
difficulty of vacating the very perceptual position whose all too
manifest "entities" are undergoing ontological deconstruction. Once
seen, however, the error may then strike one as having been obvious.

The commonest response, in my experience, after describing the
mind-body problem to someone for the first time, is "I don't see the
problem". On further probing, the default assumptions usually turn out
to be either straightforward mind-brain "identity", or "mind =
simulation, brain = computer". If the former, I point, in the first
place, to the completely non-standard and unjustified use of the
identity relation that this entails. If the latter, simple reductive
analogies like house-bricks, or society-people, can sometimes help to
convey the idea that any exhaustively reductive material schema
necessarily *eliminates* its ontological composites

That's just your definition of "eliminates". Mountains are made of rocks, therefore mountains don't exist.

(difficult to see
precisely because *epistemological* composition manifestly remains and
the distinction is thereby elusive). Anyway, if the point is grasped
it becomes possible to see the disturbing consequences that such a
reduction has for the standard conjunction of "material computation"
and consciousness.

I think so. Both the MGA and UDA1-7 were developed with the goal of explaining a *part* of the mind-body problem in a way such that a rationalist can say "OK, I see a problem".

That worked well, but I did not expect that *some* scientists (diplomaed as such) would ask a philosopher from the literary branch to say "I am not convinced", justifying a non-dialogue, not even a debate.

It is not the whole problem. It is the fact that if we believe in consciousness, and if we believe that the brain works like a digital machine, eventually, with or without a primitively existing physical universe, we have to justify the appearances of matter entirely from computer science, indeed from arithmetic (or any Turing-complete theory).

At this stage, the hard problem of consciousness is not yet approached, nor used. Even if we eliminate consciousness, matter must be explained from a statistics on the machines' discourses. At that point, the mind-body problem is only shown to be twice as difficult as usual, as we have both the hard problem of consciousness together with a new problem of matter, conceptually less hard but technically very hard.

Now, animals are programmed to take matter for granted, as that makes it easier to eat and to avoid being eaten. That's why I think "modern science" was really born with the Platonists, with notably the idea that what we see might result from simpler general relations, and that maybe we might find first principles.

Now, computer science provides the tools and, in some sense, offers the solution of the "hard problem" of consciousness on a plate. Indeed, it provides the non-trivial mathematics of what an ideally correct machine can prove, bet, infer, conceive, measure, observe, know, and believe about itself. Accepting definitions of those notions, in the arithmetical FPI context, we can translate them into arithmetic by constraints which *all* make sense thanks to the real bomb: Gödel's second incompleteness theorem, and the fact that (Löbian) machines prove their own incompleteness theorem.
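For readers who want the modal side made explicit: in the standard provability logic G (also written GL), with []p read as "p is provable by the machine" and <>p abbreviating ~[]~p, the second incompleteness theorem and the Löbian machine's self-knowledge of it come out as theorems. A sketch in the usual notation (standard results, not specific to this thread):

```latex
% Axioms of G (= GL), the provability logic of the ideally correct machine:
%   K:  [](p -> q) -> ([]p -> []q)
%   L:  []([]p -> p) -> []p            (Löb's axiom)
% Rules: modus ponens, and necessitation (from p, infer []p).
%
% Gödel's second incompleteness theorem, formalized:
%   G |- <>t -> ~[]<>t
% i.e. if the machine is consistent (<>t), it cannot prove its own
% consistency.  (Contrapositive: []~[]f -> []f, an instance of Löb's
% axiom, since []f -> f is equivalent to ~[]f.)
%
% By necessitation, the Löbian machine proves this about itself:
%   G |- [](<>t -> ~[]<>t)
% which is the precise sense in which Löbian machines "prove their own
% incompleteness theorem".
```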

Then the solution of the hard problem is given by a disambiguation between []p (the 3p virtual body, or its "Gödel number", or its "Gödel biochemical relation"; that's not important) and []p & p, which is the knower, the first person, the soul if you want, and which is NOT a machine: no machine can correctly justify a "[]" such that []p <-> []p & p, even though we, the theoreticians of the correct machine, know that its G* proves it.
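The disambiguation can be written down. This is the Theaetetical definition of the knower on top of G and G* (a sketch of the standard facts, not the full derivation):

```latex
% Theaetetus: define the knower (first person) from the prover (3p body):
%   K p  :=  []p & p
%
% At the level of the true statements about the machine (G* = G plus
% the reflection schema []p -> p):
%   G* |- []p <-> ([]p & p)
% so, seen from "outside", prover and knower coincide.
%
% But the machine itself cannot make that identification:
%   G |/- []p -> p      (for arbitrary p; even []f -> f is unprovable)
% From the machine's own point of view, []p and []p & p obey different
% logics: []p obeys G, while K p obeys S4Grz.  Moreover, by Tarski's
% undefinability of truth, the predicate "[]p & p" (as a predicate of
% sentences) is not definable in the machine's own language -- the
% precise sense in which the knower "is NOT a machine".
```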

To bet that we are machines, in the "yes doctor" quasi-operational sense, means that we bet on some identification between []p and []p & p at some level (defining the "[]"), but only "a God" (here the arithmetical Noûs, the G* of that "[]p") can know that the "[]" is correct (in case it is correct).

And isn't that just a confirmation of my point that engineering consciousness is possible, but that the "hard problem" asks a question such that the asker will never be satisfied with any answer?

Brent

