On 26 Apr 2012, at 23:34, Craig Weinberg wrote:

On Apr 26, 1:52 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
On 26 Apr 2012, at 16:47, Craig Weinberg wrote:

On Apr 25, 11:44 am, Bruno Marchal <marc...@ulb.ac.be> wrote:

This means only that you have a reductionist conception of machine.

I think that reductionism is mechanistic by definition.

I guess you mean mechanism is reductionist by definition.

No, I am saying that reductionist thinking is mechanistic. They are
inseparable. How can you have a reductionist approach which is not
also mechanistic?

By reducing something to a theory involving a non-mechanistic element.

In logic, digital mechanism can be considered, at least for the ontology, as being a Sigma_1 reductionism, which means that you reduce the ontology to what can be proved from some universal arithmetical proposition having the form Ex P(x), with P decidable. But you can imagine forms of reductionism based on any Sigma_i complete arithmetical proposition, that is, one having the form Ex Ay Ez At Eu ... P(x, y, z, t, u, ...), with P decidable.
Or worse, like set-theoretical reductionism, etc.
Note also that ontological reductionism might not imply epistemological reductionism.
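
To make the Sigma_1 case concrete, here is a minimal sketch in Python (the predicate below is only an illustrative stand-in for a decidable P): a sentence of the form Ex P(x), with P decidable, can be confirmed by a machine that simply searches for a witness, and the search halts exactly when the sentence is true. That halting-iff-true behaviour is the sense in which the Sigma_1 sentences are the machine-accessible part of arithmetic.

# A minimal sketch of Sigma_1 verification; the predicate is illustrative only.
from itertools import count

def P(x: int) -> bool:
    """A decidable predicate: 'x is a perfect square greater than 50'."""
    return x > 50 and int(x ** 0.5) ** 2 == x

def verify_sigma_1() -> int:
    """Search for a witness of Ex P(x); halts exactly when one exists."""
    for x in count(0):
        if P(x):
            return x  # Ex P(x) is true, witnessed by this x

print(verify_sigma_1())  # prints 64, the first witness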

But that is
the old pre-Gödelian conception of mechanism.
Today we know more. We know that we can only scratch the surface of
the machine's possibilities. And if we assume mechanism, we know that
any machine looking inward can know (but not necessarily does know)
that she can only scratch the surface of the subject.

Whatever the potential for mechanism, I think the potential for non-
mechanism will always be much greater.

For reality, yes. For the mind, possibly, but there is no evidence.

The idea that mechanism has any
potential at all, I think, already presumes a non-mechanistic valuation
of the realization of such potential.

There is something true in that. Machines cannot avoid a non-computable reality, and most theories about machines evade the computable, by logic. The propositional hypostases are decidable, but their well-defined first-order logical extensions are Sigma_2 complete, that is, well beyond the Sigma_1 completeness of the universal machines. This makes machines bound to develop "theologies".
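
To see why Sigma_2 already escapes the Sigma_1 strategy, here is a sketch of the same witness search applied to a sentence of the form Ex Ay P(x, y), again with an illustrative predicate: confirming even a single candidate x requires checking infinitely many y, so the machine's search never returns a verdict, even though the sentence is true.

# Sketch of why Sigma_2 lies beyond the Sigma_1 completeness of a universal
# machine. P is an illustrative decidable predicate chosen so that
# Ex Ay P(x, y) is in fact true; the naive search still cannot confirm it.
from itertools import count

def P(x: int, y: int) -> bool:
    """A decidable two-place predicate; trivially true, so Ex Ay P(x, y) holds."""
    return x * y == y * x

def try_to_verify_sigma_2() -> int:
    """Apply the Sigma_1 witness search to Ex Ay P(x, y)."""
    for x in count(0):
        for y in count(0):   # checking 'Ay P(x, y)' means checking every y ...
            if not P(x, y):
                break        # ... a refutation would end it, but none ever comes
        else:
            return x         # never reached: the inner loop cannot finish

# Calling try_to_verify_sigma_2() would run forever: the sentence is true,
# but no finite amount of checking confirms the universal quantifier.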

Machines don't care about their
own possibilities.

You don't know that.

All of their potentials are worthwhile only because
they interest us or other organic entities.

Well, trivially so in your non-mechanist theory. But trivially not in comp.

What does it mean to behave like a machine, or to be robotic? Why
should it mean that? This doesn't prove that all machines must
behave like the early machines we have manufactured thus far, but
taken with the other clues we have about inauthenticity in digital
simulation, the trouble with speech synthesis and emotion in AI, the
symbol grounding problem, etc., I think there is a clear basis to
presume that there is in fact something fundamentally different about
assembled machines and autopoietic living organisms which may in fact
limit their potential.

Then you have to find something non-Turing emulable,

It's circular reasoning, because reducing things to the level of
Planck-Turing digitization already flattens all qualia to quanta,
leaving meaningless quanta as the only possibility.

In the Aristotelian theology, not in Plato's.

Nothing other than
numbers is Turing emulable. Emulation is entirely subjective and
perspective-driven. Emulation is not an objective possibility.

So if we implement a computer on the moon, and then the earth is destroyed, would you say that such a computer will stop functioning? Emulation, as defined in computer science, is an arithmetical reality. If you say emulation is not objective, you are saying that being a prime number is not objective. But then I will ask you to explain how the notion of prime depends on humans.
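
The point about primes can be made completely concrete; here is a small sketch (plain trial division, nothing more) showing that "n is prime" is decided by the definition alone, with no observer anywhere in the procedure:

# 'n is prime' is a decidable arithmetical property: any machine running this
# check on the same n reaches the same verdict, on the moon or anywhere else.
def is_prime(n: int) -> bool:
    """Decide primality by trial division, i.e. the definition made effective."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # a proper divisor: n is composite
        d += 1
    return True

print([n for n in range(2, 30) if is_prime(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]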

and non "first
person indeterminacy Turing recoverable" in Nature. But you might also
have to explain why such feature would be better to explain emotion,
speech, etc.  It really looks like explaining the difficult by adding
more difficulties.

I don't have to explain anything. Turing has to explain me.

But that is what he did, and what I and many others continue, either with AI or with theoretical computer science.

What you take as evidence is what the theory already explains. The
theory of machines (computer science) already explains why a machine
cannot feel itself to be a machine, and indeed cannot even know which
machine she is.

And I have already explained that computer science theories can only
prove that computation is provable.

Yes. But it also proves that many things *about* machines are not provable by those machines, and that machines can know propositions that they cannot prove, etc. In fact, computability theory helps us to classify the whole hierarchy of non-computable things, including many which have an impact on machines.
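
One concrete member of that hierarchy is the halting problem; a minimal sketch of the diagonal argument (the decider 'halts' below is hypothetical by construction, it cannot actually exist) shows why no machine can compute it:

# Sketch of the diagonal argument. 'halts' is a hypothetical total decider for
# "program prog halts on input inp"; the contradiction below shows it cannot exist.
def halts(prog: str, inp: str) -> bool:
    """Hypothetical: True iff prog halts on inp (no such total decider exists)."""
    raise NotImplementedError

def diagonal(src: str) -> None:
    """Halts on src exactly when halts(src, src) says it does not."""
    if halts(src, src):
        while True:   # predicted to halt: loop forever instead
            pass
    # predicted to loop: return (halt) immediately

# Feeding diagonal its own source d gives:
#   diagonal(d) halts  <=>  halts(d, d) is False  <=>  diagonal(d) does not halt,
# a contradiction, so the assumed decider cannot be a program at all.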

Awareness cannot be detected
objectively by evidence, so refinements to logical processes related
to evidence take us further and further from awareness, even while
discovering ever more sophisticated patterns which remind us of our
own awareness.

Good intuition ... shared by machines, if you would just listen to them already.


