On 27 Sep 2011, at 20:18, Craig Weinberg wrote:
> It's interesting that he goes from showing how neurons plausibly have
> micro-agency, to then insisting in part 7 that we must reduce
He explains this already in his oldest book, "Brainstorms". I agree with
him. To explain something consists in reducing it to something which
does not refer to it.
The mystery (which just shows how much he is Aristotelian) is that he
does not apply this principle to matter. To explain matter, you also
have to reduce it to something which does not refer to matter. Yet he
believes that there is no hard problem for matter, like most scientists.
But mechanism explains conceptually both matter and (99.9% of)
consciousness with respect to that criterion. It is not 100%, because our
beliefs in numbers cannot be explained in that way. But then the machine
can already explain why it has to be like that.
> To me, all it takes is to realize that it's not only what the neurons
> are doing physically that matters, but what the neuronal agents
> themselves are perceiving, and how perceptions scale up differently
> than material objects relating across space do.
Then all interactions are conscious.
You received this message because you are subscribed to the Google Groups
"Everything List" group.