On Aug 14, 1:39 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 14 Aug 2011, at 16:38, Craig Weinberg wrote:
> > Does the idea of this machine solve the Hard Problem of Consciousness,
> > or are qualia something more than ideas?
> Quite cute little physical implementation of a Turing machine.
So good. Wow.
> Read Sane04; it explains how a slight variant of that machine, or how
> some program you can give to that machine, will develop qualia, and
> develop a discourse about them similar to ours, so that you have to
> treat them as zombies if you want to have them without qualia. They can
> even understand that their solution is partial, and necessarily partial.
> Their theories are clear, transparent and explicit,
They aren't clear to me at all. I keep trying to read it but I don't
get why feeling should ever result from logic, let alone be an
inevitable consequence of any particular logic.
> unlike yours, where
> it seems to be hard to guess what you assume and what you derive.
> But then you admit yourself that you are not really trying to convey your
> intuition, and so it looks just like "racism": "you will not tell me
> that this (pointing at silicon or a sort of clock) can think?" I don't
> take such a move as an argument.
It might think, but you can't tell me that it thinks it's a clock or
that it's telling time, let alone that it has feelings about that or
the free will to change it. I'm open to being convinced of that, but it
doesn't make sense that we would perceive a difference between biology
and physics if there weren't in fact some kind of significant
difference. I don't see that comp provides for such a difference.
You received this message because you are subscribed to the Google Groups
"Everything List" group.