Ben Goertzel wrote:
> Richard,
>
>> My first response to this is that you still don't seem to have taken
>> account of what was said in the second part of the paper - and, at
>> the same time, I can find many places where you make statements that
>> are undermined by that second part.
>>
>> To take the most significant example: when you say:
>>
>>> But, I don't see how the hypothesis
>>>
>>> "Conscious experience is **identified with** unanalyzable mind-atoms"
>>>
>>> could be distinguished empirically from
>>>
>>> "Conscious experience is **correlated with** unanalyzable mind-atoms"
>>
>> ... there are several concepts buried in there, like [identified
>> with], [distinguished empirically from] and [correlated with], that
>> are theory-laden. In other words, when you use those terms you are
>> implicitly applying some standards that have to do with semantics and
>> ontology, and it is precisely those standards that I attacked in
>> part 2 of the paper.
>>
>> However, there is also another thing I can say about this statement,
>> based on the argument in part one of the paper.
>>
>> It looks like you are also falling victim to the argument in part 1,
>> even as you question its validity: one of the consequences of that
>> initial argument was that *because* those concept-atoms are
>> unanalyzable, you can never do any such thing as talk about their
>> being "only correlated with a particular cognitive event" versus
>> "actually being identified with that cognitive event"!
>> So when you point out that the above distinction seems impossible to
>> make, I say: "Yes, of course: the theory itself just *said* that!"
>>
>> So far, all of the serious questions that people have placed at the
>> door of this theory have proved susceptible to that argument.
>
> Well, suppose I am studying your brain with a super-advanced
> brain-monitoring device ...
>
> Then, suppose that I, using the brain-monitoring device, identify the
> brain-response pattern that uniquely occurs when you look at something
> red ...
>
> I can then pose the question: is your experience of red *identical* to
> this brain-response pattern, or is it correlated with this
> brain-response pattern?
>
> I can pose this question even though the "cognitive atoms" corresponding
> to this brain-response pattern are unanalyzable from your perspective...
>
> Next, note that I can also turn the same brain-monitoring device on
> myself...
>
> So I don't see why the question is unaskable ... it seems askable,
> because the concept-atoms in question are experienceable even if not
> analyzable... that is, they still form mental content even though they
> aren't susceptible to explanation as you describe it...
>
> I agree that, subjectively or empirically, there is no way to distinguish
>
> "Conscious experience is **identified with** unanalyzable mind-atoms"
>
> from
>
> "Conscious experience is **correlated with** unanalyzable mind-atoms"
>
> and it seems to me that this indicates you have NOT solved the hard
> problem, but only restated it in a different (possibly useful) way.
There are several different approaches I could take to what you just
wrote, but let me focus on just one: the last point.
When you make a statement such as "... it seems to me that ... you have
NOT solved the hard problem, but only restated it", you are implicitly
bringing to the table a set of ideas about what it means to "solve" this
problem, or to "explain" consciousness.
Fine so far: everyone uses the rules of explanation that they have
acquired over a lifetime - and of course in science we all roughly agree
on a set of ideas about what it means to explain things.
But what I am trying to point out in this paper is that, because of the
nature of intelligent systems and how they must do their job, the very
concept of *explanation* is undermined by the topic that, in this case,
we are trying to explain. You cannot simply apply a standard of
explanation right out of the box (so to speak), because unlike
explaining atoms or stars, here you are trying to explain something
that interferes with the notion of "explanation".
So when you imply that the theory I propose is weak *because* it
provides no way to distinguish:
"Conscious experience is **identified with** unanalyzable mind-atoms"
from
"Conscious experience is **correlated with** unanalyzable mind-atoms"
you are missing the main claim that the theory makes: that such
distinctions are broken precisely *because* of what is going on with the
explanandum.
You have to grasp this point in order to understand the paper.
I mean, it is okay to disagree with the point and say why (to talk about
what it means to explain things; to talk about the connection between
the explanandum and the methods and basic terms of the thing that we
call "explaining things"). That would be fine.
But at the moment it seems to me that you have made several passes at
simply restating your position that the theory does not succeed in
explaining the subject, whereas I cannot bring you round to talking
about the most important idea in the paper: that simple statements like
the ones you are making just use a concept of explanation without
examining it.
So we still have not addressed the content of part 2 of the paper. I
did try to say all of the above in the last post, but you didn't mention
that bit in your reply ;-)
Richard Loosemore
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/