Ben Goertzel wrote:
Richard,

In your paper you say

"
The argument does not
say anything about the nature of conscious experience, qua
subjective experience, but the argument does say why it
cannot supply an explanation of subjective experience. Is
explaining why we cannot explain something the same as
explaining it?
"

I think it isn't the same...

The problem is that there may be many possible explanations for why we can't explain consciousness. And it seems there is no empirical way to decide among these explanations. So we need to decide among them via some sort of metatheoretical criteria -- Occam's Razor, or conceptual consistency with our scientific ideas, or some such. The question for you then is, why is yours the best explanation of why we can't explain consciousness?

Hmmmm.... that question of mine, which you quote above, was the
introduction to part 2 of the paper, which then specifically supplied an
answer to your above question.

In other words, I accept your question, but the words that came
immediately after the above quote did actually answer it in detail. ;-)

Short summary of that later answer:  we do indeed need an
"Occam's-razor-like" reason for believing the solution I propose, but
there are different versions of how you understand Occam's razor, and I
argue that you decide among *those* things by having a fundamental
theory of semantics (not a superficial theory, but a fundamental,
ontologically deep theory).

What I then effectively do is to point to a spectrum of semantic/
ontological theories, ranging from something as extremely "formalist" as
Hutter (though I do not mention him by name) to something as extremely
empirical and emergentist as Loosemore (the idea of Extreme Cognitive
Semantics) ... and I argue that the only self-consistent position is the
Extreme Cognitive Semantics position.

The implication of that argument is, then, that the very best we can do
to decide between my theory and any other in the same vein, is to apply
the rules of science to it:  this will then be a mixture of all the
usual processes, and among those processes will be the main criterion,
which is:  "Does a majority of people find that this theory makes more
sense than any other, and does it make novel predictions that can be
falsified?"

I am happy to be judged on those criteria.






But I have another confusion about your argument. I understand the idea that a mind's analysis process eventually has to "bottom out" somewhere, so that it will describe some entities using descriptions that are (from its perspective) arbitrary and cannot be decomposed any further. These bottom-level entities could be sensations, or they could be sort-of arbitrary internal tokens out of which internal patterns are constructed....

But what do you say about the experience of being conscious of a chair, then? Are you saying that the consciousness I have of the chair is the *set* of all the bottom-level unanalyzables into which the chair is decomposed by my mind?

ben


Well, let us distinguish two kinds of answer to your question.

First answer:

When a philosopher says that there is a "mystery" about what the
conscious experience of red actually is, and that they could imagine a
machine that was able to talk about red, etc etc, without actually
having that mysterious experience, then we have the beginning of a
philosophical quandary that demands explanation.

But when you say that you are "conscious" of a chair, I don't know of
any philosophers who would say that there is a profound mystery there,
which is over and above the mystery of the qualia of all the
chair-parts.  Philosophers don't ever say (at least, I don't recall)
that "chairs" and other objects contain a deep mystery that seems to be
unanalyzable. From that point of view, I would have to ask for extra information about what you wanted explained: do you feel that [chair] has a conscious phenomenology that is independent of the sum of its parts-qualia?

Second answer:

Now, I could give a much deeper answer to your question, which would
start talking about our general awareness of the things around us ... and this may have been what you meant by your consciousness of the chair.

This is a little tricky, because now what I think happens is this: you first have to think about the idea of your consciousness, and what happens then, I believe, is a kind of mental summing of the qualia, forming a new concept-atom to encode [all of the component qualia of [chair]]. You can then see how this summed concept would still be "fairly" unanalyzable, because it is just one step removed from a host of others that were dead ends.

This needs more thinking, but I believe that it can be worked out properly, in a way consistent with the original argument.

I am especially interested in the fact that there are some "vague" consciousness feelings we get: things that are "kinda" mysterious. Perhaps they are just these atoms that are one step removed from dead-end concept-atoms.





Richard Loosemore






On Fri, Nov 14, 2008 at 11:44 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

    --- On Fri, 11/14/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
     >
    http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

    Interesting that some of your predictions have already been tested,
    in particular, synaesthetic qualia were described by George Stratton
    in 1896. When people wear glasses that turn images upside down, they
    adapt after several days and begin to see the world normally.

    http://www.cns.nyu.edu/~nava/courses/psych_and_brain/pdfs/Stratton_1896.pdf
    http://wearcam.org/tetherless/node4.html

    This is equivalent to your prediction #2 where connecting the output
    of neurons that respond to the sound of a cello to the input of
    neurons that respond to red would cause a cello to sound red. We
    should expect the effect to be temporary.

    I'm not sure how this demonstrates consciousness. How do you test
    that the subject actually experiences redness at the sound of a
    cello, rather than just behaving as if experiencing redness, for
    example, claiming to hear red?

    I can do a similar experiment with autobliss (a program that learns
    a 2 input logic function by reinforcement). If I swapped the inputs,
    the program would make mistakes at first, but adapt after a few
    dozen training sessions. So autobliss meets one of the requirements
    for qualia. The other is that it be advanced enough to introspect on
    itself, and that which it cannot analyze (describe in terms of
    simpler phenomena) is qualia. What you describe as "elements" are
    neurons in a connectionist model, and the "atoms" are the set of
    active neurons. "Analysis" means describing a neuron in terms of its
    inputs. Then the qualia are the first layer of a feedforward network. In
    this respect, autobliss is a single neuron with 4 inputs, and those
    inputs are therefore its qualia.
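    The adaptation behavior described above can be illustrated with a
    minimal sketch (a hypothetical stand-in, not the actual autobliss
    code, which is not shown here): a tabular learner with one weight per
    (input pair, output), rewarded when its output matches a 2-input
    Boolean target. After the inputs are swapped, it errs at first and
    then re-adapts over a few dozen trials.

```python
import random

class TableLearner:
    """Tabular reinforcement learner for a 2-input Boolean function.

    One weight per (input pair, output); predict picks the higher-weighted
    output; reward strengthens the chosen output, punishment weakens it.
    """
    CAP = 5.0  # weight cap (a design choice here, so old habits unlearn fast)

    def __init__(self):
        self.w = {(a, b, out): 0.0
                  for a in (0, 1) for b in (0, 1) for out in (0, 1)}

    def predict(self, a, b):
        return 1 if self.w[(a, b, 1)] >= self.w[(a, b, 0)] else 0

    def reinforce(self, a, b, reward):
        out = self.predict(a, b)                 # the output it just chose
        delta = 1.0 if reward else -1.0
        self.w[(a, b, out)] = max(-self.CAP,
                                  min(self.CAP, self.w[(a, b, out)] + delta))

def train(learner, target, steps=200):
    """Present random input pairs and reward correct outputs."""
    for _ in range(steps):
        a, b = random.randint(0, 1), random.randint(0, 1)
        learner.reinforce(a, b, reward=learner.predict(a, b) == target(a, b))

random.seed(0)
learner = TableLearner()
implies = lambda a, b: int(not a or b)           # an asymmetric target
train(learner, implies)
assert all(learner.predict(a, b) == implies(a, b)
           for a in (0, 1) for b in (0, 1))

# Swap the inputs: the learner errs at first, then adapts on retraining.
swapped = lambda a, b: implies(b, a)
assert learner.predict(1, 0) != swapped(1, 0)    # immediate mistake
train(learner, swapped)
assert all(learner.predict(a, b) == swapped(a, b)
           for a in (0, 1) for b in (0, 1))
```

    (The weight cap is what makes re-adaptation take only a few dozen
    reinforcements rather than having to undo the full training history.)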

    You might object that autobliss is not advanced enough to ponder its
    own self existence. Perhaps you define "advanced" to mean it is
    capable of language (pass the Turing test), but I don't think that's
    what you meant. In that case, you need to define more carefully what
    qualifies as "sufficiently powerful".


    -- Matt Mahoney, [EMAIL PROTECTED]





    -------------------------------------------
    agi
    Archives: https://www.listbox.com/member/archive/303/=now
    RSS Feed: https://www.listbox.com/member/archive/rss/303/
    Modify Your Subscription: https://www.listbox.com/member/?&;
    Powered by Listbox: http://www.listbox.com




--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects." -- Robert Heinlein






