On 11/8/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:

> Jeff,
>
> In your below flame you spent much more energy conveying contempt than
> knowledge.

I'll readily apologize again for the ineffectiveness of my
presentation, but I meant no contempt.


> Since I don't have time to respond to all of your attacks,

Not attacks, but (overly) terse pointers to places where difficulty
framing the question makes the problem itself difficult to understand.


> MY PRIOR POST>>>> "...affect the event's probability..."
>
> JEF'S PUT DOWN 1>>>>More coherently, you might restate this as "...reflect
> the event's likelihood..."

I (ineffectively) tried to highlight a thread of epistemic confusion
involving an abstract observer interacting with and learning from its
environment.  In your paragraph, I find it nearly impossible to
identify a valid base from which to suggest improvements.  If I had
acted more
wisely, I would have tried first to establish common ground
**outside** your statements and touched lightly and more
constructively on one or two points.


> MY COMMENT>>>> At Dragon System, then one of the world's leading speech
> recognition companies, I was repeatedly told by our in-house PhD in
> statistics that "likelihood" is the measure of a hypothesis matching, or
> being supported by, evidence.  Dragon selected speech recognition word
> candidates based on the likelihood that the probability distribution of
> their model matched the acoustic evidence provided by an event, i.e., a
> spoken utterance.

If you said Dragon selected word candidates based on their probability
distribution relative to the likelihood function supported by the
evidence provided by acoustic events, I'd be with you there.  As it is,
when you say "based on the likelihood that the probability..." it
seems you are confusing the subjective with the objective and, for me,
meaning goes out the door.
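
To make the distinction concrete, here is a minimal sketch (Python,
with made-up numbers) of how a decoder might combine a language-model
prior P(w) with an acoustic likelihood P(acoustics | w).  The
candidate words, priors, and likelihoods below are purely
hypothetical:

import math

# Each candidate word w has a prior P(w) from the language model and a
# likelihood P(acoustics | w): how well w's acoustic model explains the
# observed evidence.  (All numbers are invented for illustration.)
candidates = {
    "recognize": (0.010, 0.30),    # (prior, acoustic likelihood)
    "wreck a nice": (0.001, 0.40),
}

def posterior_score(prior: float, likelihood: float) -> float:
    # Work in log space, as real decoders do, to avoid underflow.
    return math.log(prior) + math.log(likelihood)

best = max(candidates, key=lambda w: posterior_score(*candidates[w]))
print(best)  # "recognize": the higher prior outweighs the lower likelihood

The subtle point: the likelihood is a function of the hypothesis (the
word model) with the evidence held fixed, while the prior and
posterior are probabilities over the hypotheses themselves.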


> MY PRIOR POST>>>> "...the descriptive length of sensations we receive..."
>
> JEF'S PUT DOWN 2>>>> Who is this "we" that "receives" sensations?  Holy
> homunculus, Batman, seems we have a bit of qualia confusion thrown into the
> mix!
>
> MY COMMENT>>>> Again I did not know that I would be attacked for using such
> a common English usage as "we" on this list.  Am I to assume that you, Jef,
> never use the words "we" or "I" because you are surrounded by "friends" so
> kind as to rudely say "Holy homunculus, Batman" every time you do.

Well, I meant to impart a humorous tone, rather than to be rude, but
again I offer my apology; I really should have known it wouldn't be
effective.

I highlighted this phrasing, not for the colloquial use of "we", but
because it again demonstrates epistemic confusion impeding
comprehension of a machine intelligence interacting with (and learning
from) its environment.  To conceptualize any such system as
"receiving sensation" as opposed to "expressing sensation", for
example, is wrong in systems-theoretic terms of stimulus, process,
response.  And this confusion, it seems to me, maps onto your
expressed difficulty grasping the significance of Solomonoff
induction.
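
For what it's worth, here is a toy sketch (Python, purely
illustrative) of the core idea behind Solomonoff induction: every
hypothesis (program) consistent with the observations is weighted by
2^(-description length), so shorter programs dominate prediction.
True Solomonoff induction sums over all programs for a universal
machine and is uncomputable; the finite hypothesis list below is an
assumption for illustration only:

observed = "010101"

# Hypothetical hypotheses as (description length in bits, predicted
# next bit); each is assumed to reproduce `observed` exactly.
hypotheses = [
    (5, "0"),    # e.g. a short "repeat 01" program
    (12, "1"),   # a longer, more ad hoc program
]

weight = {"0": 0.0, "1": 0.0}
for length, next_bit in hypotheses:
    weight[next_bit] += 2.0 ** -length  # the universal prior's 2^-|p|

total = sum(weight.values())
for bit, w in weight.items():
    print(bit, w / total)  # the shorter program dominates the prediction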


> Or, just perhaps, are you a little more normal than that.
>
> In addition, the use of the word "we" or even "I" does not necessarily imply a
> homunculus.  I think most modern understanding of the brain indicates that
> human consciousness is most probably -- although richly interconnected -- a
> distributed computation that does not require a homunculus.  I like and
> often use Bernard Baars' Theater of Consciousness metaphor.

Yikes!  Well, that goes to my point.  Any kind of Cartesian theater in
the mind, silent audience and all -- never mind the experimental
evidence for gaps, distortions, fabrications, and confabulations in
the story putatively shown -- has no functional purpose.  In
systems-theoretic terms, it would entail an additional processing step
of extracting the relevant information from the essentially whole
content of the theater, a step that is not only unnecessary but
intractable.  The system interacts with 'reality' without the need to
interpret it.
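
To put the systems-theoretic point in code: a hypothetical sketch
contrasting a direct stimulus-process-response loop with one that
first stages the whole stimulus in an inner "theater" and then must
re-extract what mattered anyway.  All names and values are made up:

def direct_agent(stimulus: dict) -> str:
    # The process couples directly to the relevant feature.
    return "flee" if stimulus.get("threat", 0.0) > 0.5 else "forage"

def theater_agent(stimulus: dict) -> str:
    # Stage 1: render the *entire* stimulus for an inner audience.
    theater = dict(stimulus)  # whole content, relevant or not
    # Stage 2: an extra step re-extracts what mattered anyway.
    relevant = theater.get("threat", 0.0)
    return "flee" if relevant > 0.5 else "forage"

stimulus = {"threat": 0.8, "color": "green", "texture": "rough"}
assert direct_agent(stimulus) == theater_agent(stimulus)
# Same behavior; the theater only adds an unnecessary processing step.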


> But none of this means it is improper to use the words "we" or "I" when
> referring to ourselves or our consciousnesses.

I'm sincerely sorry to have offended you.  Giving offense takes even
more time to attempt to repair, it impairs future relations, and
clearly it conveyed no useful understanding -- evidenced by your
perception that I was criticizing your use of English.



> And I think one should be allowed to use the word "sensation" without being
> accused of "qualia confusion."  Jeff, do you ever use the word "sensation,"
> or would that be too "confusing" for you?

"Sensation" is a perfectly good word and concept.  My point is that
sensation is never "received" by any system, that it smacks of qualia
confusion, and that such a misconception gets in the way of
understanding how a machine intelligence might deal with "sensation"
in practice.


> So, Jeff, if Solomonoff induction is really a concept that can help me get a
> more coherent model of reality, I would really appreciate someone who had
> the understanding, intelligence, and friendliness...

Again I apologize for my clearly counter-productive post, and assure
you that I will not interfere (or attempt to contribute) while others
with understanding, intelligence, and friendliness post their truly
helpful responses.

- Jef
