Jef,

ED ########>> (I have switched to “########>>” to make it even easier to
quickly see each change of speaker.)

Thank you for your two posts seeking to clear up the misunderstanding
between us.  I don’t mind disagreements if they seek to convey meaningful
content, not just negativity.  Your post of Thu 11/8/2007 4:22 PM has a
little more explanation for its criticisms.  Let me respond to some of the
language in that post as follows:

JEF ########>> I...tried to highlight a thread of epistemic confusion
involving an abstract observer interacting with and learning from its
environment....

...it seems you are confusing the subjective with the objective...

To conceptualize any such system as "receiving sensation" as opposed to
"expressing sensation", for example, is wrong in systems-theoretic terms
of stimulus, process, response.  And this confusion, it seems to me, maps
onto your expressed difficulty grasping the significance of Solomonoff
induction.

ED ########>> Most importantly, you say my alleged confusion between
subjective and objective maps onto my difficulty grasping the significance
of Solomonoff induction.  If you could, please explain what you mean.  I
would really like to better understand why so many smart people seem to
think it’s the bee’s knees.
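
So the thread has the formula in front of it, here is the standard
textbook statement as I understand it (my own reference sketch, not
anything Jef has claimed).  Solomonoff’s universal prior weights every
program that could have produced the data seen so far, with shorter
programs weighted exponentially more heavily:

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

where U is a universal prefix Turing machine, p ranges over programs whose
output begins with the observed string x, and |p| is the length of p in
bits.  Prediction then falls out as a ratio:

    P(next symbol is b | x) = M(xb) / M(x)

The claimed significance is that this single (incomputable) prior
dominates every computable prediction scheme up to a constant factor.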

You say that “sensation is never ‘received’ by any system,” and yet the
word is commonly used to describe information received by the brain from
sensory organs, not just in common parlance but also in the brain science
literature.
You have every right to use your “stimulus, process, response” model, in
which I assume you define “sensation” as part of the process, but I hope
you realize that many people as intelligent as you do not make that
distinction.  Since
these are definitional issues, there is no right or wrong, unless,
perhaps, one usage is much more common than the other.  I don’t think your
strictly limited usage is the most common.
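
To make the two usages concrete, here is a minimal toy sketch in Python
(entirely my own illustration, with made-up function names; it is neither
Jef’s model nor mine) of the stimulus, process, response loop, in which
“sensation” would name something happening inside the process step rather
than a package delivered to the system:

    def update(state, stimulus):
        # toy internal dynamics: fold the stimulus into a running record;
        # on the stricter reading, "sensation" lives here, in the process
        return state + [stimulus]

    def act(state):
        # toy response policy: echo the most recent element of the state
        return state[-1]

    def agent_step(stimulus, state):
        state = update(state, stimulus)   # process
        return act(state), state          # response

    response, state = agent_step("flash of light", [])

On the everyday reading, the stimulus argument itself is the “received
sensation”; on the stricter reading, only the process step deserves the
word.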

With regard to confusing subjective and objective, I assume it is clear,
without the need for explicit statement, to most readers on this list that
everything that goes on in a brain is derived pretty much either from what
has been piped in from various sensors, including chemical and emotional
sensors, or built into it by its hardware or software.

One can argue that there is no objective reality, but if people are
allowed to believe in God, please excuse me if I believe in external
reality.  Even if there isn’t one, it sure as hell seems (at least to me)
like there is, and it is one hell of a good simplifying assumption.  But I
think every reader on this list knows that what goes on in our heads is
just shadows cast on our cranial cave walls by something outside.  And
that as much as many of us might believe in an objective reality, none of
us know exactly what it is.

JEF ########>> Any kind of Cartesian theater in the mind, silent audience
and all -- never mind the experimental evidence for gaps, distortions,
fabrications, confabulations in the story putatively shown --  has no
functional purpose.  In systems-theoretical terms, this would entail an
additional processing step of extracting relevant information from the
essentially whole content of the theater which is not only unnecessary but
intractable.  The system interacts with 'reality' without the need to
interpret it.

ED ########>> I don’t know what your model of the mind’s theater is, but
mine has a lot of purpose.  I cited Baars’ Theater of the Mind, but to be
honest the model I use is based on one I started developing in 1969-70,
decades before I ever heard of Baars’ model, and I have read only a very
brief overview of his model.

With regard to “gaps, distortions, fabrications, confabulations”, all
those things occur in human minds, so it is not necessarily inappropriate
that they occur in a model of the mind.

It is not necessary for the whole content of the theater to have its
information extracted, any more than it is necessary for the whole
activation state of your brain to be extracted.

The audience is not silent.  If you think of audiences as necessarily
silent, I guess you have never gone to any good shows, games, or concerts.
If you have ever listened to a really important, tense baseball game on
the radio, you have a sense of just how dynamic and alive an audience can
be.  It can seem to have a life of its own.  But that is just 50,000
people.  The cortex has probably 300 million cortical columns and 30
billion neurons.  Evidence shows that conscious awareness tends to be
associated with neural synchrony.  That is somewhat the equivalent of
people in an audience clapping together, or doing a wave, or arguably
turning their heads toward the same thing.  The brain is full of neurons
that can start spiking at a millisecond’s notice.  It can be one hell of a
lively house.

The mind’s sense of awareness comes not from a homunculus, but rather
from millions of parts of the brain watching and responding to the mind’s
own dynamic activation state, including short-term and long-term memory of
such states.  A significant percentage of these viewers can respond at
once, each in its own way, to what is in the spotlight of consciousness,
and there is a mechanism for rapidly switching the spotlight in response
to audience reactions, reactions which can include millions of dynamic
dimensions.
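
Since this audience-and-spotlight mechanism is really an architecture
claim, a toy sketch may help (a minimal Baars-style global-workspace loop
in Python, entirely my own construction, not Ed’s or Baars’ actual model):

    import random

    class Specialist:
        # one "audience member": watches each broadcast and bids for
        # the spotlight
        def __init__(self, name):
            self.name = name
            self.activation = random.random()

        def react(self, broadcast):
            # toy reaction: decay the old bid and add a fresh response;
            # a real model would make the response depend on the content
            self.activation = 0.7 * self.activation + 0.3 * random.random()

    def workspace_cycle(specialists):
        # spotlight: the most activated specialist's content is broadcast
        # to the entire audience, which all reacts at once; those
        # reactions determine where the spotlight moves next
        winner = max(specialists, key=lambda s: s.activation)
        for s in specialists:
            s.react(winner.name)
        return winner.name

    audience = [Specialist("column-%d" % i) for i in range(100000)]
    for _ in range(5):
        print(workspace_cycle(audience))

The point the sketch illustrates is the one above: awareness as the whole
audience responding, with no single homunculus reading the stage.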

With regard to your statement that “the system interacts with ‘reality’
without the need to interpret it”: that sounds even more mind-denying
than Skinner’s behaviorism.  At least Skinner showed enough respect for
the mind to honor it with a black box.  I guess we are to believe that
perception, cognition, planning, and understanding happen without any
interpretation, that they are all just direct lookup.

Even Kolmogorov and Solomonoff at least accord the mind the honor of
multiple programs, ones that can be quite complex at that, complex enough
even to do “interpretation.”
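
As an aside for readers: Kolmogorov complexity K(x), the length of the
shortest program that outputs x, is incomputable, but a general-purpose
compressor gives a crude, computable upper bound on description length.
A quick Python illustration (my own toy, not anything from this thread):

    import zlib

    def description_length(s):
        # compressed size in bytes: an upper bound on, and a crude
        # stand-in for, the true Kolmogorov complexity, which no
        # program can compute
        return len(zlib.compress(s.encode("utf-8")))

    print(description_length("ab" * 40))   # regular: compresses well
    print(description_length("q8x!kz02mv,rj9w7hy3tqp5u4nb61c8dk0zsfe2ma7vgl0pt1ybw"))  # irregular: barely compresses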

Ed Porter


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: Jef Allbright [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 08, 2007 4:22 PM
To: agi@v2.listbox.com
Subject: Re: [agi] How valuable is Solmononoff Induction for real world
AGI?


On 11/8/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:

> Jeff,
>
> In your below flame you spent much more energy conveying contempt than
> knowledge.

I'll readily apologize again for the ineffectiveness of my presentation,
but I meant no contempt.


> Since I don't have time to respond to all of your attacks,

Not attacks, but (overly) terse pointers to areas where difficulty framing
the question leads to difficulty understanding the problem.


> MY PRIOR POST>>>> "...affect the event's probability..."
>
> JEF'S PUT DOWN 1>>>>More coherently, you might restate this as
> "...reflect the event's likelihood..."

I (ineffectively) tried to highlight a thread of epistemic confusion
involving an abstract observer interacting with and learning from its
environment.  In your paragraph, I find it nearly impossible to find a
valid base from which to suggest improvements.  If I had acted more
wisely, I would have tried first to establish common ground
**outside** your statements and touched lightly and more constructively on
one or two points.


> MY COMMENT>>>> At Dragon System, then one of the world's leading
> speech recognition companies, I was repeatedly told by our in-house
> PhD in statistics that "likelihood" is the measure of a hypothesis
> matching, or being supported by, evidence.  Dragon selected speech
> recognition word candidates based on the likelihood that the
> probability distribution of their model matched the acoustic evidence
> provided by an event, i.e., a spoken utterance.

If you said Dragon selected word candidates based on their probability
distribution relative to the likelihood function supported by the evidence
provided by acoustic events I'd be with you there.  As it is, when you say
"based on the likelihood that the probability..." it seems you are
confusing the subjective with the objective and, for me, meaning goes out
the door.
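
A toy numerical version of the distinction being drawn here (my own
made-up numbers, in Python; this is not Dragon’s actual method):

    # likelihood: P(acoustics | word), a function of the hypothesis with
    # the evidence held fixed; it need not sum to one over words
    likelihood = {"wreck a nice beach": 0.05, "recognize speech": 0.02}

    # prior: P(word) from a language model (made-up values)
    prior = {"wreck a nice beach": 0.1, "recognize speech": 0.9}

    # posterior: P(word | acoustics), the normalized product
    unnormalized = {w: prior[w] * likelihood[w] for w in prior}
    total = sum(unnormalized.values())
    posterior = {w: unnormalized[w] / total for w in unnormalized}
    print(posterior)

Selecting the word with the highest posterior is well defined; talking
about "the likelihood that a probability distribution matched" is where
the two levels blur.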


> MY PRIOR POST>>>> "...the descriptive length of sensations we
> receive..."
>
> JEF'S PUT DOWN 2>>>> Who is this "we" that "receives" sensations?
> Holy homunculus, Batman, seems we have a bit of qualia confusion
> thrown into the mix!
>
> MY COMMENT>>>> Again I did not know that I would be attacked for using
> such a common English usage as "we" on this list.  Am I to assume that
> you, Jef, never use the words "we" or "I" because you are surrounded
> by "friends" so kind as to rudely say "Holy homunculus, Batman" every
> time you do.

Well, I meant to impart a humorous tone, rather than to be rude, but again
I offer my apology; I really should have known it wouldn't be effective.

I highlighted this phrasing, not for the colloquial use of "we", but
because it again demonstrates epistemic confusion impeding comprehension
of a machine intelligence interacting (and learning
from) its environment.  To conceptualize any such system as "receiving
sensation" as opposed to "expressing sensation", for example, is wrong in
systems-theoretic terms of stimulus, process, response.  And this
confusion, it seems to me, maps onto your expressed difficulty grasping
the significance of Solomonoff induction.


> Or, just perhaps, are you a little more normal than that?
>
> In addition, the use of the word "we" or even "I" does not necessarily
> imply a homunculus.  I think most modern understanding of the brain
> indicates that human consciousness is most probably -- although richly
> interconnected -- a distributed computation that does not require a
> homunculus.  I like and often use Bernard Baars' Theater of
> Consciousness metaphor.

Yikes!  Well, that goes to my point.  Any kind of Cartesian theater in the
mind, silent audience and all -- never mind the experimental evidence for
gaps, distortions, fabrications, confabulations in the story putatively
shown --  has no functional purpose.  In systems-theoretical terms, this
would entail an additional processing step of extracting relevant
information from the essentially whole content of the theater which is not
only unnecessary but intractable.  The system interacts with 'reality'
without the need to interpret it.


> But none of this means it is improper to use the words "we" or "I"
> when referring to ourselves or our consciousnesses.

I'm sincerely sorry to have offended you.  It takes even more time to
attempt to
repair, it impairs future relations, and clearly it didn't convey any
useful understanding -- evidenced by your perception that I was
criticizing your use of English.



> And I think one should be allowed to use the word "sensation" without
> being accused of "qualia confusion."  Jeff, do you ever use the word
> "sensation," or would that be too "confusing" for you?

"Sensation" is a perfectly good word and concept.  My point is that
sensation is never "received" by any system, that it smacks of qualia
confusion, and that such a misconception gets in the way of understanding
how a machine intelligence might deal with "sensation" in practice.


> So, Jeff, if Solomonoff induction is really a concept that can help me
> get a more coherent model of reality, I would really appreciate
> someone who had the understanding, intelligence, and friendliness...

Again I apologize for my clearly counter-productive post, and assure you
that I will not interfere (or attempt to contribute) while others with
understanding, intelligence, and friendliness post their truly helpful
responses.

- Jef


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=63189687-c92f10
