On Nov 13, 2007 2:37 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

>
> Ben,
>
> Unfortunately what you say below is tangential to my point, which is
> what happens when you reach the stage where you cannot allow any more
> vagueness or subjective interpretation of the qualifiers, because you
> have to force the system to do its own grounding, and hence its own
> interpretation.



I don't see why you talk about "forcing the system to do its own grounding" --
the probabilities in the system are grounded in the first place, as they are
calculated based on experience.

The system observes, records what it sees, abstracts from it, and chooses
actions that it guesses will fulfill its goals.  Its goals are ultimately
grounded in in-built feeling-evaluation routines, measuring stuff like
"amount of novelty observed", "amount of food in system", etc.

So, the system sees and then acts ... and the concepts it forms and uses
are created/used based on their utility in deriving appropriate actions.
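
To make that concrete, here is a toy sketch of the kind of loop I mean (this is
not Novamente code -- the routine names are invented purely for illustration):

# Toy sketch, invented names -- not Novamente code.  The point is that the
# goal signal comes from in-built feeling-evaluation routines applied to the
# system's own experience, so no separate "grounding" step is needed.

def novelty_feeling(observation, memory):
    """E.g. "amount of novelty observed": reward what memory has not seen."""
    return 0.0 if observation in memory else 1.0

def choose_action(candidate_actions, guess_outcome, memory):
    """Score each action by the feeling-value of the outcome the system
    guesses it will produce, and act on the best guess."""
    scored = [(a, novelty_feeling(guess_outcome(a), memory))
              for a in candidate_actions]
    return max(scored, key=lambda pair: pair[1])[0]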

There is no symbol-grounding problem except in the minds of people who
are trying to interpret what the system does, and get confused.  Any symbol
used within the system, and any probability calculated by the system, are
directly grounded in the system's experience.

There is nothing vague about an observation like "Bob_Yifu was observed
at time-stamp 599933322", or a fact like "Command 'wiggle ear' was sent
at time-stamp 544444".  These perceptions and actions are the root of the
probabilities the system calculates, and need no further grounding.
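
To spell out what "grounded in the system's experience" means here: every such
probability ultimately bottoms out in counts over records of exactly that kind.
A toy sketch (invented names, not the actual Novamente code):

# Toy sketch, invented names: a probability is just a ratio of counts over
# the system's own time-stamped perception/action records.

records = [
    ("observed", "Bob_Yifu", 599933322),
    ("acted", "wiggle ear", 544444),
    # ... more perceptions and actions ...
]

def frequency(predicate, records):
    """Fraction of records satisfying predicate -- grounded directly in experience."""
    if not records:
        return 0.0
    return sum(1 for r in records if predicate(r)) / len(records)

p_bob = frequency(lambda r: r[1] == "Bob_Yifu", records)   # 0.5 on this toy data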



> What you gave below was a sketch of some more elaborate 'qualifier'
> mechanisms.  But I described the process of generating more and more
> elaborate qualifier mechanisms in the body of the essay, and said why
> this process was of no help in resolving the issue.
>

So, if a system can achieve its goals by choosing procedures that it thinks
are likely to achieve them, based on the knowledge it has gathered via its
perceived experience -- why do you think it has a problem?

I don't really understand your point, I guess.  I thought I did -- I thought
your point was that precisely specifying the nature of a conditional
probability is a rat's nest of complexity.  And my response was basically
that in Novamente we don't need to do that, because we define conditional
probabilities based on the system's own knowledge-base, i.e.

Inheritance A B <.8>

means

"If A and B were reasoned about a lot, then A would (as measured by a
weighted average) have 80% of the relationships that B does"
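
Spelled out as a rough sketch (this is just the intended semantics, not the
actual Novamente truth-value formula; the names are made up):

# Rough sketch of the intended semantics -- not the actual Novamente/PLN
# formula; the names are invented.  The strength of "Inheritance A B" is the
# weighted fraction of B's relationships that A also has, weighting each
# relationship by how heavily it figures in reasoning.

def inheritance_strength(rels_of_A, rels_of_B, weight):
    """rels_of_X: set of X's relationships; weight(r): how much r is reasoned about."""
    total = sum(weight(r) for r in rels_of_B)
    if total == 0.0:
        return 0.0
    shared = sum(weight(r) for r in rels_of_B if r in rels_of_A)
    return shared / total   # e.g. 0.8 in "Inheritance A B <.8>"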

But apparently you were making some other point, which I did not grok,
sorry...

Anyway, though, Novamente does NOT require logical relations of escalating
precision and complexity to carry out reasoning, which is one thing you
seemed to be assuming in your post.

Ben

