Richard,

>
> So here I am, looking at this situation, and I see:
>
>   ---- AGI system interpretation (implicit in the system's use of it)
>   ---- Human programmer interpretation
>
> and I ask myself which one of these is the real interpretation?
>
> It matters, because they do not necessarily match up.


That is true, but in some cases they may approximate each other well...

In others, not...

This happens to be a pretty simple case, so the odds of a good
approximation seem high.



>  The human
> programmer's interpretation has a massive impact on the system because
> all the inference and other mechanisms are built around the assumption
> that the probabilities "mean" a certain set of things.  You manipulate
> those p values, and your manipulations are based on assumptions about
> what they mean.



Well, the PLN inference engine's treatment of

ContextLink
    home
    InheritanceLink Bob_Yifu friend

is in no way tied to whether the system's implicit interpretations of the
ideas of "home" or "friend" are humanly natural, or humanly comprehensible.

The same inference rules will be applied to cases like

ContextLink
    Node_66655
    InheritanceLink Bob_Yifu Node_544

where the concepts involved have no humanly-comprehensible label.
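
To make this concrete, here is a minimal Python sketch (not Novamente code) of the point: an inference step of this kind looks only at the link structure and the truth-value strengths, and treats the node labels as opaque strings.  The deduction formula is a simplified independence-based one along the lines of PLN's deduction rule; all names and numbers below are just illustrative.

# Toy sketch, not Novamente code: the rule never interprets node labels.

def deduction_strength(sAB, sBC, sB, sC):
    # Simplified independence-based deduction strength, PLN-style.
    if sB >= 1.0:
        return sBC
    return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)

def deduce(inh_AB, inh_BC, node_strengths):
    # inh_XY is a tuple (X, Y, strength); returns the inferred (A, C, strength).
    A, B, sAB = inh_AB
    B2, C, sBC = inh_BC
    assert B == B2   # purely structural check; the labels are never "understood"
    sAC = deduction_strength(sAB, sBC, node_strengths[B], node_strengths[C])
    return (A, C, sAC)

# Humanly labeled nodes...
print(deduce(("Bob_Yifu", "friend", 0.8), ("friend", "helpful_person", 0.6),
             {"friend": 0.3, "helpful_person": 0.4}))
# ...and opaque ones get exactly the same treatment.
print(deduce(("Bob_Yifu", "Node_544", 0.8), ("Node_544", "Node_66655", 0.6),
             {"Node_544": 0.3, "Node_66655": 0.4}))

The two calls yield the same inferred strength; the humanly natural labels buy the rule nothing.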

It is true that the interpretations of ContextLink and InheritanceLink are
fixed by the wiring of the system, in a general way (though what kinds of
properties they refer to may vary, in a way dynamically determined by the
system).
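
Concretely, the wired-in part is roughly this: a link of the form

ContextLink
    C
    InheritanceLink A B

is read as the InheritanceLink evaluated with everything restricted to the context C, i.e. roughly an InheritanceLink between (A AND C) and (B AND C).  That schema is fixed by the wiring; which atoms end up playing the roles of C, A and B, and what they denote, is up to the system.  (This is a rough gloss of the contextualization idea, not a description of the actual code.)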


> In order to completely ground the system, you need to let the system
> build its own symbols, yes, but that is only half the story:  if you
> still have a large component of the system that follows a
> programmer-imposed interpretation of things like probability values
> attached to facts, you have TWO sets of symbol-using mechanisms going
> on, and the system is not properly grounded (it is using both grounded
> and ungrounded symbols within one mechanism).



I don't think the system needs to learn its own probabilistic reasoning
rules in order to be an AGI.  This, to me, is too much like requiring that
a brain learn its own methods for modulating the conductances of the
bundles of synapses linking the neurons in cell assembly A to those in
cell assembly B.

I don't see a problem with the AGI system having hard-wired probabilistic
inference rules, and hard-wired interpretations of probabilistic link
types.  But the interpretation of any **particular** probabilistic
relationship inside the system is relative to the concepts and the
empirical and conceptual relationships that the system has learned.
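
For instance, the wiring fixes that the strength of

InheritanceLink friend helpful_person

is read, roughly, as the conditional probability of "helpful_person" given "friend".  So if the system's learned concept of "friend" currently covers 40 individuals, 32 of whom also fall under its learned concept of "helpful_person", the strength comes out as 32/40 = 0.8.  The reading of the number, and the rule that computes it, are wired in; what "friend" and "helpful_person" actually cover is entirely a matter of what the system has learned.  (The node "helpful_person" and the counts are made up for illustration.)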

You may think that the brain learns its own uncertain inference rules
based on a lower-level infrastructure that operates in terms entirely
unconnected to ideas like uncertainty and inference.  I think this is
wrong.  I think the brain's uncertain inference rules are the result, on
the cell assembly level, of Hebbian learning and related effects on the
neuron/synapse level.  So I think the brain's basic uncertain inference
rules are wired in, just as Novamente's are, though of course using a
radically different infrastructure.
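
To caricature what I mean in a few lines of Python: treat the firing of two cell assemblies A and B as events, and let Hebbian-style strengthening be driven by how often they fire together.  The ratio of joint firings to A's firings then behaves like an estimate of P(B|A), which is exactly the kind of quantity an uncertain inference rule traffics in.  (This is a statistical toy, not a neural model; all the numbers are made up.)

import random

random.seed(0)
count_A, count_AB = 0, 0
for _ in range(10000):
    a = random.random() < 0.3                    # assembly A fires
    b = random.random() < (0.8 if a else 0.1)    # B fires more often when A does
    count_A += a
    count_AB += a and b

print(count_AB / count_A)   # close to 0.8, a crude estimate of P(B|A)

The point is just that co-activation statistics of this sort already have a natural probabilistic reading at the assembly level, without the brain having to learn what "probability" means.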

Ultimately an AGI system needs to learn its own reasoning rules and
radically modify and improve itself, if it's going to become strongly
superhuman!  But that is not where we need to start...

-- Ben
