Hi Pei,
I partly agree with you...
My line of thinking in the prior email was about what happens when you
* take a mind M1 that is **not** necessarily operating according to
an assumption of a consistent probability distribution across all its
experience (and, as you know, Novamente does not make an assumption
of a consistent prob. distribution across all its experience)
and
* study it from the perspective of a hypothetical mind M2 that
**does** have sufficient computational resources to study M1's
behaviors from the perspective of an overall consistent probability
distribution
This is a theoretical exercise which does not imply that M1 must
itself operate from the perspective of a consistent overall
probability distribution. The question is whether looking at M1 from
the perspective of M2 can shed light on M1.
And my conjecture is that it can: even if M1 is not explicitly
maintaining probabilistic consistency in its knowledge base, it has
still got to approximately maintain probabilistic consistency in its
actions if it is going to achieve its goals effectively.
Put simplistically, this means that it has got to manage its internal
inconsistencies effectively from a pragmatic, goal-achievement
perspective.
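To make that concrete, consider the standard Dutch-book illustration
(the numbers below are toy values, and the little Python sketch is
mine, not anything from Novamente): an agent whose action-guiding
credences for an event A and its complement sum to more than 1 will
happily buy a pair of bets that loses money in every possible world.

    # Toy Dutch-book sketch: inconsistent action-guiding credences
    # expose an agent to a guaranteed loss.
    def bet_price(credence, stake=1.0):
        # Fair price, to the agent, of a bet paying `stake` if the
        # event occurs.
        return credence * stake

    credence_A, credence_not_A = 0.7, 0.6   # inconsistent: 0.7 + 0.6 > 1
    cost = bet_price(credence_A) + bet_price(credence_not_A)
    for a_occurs in (True, False):
        payoff = (1.0 if a_occurs else 0.0)    # the bet on A pays off
        payoff += (0.0 if a_occurs else 1.0)   # the bet on not-A pays off
        print(f"A={a_occurs}: net = {payoff - cost:+.2f}")  # -0.30 either way

An agent that manages its inconsistencies well, in the pragmatic
sense I mean, is one that doesn't let them leak into its actions in
ways that expose it to sure losses like that.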
I don't think this is especially profound or surprising, though.
In fact, the main reason I moved away from doing this kind of
theorizing years ago is that it seemed that AGI theory was still at
the stage of proving very simple, sorta intuitively obvious things
(like Hutter's theoretical work, which is technically brilliant but
doesn't arrive at any conceptually nonobvious conclusion) ... and the
really interesting questions about concrete AGI designs remain
totally unaddressable by the mathematical theory (not in principle,
but in practice).
For instance, the Novamente design relies on the capability of
several AI algorithms such as
-- probabilistic logical inference
-- probabilistic evolutionary learning
-- economics-based attention allocation
-- statistical pattern mining
to all assist each other in appropriate ways, so as to recursively
dampen each other's intrinsic combinatorial explosions. I can
formulate the hypothesis that this dampening will happen as a series
of mathematical conjectures. But proving any of these conjectures
would be a lot of work, and would stretch the capability of current
mathematical tools.
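To gesture at the kind of dampening I mean, here is a cartoon in
Python (the component names, the toy "interestingness" criterion, and
the numbers are all invented for illustration; this is not the actual
Novamente code): one component's output is wired in as a filter on
another component's candidate pool, cutting the latter's effective
branching factor.

    # Cartoon of inter-component damping (all names and criteria invented):
    # a pattern-mining stand-in prunes the candidate pool that a
    # probabilistic-inference stand-in would otherwise search exhaustively.
    from itertools import combinations

    concepts = [f"C{i}" for i in range(20)]

    def pattern_miner(data):
        # Stand-in for statistical pattern mining: keeps a sparse subset
        # of concept pairs as "interesting" (toy criterion, purely
        # illustrative).
        return {(a, b) for a, b in combinations(data, 2)
                if (int(a[1:]) + int(b[1:])) % 7 == 0}

    def inference_candidates(data, focus=None):
        # Stand-in for inference's premise selection; `focus` is the
        # damping signal supplied by another component.
        pairs = combinations(data, 2)
        if focus is not None:
            pairs = (p for p in pairs if p in focus)
        return list(pairs)

    undamped = inference_candidates(concepts)
    damped = inference_candidates(concepts, focus=pattern_miner(concepts))
    print(len(undamped), "->", len(damped))   # 190 -> 27 in this toy setup

The conjectures I mean are about whether this sort of mutual pruning,
applied recursively among the real algorithms rather than toy filters
like these, actually keeps the overall search sub-exponential, and
that is exactly the part that stretches current mathematical tools.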
This is quite unlike the situation in, say, civil engineering, where
you can design a new bridge and then use mathematical tools to
estimate its effectiveness. Or nuclear weapons engineering, where
they are considering deploying new kinds of nuclear weapons that have
never been tested except using mathematical-theory-based computer
simulations!
So, as far as theory-of-AGI goes, one winds up thinking about stuff that
can tractably be proved given the current mathematical tools, rather
than stuff that's really critical to achieving AGI in the near term.
Which is a bummer for those of us who spent our long-lost childhoods
getting PhDs in math ;-)
ben
On Feb 4, 2007, at 7:52 AM, Pei Wang wrote:
Ben,
I have no problem with any of the points you made in the following.
However, the axioms of probability theory and the interpretations of
probability (frequentist, logical, subjective) all take a consistent
probability distribution as a precondition. Therefore, this assumption
is and will be behind any "proof" that AGI systems must be based on
probability theory to be optimal. If such consistency can never be
achieved by any concrete AGI system, I don't see the value of such a
proof. It cannot even be taken as a useful upper bound or
approximation in the design process, because accepting it and
rejecting it lead to very different designs. It is just like
designing an AGI under the assumption of infinite resources while
saying that resource restrictions can be introduced gradually later
--- it will never work, unless the old design is almost completely
discarded.
Pei
On 2/4/07, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> Again, taking consistency as an ultimate goal (which is never fully
> achievable) and taking it as a precondition (even an approximate one)
> are two very different positions. I hope you are not suggesting the
> latter --- at least your posting makes me feel that way.
Hi,
In the Novamente system, consistency is just one among many goals
that are balanced internally by the system as it decides how to
allocate its attention and how to prune its knowledge base.
I happened to be thinking about consistency from a theoretical point
of view lately, but not because I think it's the sole key to
intelligence or anything like that...
It happens to be easier to think about mathematically than many of
the other important properties of intelligence, however ;-)
ben