Edward W. Porter wrote:
Richard,
Goertzel claims his planning indicates it is roughly 6 years x 15
excellent, hard-working programmers, or 90 man-years, to get his
architecture up and running. I assume that will involve a lot of “hard”
mental work.
By “hard problem” I mean a problem for which we don’t have what seems --
within the Novamente model -- to be a way of handling it at, at least,
a roughly human level. We won’t have proof that a problem is not hard
until we actually get the part of the system that deals with that
problem up and running successfully.
Until then, you have every right to be skeptical. But you also have the
right, should you so choose, to open your mind up to the tremendous
potential of the Novamente approach.
RICHARD####> What would be the solution of the grounding problem?
ED####> Not hard. As one linguist said, “Words are defined by the company
they keep.” Kind of like how I am guessing Google Sets work, but at more
different levels in the gen/comp pattern hierarchy and with more
cross-inferencing between different Google Sets seeds. The same goes not
only for words, but for almost all concepts and sub-concepts. Grounding
is built up from a lifetime of experience recording such associations and
from the dynamic reactivation of those associations, both in the
subconscious and the conscious, in response to current activations.
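The “company they keep” idea is essentially distributional semantics. The toy sketch below is my own illustration of that principle, not anything from the Novamente design: words are represented by counts of their neighboring context words, and words that occur in similar contexts come out similar.

```python
from collections import Counter
from math import sqrt

def cooccurrence_vectors(sentences, window=2):
    """Build a context-word count vector for each word ("the company it keeps")."""
    vectors = {}
    for sentence in sentences:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            ctx = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
            vectors.setdefault(w, Counter()).update(ctx)
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Tiny invented corpus, just to show the effect.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]
vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" share contexts (both get chased), so they end up
# more similar to each other than "cat" is to "cheese".
print(cosine(vecs["cat"], vecs["dog"]), cosine(vecs["cat"], vecs["cheese"]))
```

A real system would of course use far richer structure than flat co-occurrence counts, but this is the kernel of the grounding-by-association argument.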
RICHARD####> What would be the solution of the problem of autonomous,
unsupervised learning of concepts?
ED####> Not hard! Read Novamente (or, for a starter, my prior summaries
of it). That’s one of its main focuses.
RICHARD####> Can you find proofs that inference control engines will not
show divergent behavior under heavy load (i.e. will they degrade
gracefully when forced to provide answers in real time)?
ED####> Not totally clear. Brain-level hardware will really help here,
but what is six orders of magnitude against the potential for
combinatorial explosion in the dynamic activations of something as large
and high-dimensional as world knowledge?
This issue falls under the
getting-it-all-to-work-together-well-automatically heading, which I said
is non-trivial. But Novamente directs a lot of attention to these
problems, by, among other approaches, (a) using long- and short-term
importance metrics to guide computational resource allocation, (b)
keeping a deep memory of which computational patterns have proven
appropriate in prior similar circumstances, (c) having a gen/comp
hierarchy of such prior computational patterns that allows them to be
instantiated in a given case in a context-appropriate way, and (d)
providing powerful inferencing mechanisms that go way beyond those
commonly used in most current AIs.
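Point (a) can be made concrete with a small sketch. Everything below is hypothetical -- the task names, the importance weights, and the blending formula are invented for illustration, not taken from Novamente -- but it shows the basic idea: blend short- and long-term importance into a single score and spend a fixed compute budget on the highest-scoring tasks first.

```python
import heapq

class Task:
    def __init__(self, name, short_term, long_term):
        self.name = name
        self.short_term = short_term  # recent relevance (hypothetical metric)
        self.long_term = long_term    # accumulated usefulness (hypothetical metric)

    def importance(self, w_short=0.7, w_long=0.3):
        # Blend of short- and long-term importance; the weights are invented here.
        return w_short * self.short_term + w_long * self.long_term

def allocate(tasks, budget):
    """Spend a fixed compute budget on the currently most important tasks."""
    heap = [(-t.importance(), t.name) for t in tasks]
    heapq.heapify(heap)
    schedule = []
    while heap and budget > 0:
        _, name = heapq.heappop(heap)
        schedule.append(name)
        budget -= 1
    return schedule

tasks = [Task("parse input", 0.9, 0.2),
         Task("background consolidation", 0.1, 0.8),
         Task("answer query", 0.8, 0.5)]
# With budget for two tasks, the urgent ones win and consolidation waits.
print(allocate(tasks, budget=2))
```

The point of such damping is exactly the graceful degradation Richard asks about: under heavy load the low-importance activity is what gets dropped.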
I am totally confident we could get something very useful out of the
system even if it were not as well tuned as a human brain. There are all
sorts of ways you could dampen the potential not only for combinatorial
explosion, but also for instability. We probably would start it out
with a lot of such damping, but over time give it more freedom to
control its own parameters.
RICHARD####> Are there solutions to the problems of flexible, abstract
analogy building?
Language learning?
ED####> Not hard! A Novamente-class machine would be like Hofstadter’s
CopyCat on steroids when it comes to making analogies.
The gen/comp hierarchy of patterns would apply not only to all the
concepts that fall directly within what we think of as NL, but also to
the system’s world knowledge itself, of which such NL concepts and
their contexts would be a part. This includes knowledge about its own
life history, behavior, and the feedback it has received. Thus, it
would be fully capable of representing and matching concepts at the
level humans do when understanding and communicating with NL. The deep
contextual grounding contained within such world knowledge and the
ability to make inferences from it in real time would largely solve the
hard disambiguation problems in natural language recognition, and allow
language generation to be performed rapidly in a way that is appropriate
to all the levels of context that humans use when speaking.
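The disambiguation claim can be illustrated with a deliberately tiny sketch in the spirit of Lesk-style sense selection. The sense inventory and associations below are invented for the example; the point is just that stored contextual associations, however acquired, let the current context pick the intended sense.

```python
def disambiguate(word, context, senses):
    """Pick the sense whose stored associations overlap most with the context.

    `senses` maps sense names to sets of associated words -- a stand-in
    for the much richer contextual grounding discussed above.
    """
    ctx = set(context.lower().split())
    return max(senses, key=lambda s: len(ctx & senses[s]))

# Hypothetical two-sense inventory for "bank".
senses = {
    "river_bank": {"water", "shore", "fishing", "mud"},
    "money_bank": {"loan", "deposit", "teller", "account"},
}
print(disambiguate("bank", "she opened a deposit account at the bank", senses))
```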
RICHARD####> Pragmatics?
ED####> Not hard! This follows from the above answer. Understanding of
pragmatics would result from the ability to dynamically generalize, from
prior similar statements made in prior similar contexts, what those
prior contexts contained.
RICHARD####> Ben Goertzel wrote:
>Goertzel####> This practical design is based on a theory that is
fairly complete, but not easily verifiable using current technology.
The verification, it seems, will come via actually getting the AGI built!
ED####> You and Ben are totally correct. None of this will be proven
until it has actually been shown to work. But significant pieces of it
have already been shown to work.
I think Ben believes it will work, as do I, but we both agree it will
not be “verifiable” until it actually does.
As I wrote to Robin Hanson earlier today, the fact that you don’t agree
with what we view as the relatively high probability of success for our
approach does not reflect poorly on either your intelligence or your
knowledge of AI. If you haven’t spent a lot of time thinking about a
Novamente-like approach, there is no reason, no matter how bright you
are, that you should be able to understand its promise.
You are right. I have only spent about 25 years working on this
problem. Perhaps, no matter how bright I am, that is not enough to
understand Novamente’s promise.
I am sure you are smart enough to understand its promise if you wanted
to. Do you?
I did want to.
I did.
I do.
Richard Loosemore
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=64078210-64d0b9