Benjamin Goertzel wrote:
Richard,
> Even Ben Goertzel, in a recent comment, said something to the effect
> that the only good reason to believe that his model is going to function
> as advertised is that *when* it is working we will be able to see that
> it really does work:
The above paragraph is a distortion of what I said, and misrepresents my
own thoughts and beliefs.
I think that, after the Novamente design and the ideas underlying it are
carefully studied by a suitably trained individual, the hypothesis that
it will lead to a human-level AI comes to seem plausible. But there is
no solid proof; it's in part a matter of educated intuition.
The following quote which you gave is accurate:
Ben Goertzel wrote:
> This practical design is based on a theory that is fairly complete, but
> not easily verifiable using current technology. The verification, it
> seems, will come via actually getting the AGI built!
This is a million miles short of a declaration that there are "no hard
problems left in AI".
Whether there are "hard problems left in AI", conditional on the
assumption that
the Novamente design is workable, comes down to a question of semantic
interpretation.
In completing the detailed design and implementation of the Novamente
system, there are around a half-dozen "research problems" on the "PhD
thesis" level to be solved. This means there is some hard thinking left;
yet if the Novamente design is correct, these are well-defined and
well-delimited technical questions, which seem very likely to be solvable.
As an example, there is the task of generalizing the MOSES algorithm
(see metacog.org) to handle general programmatic constructs at the nodes
of its internal program trees. Of course this is a hard problem, yet
it's a well-defined computer science problem which, after a lot of
thought, doesn't seem likely to be hiding any deep gotchas.
But this is research and development -- not pure development -- so one
never knows for sure...
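[To illustrate what "general programmatic constructs at the nodes of a
program tree" might mean, here is a minimal, hypothetical Python sketch.
This is not the actual MOSES code or its Combo representation -- just an
evaluator for a toy program tree in which a node may hold a control
construct such as "if", rather than only fixed arithmetic primitives.]

```python
# Hypothetical sketch (NOT the real MOSES implementation): a program
# tree whose nodes can carry general programmatic constructs, e.g. an
# "if" node, alongside ordinary arithmetic operators.

class Node:
    def __init__(self, op, children=()):
        self.op = op                 # operator name, or "const"/"var"
        self.children = list(children)

    def eval(self, env):
        if self.op == "const":
            return self.children[0]              # raw value
        if self.op == "var":
            return env[self.children[0]]         # variable lookup
        if self.op == "+":
            return self.children[0].eval(env) + self.children[1].eval(env)
        if self.op == "*":
            return self.children[0].eval(env) * self.children[1].eval(env)
        if self.op == ">":
            return self.children[0].eval(env) > self.children[1].eval(env)
        if self.op == "if":          # a general construct at a node
            cond, then_b, else_b = self.children
            return then_b.eval(env) if cond.eval(env) else else_b.eval(env)
        raise ValueError(f"unknown op: {self.op}")

# The tree (if (> x 0) (* x 2) (+ x 10)):
tree = Node("if", [
    Node(">", [Node("var", ["x"]), Node("const", [0])]),
    Node("*", [Node("var", ["x"]), Node("const", [2])]),
    Node("+", [Node("var", ["x"]), Node("const", [10])]),
])

print(tree.eval({"x": 3}))   # 6  (3 > 0, so 3 * 2)
print(tree.eval({"x": -5}))  # 5  (-5 + 10)
```

[The point of the sketch is only that once nodes may hold conditionals,
loops, or other constructs, mutation and crossover over such trees must
respect the semantics of those constructs -- which is what makes the
generalization a nontrivial research problem.]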
Ben
I realized, too late last night, that I *did* actually say something
that was not what I intended, so you are right: my statement did
misrepresent your position.
The message that I was trying to deliver when I mistakenly said:
> Even Ben Goertzel, in a recent comment, said something to the effect
> that the only good reason to believe that his model is going to
> function as advertised is that *when* it is working we will be able
> to see that it really does work:
was actually:
Even Ben Goertzel, in a recent comment, said something to the effect
that the only good reason to believe that his model is going to
function as advertised, OTHER THAN THE INTUITIONS THAT HE AND OTHERS OF
LIKE MIND HAVE ABOUT THE VIABILITY OF THE DESIGN (AND INTUITIONS FALL
SHORT OF WHAT I WOULD REALLY CALL A "GOOD REASON" TO TRUST THE DESIGN)
is that *when* it is working we will be able to see that it really does
work.
This is not equivalent to what I originally said (which gave the
impression that you had nothing, not even intuitions, on which to base
belief in the design).
My apologies for the confusion.
I should add, Ben, that this was not meant as an attack on the Novamente
design per se: I believe that all AI/AGI systems have essentially been
built on the same appeals to intuition.
I have more to say about the general topic, but will take that up
separately.
Richard Loosemore
-----
This list is sponsored by AGIRI: http://www.agiri.org/email