Benjamin Goertzel wrote:


Richard,


    Even Ben Goertzel, in a recent comment, said something to the effect
    that the only good reason to believe that his model is going to function
    as advertised is that *when* it is working we will be able to see that
    it really does work:


The above paragraph is a distortion of what I said, and misrepresents my
own thoughts and beliefs.

When pressed, you always resort to a phrase equivalent to the one you give below: "I think that, after the Novamente design and the ideas underlying it are carefully studied by a suitably trained individual, the hypothesis that it will lead to a human-level AI comes to seem plausible."

When you look carefully at this phrasing, its core is a statement that the best reason to believe the design will work is the *intuition* of someone who studies it ... and you state that you "believe" that anyone suitably trained who studies it will share your intuition. This is all well and good, but it contains no metric and no new analysis of the outstanding problems that we can all scrutinize and assess.

I would consider an appeal to the intuition of "suitably trained individuals" to be very much less than a "good reason to believe that the model is going to function as advertised".

Thus: if someone wanted volunteers to fly in their brand-new aircraft design, but all they could offer to reassure people that it was going to work was the intuition of suitably trained individuals, then most rational people would refuse to fly - they would want more than intuitions.

In this light, my summary would not be a distortion of your position at all, but only a statement about whether an appeal to intuition counts as a good reason to believe.

And, of course, there are some suitably trained individuals who do not share your intuitions, even given the limited access they have to your detailed design.

I respect your optimism, and applaud your single-minded commitment to the project: if it is going to work, that is the way to get it done. I certainly wish you luck with it.




Richard Loosemore

I think that, after the Novamente design and the ideas underlying it are
carefully studied by a suitably trained individual, the hypothesis that it will
lead to a human-level AI comes to seem plausible.  But, there is no
solid proof; it's in part a matter of educated intuition.
The following quote which you gave is accurate:


    Ben Goertzel wrote:
     > This practical design is based on a theory that is fairly complete, but not
     > easily verifiable using current technology.  The verification, it seems, will
     > come via actually getting the AGI built!

    This is a million miles short of a declaration that there are "no hard
    problems left in AI".



Whether there are "hard problems left in AI", conditional on the assumption that
the Novamente design is workable, comes down to a question of semantic
interpretation.

In completing the detailed design and implementation of the Novamente system, there are around a half-dozen "research problems" at the "PhD thesis" level to be solved. This means there is some hard thinking left; yet if the Novamente design is correct, these reduce to well-defined and well-delimited technical questions, which seem very likely
to be solvable.

As an example, there is the task of generalizing the MOSES algorithm (see http://metacog.org) to handle general programmatic constructs at the nodes of its internal program trees. Of course this is a hard problem, yet it's a well-defined computer science problem which (after a lot of thought) doesn't seem likely to be hiding any deep gotchas.
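To make the flavor of that generalization concrete, here is a toy sketch in Python. It is illustrative only, not actual MOSES code: the `Node` representation, the operator names, and the evaluator are my own assumptions. The point is simply that a node of a program tree may carry a general programmatic construct (here, a conditional) rather than only a fixed arithmetic or boolean primitive.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A program-tree node: a primitive, a construct keyword, or a leaf."""
    op: str                                   # e.g. "+", ">", "if", "x", "3.0"
    children: List["Node"] = field(default_factory=list)

def evaluate(node: Node, env: dict) -> float:
    """Evaluate a program tree against a variable binding `env`."""
    if node.op == "if":                       # a general construct at a node
        cond, then_branch, else_branch = node.children
        return (evaluate(then_branch, env) if evaluate(cond, env)
                else evaluate(else_branch, env))
    if node.op == "+":                        # ordinary fixed primitive
        return sum(evaluate(c, env) for c in node.children)
    if node.op == ">":
        a, b = node.children
        return 1.0 if evaluate(a, env) > evaluate(b, env) else 0.0
    if node.op in env:                        # variable leaf
        return env[node.op]
    return float(node.op)                     # numeric-literal leaf

# max(x, y) expressed as the tree: if (x > y) then x else y
tree = Node("if", [Node(">", [Node("x"), Node("y")]), Node("x"), Node("y")])
```

A learner like MOSES searches over trees of this kind; allowing construct nodes such as `if` (and, further, loops or local variables) is what makes the search space "general programmatic" rather than purely expression-valued.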

But this is research and development -- not pure development -- so one never knows for sure...

-- Ben
------------------------------------------------------------------------
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;

