Matt,

I'm not going to respond to your distorted trashing of my and OpenCog's
record in detail, point-by-point, as I have other more useful stuff to do...

But just to respond to a couple of quasi-random points:

> But I am intrigued at least by the possibility of natural language
> understanding. A more careful inspection shows that one of the papers,
>
> http://goertzel.org/ICAI_CogSyn_paper.pdf
>
> mentions that RelEx was used to extract semantic information and make
> inferences from biomedical research abstracts. I go to the reference:
>
> http://acl.ldc.upenn.edu/W/W06/W06-3317.pdf
>
> but I am disappointed to find that all of the work is done with
> hand-coded rules and that the results are anecdotal.

Hand-coded rules were used alongside statistical text analysis for that work.

Regarding the results being anecdotal: indeed, that was preliminary work.
It was done using a $15K pilot grant from the NIH, and we didn't end up
securing additional $$ for that sort of work, so we went in a different
direction.

Had someone funded us to do it, we could have made a human-marked-up corpus
for evaluating that work, and run statistical tests, and then tweaked the system
to improve the accuracy....

But as I didn't see that fine-tuning as core to our AGI initiative, I had
little motivation to devote scant resources to it.

However, I did think that little piece of work was pointful, because it
was interesting to observe, qualitatively, that the software was indeed
able to perform some interesting reasoning based on information
extraction from natural language ...

I understand how to obtain quantitative accuracy results by applying
algorithms to datasets.  I'm doing that now in my commercial work in
computational finance, and have done so in my published work on applying
machine learning to genomics.  I just don't think that this paradigm is
particularly useful for AGI.
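To sketch the kind of evaluation I mean -- scoring system extractions
against a human-marked-up gold corpus -- here is a minimal illustration.
The relation tuples below are made-up examples, not actual RelEx output:

```python
# Minimal sketch: precision/recall/F1 of extracted relations against a
# human-annotated gold standard. The tuples are hypothetical illustrations.

def precision_recall_f1(predicted, gold):
    """Compute precision, recall, and F1 over sets of extracted items."""
    predicted, gold = set(predicted), set(gold)
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical extractions from a biomedical abstract vs. human annotations:
system_output = [("geneA", "inhibits", "geneB"), ("geneC", "activates", "geneD")]
gold_standard = [("geneA", "inhibits", "geneB"), ("geneE", "binds", "geneF")]

p, r, f = precision_recall_f1(system_output, gold_standard)
```

Running the statistical tests is the easy part; the expensive part is
producing the human-marked-up corpus in the first place, which is what
the funding would have gone toward.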

> So really, the
> puppy demo isn't doing anything more sophisticated than SHRDLU did
> around 1968-1970, except that the graphics are better.

The MOSES algorithm and the distributed multi-start local-search framework
underlying that "virtual world reinforcement/imitation learning" work we
did in 2008-2009 are in fact much more sophisticated than anything that
was possible in SHRDLU's era.
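For readers unfamiliar with the general pattern, multi-start local search
can be sketched in a few lines. This is a toy illustration only, not the
actual MOSES implementation -- MOSES additionally evolves program trees
and layers probabilistic model-building on top of this basic loop:

```python
# Toy sketch of multi-start (restarted) local search: run greedy hill
# climbing from many random starting points and keep the best result.

import random

def hill_climb(score, start, neighbors, steps=100):
    """Greedy local search from one starting point."""
    current = start
    for _ in range(steps):
        best = max(neighbors(current), key=score)
        if score(best) <= score(current):
            break  # local optimum reached
        current = best
    return current

def multi_start_search(score, random_start, neighbors, restarts=20):
    """Restart local search many times; return the best local optimum found."""
    candidates = [hill_climb(score, random_start(), neighbors)
                  for _ in range(restarts)]
    return max(candidates, key=score)

# Example: maximize a bumpy function over integers in [-50, 50].
score = lambda x: -(x - 7) ** 2 + 10 * (x % 3)
random_start = lambda: random.randint(-50, 50)
neighbors = lambda x: [max(-50, x - 1), min(50, x + 1)]

random.seed(0)  # for reproducibility
best = multi_start_search(score, random_start, neighbors, restarts=20)
```

The distributed version just farms the independent restarts out across
machines, which is why the pattern parallelizes so naturally.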

I explained in an email a few minutes ago that the "puppy demo" was
created as part of a specific collaboration that ended shortly after
production of that demo, due to economic issues on the part of our
collaborator unrelated to the Novamente/OpenCog project....

It is possible that OpenCog would be much more advanced as an AGI
system now if I were personally much
better at business development, or at grant-writing....  Just as it's
possible that Babbage would have finished
his Analytical Engine, if he had been better at people-management or
sweet-talking investors, etc.; or if he'd
managed to become a business tycoon and fund his engineering amply
with his own cash....

However, I strongly disagree that OpenCog would be more advanced now if
we had focused the project's attention on some highly specific
quantitative benchmark, like information extraction from natural language
or learning better and better pet tricks.  That approach would merely
have led us to build a narrow-AI system good at doing those particular
things.  This is the crux of our scientific, conceptual disagreement --
which your commentary blurs by making the fallacious claim that the lack
of focus on narrow-AI quantitative metrics is the reason why this or that
prior conditional prediction of mine was not met.

-- Ben G


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now