Jim Bromer,

On Thu, Dec 27, 2012 at 5:19 PM, Jim Bromer <[email protected]> wrote:

> On Tue, Dec 25, 2012 at 3:30 PM, Ben Goertzel <[email protected]> wrote:
> Once we can make a compelling demonstration that genuinely showcases
> synergetic interaction of various cognitive processes in OpenCog, we will
> do so; and that will indeed be quite satisfying. The fact that you, or
> other skeptics, think this is taking longer than it should isn't
> particularly important to me.... You understand neither the underlying
> concepts nor the practical obstacles...
>
> ----------------------------
> Why would you make a petty personal comment when you were handling the
> criticism so well up until then?
> Let's see: I understand neither the underlying concepts nor the practical
> obstacles...
>

That's not a petty personal comment.  I really don't think you understand
the concepts underlying OpenCog (as a particular AGI design), nor the
practical obstacles that we face in implementing it.


> This isn't Ben Goertzel Bashing.
>

Jim: There are loads of other AGI researchers in the world, but you and the
other small crew of AGI list regulars like to repetitiously criticize me
and my work, simply because unlike most of the other AGI researchers in the
world, I have chosen to remain on this list and occasionally pay attention
to it ;p

If I signed off this list (as I have often felt inclined to do), then
picking on my work and ideas would get boring to you -- just as you folks
seem to be bored with picking on the work of Joscha Bach, Itamar Arel, Stan
Franklin, Dileep George, Ray Kurzweil, Nick Cassimatis and loads of other
AGI researchers out there who are working on their own proto-AGI systems
and have published some of their ideas....

I have learned nothing substantial, ever, from the criticisms of my work on
this email list.  I have learned plenty from critiques of my work and
related constructive suggestions, delivered to me privately by other AGI
researchers in the world who do not have patience for this list.



> I am trying to help you deal with the obvious fact that your approach to
> AGI hasn't worked yet, and therefore there probably is a major problem
> that you haven't worked out.
>

That's pretty stupid reasoning, Jim.  OpenCog is a fairly large design,
and only a portion of it is implemented.  The theory underlying OpenCog
quite clearly suggests that, given the portion that has been implemented so
far, we should not expect any dramatically intelligent functionality.
Maybe the underlying theory is wrong.  But the work done so far does not
suffice to falsify it -- because the portion of OpenCog implemented so far
is too partial.


> You make the claim that you will be able to show a compelling
> demonstration.  Ok, when is this going to happen?  What will it show?  If
> you aren't able to make a deadline then what were you able to show in your
> demonstration?
>
>
The quality and potential of the OpenCog AGI design is quite independent of
my capability to predict the progress rate of the current group of humans
working on the OpenCog code...

The compelling demonstrations I am working toward are:

1) A video game agent that holds intelligent, interesting, creative English
conversations about what it's doing and seeing in the video game world

2) A mobile robot that holds intelligent, interesting, creative English
conversations about what it's doing and seeing in its robot lab

The fact that I don't care about boiling these down to quantitative
metrics doesn't eliminate their potential compellingness as demonstrations.



> I will keep this message as a draft and post it again next year and every
> year until you make your compelling demonstration.
> Jim Bromer


Be my guest, good sir ;)

But remember this: if OpenCog **never** produces any compelling
demonstration, this will not disprove the underlying design or ideas
whatsoever.  The failure to produce a highly intelligent OpenCog system
does not disambiguate between the options:

1) The OpenCog design is not workable

2) Ben failed to gather enough human or financial resources to get the
OpenCog system sufficiently implemented, tested, and taught to make it
do highly intelligent things

...

-- Ben G



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com