On Mon, Dec 24, 2012 at 5:10 PM, Ben Goertzel <[email protected]> wrote:
I fully realize there is no skeptic-proof demonstration that OpenCog
will work for human-level AGI, at this stage.... So choosing to
proceed with OpenCog requires a fair bit of faith in one's intuition,
regarding whether it's a reasonable path or not.... I hope you
realize, though, that your anti-OpenCog arguments are also not solidly
demonstrated, and only will be bought into by folks who share some of
your own intuitions.... As with all other major breakthroughs in
history, it will be broadly clear only in hindsight, which people had
the right intuitions in foresight, and which did not...


But does your faith include the possibility that your specific ideas might
be wrong?  Does it include the possibility that many of us might have
been looking in the right direction, but that this would only become apparent
if someone else presented us with a new technology that we could not invent
ourselves?  The second question is important because, if true, it would
indicate that most of the research going on is not the key to
breaking the major obstacle of our time, and that all the $500K grants in the
world would not let you solve the problem until the other thing was figured
out. The second case could be expressed this way: once we were given some
new technology that solved the difficult problem, many of us could
muddle our way to discovering our own methods of creating AGI.

What we need is some way to evaluate the ideas that are being floated and
tried. The only evaluation we have is to compare the promotions and
predictions that researchers make with the results they are able to
produce. We can also look at how people's AGI models have improved over
the years.

Can you make a better AGI program than your Second Life dog?  If you can't,
even after all these years, then your prognostications are wrong (and maybe
you should try another approach).
Jim Bromer


On Mon, Dec 24, 2012 at 5:10 PM, Ben Goertzel <[email protected]> wrote:

> Hi Matt,
>
> As I lack your passion for repeated re-hashing of the same issues in
> online forums, I'll just respond by pointing to some places where I've
> addressed these critical arguments of yours in the past:
>
> "Why is evaluating partial progress toward human-level AGI so hard?"
>
> http://multiverseaccordingtoben.blogspot.com/2011/06/why-is-evaluating-partial-progress.html
>
> "The real reasons we don't have AGI yet"
> http://www.kurzweilai.net/the-real-reasons-we-dont-have-agi-yet
>
> "Mapping the Landscape of Human-Level Artificial General Intelligence"
> http://hrilab.tufts.edu/publications/adamsetal12aimag.pdf
>
> In the old post of mine that you quote, I note that our progress is
> slow due to lack of funding, and that
>
> "To really do the Novamente project right, at a reasonable rate of speed,
> I'd
> need something like $500K/year for something like 3 years."
>
> Unfortunately, though a lot of time has passed since then, I still
> have not managed to acquire that amount of funding dedicated solely to
> AGI work and spendable without restriction.   I have managed to
> divert bits and pieces of $$ from various places to OpenCog, and have
> gotten some HK gov't $$ for OpenCog (which is considerably less than
> US$500K/year and has somewhat tricky restrictions on how it can be
> spent...)....
>
> I agree with you that our current pace of progress toward a really
> smart OpenCog system is disappointingly slow.  As noted before, my
> hope/aim is that, once we have achieved a sufficient level of
> functionality, demonstration of the system will suffice to bring in
> dramatically more funding, which then will accelerate progress.
>
> IMO a focus on narrow-functionality tests such as those you suggest
> would push the project down the wrong path, one of building a collection of
> narrow-AI systems.  There are plenty of people working on that sort of
> thing already, as you know....  This point is discussed in more depth
> in the first link given above.
>
> An advantage of narrow-AI is that it's easier to measure incremental
> progress.  A disadvantage of narrow AI, in my view, is that it's very
> unlikely to lead to human-level AGI....  I realize your opinion is
> different.
>
> I fully realize there is no skeptic-proof demonstration that OpenCog
> will work for human-level AGI, at this stage....  So choosing to
> proceed with OpenCog requires a fair bit of faith in one's intuition,
> regarding whether it's a reasonable path or not....   I hope you
> realize, though, that your anti-OpenCog arguments are also not solidly
> demonstrated, and only will be bought into by folks who share some of
> your own intuitions....  As with all other major breakthroughs in
> history, it will be broadly clear only in hindsight, which people had
> the right intuitions in foresight, and which did not...
>
> -- Ben G
>
> On Mon, Dec 24, 2012 at 3:43 PM, Matt Mahoney <[email protected]>
> wrote:
> > Ben's tweet of Eliezer Yudkowsky's dire forecast made 8 years ago
> > seems rather humorous today.
> >
> > http://acceleratingfuture.com/sl4/archive/0501/10611.html
> >
> > As does Ben's response.
> >
> > http://acceleratingfuture.com/sl4/archive/0501/10613.html
> >
> > Really, OpenCog (then Novamente) is going to recursively self improve
> > and kill us all?
> >
> > So what went wrong?
> >
> > I've been lurking on the OpenCog mailing list for a couple of years.
> > There is a lot of software development being done. But it is hard to
> > tell if any real progress is being made, because there is still no
> > test set (nor has there ever been one) by which progress could be
> > measured.
> > Ben has mentioned a few ideas for tests, like getting an online
> > university degree, or playing with a box of toys. But we aren't there
> > yet. I can imagine a potential investor asking when will we get there,
> > and the answer will either be some made-up date or "I don't know". And
> > we know what happens with made-up dates.
> >
> > Why don't we know? Because there are no tests of incremental progress.
> > So as far as anyone can tell, there has been no progress since 2005.
> > During the 3 years it took to build Watson, the team tested it on
> > Jeopardy games and watched its precision (at 50% recall rate)
> > gradually improve from 15% accuracy to 90%, the level they needed to
> > beat the best humans. Every 3 months, they saw a 10% increase and knew
> > they were on the right track and could even forecast a completion
> > date. What does OpenCog have that is equivalent to this?
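The metric the Watson team tracked, precision at a fixed recall rate, can be sketched in a few lines. The function name and demo data below are hypothetical, for illustration only:

```python
# Precision at fixed recall: the system attempts only the fraction of
# questions it is most confident about, and precision is the share of
# those attempts that are correct.
def precision_at_recall(results, recall=0.5):
    """results: list of (confidence, is_correct) pairs, one per question."""
    ranked = sorted(results, key=lambda r: r[0], reverse=True)
    attempted = ranked[:max(1, int(len(ranked) * recall))]
    return sum(1 for _, ok in attempted if ok) / len(attempted)

# A system that is confident mostly when it is right scores well:
demo = [(0.9, True), (0.8, True), (0.7, False),
        (0.4, False), (0.3, False), (0.2, False)]
print(precision_at_recall(demo, 0.5))  # attempts the top 3 questions, 2 correct
```

Tracked over time, a curve of this one number is exactly the kind of incremental-progress signal at issue here.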
> >
> > Here is another example. How much more knowledge does Cyc need to add
> > to its sea of assertions to "break the software brittleness
> > bottleneck"? That was the goal in 1984 when the project was started.
> > Of course, nobody knows. Why not? Because there is no test for
> > measuring progress.
> >
> > I've made a rough draft estimate of the cost of AGI, which many of you
> > have already read.
> >
> >
> https://docs.google.com/document/d/1cQiaH81rB5l9eLRYZFSi_tOLzRzOsY8wVruimPUWybg/edit
> >
> > If you think this is wrong, then please come up with some tests to
> > prove it. Here are some simple ones for now:
> >
> > - Fill in missing words in text, with the goal of human-level
> > accuracy. (Can RelEx or MOSES do this?)
> >
> > - Recognize printed words or common objects in images with human-level
> > accuracy. (Can DeSTIN do this?)
> >
> > - Teach a robot to throw a ball.
> >
> > These are nice, simple tests that give a numeric answer. But you will
> > notice, in preparing the test set, that even this step is not trivial.
> > Then maybe you can tell me what it will cost to solve these problems,
> > in terms of hardware, software, and training data.
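The first proposed test, filling in missing words, can be turned into a concrete scoring harness in a few lines. This is only a sketch; `cloze_accuracy` and the trivial always-guess-"the" predictor are hypothetical stand-ins for whatever system is under test:

```python
import re

# Blank out every Nth word in a text, ask the system under test to
# predict it from the surrounding words, and report accuracy.
def cloze_accuracy(text, predict, every=5):
    words = re.findall(r"[a-z']+", text.lower())
    correct = total = 0
    for i in range(every - 1, len(words), every):
        context = words[:i] + ["____"] + words[i + 1:]
        correct += (predict(context, i) == words[i])
        total += 1
    return correct / total if total else 0.0

text = "the cat sat on the mat and the dog sat on the rug"
# A degenerate predictor that always guesses "the" gets one of the
# two blanked words in this tiny sample right:
print(cloze_accuracy(text, lambda context, i: "the"))  # 0.5
```

Human-level accuracy on a large held-out corpus would then be the target number, and any system's score against it is directly comparable over time.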
> >
> >
> > --
> > -- Matt Mahoney, [email protected]
> >
> >
>
>
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> "My humanity is a constant self-overcoming" -- Friedrich Nietzsche
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/10561250-470149cf
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>


