On 27 December 2012 08:12, Matt Mahoney <mattmahone...@gmail.com> wrote:
> On Wed, Dec 26, 2012 at 1:10 AM, Ben Goertzel <b...@goertzel.org> wrote:
>> At this moment, Watson is certainly a more impressively demonstrable
>> system than OpenCog.  It also is the result of massively more
>> man-years (and even massively more dollars) of effort, of course...
>
> Watson is 20 people for 3 years and $3 million for hardware. What is
> it for OpenCog?

*AND* $3 million just for hardware? I dream of having a $3 million
budget just for hardware!

For OpenCog the numbers are much, much smaller. Ben knows the exact
figures, and I'd guess they can't be shared completely publicly, but I
think it's fewer than 12 people across the range of OpenCog-related
projects (and a lot fewer than that for AGI-specific development).
Very little has been spent on generally available hardware, although
we got USD 12k at the beginning of the HK project for desktop machines.

I moved on from the HK project for a combination of reasons, not least
of which is that I could get paid literally twice as much elsewhere.
That, together with frustrations in dealing with the constraints of
our HK funding, meant I preferred to go earn money where I could put
aside savings for a period of unconstrained AGI research. I've also
gained more experience in leading and managing practical software
engineering projects along the way.

My leaving the HK project is one of the stories Ben mentioned that
make predictions hard. I felt bad about leaving, but I also knew it
was the best long-term option for me personally, and for AGI
development (since when I return to AGI research, with OpenCog or
otherwise, I'll be better equipped to manage teams and design
distributed fault-tolerant architectures).

Ben is doing an amazing job of balancing everything he's involved
with, but the odds are stacked against him, especially when funding
for researchers is scarce. Ben is hopeful that setting up in
geographies where software engineers can be employed relatively
cheaply will get the most output per dollar of funding; I am less
convinced. I'd rather have 4-5 amazingly talented and experienced
developers than a dozen reasonably good ones. (That's to say nothing
of the people already involved in the project -- there *are* amazingly
talented individuals involved -- but I'd prefer that talent, rather
than headcount, be the focus of future team building.)

Of course, the above is all from random discussions in the past, and
the situation might have changed in the last year, so Ben should feel
free to correct me ;-)

>> I would imagine that if one formulated a highly precise test for
>> "flexible, common-sensical conversation about the experiences of a
>> virtual or robotic agent", then some Watson-like approach might well
>> work for passing that test -- even though this approach would not
>> be effectively generalizable to human-level AGI.....  But if we got to
>> this level with an OpenCog system, I believe we would be well on the
>> path to human-level AGI
>
> Do you think that rule based systems like RelEx and NatGen are a
> viable approach to the problem?
>
> Not that I am ruling it out. I think Watson uses rule based parsing in
> many of its 100 or so techniques. The input to Watson generally lacks
> spelling and grammar errors. OTOH, Google uses statistical methods to
> deal with noise, but lacks the ability to analyze complex sentence
> structures.
>
> I'm interested in what approach you plan to take and how you plan to
> evaluate the results (or not).

From my involvement, I understand that RelEx and NLGen/NatGen are
rule-based only out of necessity: quick results were needed for
demonstrations and other funding-dependent deadlines.

In terms of the general direction, I think most of the people involved
in OpenCog want to move toward statistical methods based on
observational experience.

The problem with learning from observation is that experiments become
a lot more computationally expensive.

However, there were also discussions of seeding the statistical rules
with the hand-coded ones, and then letting experience/observation
shape the application of those rules... this would be particularly
helpful in parse ranking and semantic disambiguation.
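To make the seeding idea concrete, here's a hypothetical sketch (the rule names, numbers, and classes are all invented for illustration -- this is not the actual RelEx/OpenCog machinery): hand-coded rule weights act as pseudo-count priors in a simple parse ranker, and observed parses gradually reshape the ranking away from the priors.

```python
# Hypothetical sketch: seeding a statistical parse ranker with
# hand-coded rule weights, then letting observation reshape them.
# Rule names and weights are invented, not real RelEx rules.
from collections import defaultdict

# Hand-coded priors act as pseudo-counts, so early behaviour mirrors
# the rule-based system while accumulated evidence gradually dominates.
HAND_CODED_PRIORS = {
    "subj-verb-agree": 5.0,
    "pp-attach-noun": 2.0,
    "pp-attach-verb": 1.0,
}

class SeededRanker:
    def __init__(self, priors):
        self.counts = defaultdict(float, priors)
        self.total = sum(priors.values())

    def score(self, rules_used):
        # Score a candidate parse as the product of its rules'
        # add-one-smoothed relative frequencies.
        s = 1.0
        for r in rules_used:
            s *= (self.counts[r] + 1.0) / (self.total + 1.0)
        return s

    def observe(self, rules_used):
        # A parse confirmed by experience reinforces its rules.
        for r in rules_used:
            self.counts[r] += 1.0
            self.total += 1.0

    def rank(self, candidates):
        # candidates: list of (parse_id, rules_used) pairs.
        return sorted(candidates, key=lambda c: self.score(c[1]),
                      reverse=True)

ranker = SeededRanker(HAND_CODED_PRIORS)
candidates = [("parse-A", ["pp-attach-noun"]),
              ("parse-B", ["pp-attach-verb"])]

print(ranker.rank(candidates)[0][0])  # parse-A wins on priors alone
for _ in range(10):                   # observation favours parse-B's rule
    ranker.observe(["pp-attach-verb"])
print(ranker.rank(candidates)[0][0])  # now parse-B ranks first
```

The same scheme extends naturally to semantic disambiguation: each sense choice starts with a hand-assigned prior and is then nudged by observed usage.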

Joel


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424