Matt Mahoney,

Wow, what a vicious and dishonest email!!!  Exactly what I expect
from you based on your prior history ;-p ...

I don't want to rehash our prior arguments about OpenCog, but for the
benefit of any newbies here, I will briefly
point out those portions of your email that are flatly factually incorrect
(and comment on a couple of the portions that are merely misleading...)

You say:

> To summarize, OpenCog consists of the following components:
> - DeSTIN, a neural vision system.
> - MOSES, an evolutionary learner that generates programs to fit training data.
> - RelEx and NLGen, a rule-based parser and sentence generator.
> - AtomSpace - a complex knowledge representation structure with about
> 100 different types.

These components exist, but many others do too, such as PLN and attention
allocation (ECAN), for example...

> AtomSpace is supposed to tie all of the components together. Right now
> they exist as separate programs written in different languages. That's
> been the case for several years now. There is some work being done now
> in integrating RelEx (Java) into AtomSpace (C++).

PLN, for instance, is in Python, but works fine with the C++ AtomSpace.

RelEx is Java, but has been feeding information into the C++ AtomSpace for
almost a decade now...
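For readers new to the project: the AtomSpace is, at bottom, a store of
typed nodes and links (a hypergraph) that components written in different
languages all read from and write to. The sketch below is NOT OpenCog's
actual API -- it is a minimal, hypothetical Python illustration of that
shared-store idea, with made-up names:

```python
# Toy illustration of an AtomSpace-style typed hypergraph store.
# NOT OpenCog's real API -- names and structure are invented here.

class Atom:
    def __init__(self, atom_type, name=None, outgoing=()):
        self.type = atom_type            # e.g. "ConceptNode", "InheritanceLink"
        self.name = name                 # nodes have names; links do not
        self.outgoing = tuple(outgoing)  # links point at other atoms

    def __repr__(self):
        if self.name is not None:
            return f"({self.type} {self.name!r})"
        return f"({self.type} {' '.join(map(repr, self.outgoing))})"

class AtomSpace:
    def __init__(self):
        self._atoms = {}  # key -> Atom, deduplicated

    def add(self, atom_type, name=None, outgoing=()):
        # Adding the "same" atom twice returns the existing one.
        key = (atom_type, name, tuple(id(a) for a in outgoing))
        return self._atoms.setdefault(key, Atom(atom_type, name, outgoing))

    def atoms_of_type(self, atom_type):
        return [a for a in self._atoms.values() if a.type == atom_type]

space = AtomSpace()
cat = space.add("ConceptNode", "cat")
animal = space.add("ConceptNode", "animal")
link = space.add("InheritanceLink", outgoing=(cat, animal))
```

In the real system atoms also carry truth values and attention values, and
processes such as PLN and ECAN operate over them; the point here is only the
shared typed-hypergraph representation that any language binding can talk to.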

> DeSTIN and MOSES appeared very promising when they were graduate
> research projects several years ago, but to this day they only work on
> toy problems.

That is false: MOSES has been used for many commercial machine learning
projects....

> There is currently no research being done on language learning or
> statistical modeling.

MOSES is being used for statistical modeling of time series, though not of
language, if that's what you mean.
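For anyone wondering what "an evolutionary learner that generates programs
to fit training data" looks like in miniature, here is a deliberately tiny
toy (NOT MOSES itself -- the data, fitness function, and mutation scheme
are all invented for illustration) that evolves quadratic coefficients to
fit a series:

```python
import random

random.seed(0)  # deterministic toy run

# Toy stand-in for an evolutionary program learner (NOT MOSES itself):
# candidate "programs" are quadratic coefficient triples (a, b, c),
# mutated and kept only if they fit the training data better.

data = [(x, 2 * x * x + 3) for x in range(-5, 6)]  # target: 2x^2 + 3

def error(coeffs):
    """Sum of squared errors of a*x^2 + b*x + c against the data."""
    a, b, c = coeffs
    return sum((a * x * x + b * x + c - y) ** 2 for x, y in data)

def mutate(coeffs):
    """Perturb one randomly chosen coefficient."""
    i = random.randrange(3)
    new = list(coeffs)
    new[i] += random.uniform(-0.5, 0.5)
    return tuple(new)

best = (0.0, 0.0, 0.0)
best_err = error(best)
for _ in range(20000):
    candidate = mutate(best)
    cand_err = error(candidate)
    if cand_err < best_err:          # keep only improvements
        best, best_err = candidate, cand_err
```

MOSES proper searches over program trees, with representation-building and
deme management far beyond this; the toy only shows the mutate-evaluate-select
loop at the core of such systems.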

> There is no work being done in robotics.

While that is currently true, we are beginning a collaboration with Hanson
Robotics aimed at using OpenCog to help control a Hanson Robokind...

> There has been some work done on distributing AtomSpace across multiple
> processors, which was mostly a failure. AtomSpace does not scale
> usefully beyond a single thread.

True.  MOSES scales effectively to multiple machines and threads, though...

We have a pretty good idea how to make the AtomSpace scale up across
many threads and machines, but haven't done the work, as we've been
focusing on other things...

> There is no knowledge base, nor any plan to build one as far as I
> know.

If you mean there is no hand-coded set of knowledge rules, like in Cyc or
SOAR, that is true....   OpenCog is intended as a learning system, not a
GOFAI-type knowledge-based system.

> Most of the development work in the last couple of years has
> been on fixing things that break when a dependency is changed, like
> new libraries or new compiler versions.

That is a complete, bald-faced lie.  Why do you keep repeating this
lie on email lists?   It's a pretty repellent behavior...

Some things are matters of opinion, and you're welcome to your opinion
that OpenCog is a dead-end project.

However, I'm sitting here in Hong Kong with a bunch of smart people who
are actually coding all day on OpenCog stuff.  And then you, who really
know nothing about OpenCog, are spouting off BS about how they're doing
nothing but fixing stuff that breaks?

What is your motivation for emailing this garbage over and over?  You're
a successful researcher and developer in your own domain, Matt, so why do
you feel the need to waste your time spouting lies publicly about other
people's projects???

> Simply installing it and
> getting it to run is a bear.

It used to be hard, but it's not so hard anymore, thanks to some nice
tutorials Alex van der Peet made and put on the OpenCog wiki.

> And then there really isn't much you can
> do with it. It is really just a programming language for a program
> that they plan to write someday.

OpenCog is a research software system....  It's a pre-alpha version,
admittedly...

It is currently useful only to folks who want to participate in
proto-AGI research... or who have the knowledge and guts to build
products incorporating appropriately tuned research software.

> People are really bad at predicting costs of major projects,
> especially software. The way that most people estimate lines of code
> is to take their available funding and divide by $100 per line. Maybe
> you can do better if you compare your program to something of similar
> complexity already written. For AGI, that would be your DNA. If you
> compress it, and compare it to the compressed size of typical code, it
> comes out to 300 million lines, or $30 billion.
>
> Ben, of course, does not believe any of my estimates, especially my
> claim that the software is such a tiny fraction of the total cost as
> to be insignificant. Code is easy to copy. The big costs will be
> hardware and human knowledge collection, the bits that make each of us
> unique. I've argued all that before and don't need to rehash it.
> OpenCog is going nowhere.

According to your view, all AGI projects are worthless, not just OpenCog...

As for your argument that human knowledge collection will cost more than
AGI software, it seems rather silly to me, and always has.  I mean: if
someone builds an AGI **learning** system, this system can pick up the
knowledge of how to flip burgers at McDonald's, fix car engines, process
mortgage applications, and run PCR machines, by watching people do it.....
Yes, the AGIs will consume energy while watching people do it, but the
cost of this energy will then be recouped by the savings from having the
AGIs do the jobs more efficiently...

You consistently, and bizarrely, confuse

A -- the cost of building the first AGI learning machine

B -- the cost of having AGI learning machines take over the economy

Only item A needs to be funded as a research project.  Item B will just
be rolled into the overall activity of the global economy....

As for our ambitious timeline for OpenCog (human-level AI by the early
2020s, etc.) -- I have ALWAYS been very explicit that this depends upon
funding.  At our current level of funding, we will NOT succeed according
to that timeline....  The only way we will succeed on that timeline is
if, in the next few years, we make a sufficiently interesting capability
demonstration that we can pull in tens of millions of US dollars in
OpenCog R&D funding.  If we fail at doing this, it will NOT prove the
non-workability of the underlying ideas and design....  It will just
prove that we failed to fully execute on a large R&D project that we
started, and attempted to pursue under conditions of insufficient
funding...

-- Ben G


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
