On Wed, Apr 3, 2013 at 2:49 AM, Ben Goertzel <[email protected]> wrote:
>> DeSTIN and MOSES appeared very promising when they were graduate
>> research projects several years ago, but to this day they only work on
>> toy problems.
>
> That is false, MOSES has been used for many commercial machine learning
> projects....

Is there a web page describing these applications? Sorry if I was unaware of them.

My knowledge of MOSES is limited to an experiment last year in
learning the 4-bit parity problem that was discussed on the OpenCog
mailing list. I showed that off-the-shelf data compressors
outperformed it.
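For readers who haven't seen it, 4-bit parity labels each 4-bit input with the XOR of its bits (1 if the number of set bits is odd). A minimal sketch of the full dataset follows; this illustrates the benchmark itself, not the MOSES run or the compressor experiment:

```python
# 4-bit parity: all 16 inputs, labeled 1 if the bit count is odd, else 0.
from itertools import product

dataset = [(bits, sum(bits) % 2) for bits in product([0, 1], repeat=4)]
for bits, label in dataset:
    print(bits, "->", label)
```

The task is small (16 examples), but it is a classic stress test for learners because no single bit, or pair of bits, is predictive on its own.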

>> Most of the development work in the last couple of years has
>> been on fixing things that break when a dependency is changed, like
>> new libraries or new compiler versions.
>
> That is a complete, bald-faced lie.  Why do you keep repeating this
> lie on email lists?   It's a pretty repellent behavior...

Sorry, but that was the impression I got from the OpenCog mailing
list. I see a lot of posts along the lines of "help, I tried to
install OpenCog and got these errors...". Maybe people don't say
anything when everything goes right.

> According to your view, all AGI projects are worthless, not just OpenCog...

OpenCog is far more ambitious than any of the other projects that are
discussed on this list. I do give it a much better chance of producing
something useful, although not AGI.

> As for your argument that human knowledge collection will cost more than
> AGI software, it seems rather silly to me, and always has.  I mean: if someone
> builds an AGI **learning** system, this system can pick up the knowledge about
> how to flip burgers at McDonald's and fix car engines and process
> mortgage applications
> and run PCR machines, by watching people do it.....   Yes, the AGIs
> will consume energy
> while watching people do it, but the cost of this energy will then be
> recompensed by the savings
> from having the AGIs do the jobs more efficiently...

Not just watching, but also asking questions, which has a cost in
human time. The per-worker cost of collecting knowledge will still be
lower than the cost of training a human, and certainly lower than the
one-time cost of writing the learning software. But summed over all
workers, knowledge collection becomes the dominant cost, because
every job is different.

And that is assuming we solve the hardware problem. Human-level vision
integrated with language and robotics seems like an intractable
problem. Neural algorithms seem to be the best approach, but a
human-cortex-sized neural network would need a 10-petaflop computer
with a petabyte of RAM. It would need to be trained on a decade's
worth of high-resolution video. Just the electricity to run the
computer would cost $1000 per hour, certainly more than its potential
earnings as a house-cleaning robot or whatever. Maybe you think I am
overestimating, but neither evolution nor 60 years of AI research has
come up with anything better.
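For what it's worth, here is the back-of-envelope arithmetic behind the $1000/hour figure. The power draw (~10 MW for a 10-petaflop machine at roughly 2013 efficiencies) and the electricity price ($0.10/kWh) are my assumptions, not measured numbers:

```python
# Rough electricity cost for a 10-petaflop machine.
# Assumptions: ~10 MW total draw, $0.10 per kWh.
power_kw = 10_000          # assumed draw: 10 MW = 10,000 kW
price_per_kwh = 0.10       # assumed price in USD
cost_per_hour = power_kw * price_per_kwh
print(cost_per_hour)       # 1000.0
```

Better hardware efficiency would shrink this, of course, but it would have to shrink by several orders of magnitude before the robot's wages covered its power bill.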

How do you plan to address the hardware problem?

--
-- Matt Mahoney, [email protected]

