Ben,

On Mon, Dec 24, 2012 at 2:32 PM, Ben Goertzel <[email protected]> wrote:

> Specifically what I'm aiming for is interesting general-intelligence
> behavior
> in animated agents and (assuming a planned collaboration with Hanson
> Robotics goes well)
> humanoid robots...
>

Reminder: I am sitting on a technology to implement digital incremental
transmissions, one of which could replace many motors. These are
variable-ratio devices capable of "gearing down" to arbitrarily low ratios,
limited only by their structural strength. These would be way smaller,
lighter, and more efficient than any motor-based technology.

>
> It's true that getting to the point of such a qualitatively exciting
> demonstration is taking longer than I've hoped....  But ultimately,
> whether my task duration estimations are off by a small integer
> multiple or not is not the main point.  The main point is whether the
> core AGI design can work or not...
>

My own point is that it only takes one fundamental misstep to doom a
project. Unless/until it is debunked, my dP/dt observation stands as one
such example. I suspect that a dozen or so other such undiscovered
principles now lie between you and AGI.

The weakness I (erroneously?) see in OpenCog is that several questionable
assumptions are built into its very structure, any one of which could doom
its future. What I personally think is necessary to succeed in the
direction you have chosen is a more enlightened structure that avoids those
built-in assumptions.

*Steve's AGI Architectural Test:*

Everyone (including even me) agrees that AGIs won't need to simulate the
inner workings of neurons. However, any prospective AGI platform absolutely
**MUST** be capable of performing substantially all of the information
processing functions that have been observed in neurons. Sure, some of
these may prove to be unnecessary, but we can't now determine which are
essential and which are superfluous. My dP/dt observation is just one of
many such information processing functions, which also include other
basics, like the retrograde flow of information. Once your platform is
"playing with a full deck," clever design might conceivably succeed.
Until then, however, you can never rationally hope to succeed, not in
a few years, and not in a few centuries.

Right? If not, then why not?

Steve



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
