On Fri, Oct 17, 2008 at 11:00 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> I don't claim that Ben's OpenCog design is flawed or that it could not 
> produce a "smarter than human" artificial scientist. I do claim that this 
> step would not launch a singularity. You cannot produce a seed AI.

It's claims like that which annoy me.  Could you at least prefix it with
"in my very limited and not so humble opinion..."?  Because, frankly,
you don't know any better than the rest of us what is possible.

Whenever this "how do you measure intelligence?" question comes up I
just shake my head.  I don't really care about "how intelligent" an
AGI is... what I *care* about is how *productive* it is.  If it needs
to be "more intelligent" in order to be more productive, then so be it.
Maybe it doesn't.  Maybe all it has to be is faster: if it takes the
"long way" to get some result but it can do all that long-way
reasoning at 10x the speed it did previously, then it is 10x as
productive.  On the other hand, if it can find some "shortcut" and
come up with the solution 10x as fast as it did previously, whilst the
basic reasoning operations have gotten no faster, then it is still 10x
as productive.
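The speed-vs-shortcut point can be put as a toy calculation (the function
and the numbers are purely illustrative, not anything from this thread):

```python
# Toy model: results produced per second depend on both how fast each
# basic reasoning step runs and how many steps the path to a result takes.

def productivity(steps_per_second: float, steps_to_result: float) -> float:
    """Results per second = step rate divided by path length."""
    return steps_per_second / steps_to_result

baseline = productivity(steps_per_second=100, steps_to_result=1000)

# 10x faster basic operations, same "long way" path:
faster = productivity(steps_per_second=1000, steps_to_result=1000)

# Same operation speed, but a "shortcut" path 10x shorter:
shortcut = productivity(steps_per_second=100, steps_to_result=100)

assert faster == 10 * baseline
assert shortcut == 10 * baseline
```

Either axis of improvement multiplies output the same way, which is why
arguing over which one constitutes "more intelligence" is beside the point.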

Maybe you want to focus on making it find the shortcuts more often.
Maybe you want to focus on making the basic operations faster.  Maybe
you want both.  But that's all academic: what matters is that it can
do the job you have assigned it, and if you assign it the job of making
itself better at doing what you assign it, then you get a seed AI.

Trent


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
