Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-18 Thread Matt Mahoney
--- On Sat, 10/18/08, Ben Goertzel [EMAIL PROTECTED] wrote: Anyway, I think it's reasonable to doubt my story about how RSI will be achieved. All I have is a plausibility argument, not a proof. What got my dander up about Matt's argument was that he was claiming to have a debunking of

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-18 Thread Ben Goertzel
Matt wrote: I think the source of our disagreement is the I in RSI. What does it mean to improve? From Ben's OpenCog roadmap (see http://www.opencog.org/wiki/OpenCogPrime:Roadmap ) I think it is clear that Ben's definition of improvement is Turing's definition of AI: more like a human. In

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-18 Thread William Pearson
2008/10/18 Ben Goertzel [EMAIL PROTECTED]: 1) There definitely IS such a thing as a better algorithm for intelligence in general. For instance, compare AIXI with an algorithm called AIXI_frog, that works exactly like AIXI, but in between each two of AIXI's computational operations, it

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-18 Thread Matt Mahoney
--- On Sat, 10/18/08, Ben Goertzel [EMAIL PROTECTED] wrote: The limitations of your imagination are striking ;-p I imagine a future where AGI sneaks past us, like where Google can understand 50% of 8 word long natural language questions this year, and 60% next year. Where they gradually

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-17 Thread William Pearson
2008/10/17 Ben Goertzel [EMAIL PROTECTED]: The difficulty of rigorously defining practical intelligence doesn't tell you ANYTHING about the possibility of RSI ... it just tells you something about the possibility of rigorously proving useful theorems about RSI ... More importantly, you

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-17 Thread Ben Goertzel
1) There definitely IS such a thing as a better algorithm for intelligence in general. For instance, compare AIXI with an algorithm called AIXI_frog, that works exactly like AIXI, but in between each two of AIXI's computational operations, it internally produces and then deletes the word frog one
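Goertzel's AIXI_frog argument can be illustrated with a toy sketch. AIXI itself is uncomputable, so a trivial stand-in predictor is used here, and the wrapper that wastes work between steps is my own hypothetical construction; the point it demonstrates is only that two algorithms can compute the identical function while one is strictly worse (slower), so "better algorithm" is meaningful even without a rigorous definition of intelligence.

```python
def froggify(agent_step):
    """Wrap an agent's step function so that it wastefully produces and
    then deletes the string "frog" before doing its real computation.
    Behavior is unchanged; only efficiency is strictly worse (a toy
    analogue of AIXI vs. AIXI_frog)."""
    def step(observation):
        junk = "".join(["f", "r", "o", "g"])  # produce "frog"...
        del junk                              # ...and delete it again
        return agent_step(observation)
    return step

# Toy stand-in "agent": predict that the next bit equals the last one seen.
def predict(observation):
    return observation[-1] if observation else 0

frog_predict = froggify(predict)

history = [0, 1, 1, 0, 1]
# Identical predictions on every input, but frog_predict does strictly
# more work per step -- the same function, a worse algorithm.
assert predict(history) == frog_predict(history)
```

The sketch makes the separation concrete: since both agents realize the same input-output behavior, any reasonable efficiency-sensitive notion of "better" must rank the unwrapped version higher.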

[agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-16 Thread Tim Freeman
From: Ben Goertzel [EMAIL PROTECTED] On the other hand, if you insist on mathematical definitions of intelligence, we could talk about, say, the intelligence of a system as the total prediction difficulty of the set S of sequences, with the property that the system can predict S during a period
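The quoted definition is cut off by the archive, but its shape can be sketched in symbols. The difficulty measure D and the time bound T below are placeholders of my own, not details from the original message:

```latex
% Hedged formalization of "intelligence as total prediction difficulty":
% D(s) is some fixed difficulty measure on sequences (e.g. a complexity
% measure) and T a fixed observation period -- both are assumptions,
% since the message preview is truncated.
\[
  \mathrm{Int}(A) \;=\; \sum_{s \in S_A} D(s),
  \qquad
  S_A \;=\; \{\, s : A \text{ successfully predicts } s \text{ during period } T \,\}.
\]
```

Under this reading, an agent is more intelligent the harder, in total, the set of sequences it can predict.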

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-16 Thread Ben Goertzel
Yes, of course this is true ... systems need to have a certain minimum level of intelligence in order to self-improve in a goal-directed way!! I said I didn't want to take time to formulate my point (which to me is extremely intuitively obvious) as a theorem with all conditions explicitly stated,

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-16 Thread Matt Mahoney
--- On Thu, 10/16/08, Ben Goertzel [EMAIL PROTECTED] wrote: If some folks want to believe that self-modifying AGI is not possible, that's OK with me.  Lots of folks believed human flight was not possible also, etc. etc. ... and there were even attempts at mathematical/theoretical proofs of

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-16 Thread Ben Goertzel
The difficulty of rigorously defining practical intelligence doesn't tell you ANYTHING about the possibility of RSI ... it just tells you something about the possibility of rigorously proving useful theorems about RSI ... More importantly, you haven't dealt with my counterargument that the

Re: [agi] Twice as smart (was Re: RSI without input...) v2.1))

2008-10-16 Thread Trent Waddington
On Fri, Oct 17, 2008 at 11:00 AM, Matt Mahoney [EMAIL PROTECTED] wrote: I don't claim that Ben's OpenCog design is flawed or that it could not produce a smarter than human artificial scientist. I do claim that this step would not launch a singularity. You cannot produce a seed AI. It's