--- On Sat, 10/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Anyway, I think it's reasonable to doubt my story about how RSI will be
achieved. All I have is a plausibility argument, not a proof. What got my
dander up about Matt's argument was that he was claiming to have a debunking of

Matt wrote:
I think the source of our disagreement is the "I" in RSI. What does it
mean to improve? From Ben's OpenCog roadmap (see
http://www.opencog.org/wiki/OpenCogPrime:Roadmap ) I think it is clear
that Ben's definition of improvement is Turing's definition of AI: more
like a human. In

2008/10/18 Ben Goertzel [EMAIL PROTECTED]:
1)
There definitely IS such a thing as a better algorithm for intelligence in
general. For instance, compare AIXI with an algorithm called AIXI_frog,
that works exactly like AIXI, but in between each two of AIXI's computational
operations, it internally produces and then deletes the word "frog" one

--- On Sat, 10/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:
The limitations of your imagination are striking ;-p
I imagine a future where AGI sneaks past us, like where Google can understand
50% of 8-word-long natural language questions this year, and 60% next year.
Where they gradually

2008/10/17 Ben Goertzel [EMAIL PROTECTED]:
The difficulty of rigorously defining practical intelligence doesn't tell
you ANYTHING about the possibility of RSI ... it just tells you something
about the possibility of rigorously proving useful theorems about RSI ...
More importantly, you haven't dealt with my counterargument that the
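The AIXI vs. AIXI_frog point above can be illustrated with a toy sketch. AIXI itself is uncomputable, so a stand-in decision procedure is used here; the names `make_frog_variant` and `base_step` are hypothetical, introduced only for this illustration. The wrapped agent makes identical decisions to the original but wastes an operation per step, so it is strictly worse under any resource-bounded measure of intelligence.

```python
def make_frog_variant(step):
    """Wrap an agent's step function so that before each real
    computational operation it produces and then deletes the
    word 'frog' (a pure waste of computation)."""
    def frog_step(observation):
        wasted = "frog"   # produce the word ...
        del wasted        # ... and immediately delete it
        return step(observation)  # identical decision to the original
    return frog_step

# Stand-in agent (purely illustrative): echoes its observation.
base_step = lambda obs: obs
frog_step = make_frog_variant(base_step)

# Both agents make identical decisions on every input,
# yet frog_step performs strictly more work per decision.
assert all(base_step(o) == frog_step(o) for o in range(100))
```

Since the two agents' behavior is identical while one consumes strictly more computation per decision, the wrapped version is a strictly worse algorithm for intelligence in general, which is all the argument needs.
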

From: Ben Goertzel [EMAIL PROTECTED]
On the other hand, if you insist on mathematical definitions of
intelligence, we could talk about, say, the intelligence of a system
as the total prediction difficulty of the set S of sequences, with
the property that the system can predict S during a period
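One way to render the quoted definition formally. The notation below is my own gloss of the (truncated) sentence, not Ben's: let \(S_T(\pi)\) be the set of sequences that a system \(\pi\) can predict during a period \(T\), and let \(D(s)\) be the prediction difficulty of a sequence \(s\); then the proposed measure is the total difficulty of what the system can predict:

```latex
% Gloss of the quoted definition; S_T(\pi) and D(s) are assumed notation.
\[
  \mathrm{Int}_T(\pi) \;=\; \sum_{s \in S_T(\pi)} D(s)
\]
```

Under this reading, a self-improvement step is any modification of \(\pi\) that enlarges \(S_T(\pi)\) or shifts it toward harder sequences, raising the sum.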
Yes, of course this is true ... systems need to have a certain minimum
level of intelligence in order to self-improve in a goal-directed way!!
I said I didn't want to take time to formulate my point (which to me is
extremely intuitively obvious) as a theorem with all conditions explicitly
stated,

--- On Thu, 10/16/08, Ben Goertzel [EMAIL PROTECTED] wrote:
If some folks want to believe that self-modifying AGI is not possible,
that's OK with me. Lots of folks believed human flight was not possible
also, etc. etc. ... and there were even attempts at mathematical/theoretical
proofs of

On Fri, Oct 17, 2008 at 11:00 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
I don't claim that Ben's OpenCog design is flawed or that it could not
produce a smarter than human artificial scientist. I do claim that this
step would not launch a singularity. You cannot produce a seed AI. It's