1)
There definitely IS such a thing as a "better algorithm for intelligence in
general".  For instance, compare AIXI with an algorithm called AIXI_frog,
which works exactly like AIXI, but in between each pair of AIXI's
computational operations it internally produces and then deletes the word
"frog" one billion times.  Clearly AIXI is better than AIXI_frog, according
to many reasonable quantitative intelligence measures.
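
To make the construction concrete, here is a minimal Python sketch.  (The
agent interface is hypothetical, and AIXI itself is uncomputable, so
base_step just stands in for any fixed agent implementation.)

    def frog_wrap(base_step, wasted=10**9):
        # Returns a step function that behaves exactly like base_step,
        # but first produces and deletes the string "frog" `wasted`
        # times (one billion by default, per the construction; shrink
        # it if you actually want to run this).
        def step(observation):
            for _ in range(wasted):
                s = "frog"     # produce ...
                del s          # ... and delete
            return base_step(observation)
        return step

The wrapped agent emits identical actions on identical inputs, so any
intelligence measure that counts reward per unit of computation scores it
strictly lower.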

2)
More relevantly, there is definitely such a thing as a "better algorithm for
intelligence about, say, configuring matter into various forms rapidly."  Or
you can substitute any other broad goal here.

3)
Anyway, I think it's reasonable to doubt my story about how RSI will be
achieved.  All I have is a plausibility argument, not a proof.  What got my
dander up about Matt's argument was that he was claiming to have a
"debunking" of RSI ... a proof that it is impossible or infeasible.  I do
not think he presented any such thing; I think he presented an opinion in
the guise of a proof...  It may be a reasonable opinion, but that's very
different from a proof.

-- Ben G

On Fri, Oct 17, 2008 at 6:26 PM, William Pearson <[EMAIL PROTECTED]> wrote:

> 2008/10/17 Ben Goertzel <[EMAIL PROTECTED]>:
> >
> >
> > The difficulty of rigorously defining practical intelligence doesn't tell
> > you ANYTHING about the possibility of RSI ... it just tells you something
> > about the possibility of rigorously proving useful theorems about RSI ...
> >
> > More importantly, you haven't dealt with my counterargument that the
> > posited AGI that is "qualitatively intellectually superior to humans in
> > every way" would
> >
> > a) be able to clone itself N times for large N
> >
> > b) have the full knowledge-base and infrastructure of human society at
> > its disposal
> >
> > Surely these facts will help it to self-improve far more quickly than
> > would otherwise be the case...
> >
> > I'm not thinking about this so abstractly, really.  I'm thinking,
> > qualitatively, that
> >
> > 1-- The members of this list, collectively, could solve algorithmic
> > problems that a team of one million people with IQ 100 would not be able
> > to solve in a feasible period of time
> >
> > 2-- an AGI that was created by, say, the members of this list, would be
> > architected based on **our** algorithms
> >
> > 3-- so, if we could create an AGI that was qualitatively intellectually
> > superior to **us** (even if only moderately so), this AGI (or a team of
> > such) could probably solve algorithmic problems that one million of
> > **us** would not be able to solve in a feasible period of time
> >
> > 4-- thus, this AGI we created would be able to create another AGI that
> > was qualitatively much smarter than **it**
> >
> > 5-- etc.
> >
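> > (To see the shape of the claim with toy numbers -- the 1.5x "smarter
> > per generation" multiplier below is purely illustrative, not a model
> > of anything real:)
> >
> >     capability = 1.0               # generation 0: the human designers
> >     for generation in range(1, 6):
> >         capability *= 1.5          # each AGI builds a smarter successor
> >         print(generation, capability)
> >
> > If each generation really can build a qualitatively smarter successor,
> > capability compounds rather than plateaus.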
>
> I don't buy the 5-step plan either, for a few reasons. Apologies for the
> rather disjointed nature of this message; it is rather late, and I want
> to finish it before I am busy again.
>
> I don't think there is such a thing as a better algorithm for
> intelligence; there are only algorithms suited to certain problems. Human
> intelligences seem to adapt their main reasoning algorithms in an
> experimental, self-changing fashion at a subconscious level. Different
> biases are appropriate for different problems, including at the
> meta-level. See deceptive functions from genetic algorithms for examples
> (a sketch follows below). And deceptive functions can always appear in
> the world, since humans can create whatever problems are needed to fool
> the other agents around them.
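>
> (For concreteness, a sketch of the textbook "trap" function -- this is
> the standard GA construction, nothing specific to this thread:)
>
>     def trap(bits):
>         # Deceptive: adding zeros looks better locally everywhere,
>         # yet the global optimum is the all-ones string.
>         k, ones = len(bits), sum(bits)
>         return k if ones == k else (k - 1) - ones
>
>     # trap([1,1,1,1]) == 4   global optimum
>     # trap([0,0,0,0]) == 3   deceptive local attractor
>     # trap([1,1,1,0]) == 0   one step from the optimum scores worst
>
> A hill-climber following the local gradient is pulled away from the
> optimum almost everywhere in the space.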
>
> What intelligence generally measures in day-to-day life is the ability
> to adopt other people's mental machinery for your own purposes. It gives
> no guarantee of finding new solutions to problems. The search spaces are
> so huge that you can easily lose yourself trying to hit a tiny point.
> You might have the correct biases to get to point A, but that doesn't
> mean you have the right biases to get to point B. True innovation is
> very hard.
>
> It is not hard to be Bayesian-optimal if you know what data you should
> be looking at to solve a problem; the hard part is knowing what data is
> pertinent. This is not always obvious, and requires trial, error, and
> the correct bias to keep the search within reasonable time scales (a
> sketch follows below).
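>
> (Toy illustration with made-up numbers: once the pertinent evidence is
> identified, the Bayes update itself is one line; the cost is the search
> over which of many candidate signals actually matters.)
>
>     def posterior(prior, like_h, like_not_h):
>         # One-line Bayes update -- cheap once you know what to feed it.
>         num = prior * like_h
>         return num / (num + (1 - prior) * like_not_h)
>
>     p = posterior(0.5, 0.9, 0.2)   # ~0.818, given the right signal
>
>     # Not knowing which signal is pertinent means trying each candidate
>     # (or, worse, subsets of them) before this update can even be
>     # applied -- that is where the time goes.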
>
> Copying yourself doesn't get you different biases. The copies would all
> try the same approach to start with; and even if you purposely set
> things up so that they didn't, they would all still rate certain
> approaches as very unlikely to be any good, when those might be exactly
> what you need to do.
>
>  Will
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson


