On 4/15/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:

Pei,

> A key point is that, unlike a human, a well-architected AGI should be able
> to easily increase its intelligence by adding memory, adding faster
> processors, adding more processors, and so forth -- as well as by analyzing
> its own processes and their flaws with far more accuracy than any near-term
> brain scan...

Sure, these factors will increase the system's capability, though they will
not change its working principles.

> > However, to say "intelligence will continue to
> > evolve" and "there will be a moment after which things will completely
> > go beyond our understanding" are not the same.


> True, they're not the same....

> It is a reasonable hypothesis that AGIs created by humans will find
> themselves unable -- even after a lot of self-study and a lot of hardware
> augmentation -- to dramatically transcend the human level of
> intelligence; i.e., the idea of human-created algorithms bootstrapping
> beyond the human level could be infeasible. This seems highly unlikely to
> me, but I can't say it's an idiotic hypothesis.
>
> Is the above the hypothesis you're making?

Not exactly.

My points are:

(1) An AGI can be more intelligent than humans in a certain sense, but it
should still be understandable in principle.

(2) The intelligence of an AGI will continue to improve, through both human
effort and the AGI's own, but it will still take time. There is no reason
to believe that this time will be vanishingly short.

> Or are you doubting that a massively superhuman intelligence would be beyond
> the scope of understanding of ordinary, unaugmented humans?

It depends on what you mean by "understanding" -- the general
principles or the concrete behaviors.

Pei

> Ben

 This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;
