Pei,

A key point is that, unlike a human, a well-architected AGI should be able
to easily increase its intelligence by adding memory, adding faster
processors, adding more processors, and so forth -- as well as by analyzing
its own processes and their flaws with far more accuracy than any near-term
brain scan could offer...

However, to say "intelligence will continue to evolve" and "there will
be a moment after which things will completely go beyond our
understanding" are not the same.



True, they're not the same....

It is a reasonable hypothesis that AGIs created by humans will find
themselves unable -- even after a lot of self-study and a lot of hardware
augmentation -- to dramatically transcend the human level of
intelligence.  I.e., the idea of human-created algorithms bootstrapping
beyond the human level could be infeasible.  This seems highly unlikely to
me, but I can't say it's an idiotic hypothesis.

Is the above the hypothesis you're making?

Or are you doubting that a massively superhuman intelligence would be beyond
the scope of understanding of ordinary, unaugmented humans?

Ben

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936