My points are:

(1) AGI can be more intelligent than humans in a certain sense, but it
should still be understandable in principle.



The AGI systems humans create will be understandable by humans in principle.

Agreed.

But let's call these AGI_0.

Then, AGI_0 will create AGI_1, which will be understandable by AGI_0
in principle...

And, AGI_1 will create AGI_2, which will be understandable by AGI_1
in principle...

etc.

At what point will AGI_n no longer be understandable by humans in
principle? -- where by "understandable in principle", I mean
"understandable in principle, given realistic bounds on the time and
memory resources used to carry out this understanding"
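
One rough way to make this concrete (my own framing, with illustrative
constants, not anything proved): suppose understanding AGI_{n+1} takes
at least c times the time/memory needed to understand AGI_n, for some
constant c > 1, and a human has some fixed resource bound B. Then the
resources a human needs to understand AGI_n satisfy

  R(n) >= R(0) * c^n

so for any fixed B there is an n beyond which R(n) > B. Pairwise
understandability along the chain does not give end-to-end
understandability.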


(2) Intelligence in AGI will continue to improve, both through human
effort and through AGI itself, but it will still take time. There is
no reason to believe that the time will be infinitely short.



Not infinitely short, unless current physics is badly wrong in certain
relevant respects.

But if AGI_1 can think 1000 times faster than a human,
maybe AGI_2 will be able to think 10,000 times as fast, etc.

An infinite rate is not necessary for the result to be incomprehensibly
rapid compared to the human brain.
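
To spell out the arithmetic (the factor of 10 per generation is just
the illustrative assumption above, not a prediction):

  speed(AGI_n) = 1000 * 10^(n-1) times human speed

By n = 5 that is a factor of 10^7, i.e. roughly a subjective year of
human-level thought every few seconds.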


> Or are you doubting that a massively superhuman intelligence would
> be beyond the scope of understanding of ordinary, unaugmented humans?

It depends on what you mean by "understanding" -- the general
principles or the concrete behaviors.


My hypothesis is that for large n, AGI_n as defined above will likely
obey general principles that humans are not able to understand,
assuming reasonable time and memory constraints on their understanding
process.

-- Ben G
