On Sat, Aug 30, 2008 at 8:54 AM, David Hart <[EMAIL PROTECTED]> wrote:
> I suspect that there's minimal value in thinking about mundane
> 'self-improvement' (e.g. among humans or human institutions) in an
> attempt to understand AGI-RSI,

Yes. To make a somewhat weak analogy, it's somewhat like thinking about
people jumping up and down in the air in order to understand interstellar
travel ;-p

> and that thinking about 'weak RSI' (e.g. in a GA system or some other
> non-self-aware system) has value, but only insofar as it can contribute
> to an AGI-RSI system (e.g. the mechanics of Combo in OpenCog). Drawing
> the conclusion that strong RSI is impossible because it has not yet been
> observed is absurd, because there's no known system in existence today
> that is capable of strong RSI. A system capable of strong RSI must have
> broad abilities to deeply understand, reprogram and recompile its
> constituent parts before it can strongly recursively self-improve, that
> is, before it can create improved versions of itself (potentially
> heavily modified versions that must demonstrate their superior fitness
> in a competitive environment), where those unique creations repeat the
> process to yield yet greater improvements ad infinitum.
>
> -dave

-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome" - Dr Samuel Johnson
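
P.S. For concreteness, here is a toy sketch (in Python) of the kind of
'weak RSI' loop David mentions: a fixed GA-style generate/evaluate/replace
cycle. Everything in it (run_program, fitness, mutate, the polynomial
target) is a hypothetical stand-in for illustration, not OpenCog or Combo
code. The point of the sketch is that the variation and evaluation
machinery is frozen: the loop improves its candidate programs, but never
improves itself.

import random

TARGET = [3.0, -2.0, 0.5]  # coefficients of the behaviour we want candidates to match

def run_program(coeffs, x):
    # "Execute" a candidate program: evaluate its polynomial at x.
    return sum(c * x**i for i, c in enumerate(coeffs))

def fitness(coeffs):
    # Negative squared error against the target behaviour (higher is better).
    xs = [i / 10.0 for i in range(-20, 21)]
    return -sum((run_program(coeffs, x) - run_program(TARGET, x)) ** 2 for x in xs)

def mutate(coeffs, sigma=0.1):
    # Fixed variation operator: perturb one coefficient at random.
    # The loop below can never modify this function -- that limitation
    # is what makes the RSI "weak".
    child = list(coeffs)
    i = random.randrange(len(child))
    child[i] += random.gauss(0.0, sigma)
    return child

def weak_rsi(generations=1000, pop_size=20):
    # Start from random candidate programs.
    pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: only the fitter half survives to reproduce,
        # so each new variant must out-compete the incumbents to persist.
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = weak_rsi()
    print("best candidate:", [round(c, 2) for c in best],
          "fitness:", round(fitness(best), 4))

A strong-RSI system would, roughly speaking, treat mutate() and the loop
inside weak_rsi() as just more candidate programs -- things it can deeply
understand, rewrite and recompile -- so that each improved version can in
turn improve the improvement machinery itself.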
