hi,

>
> What I am interested in is: if someone gives me a computer system that
> changes its state in some fashion, can I state how powerful that
> method of change is likely to be? That is, what is the exact difference
> between a traditional learning algorithm and the way I envisage AGIs
> changing their state?
>

I'm fairly sure this question is unsolvable in general ... so the interesting
question may be: is there a subset of the class of possible AGIs, one that
includes systems of an extremely (and hopefully unlimitedly) high level of
intelligence, for which it *is* tractable to usefully probabilistically
predict the consequences of the system's self-modifications?


>
> Also, can you formalise the difference between a human's method of
> learning how to learn, or bootstrapping language off language (both
> examples of a strange loop), and a program inspecting and changing its
> source code?
>

Suppose one has a program of size N with some self-reprogramming
capability. There's a question of: for a given probability p, how large
is the subset of program space that the program has probability > p of
entering (where the probability is calculated across possible worlds, e.g.
according to an Occam distribution)?
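To make the question concrete, here is a minimal toy sketch (my own construction, not a formalisation anyone has proposed): programs are fixed-length bit strings, "self-modification" is a hypothetical random one-bit flip, and the Occam distribution is stood in for by a 2^-length weight. We then Monte Carlo-estimate the Occam-weighted mass of the set of programs the system enters with probability > p:

```python
import random

def occam_weight(prog):
    # Toy stand-in for an Occam/Solomonoff-style prior:
    # weight 2^-(program length). An assumption for illustration only.
    return 2.0 ** (-len(prog))

def self_modify(prog, rng):
    # Hypothetical self-modification dynamics: flip one random bit.
    i = rng.randrange(len(prog))
    return prog[:i] + (1 - prog[i],) + prog[i + 1:]

def reachable_mass(start, steps, trials, p, seed=0):
    """Monte Carlo estimate of the Occam-weighted mass of the set of
    programs entered with probability > p within `steps` modifications."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(trials):
        prog = start
        visited = {prog}          # programs entered on this trajectory
        for _ in range(steps):
            prog = self_modify(prog, rng)
            visited.add(prog)
        for q in visited:
            counts[q] = counts.get(q, 0) + 1
    # Sum Occam weights over programs whose estimated entry
    # probability across trials exceeds p.
    return sum(occam_weight(q) for q, c in counts.items()
               if c / trials > p)

start = (0, 0, 0, 0)  # a 4-bit "program"
mass = reachable_mass(start, steps=3, trials=2000, p=0.5)
```

The start program is always on its own trajectory, so the mass is at least its own Occam weight; how much more accumulates depends on p and on how concentrated the modification dynamics are, which is exactly the quantity being asked about.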

>
> I'm also interested in recursive self changing systems and whether you
> can be sure they will stay recursive self changing systems, as they
> change.



I'm almost certain there is no certainty in this world regarding empirical
predictions like that ;-)

ben



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/