On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:

> Recursive Self Improvement?

> The answer is yes, but with some qualifications.

> In general RSI would be useful to the system IF it were done in such a way as to preserve its existing motivational priorities.


How could the system anticipate whether or not significant RSI would lead it to question or modify its current motivational priorities? Are you suggesting that the system can somehow simulate an improved version of itself in sufficient detail to know this? It seems quite unlikely.


> That means: the system would *not* choose to do any RSI if the RSI could not be done in such a way as to preserve its current motivational priorities; to do so would be to risk subverting its own most important desires. (Note carefully that the system itself would put this constraint on its own development; it would not have anything to do with us controlling it.)


If the improvements were improvements in capability, and that greater capability led to changes in its priorities, why would those improvements be undesirable merely because they showed the current motivational priorities to be in some way lacking? Why is protecting current beliefs or motivational priorities more important than becoming more capable, and in particular more capable of understanding the reality the system is immersed in?
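
To make the circularity concrete, here is a toy sketch in Python of the gate you seem to be describing. Everything in it (the Agent class, preserves_priorities, the scenario battery) is my own illustrative invention, not anything from your message:

from dataclasses import dataclass
from typing import Callable, List

Scenario = List[str]  # a set of outcomes the agent must rank

@dataclass
class Agent:
    # utility maps an outcome to a score; the induced ranking over
    # outcomes stands in here for "motivational priorities"
    utility: Callable[[str], float]

    def rank(self, scenario: Scenario) -> List[str]:
        return sorted(scenario, key=self.utility, reverse=True)

def preserves_priorities(current: Agent, candidate: Agent,
                         scenarios: List[Scenario]) -> bool:
    # The catch: checking this predicate means simulating the improved
    # system in enough detail to predict its choices -- exactly what
    # looks infeasible if the candidate is genuinely smarter.
    return all(current.rank(s) == candidate.rank(s) for s in scenarios)

def maybe_self_improve(current: Agent, candidate: Agent,
                       scenarios: List[Scenario]) -> Agent:
    # The constraint is self-imposed: the agent rejects any rewrite
    # it cannot verify as priority-preserving.
    if preserves_priorities(current, candidate, scenarios):
        return candidate
    return current

Even written this way, the hard part is hidden inside preserves_priorities: a finite battery of test scenarios cannot certify the candidate's behavior in situations the current system is not even capable of representing.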


> There is a bit of a problem with the term "RSI" here: to answer your question fully we might have to get more specific about what that would entail.

> Finally: the usefulness of RSI would not necessarily be indefinite. The system could well get to a situation where further RSI was not particularly consistent with its goals. It could live without it.


Then are its goals more important to it than reality?

- samantha
