I wrote:

> If we do not give arbitrary access to the mind model itself or its
> implementation, it seems safer than if we do -- this limits the
> extent that RSI is possible: the efficiency of the model implementation
> and the capabilities of the model do not change.

An obvious objection is that if the "capabilities of the model" include the ability to simulate a Turing machine, then those capabilities in principle include everything computable. However, the issue being addressed here is a practical one, referring to what actually happens, and there are enormous practical constraints -- limits on processing time and memory space -- that must be considered. Such consideration belongs in a model-specific safety analysis.
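To make the distinction concrete, here is a minimal sketch (in Python, with a toy machine invented purely for illustration) of the point that Turing-completeness in principle does not imply unbounded capability in practice: a simulator that enforces explicit step and tape budgets can refuse to run past its resource limits.

```python
def run_tm(transitions, tape, max_steps=1000, max_cells=1000):
    """Simulate a one-tape Turing machine under explicit resource caps.

    transitions: {(state, symbol): (new_state, write_symbol, move)}
    move is -1 (left) or +1 (right); the 'halt' state stops the machine.
    Returns (status, tape) where status is 'halt', 'out-of-time',
    or 'out-of-space'.
    """
    tape = dict(enumerate(tape))  # sparse tape; blank cells read as '_'
    state, head = 'start', 0
    for _ in range(max_steps):
        if state == 'halt':
            return 'halt', tape
        symbol = tape.get(head, '_')
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
        if len(tape) > max_cells:
            return 'out-of-space', tape  # space budget exhausted
    return 'out-of-time', tape  # step budget exhausted

# A toy machine (an assumption for illustration): write three 1s, then halt.
toy = {
    ('start', '_'): ('s1', '1', 1),
    ('s1', '_'): ('s2', '1', 1),
    ('s2', '_'): ('halt', '1', 1),
}
status, out = run_tm(toy, [], max_steps=10)
print(status)  # prints "halt"
```

The same machine run with `max_steps=2` returns `'out-of-time'` instead: the simulated machine's theoretical capabilities are unchanged, but what actually happens is bounded by the budgets the host chooses, which is the kind of consideration a model-specific safety analysis would make precise.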
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49457514-fe026f