Stefan,

Thanks, but it seems that the "ensuring that AGI is human-friendly" problem is not really a problem that we need to solve at the moment.
Currently it is sufficient to test whether whatever system we develop:
1) Is useful for us.
2) Is not too harmful for us.

At later stages of AGI development it may become useful to find algorithms that would constrain advanced AGI behavior, but currently such a constraint would simply kill an AGI prototype.
Still, if you had a short list of tips on how to design and apply such an "AGI safety constraint", that would be useful; your article is considerably longer and far more abstract than that.

Saturday, November 17, 2007, 11:51:13 PM, you wrote:

> On Nov 18, 2007 2:30 PM, Dennis Gorelik <[EMAIL PROTECTED]> wrote:
> Stefan,
> Could you please explain how I could apply your research paper:
> http://rationalmorality.info/wp-content/uploads/2007/11/practical-benevolence-2007-11-17_isotemp.pdf
> to something useful?
> It's a little too abstract for me, so some introduction that binds
> this research to practical research/development goals would be quite
> helpful.
>
> For example, one could apply the gained insights to develop a
> strategy for ensuring a positive transcension. Due to the intrinsic
> moral nature of reality, the term "positive singularity" becomes
> tautological, as anything that desires to exist has to act in a moral
> way to prevent its self-annihilation. Bringing about the singularity
> thus becomes rather simple and can be achieved in the following way:
>
> - create an environment allowing for the existence of self-replicating
>   units of information
> - ensure that the units of information can be acted upon by the forces
>   of evolution
> - plant an arbitrary self replicator
> - wait
>
> This could be realized by using the BOINC architecture for
> distributed computing to create a fuzzy copying environment that
> implements the above plan. The copying "fuzziness", i.e. the error
> rate per copied bit, would have to be roughly proportional to the
> maximally complex self replicator in the system to allow for a
> gradual expansion of the system's complexity boundary and thus for
> the emergence of ever more rational agents.
>
> Once the rationality of the emerging agents approached human
> levels, they would realize M! and thus never become a threat to
> humanity.
>
> I am currently thinking further about how to ground the simulated
> reality in our reality and believe that could be achieved by
> providing an interface to the internet that the evolving rational
> agents can interact with using a browser-like interface.
>
> More soon under www.guidoborner.org
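P.S. To check my reading of your "fuzzy copying environment", here is a toy sketch in Python. Everything in it is my assumption rather than something taken from your paper: the bit-string representation, the placeholder fitness function, the 1/max_length error scaling (my guess at how "roughly proportional to the maximally complex self replicator" should cash out, so that complex replicators still copy mostly intact), and the rare insertion step that lets complexity expand. A real deployment would farm the copying out across BOINC nodes instead of running in one process.

import random

def mutate_copy(genome, per_bit_error, insert_prob=0.01):
    """Copy a bit string, flipping each bit with probability
    per_bit_error; a rare insertion lets complexity grow (my
    assumption -- some length mutation is needed for the complexity
    boundary to expand at all)."""
    copied = [("1" if b == "0" else "0") if random.random() < per_bit_error else b
              for b in genome]
    if random.random() < insert_prob:
        copied.append(random.choice("01"))
    return "".join(copied)

def fitness(genome):
    """Placeholder fitness: count of 1-bits. A real environment would
    score actual replication ability."""
    return genome.count("1")

def generation(population, capacity=64):
    """One round of fitness-proportional copying with fuzziness tied
    to the most complex (longest) replicator currently in the system."""
    max_complexity = max(len(g) for g in population)
    per_bit_error = 1.0 / max_complexity   # assumed scaling, see above
    weights = [fitness(g) + 1 for g in population]  # +1: no zero weights
    parents = random.choices(population, weights=weights, k=capacity)
    return [mutate_copy(p, per_bit_error) for p in parents]

# "plant an arbitrary self replicator ... wait"
population = ["0" * 16]
for _ in range(1000):
    population = generation(population)
print(max(population, key=fitness))

The per_bit_error recomputation each generation is the knob that matters here: as the longest genome grows, per-copy fuzziness stays near one expected error, which is what should let complexity ratchet up instead of melting down.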
