There was a subtle difference in the original scenario: I didn't say the unit would "act in your best interests," only that it would hold the constant opinion that you were not doing your best. I suppose it could simply ignore you, but the idea is that we would go through a honeymoon period in which we live together peacefully.

My line of thinking was more about how I would want that honeymoon to go. What would I like about this super unit? For me, it would be interesting to have the unit reveal what it would do if it were me, and especially to have it explain the superior insights it possessed. Sure, it might tell me to go get a brain prosthesis, but personally I wouldn't trust it and wouldn't do it. I prefer to live my own "varied" existence.

By getting a feel for what this intelligent unit could be like, we might be better able to discuss and define the design of an AGI. In my opinion, implementation language isn't where we are stuck; rather, we are not communicating in a useful design language.

On 10/25/2014 07:00 PM, Matt Mahoney via AGI wrote:
The original question was how I would react to this agent that always knew what I was thinking and could anticipate my actions, but always acted in my best interests. Well, I'm not sure. Suppose it suggested that I could become immortal and gain greatly enhanced mental capabilities (1000x more) by having it pretend to be me. All I have to do is kill myself to complete the upload. I mean, logically it certainly seems to be in my best interest to take its advice. Still, I'm not sure I would trust it, since it would know I would initially refuse, and I would know it would take action anyway based on my extrapolated volition.


