On Tue, Aug 26, 2008 at 8:54 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:
> If Friendliness is an algorithm, it ought to be a simple matter to express
> what the goal of the algorithm is. How would you define Friendliness, Vlad?
The algorithm doesn't need to be simple. An actual Friendly AI that has started to incorporate the properties of human morality is a very complex algorithm, as is human morality itself. The original implementation of Friendly AI won't be too complex, though; it will only need to refer to that complexity outside itself in the right way, so that it converges on a dynamic with the right properties. Still, figuring out what this original algorithm needs to be, not counting the technical difficulties of implementing it, is very tricky. You start from the question "what is the right thing to do?" applied in the context of unlimited optimization power, and work on extracting a technical answer: surfacing the layers of hidden machinery that underlie this question when *you* think about it, and translating the question into a piece of engineering that answers it. That is Friendly AI.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
