On Thu, Aug 7, 2008 at 3:53 PM, Charles Hixson <[EMAIL PROTECTED]> wrote:
> At this point I think it relevant to bring in an assertion from Larry Niven
> (Protector):
> Paraphrase: When you understand all the consequences of an act, then you
> don't have free will. You must choose the best decision. (For some value
> of best.)
>
> If this is correct, then Free Will is either an argument over probabilities,
> or over "best"... which could reasonably be expected to differ from entity
> to entity.

That is interesting; I never considered that before. I think that free will has to be defined relatively. So even though we cannot transcend any way we want to, we still have free will relative to the range of possibilities that we do have. And this range is too great to be comprehended except in terms of broad generalizations. So the choices that a future AGI program can make should not be, and cannot be, dismissed beforehand. Free will can differ from entity to entity, but I do not think a working definition can be limited to probabilities or to what is 'best'.

Jim Bromer

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244&id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com
