OK, I have a somewhat better understanding of what you are trying to accomplish.
>> If not it would be stuck forever with its first created beliefs, which looking back on the human race, def does not seem to be a good idea.
Yes, I DEFINITELY want the AGI to be stuck forever with its first created beliefs: that maximizing volition actualization/wish fulfillment is the ultimate goal, and that each separate individual's volitions/wishes are of equal value. I believe that this is fair to me and fair to the AI (because I believe that I and everyone else should be stuck with these beliefs). Why, specifically, is this a bad idea?
- You mentioned in a couple of responses the volition of the masses as your overall formula. I am putting a couple of thoughts together here, namely that the initial formula and rules for this would be hard-coded in the beginning. Using this as a starting point, how do two AGIs develop along different lines of beliefs?
>> For anything to truly grow it needs to consider its actions, motivations, and change them where necessary.
This does not seem to be in conflict with what I am saying.
My other main problem with your theory of volition, and with my own Value formula, is the future states. How can we possibly program them to look so far forward and attempt to glean everyone's future volitions? That seems like an insurmountable task in itself.
Another take on that is a large set of value formulas that maybe we could take a snapshot of, or a glimpse of, without trying to look directly into everyone's minds and see their volitions. I think we may have to simulate this with a formula that takes into account majority opinion, and protection for the minorities.
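Here is a minimal sketch (in Python, purely illustrative) of what such a simulated formula could look like. The snapshot shape, the MINORITY_FLOOR value, and the choose() helper are all my own assumptions, not anything either of us has actually specified:

    from statistics import mean

    MINORITY_FLOOR = 0.2  # assumed minimum satisfaction any person must keep

    def choose(people: list[dict[str, float]], options: list[str]) -> str | None:
        """people: one dict per person mapping option -> satisfaction in [0, 1],
        a crude stand-in for a 'snapshot' of their volitions.
        Pick the option with the highest average support, but drop any
        option that would leave someone below the minority floor."""
        viable = [o for o in options
                  if all(p.get(o, 0.0) >= MINORITY_FLOOR for p in people)]
        if not viable:
            return None  # no option protects every minority; defer the decision
        return max(viable, key=lambda o: mean(p.get(o, 0.0) for p in people))

    # Three people, two options; 'b' has majority support but starves person 3.
    people = [{"a": 0.6, "b": 0.9}, {"a": 0.5, "b": 0.8}, {"a": 0.7, "b": 0.1}]
    print(choose(people, ["a", "b"]))  # -> "a"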
Thought: I want a piece of candy, but it is bad for my teeth, my weight, etc. Does the robot give me a piece of candy when I ask? At what point does it say no vs. yes? How much do the AIs override our free will to 'protect' us, and our and the masses' future volitions?
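To make the candy question concrete, here is one toy decision rule (again just a sketch; the DISCOUNT constant and grant_request() are hypothetical names): say yes only when the strength of the present wish outweighs a discounted estimate of the harm to my future volitions.

    DISCOUNT = 0.9  # assumed weight of each future step's volition vs. the present

    def grant_request(present_desire: float, future_costs: list[float]) -> bool:
        """future_costs[i] is the estimated harm to the person's own future
        volitions i+1 steps from now, each in [0, 1]. Grant the request only
        if the present desire outweighs the discounted sum of future harms."""
        discounted_harm = sum(c * DISCOUNT ** (i + 1)
                              for i, c in enumerate(future_costs))
        return present_desire > discounted_harm

    # One piece of candy: strong present desire, small future harms -> yes.
    print(grant_request(0.8, [0.05, 0.05]))  # True
    # The tenth piece today: same desire, much larger future harms -> no.
    print(grant_request(0.8, [0.5, 0.6]))    # False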
You mentioned once, in one of the posts, that volitions would be represented by rules; I don't remember your exact wording, but it seems more likely that we would have to define variations of these rules, and allow the AGI to learn more rules, as opposed to giving it the very tricky task of gleaning one's volitions directly.
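For instance, a hand-written seed rule set plus learned additions might look like this (the Rule shape and the example rules are my assumptions, not your actual proposal):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        name: str
        applies: Callable[[dict], bool]  # does this rule match the situation?
        verdict: bool                    # allow (True) or refuse (False)
        priority: int                    # higher priority wins conflicts

    seed_rules = [
        Rule("default-grant", lambda s: True, verdict=True, priority=0),
        Rule("health-harm", lambda s: s.get("future_harm", 0) > 0.5,
             verdict=False, priority=10),
    ]

    def decide(situation: dict, rules: list[Rule]) -> bool:
        matching = [r for r in rules if r.applies(situation)]
        return max(matching, key=lambda r: r.priority).verdict

    # A learned variation is simply appended with a higher priority:
    learned = Rule("explicit-override", lambda s: s.get("informed_consent", False),
                   verdict=True, priority=20)
    rules = seed_rules + [learned]
    print(decide({"future_harm": 0.9, "informed_consent": True}, rules))  # True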
Thank You
James Ratcliff
http://falazar.com
