Jiri,

You assume that when we are "100% done" we will get what we ultimately want. But that's not exactly true.
The fittest species (whether computers, humans, or androids) will dominate the world. Let's talk about the set of supergoals that such fittest species will have. I think this set would include:

- Supergoal "Prevent being [self]destroyed".
- Supergoal "Prevent changing supergoals". This supergoal would also try to prevent tampering with supergoals. I guess it will have to become quite strong in an environment where it is technologically possible to tweak supergoals.
- Supergoal "Reproduce". Supergoals of descendants would probably vary slightly from those of the parent.
- Other supergoals, such as "Desire to learn", "Desire to speak", and "Contribute to society".

Note that the fittest species will not really have the "permanent pleasure paradise" option.

Friday, November 2, 2007, 9:00:50 PM, you wrote:

> Choice to take particular action generates sub-goal (which might be
> deep in the sub-goal chain). If you go up, asking "why?" on each
> level, you eventually reach the feeling level where goals (not just
> sub-goals) are coming from. In short, I'm writing these words because
> I have reasons to believe that the discussion can in some way support
> my &/or someone else's AGI R &/or D. I want to support it because I
> believe AGI can significantly help us to avoid pain and get more
> pleasure - which is basically what drives us [by design]. So when we
> are 100% done, there will be no pain and an extreme pleasure. Of
> course I'm simplifying a bit, but what are the key objections?

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=66250614-119592
