On 6/23/08, William Pearson <[EMAIL PROTECTED]> wrote:
> The base beliefs shared between the group would be something like:
>
> - The entities will not have goals/motivations inherent to their
> form. That is, robots aren't likely to band together to fight humans,
> or try to take over the world for their own ends. These would have
> to be programmed into them, as evolution has programmed group loyalty
> and selfishness into humans.
> - The entities will not be capable of full wrap-around recursive
> self-improvement. They will improve in fits and starts within a wider
> economy/ecology, like most developments in the world. *
> - The goals and motivations of the entities we are likely to see in
> the real world will be shaped over the long term by forces in the
> world, e.g. evolutionary, economic, and physical.
>
> Basically, an organisation trying to prepare for a world where AIs
> aren't sufficiently advanced technology or magic genies, but are still
> dangerous and a potentially destabilising change to the world. Could a
> coherent message be articulated by the subset of people who agree
> with these points? Or are we all still too fractured?
What you propose sounds reasonable, but I'm more interested in how to get AGI developers to collaborate, which is more urgent to me.

YKY
