On Mon, Oct 28, 2013 at 4:24 AM, Steve Richfield <[email protected]> wrote:
> Matt, Camel, et al,
>
> I think you both have it wrong. Humans appear compelled to commit
> society's efforts and resources in counter-productive and self-destructive
> directions. It certainly doesn't take an AGI to see this, and no AGI could
> ever overcome this without exerting SO much force that it would clearly be
> viewed as "unfriendly". The sorts of things that are needed are EXACTLY
> what people seek to keep AGIs from doing.

Well, apparently it could indeed take AGI for the masses to see this, or rather, not just to see it but to react accordingly. We have to deal with the "really existing world" (as Chomsky likes to put it). Explaining what ought to be or could be does not change the fact that the world looks very different. Repeating "we don't need AGI to solve socioeconomic problems" will not solve them either.

I agree that many people would consider it unfriendly, but we will probably need some superintelligent entity in order to create an ergodic environment for humanity. (Ergodic environment just sounds better than AGI-nanny; thanks to Marcus Hutter for that phrase: http://www.youtube.com/watch?v=vUUeHZJFN2Q )

Also, what about dealing with degenerating genes due to a lack of natural selection within a technological society? What about increasing redundancy in terms of habitats? I suppose fixing our genes or becoming substrate-independent is an AGI-hard problem?
