On Thursday 06 March 2008 04:28:20 pm, Vladimir Nesov wrote:
>
> This is different from what I replied to (comparative advantage, which
> J Storrs Hall also assumed), although you did state this point
> earlier.
>
> I think this one is a package-deal fallacy. I can't see how whether
> humans conspire to weed out wild carrots or not will affect decisions
> made by future AGI overlords. ;-)
>
There is far more reason to believe that the relation of a human to an AI will be like that of a human to larger social units of humans (companies, large corporations, nations) than like that of a carrot to a human.

I have argued in peer-reviewed journal articles for the view that advanced AI will essentially be like numerous, fast human intelligences rather than something of a completely different kind. I have seen ZERO considered argument for the opposite point of view -- lots of unsupported assumptions, generally using the human/insect relation as the model.

Note that if some super-intelligence were possible and optimal, evolution could have opted for fewer, bigger brains in a dominant race. It didn't -- note that our brains are actually 10% smaller than Neanderthals'. This isn't proof that the optimal system is brains of our size acting in social/economic groups, but I'd claim that anyone arguing the opposite bears the burden of proof (and has no supporting evidence I've seen).

Josh

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now