> From: Vladimir Nesov [mailto:[EMAIL PROTECTED]]
> On Mon, Mar 17, 2008 at 4:48 PM, John G. Rose <[EMAIL PROTECTED]> wrote:
> >
> > I think though that particular proofs of concept may not need more than a
> > few people. Putting it all together would require more than a few. Then the
> > resources needed to make it interact with various systems in the world would
> > make the number of people needed grow exponentially.
> >
>
> Then what's the point? We have this problem with existing software
> already, and it's precisely the magic bullet of AGI that should allow
> a free lunch of automatic interfacing with real-world issues...
The assumed value of AGI is blanket magic bullets. There will be quite a bit of automatic interfacing, and there will also be quite a bit of automatic interfacing that is prevented and controlled. But in the beginning, think about it: it's not instantaneous super-intelligence.

John

-------------------------------------------
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: http://www.listbox.com/member/?member_id=4007604&id_secret=98631122-712fa4
Powered by Listbox: http://www.listbox.com