--- On Tue, 10/14/08, William Pearson <[EMAIL PROTECTED]> wrote:

> There are things you can't model within the limits of processing
> power/memory, which restricts your ability to solve them.

Processing power, storage capacity, and so forth are all important in the realization of an AI, but I don't see how they limit your ability to model or solve problems except in terms of performance, i.e., whether a given problem can be solved within time T. Those are factors outside the black box of intelligence. Cognitive architecture is the guts of the black box. No attempt to create AGI can be taken seriously if it doesn't explain what intelligence does inside the black box, whether you're talking about an individual agent or a globally distributed one.

(By the way, it's worth noting that "problem solving ability Y" is uncomputable, since it's basically just a twist on Kolmogorov complexity. Which is to say, you can never prove that you have the perfect, un-improvable cognitive architecture given finite resources.)
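To make the analogy concrete, here is a rough sketch of the standard counting argument, in Python. The function K below is a hypothetical oracle standing in for Kolmogorov complexity; the whole point is that no such computable function can exist, and the same squeeze applies to any perfect measure of problem solving ability.

    from itertools import count, product

    def K(s):
        """Hypothetical oracle: length in bits of the shortest program
        that prints s. Assumed computable only for the sake of the
        contradiction below."""
        raise NotImplementedError("no such computable function exists")

    def first_string_with_complexity_above(n):
        # Enumerate binary strings shortest-first; return the first one
        # whose (assumed) complexity exceeds n.
        for length in count(1):
            for bits in product("01", repeat=length):
                s = "".join(bits)
                if K(s) > n:
                    return s

    # The squeeze: this function is itself a description of its output,
    # and its own size is only a constant plus roughly log2(n) bits (to
    # spell out n). Pick n much larger than that, and the "first string
    # with complexity above n" has been described in fewer than n bits,
    # which is impossible. So K cannot be computable, and neither can a
    # provably maximal problem solving ability for fixed resources.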
With toy problems like chess, increasing computing power can compensate for what amounts to a wildly inefficient cognitive architecture; a rough sketch of that trade-off follows below. In the real world of AGI, you have to work on efficiency first, because the complexity is simply too high to manage. So while you can get linear improvement on Y by increasing out-of-the-black-box factors, it's inside the box that you get the non-linear, punctuated gains that are in all likelihood necessary to create AGI.
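As a minimal illustration of that trade-off, consider a plain fixed-depth negamax search; the game interface here is hypothetical, just enough to show the shape of the thing:

    def negamax(state, depth, game):
        """Best achievable score for the side to move, searching `depth`
        plies ahead. `game` is an assumed interface providing
        moves(state), apply(state, move), is_over(state), and a crude
        static evaluate(state) scored from the mover's point of view."""
        if depth == 0 or game.is_over(state):
            return game.evaluate(state)
        best = float("-inf")
        for move in game.moves(state):
            # A position that is good for the opponent is bad for us.
            best = max(best, -negamax(game.apply(state, move), depth - 1, game))
        return best

Each extra ply of depth multiplies the work by the branching factor, and in a toy domain that brute-force spend can paper over an arbitrarily crude evaluate(). Improving the evaluation itself is the inside-the-box change, and that is where the interesting gains live.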
Terren

--- On Tue, 10/14/08, William Pearson <[EMAIL PROTECTED]> wrote:

> From: William Pearson <[EMAIL PROTECTED]>
> Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
> To: [email protected]
> Date: Tuesday, October 14, 2008, 1:13 PM
>
> Hi Terren,
>
> > I think humans provide ample evidence that intelligence is not
> > necessarily correlated with processing power. The genius engineer in
> > my example solves a given problem with *much less* overall processing
> > than the ordinary engineer, so in this case intelligence is
> > correlated with some measure of "cognitive efficiency" (which I will
> > leave undefined). Likewise, a grandmaster chess player looks at a
> > given position and can calculate a better move in one second than you
> > or I could come up with if we studied the board for an hour.
> > Grandmasters often do publicity events where they play dozens of
> > people simultaneously, spending just a few seconds on each board, and
> > winning most of the games.
>
> What I meant was that at processing power/memory Z, there is a
> problem solving ability Y which is the maximum. To increase problem
> solving ability above Y you would have to increase processing
> power/memory. That is when cognitive efficiency reaches one, in your
> terminology. Efficiency is normally measured as a ratio, so that
> seems natural.
>
> There are things you can't model within the limits of processing
> power/memory, which restricts your ability to solve them.
>
> > Of course, you were referring to intelligence "above a certain
> > level", but if that level is high above human intelligence, there
> > isn't much we can assume about that, since it is by definition
> > unknowable by humans.
>
> Not quite what I meant.
>
> Will
