Charles, I'm not sure it's possible to nail down a measure of intelligence that's going to satisfy everyone. Presumably, it would be some measure of performance in problem solving across a wide variety of novel domains in complex (i.e. not toy) environments.
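As a rough sketch of the kind of measure I mean (Python; the names, the scoring scheme, and the normalization are all just illustrative assumptions, not a concrete proposal): score each agent on a battery of domains, average with equal weights, and when averages tie, prefer the agent that spent the least total computation.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class TrialResult:
    domain: str
    score: float      # task performance, normalized to [0, 1]
    ops_used: float   # total computation spent on the task

def intelligence_score(results):
    """Average performance across all domains (equal weights)."""
    return mean(r.score for r in results)

def rank_agents(agents):
    """Rank by mean cross-domain score; break ties by least total computation."""
    return sorted(
        agents.items(),
        key=lambda kv: (-intelligence_score(kv[1]),
                        sum(r.ops_used for r in kv[1])),
    )

agents = {
    "A": [TrialResult("D1", 0.9, 100), TrialResult("D2", 0.5, 200)],
    "B": [TrialResult("D1", 0.6, 50),  TrialResult("D2", 0.8, 60)],
}
# A and B both average 0.7, so B wins on lower total computation.
print(rank_agents(agents)[0][0])
```

Obviously the hard part is hidden in choosing the domains and normalizing the scores, but it makes the averaging-plus-efficiency idea explicit.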
Obviously, among potential agents, some will do better in domain D1 than others while doing worse in D2. But we're looking for an average across all domains. My task-specific examples may have confused the issue there; you were right to point that out. But if you give all agents identical processing power and storage space, then the winner will be the one that was able to assimilate and model each problem space the most efficiently, on average. Which ultimately means the one that used the *least* amount of overall computation.

Terren

--- On Tue, 10/14/08, Charles Hixson <[EMAIL PROTECTED]> wrote:

> From: Charles Hixson <[EMAIL PROTECTED]>
> Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
> To: [email protected]
> Date: Tuesday, October 14, 2008, 2:12 PM
>
> If you want to argue this way (reasonable), then you need a specific
> definition of intelligence. One that allows it to be accurately
> measured (and not just "in principle"). IQ definitely won't serve.
> Neither will g. Neither will GPA (if you're discussing a student).
>
> Because of this, while I think your argument is generally reasonable, I
> don't think it's useful. Most of what you are discussing is "task
> specific", and as such I'm not sure that intelligence is a reasonable
> term to use. An expert engineer might be, e.g., a lousy bridge player,
> yet both are thought of as requiring intelligence. I would assert that
> in both cases a lot of what's being measured is task-specific
> processing, i.e., narrow AI.
>
> (Of course, I also believe that an AGI is impossible in the true sense
> of "general", and that an approximate AGI will largely act as a
> coordinator between a bunch of narrow-AI pieces of varying generality.
> This seems to be a distinctly minority view.)
>
> Terren Suydam wrote:
> > Hi Will,
> >
> > I think humans provide ample evidence that intelligence is not
> > necessarily correlated with processing power.
> > The genius engineer in my example solves a given problem with *much
> > less* overall processing than the ordinary engineer, so in this case
> > intelligence is correlated with some measure of "cognitive
> > efficiency" (which I will leave undefined). Likewise, a grandmaster
> > chess player looks at a given position and can calculate a better
> > move in one second than you or I could come up with if we studied
> > the board for an hour. Grandmasters often do publicity events where
> > they play dozens of people simultaneously, spending just a few
> > seconds on each board, and winning most of the games.
> >
> > Of course, you were referring to intelligence "above a certain
> > level", but if that level is high above human intelligence, there
> > isn't much we can assume about it, since it is by definition
> > unknowable by humans.
> >
> > Terren
> >
> > --- On Tue, 10/14/08, William Pearson <[EMAIL PROTECTED]> wrote:
> >
> >> The relationship between processing power and results is not
> >> necessarily linear, or even positively correlated. And an increase
> >> in intelligence above a certain level requires increased processing
> >> power (or perhaps not? anyone disagree?).
> >>
> >> When the cost of adding more computational power outweighs the
> >> amount of money or energy that you acquire from adding the power,
> >> there is not much point adding the computational power, apart from
> >> if you are in competition with other agents that can outsmart you.
> >> Some of the traditional views of RSI neglect this and think that
> >> increased intelligence is always a useful thing. It is not.
> >>
> >> There is a reason why much of the planet's biomass has stayed as
> >> bacteria. It does perfectly well like that. It survives.
> >>
> >> Too much processing power is a bad thing: it means less for
> >> self-preservation and affecting the world. Balancing them is a
> >> tricky proposition indeed.
> >>
> >> Will Pearson

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com
