It doesn't need to satisfy everyone; it just has to be the definition that you are using in your argument, and the one you agree to stick to.

E.g., if you define intelligence to be the resources used (given some metric) in solving some particular selection of problems, then that is a particular definition of intelligence. It may not be a very good one, though, as it looks like a system that knows the answers ahead of time and responds quickly would win over one that understood the problems in depth. Rather like grading a multiple-choice test instead of an essay.
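
To make the pothole concrete, here is a toy sketch in Python (the agents, problems, and costs are all invented for illustration, nobody's actual proposal): under a least-resources-used scoring rule, an agent that has simply memorized the answer key beats one that actually works the problems out.

PROBLEMS = ["p1", "p2", "p3"]

def lookup_agent(problem):
    # Knows the answer key ahead of time: one table lookup per problem.
    answers = {"p1": 42, "p2": 7, "p3": 1}
    return answers[problem], 1            # (answer, cost in "steps")

def reasoning_agent(problem):
    # Works each problem through; the constant stands in for the
    # cost of a real search or derivation.
    return 42, 1000

def total_cost(agent):
    return sum(agent(p)[1] for p in PROBLEMS)

print(total_cost(lookup_agent))       # 3
print(total_cost(reasoning_agent))    # 3000 -- the memorizer "wins"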

I'm sure that one could fudge the definition to skirt that particular pothole, but it would be an ad hoc patch; I don't trust that entire mechanism of defining intelligence. Still, if I know what you mean, I don't have to accept your interpretation to understand your argument. (You can't average across all domains, only across some pre-specified set of domains. Infinity doesn't exist in the implementable universe.)
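
That parenthetical point can also be made concrete (again with made-up agents and scores): any implementable average runs over a fixed, finite list of domains, and the ranking it produces depends on which domains were pre-specified.

DOMAINS = ["chess", "bridge", "engineering"]   # chosen in advance

scores = {
    "agent_a": {"chess": 0.9, "bridge": 0.2, "engineering": 0.7},
    "agent_b": {"chess": 0.5, "bridge": 0.6, "engineering": 0.6},
}

def average(agent):
    # Only meaningful over the fixed DOMAINS list; there is no
    # implementable "average over all possible domains".
    return sum(scores[agent][d] for d in DOMAINS) / len(DOMAINS)

for a in scores:
    print(a, round(average(a), 2))   # agent_a 0.6, agent_b 0.57

Drop chess from DOMAINS and the ranking flips (0.45 vs. 0.6), which is exactly why the choice of domain set matters.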

Personally, I'm not convinced by the entire process of "measuring intelligence". I don't think that there *IS* any such thing. If it were a disease, I'd call intelligence a syndrome rather than a diagnosis. It's a collection of partially related capabilities given one name to make them easy to think about, while ignoring details. As such it has many uses, but it's easy to mistake it for some genuine thing, especially as it's an intangible.
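
A toy way to see the syndrome view (the capability scores and weights below are hypothetical): if intelligence is a bundle of partially related capabilities, then collapsing the bundle to one scalar forces a choice of weights, and different weightings rank the same people differently.

people = {
    "engineer": {"spatial": 0.9, "verbal": 0.5, "social": 0.4},
    "bridge_player": {"spatial": 0.5, "verbal": 0.6, "social": 0.8},
}

def scalar(caps, weights):
    # Collapse a capability profile to a single number.
    return sum(caps[k] * weights[k] for k in caps)

w1 = {"spatial": 0.6, "verbal": 0.2, "social": 0.2}   # engineer "wins": 0.72 vs 0.58
w2 = {"spatial": 0.2, "verbal": 0.2, "social": 0.6}   # bridge player "wins": 0.70 vs 0.52

for name, caps in people.items():
    print(name, round(scalar(caps, w1), 2), round(scalar(caps, w2), 2))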

As an analogy consider the "gene for blue eyes". There is no such gene. There is a combination of genes that yields blue eyes, and it's characterized by the lack of genes for other eye colors. (It's more complex than that, but that's enough.)

E.g., there appears to be a particular gene, present in almost all people, that enables them to parse grammatical sentences. But a few people have been found, in one family, in whom this gene is damaged. The result is that about half the members of that family can't speak or understand language. Are they unintelligent? Well, they can't parse grammatical sentences, and they can't learn language. In most other ways they appear as intelligent as anyone else.

So I'm suspicious of ALL definitions of intelligence which treat it as some kind of global thing. But if you give me the definition that you are using in an argument, then I can at least attempt to understand what you are saying.


Terren Suydam wrote:
Charles,

I'm not sure it's possible to nail down a measure of intelligence that's going 
to satisfy everyone. Presumably, it would be some measure of performance in 
problem solving across a wide variety of novel domains in complex (i.e. not 
toy) environments.

Obviously among potential agents, some will do better in domain D1 than others, 
while doing worse in D2. But we're looking for an average across all domains. 
My task-specific examples may have confused the issue there; you were right to 
point that out.

But if you give all agents identical processing power and storage space, then 
the winner will be the one that was able to assimilate and model each problem 
space the most efficiently, on average, which ultimately means the one that 
used the *least* amount of overall computation.

Terren

--- On Tue, 10/14/08, Charles Hixson <[EMAIL PROTECTED]> wrote:

From: Charles Hixson <[EMAIL PROTECTED]>
Subject: Re: [agi] Updated AGI proposal (CMR v2.1)
To: agi@v2.listbox.com
Date: Tuesday, October 14, 2008, 2:12 PM
If you want to argue this way (reasonable), then you need a specific definition of intelligence, one that allows it to be accurately measured (and not just "in principle"). IQ definitely won't serve. Neither will g, the psychometric general factor. Neither will GPA (if you're discussing a student).

Because of this, while I think your argument is generally reasonable, I don't think it's useful. Most of what you are discussing is "task specific", and as such I'm not sure that intelligence is a reasonable term to use. An expert engineer might be, e.g., a lousy bridge player, yet both pursuits are thought of as requiring intelligence. I would assert that in both cases a lot of what's being measured is task-specific processing, i.e., narrow AI.

(Of course, I also believe that an AGI is impossible in the true sense of general, and that an approximate AGI will largely act as a coordinator between a bunch of narrow-AI pieces of varying generality. This seems to be a distinctly minority view.)

Terren Suydam wrote:
Hi Will,

I think humans provide ample evidence that intelligence is not necessarily correlated with processing power. The genius engineer in my example solves a given problem with *much less* overall processing than the ordinary engineer, so in this case intelligence is correlated with some measure of "cognitive efficiency" (which I will leave undefined). Likewise, a grandmaster chess player looks at a given position and can calculate a better move in one second than you or I could come up with if we studied the board for an hour. Grandmasters often do publicity events where they play dozens of people simultaneously, spending just a few seconds on each board, and winning most of the games.

Of course, you were referring to intelligence "above a certain level", but if that level is high above human intelligence, there isn't much we can assume about that, since it is by definition unknowable by humans.
Terren

--- On Tue, 10/14/08, William Pearson <[EMAIL PROTECTED]> wrote:

The relationship between processing power and results is not necessarily linear or even positively correlated. And an increase in intelligence above a certain level requires increased processing power (or perhaps not? anyone disagree?).

When the cost of adding more computational power outweighs the amount of money or energy that you acquire from adding the power, there is not much point in adding the computational power, apart from if you are in competition with other agents that can outsmart you. Some of the traditional views of RSI neglect this and think that increased intelligence is always a useful thing. It is not very

There is a reason why lots of the planet's biomass has stayed as bacteria: it does perfectly well like that. It survives. Too much processing power is a bad thing; it means less for self-preservation and affecting the world. Balancing them is a tricky proposition indeed.

  Will Pearson

