Shane Legg [EMAIL PROTECTED] wrote: if it searches different parts of the
space in a context- and experience-sensitive manner, it is intelligent;
if it not only searches among listed alternatives but also finds new
alternatives, it is much more intelligent.
Hmmm. Ok, imagine that you have
Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: Shane Legg wrote:
Would the following be possible with your notion of intelligence:
There is a computer system that does a reasonable job of solving
some optimization problem. We go along and keep on plugging
more and more RAM and CPUs into
On 5/31/07, James Ratcliff [EMAIL PROTECTED] wrote:
The actual algorithm in this case is less intelligent, but the AI is more
intelligent because it has 2 algorithms to use, and knows enough to choose
between them.
This is similar to the sorting problem... depending on how large a list
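James's sorting analogy can be made concrete with a hypothetical meta-selector (the function names and the threshold below are illustrative, not from the thread): neither sub-algorithm is intelligent on its own, but the wrapper that chooses between them by input size arguably is.

```python
def insertion_sort(xs):
    # O(n^2) in general, but fast on small or nearly-sorted inputs
    xs = list(xs)
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs

def merge_sort(xs):
    # O(n log n): better asymptotics, more overhead per element
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def smart_sort(xs, threshold=32):
    # The "intelligence" lives in this choice, not in either algorithm
    return insertion_sort(xs) if len(xs) < threshold else merge_sort(xs)
```

The selection rule here is trivial, but it is the piece that is sensitive to context, which is exactly the property the quoted definition singles out.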
Eliezer,
As the system is now solving the optimization problem in a much
simpler way (brute force search), according to your perspective it
has actually become less intelligent?
It has become more powerful and less intelligent, in the same way that
natural selection is very powerful and
Pei,
This just shows the complexity of the usual meaning of the word
intelligence --- many people do associate it with the ability to solve
hard problems, but at the same time, many people (often the same
people!) don't think a brute-force solution shows any intelligence.
I think this comes
Pei:
This just shows the complexity of the usual meaning of the word
intelligence --- many people do associate it with the ability to solve
hard problems, but at the same time, many people (often the same
people!) don't think a brute-force solution shows any intelligence.
Shane: I think
It would be nice to see an example of this emergence - of one basic
computational/ problem-solving process [or set of processes] that you think
will give rise to an additional or higher-level process - so we can discuss
it.
Understood...
I'll reply to this a little later when I have time
On 5/17/07, John G. Rose [EMAIL PROTECTED] wrote:
I may be coming in from left field and haven't read a lot of these
discussions on defining intelligence, but defining intelligence verbally,
yes, it can have numerous descriptions and arguments. But I need something
concrete and measurable in
On 5/17/07, Shane Legg [EMAIL PROTECTED] wrote:
This just shows the complexity of the usual meaning of the word
intelligence --- many people do associate it with the ability to solve
hard problems, but at the same time, many people (often the same
people!) don't think a brute-force solution
On 5/17/07, Mike Tintner [EMAIL PROTECTED] wrote:
One of the huge flaws in the way you guys are talking about intelligence
(and one of the reasons you do need a dual definition as I suggested
earlier) is that you've reduced intelligence to an entirely computational,
disembodied affair. But it
On 5/17/07, Pei Wang [EMAIL PROTECTED] wrote:
I assuming you are not arguing that evolution is not the only
way to produce intelligence ...
Sorry, it should be I assume you are not arguing that evolution is
the only way to produce intelligence
Pei
-
This list is sponsored by AGIRI:
On 5/17/07, Pei Wang [EMAIL PROTECTED] wrote:
Sorry, it should be I assume you are not arguing that evolution is
the only way to produce intelligence
Definitely not. Though the results in my elegant sequence prediction
paper show that at some point math is of no further use due to
John G. Rose wrote:
I may be coming in from left field and haven't read a lot of these
discussions on defining intelligence, but defining intelligence verbally,
yes, it can have numerous descriptions and arguments. But I need something
concrete and measurable in the form of an equation. Is
Intelligence - we're talking about storing and flipping bits -
minimalistically that's it. How many variables will it take to come up
with an equation? 6? 7? Some of the variables are specific and some may
be general. One may be a measurement of complexity, one a vector set
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
John G. Rose wrote:
Intelligence - we're talking about storing and flipping bits -
minimalistically that's it. How many variables will it take to come up
with an equation? 6? 7? Some of the variables are specific and some may be
Pei,
However, in general I do think that, other things being equal, the
system that uses fewer resources is more intelligent.
Would the following be possible with your notion of intelligence:
There is a computer system that does a reasonable job of solving
some optimization problem. We go
Shane,
Would the following be possible with your notion of intelligence:
There is a computer system that does a reasonable job of solving
some optimization problem. We go along and keep on plugging
more and more RAM and CPUs into the computer. At some point
the algorithm sees that it has
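A minimal sketch of Shane's scenario, under assumed details he does not spell out (the toy objective, the `budget` parameter, and the hill-climbing fallback are all my own illustration): once the hardware budget covers the whole search space, the system abandons its heuristic and simply enumerates.

```python
import itertools
import random

def solve(score, n_bits, budget):
    """Maximize `score` over n-bit tuples, given an evaluation budget.

    If the whole space fits in the budget, fall back to exhaustive
    search -- more powerful, but (on Pei's reading) less intelligent.
    """
    if 2 ** n_bits <= budget:
        # Brute force: guaranteed optimum once hardware permits it
        return max(itertools.product([0, 1], repeat=n_bits), key=score)
    # Otherwise, a simple hill climber within the budget
    best = tuple(random.randint(0, 1) for _ in range(n_bits))
    for _ in range(budget):
        i = random.randrange(n_bits)
        cand = best[:i] + (1 - best[i],) + best[i + 1:]
        if score(cand) >= score(best):
            best = cand
    return best
```

The disagreement in the thread is precisely about the first branch: adding RAM and CPUs makes it fire more often, which makes the system more capable while making its search less selective.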
Pei,
No. To me that is not intelligence, though it works even better.
This seems to me to be very divergent from the usual meaning
of the word intelligence. It opens up the possibility that a super
computer that is able to win a Nobel prize by running a somewhat
efficient AI algorithm could
On 5/15/07, Mike Tintner [EMAIL PROTECTED] wrote:
Ben,
Am a little confused here - not that we're not talking very roughly along
the same lines and about the same areas. It's just that for me conceptual
blending is simply a form of analogy, which we've just discussed (and one
that works by
However, as I noted in From Complexity to Creativity, in some human minds
a personality subcomponent that may be called a creative subself may
emerge, which may utilize these basic cognitive processes in a manner
systematically oriented toward creative generation.
On 5/16/07, Shane Legg [EMAIL PROTECTED] wrote:
No. To me that is not intelligence, though it works even better.
This seems to me to be very divergent from the usual meaning
of the word intelligence. It opens up the possibility that a super
computer that is able to win a Nobel prize by
to come up with something myself
:).
John
-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 16, 2007 11:22 AM
To: agi@v2.listbox.com
Subject: Re: [agi] definitions of intelligence, again?!
On 5/16/07, Shane Legg [EMAIL PROTECTED] wrote:
No. To me
Shane Legg wrote:
Would the following be possible with your notion of intelligence:
There is a computer system that does a reasonable job of solving
some optimization problem. We go along and keep on plugging
more and more RAM and CPUs into the computer. At some point
the algorithm sees that
Pei,
necessary to spend some time on this issue, since the definition of
intelligence one accepts directly determines one's research goal and
criteria in evaluating other people's work. Nobody can do or even talk
about AI or AGI without an idea about what it means.
This is exactly why I am
P.S.
I should have added one comment in my previous remarks: part of my
attack against those who try to make formal definitions of intelligence
is that I have a specific, technical argument that says that such formal
definitions are strictly impossible: that is what my AGIRI 2006 paper
Pei,
Thank you for that.
I like everything you say about trying to define intelligence. In
essence, you and I are in perfect agreement at that level of the discussion.
However, there was a slight confusion, in that previous 'challenge' of
mine, with the exact target of my remarks.
I was
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, May 15, 2007 11:55 AM
Subject: **SPAM** Re: [agi] definitions of intelligence, again?!
Pei,
Thank you for that.
I like everything you say about trying to define intelligence. In
essence, you and I are in perfect agreement
Mark Waser wrote:
But there is a second type of definition that tries to *formalize*
what the subject is, and that is where my challenge was really directed.
I believe that Gödel's Incompleteness Theorem basically renders this
form of your challenge impossible.
Okay, now I have to figure
To: agi@v2.listbox.com
Sent: Tuesday, May 15, 2007 12:50 PM
Subject: **SPAM** Re: [agi] definitions of intelligence, again?!
Mark Waser wrote:
But there is a second type of definition that tries to *formalize* what
the subject is, and that is where my challenge was really directed.
I believe that Gödel's
Mark,
Gödel's theorem does not say that something is not true, but rather that
it cannot be proven to be true even though it is true.
Thus I think that the analogue of Gödel's theorem here would be something
more like: For any formal definition of intelligence there will exist a
form of
-
From: Shane Legg
To: agi@v2.listbox.com
Sent: Tuesday, May 15, 2007 1:16 PM
Subject: **SPAM** Re: [agi] definitions of intelligence, again?!
Mark,
Gödel's theorem does not say that something is not true, but rather that
it cannot be proven to be true even though it is true.
Thus I
Shane Legg wrote:
Mark,
Gödel's theorem does not say that something is not true, but rather that
it cannot be proven to be true even though it is true.
Thus I think that the analogue of Gödel's theorem here would be something
more like: For any formal definition of intelligence there will
Richard,
I was distinguishing between two different attitudes that people take to
the problem of making a definition. One attitude (the one you adopt
here, and the one I would also wholeheartedly adopt) is to look for a
useful *descriptive* definition: something that takes the commonsense
On 5/15/07, Richard Loosemore [EMAIL PROTECTED] wrote:
I will try to see if I can extract NARS and Novamente as special cases
of the framework at some point. I believe I have a chance of doing this
(I have actually thought about it, believe it or not), but it's not going
to happen soon. :-)
It would be nice to have a universal definition of general intelligence, but I
don't think we even share enough common intuition about what is intelligent or
what is general.
Instead what we seem to have is, for example, a definition based on uncertain
reasoning from somebody building an
On 5/15/07, Derek Zahn [EMAIL PROTECTED] wrote:
The point is that maybe we don't need a definition of intelligence, all we
need is a vision of an endpoint and (the really interesting bit), the steps
we'll take to get there.
In that case, the vision of an endpoint is exactly your working
the capacity to solve any extraordinary, creative problem.
So ... suggestions?
- Original Message -
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, May 15, 2007 4:02 PM
Subject: [agi] definitions of intelligence, again?!
In late April I was too busy to join
AGI is a race where everyone has drawn their own finish line.
My goal is to have a machine predict natural language text as well as the
average adult human. Why?
1. It is a hard AI problem. A solution might lead to a better understanding
of human learning.
2. Language modeling has useful
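The prediction goal stated here can be illustrated, far below the target of matching an average adult human, with a toy word-bigram model (the helper names are mine, not anything from the thread):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    # Count word-pair frequencies: a minimal statistical language model
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    # Most frequent follower of `word`, or None if the word is unseen
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None
```

The gap between this sketch and human-level text prediction is, of course, the entire hard AI problem being pointed at.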
, then you
also have the capacity to solve any extraordinary, creative problem.
So ... suggestions?
- Original Message -
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, May 15, 2007 4:02 PM
Subject: [agi] definitions of intelligence, again?!
In late April I was too
On 5/15/07, Derek Zahn [EMAIL PROTECTED] wrote:
Rather than try to come up with universally accepted definitions for a
concept that we all view differently, perhaps any proposed AGI
(or AGI-like) path could put forward its perceived endpoint: that is,
imagine the system you'd like to build...
Pei,
Fully agree. The situation in mainstream AI is even worse on this
topic, compared to the new AGI community. Will you write something for
AGI-08 on this?
Marcus suggested that I submit something to AGI-08. However I'm not
sure what I could submit at the moment. I'll have a think about
, then you
also have the capacity to solve any extraordinary, creative problem.
So ... suggestions?
- Original Message -
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, May 15, 2007 4:02 PM
Subject: [agi] definitions of intelligence, again?!
In late April I
On 5/15/07, Shane Legg [EMAIL PROTECTED] wrote:
Hmmm. Ok, imagine that you have two optimization algorithms
X and Y and they both solve some problem equally well. The
difference is that Y uses twice as many resources as X to do it.
As I understand your notion of intelligence, X would be
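One crude way to operationalize "Y uses twice as many resources as X" is to measure time and peak memory directly. The harness and the two toy functions below are illustrative only: two equally correct ways to sum squares, with unequal space cost.

```python
import time
import tracemalloc

def measure(fn, *args):
    # Return (result, seconds, peak_bytes) for one call -- a crude
    # proxy for the 'resources' X and Y consume in the thought experiment
    tracemalloc.start()
    t0 = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

def x(n):
    return sum(i * i for i in range(n))    # generator: O(1) extra space

def y(n):
    return sum([i * i for i in range(n)])  # list: O(n) extra space
```

On Pei's reading, equal outputs at unequal cost would make `x` the more intelligent of the two; on Shane's, the question is whether that accords with the usual meaning of the word.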
On 5/15/07, Mike Tintner [EMAIL PROTECTED] wrote:
I am suggesting that there are two main types of intelligence - and humans
have both.
Simulating the human mind isn't a definition of either of those types, or
intelligence, period.
Sorry for the misunderstanding.
The two main types of
For the philosophy of AI - and this IS a discussion of philosophy - to
ignore Psychology and human intelligence, and the very extensive work
already done here, including on creativity - doesn't seem v. wise, given
that AI/AGI still haven't got to square one in the attempt either to
emulate or to
- Original Message -
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, May 16, 2007 12:17 AM
Subject: Re: [agi] definitions of intelligence, again?!
On 5/15/07, Mike Tintner [EMAIL PROTECTED] wrote:
I am suggesting
- would arguably count as a
real superintelligence.
- Original Message -
From: Benjamin Goertzel
To: agi@v2.listbox.com
Sent: Wednesday, May 16, 2007 1:12 AM
Subject: Re: [agi] definitions of intelligence, again?!
For the philosophy of AI - and this IS a discussion
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, May 16, 2007 12:17 AM
Subject: Re: [agi] definitions of intelligence, again?!
On 5/15/07, Mike Tintner [EMAIL PROTECTED] wrote:
I am suggesting that there are two main types of intelligence - and
humans
have both.
Simulating