Eliezer,
As the system is now solving the optimization problem in a much
simpler way (brute-force search), has it, from your perspective,
actually become less intelligent?
It has become more powerful and less intelligent, in the same way that
natural selection is very powerful and
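The contrast being drawn here (powerful but not intelligent) can be made concrete with a toy example of my own, not from the thread: two solvers that both find the optimum of the same problem, one by spending resources, the other by exploiting structure.

```python
# Toy illustration (mine, not from the thread): two solvers for the same
# optimization problem.  Both find the optimum; the brute-force one is
# "powerful but unintelligent" in the sense under discussion, since it
# succeeds purely by spending evaluations, not by exploiting structure.

def f(x):
    """Unimodal objective with its peak at x = 700."""
    return -(x - 700) ** 2

def brute_force(lo, hi):
    """Evaluate every candidate: O(hi - lo) evaluations."""
    best, evals = lo, 0
    for x in range(lo, hi + 1):
        evals += 1
        if f(x) > f(best):
            best = x
    return best, evals

def ternary_search(lo, hi):
    """Exploit unimodality: O(log(hi - lo)) evaluations."""
    evals = 0
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        evals += 2
        if f(m1) < f(m2):
            lo = m1 + 1
        else:
            hi = m2 - 1
    return max(range(lo, hi + 1), key=f), evals

print(brute_force(0, 10_000))     # finds 700 after ~10,001 evaluations
print(ternary_search(0, 10_000))  # finds 700 after a few dozen evaluations
```

Both solvers are equally "powerful" on this problem; only the second looks intelligent under the efficiency-sensitive reading of the word.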
Pei,
This just shows the complexity of the usual meaning of the word
intelligence --- many people do associate it with the ability to solve
hard problems, but at the same time, many people (often the same
people!) don't think a brute-force solution shows any intelligence.
I think this comes
Shane,
I dealt with this in my 2006 book The Hidden Pattern by distinguishing
[roughly speaking --
there was some formalism but I'll avoid it for the moment]
intelligence in context C = total complexity of goals achievable in C
efficient intelligence = average over all goals G in C of:
Ben,
According to this distinction, AIXI and evolution have high intelligence
but low efficient intelligence.
Yes, and in the case of AIXI it is presumably zero, given that the
resource consumption is infinite. Evolution, on the other hand, is just
efficient enough that when implemented on a
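Shane's point can be sketched numerically. The formal split is Ben's (in The Hidden Pattern); the particular normalization below, complexity achieved per unit of resource, and the numbers are my own illustrative assumptions.

```python
# Sketch of intelligence vs. efficient intelligence.  The split is Ben's;
# this normalization (goal complexity per unit of resource, averaged) and
# the numbers are my own illustrative assumptions.

def raw_intelligence(goals):
    """Total complexity of goals achieved in the context."""
    return sum(complexity for complexity, _resources in goals)

def efficient_intelligence(goals):
    """Average, over achieved goals, of complexity per unit of resource."""
    return sum(c / r for c, r in goals) / len(goals)

# AIXI-like solver: achieves ever more complex goals, but its resource
# use blows up even faster, so its efficiency tends toward zero.
aixi_like = [(2 ** k, float(10 ** (2 * k))) for k in range(1, 8)]

# Evolution-like solver: modest goals, resource use merely proportional.
evolution_like = [(k, 5.0 * k) for k in range(1, 8)]

print(raw_intelligence(aixi_like), raw_intelligence(evolution_like))
print(efficient_intelligence(aixi_like), efficient_intelligence(evolution_like))
```

On these made-up numbers the AIXI-like solver wins on raw intelligence (254 vs. 28) while losing badly on efficient intelligence (about 0.003 vs. 0.2), matching the AIXI/evolution contrast above.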
Pei:
This just shows the complexity of the usual meaning of the word
intelligence --- many people do associate it with the ability to solve
hard problems, but at the same time, many people (often the same
people!) don't think a brute-force solution shows any intelligence.
Shane: I think
Nevertheless, it is still the end product, the raw intelligence
generated by the system, that really excites me, rather than statistics
on its internal efficiency.
Shane
Yeah, I agree with that. But like I said, the question is whether in the
real world,
efficiency needs to be considered as
It would be nice to see an example of this emergence - of one basic
computational/problem-solving process [or set of processes] that you think
will give rise to an additional or higher-level process - so we can discuss
it.
Understood...
I'll reply to this a little later when I have time
On 5/17/07, John G. Rose [EMAIL PROTECTED] wrote:
I may be coming in from left field and haven't read a lot of these
discussions on defining intelligence, but defining intelligence verbally,
yes, it can have numerous descriptions and arguments. But I need something
concrete and measurable in
On 5/17/07, Shane Legg [EMAIL PROTECTED] wrote:
This just shows the complexity of the usual meaning of the word
intelligence --- many people do associate it with the ability to solve
hard problems, but at the same time, many people (often the same
people!) don't think a brute-force solution
On 5/17/07, Mike Tintner [EMAIL PROTECTED] wrote:
One of the huge flaws in the way you guys are talking about intelligence
(and one of the reasons you do need a dual definition as I suggested
earlier) is that you've reduced intelligence to an entirely computational,
disembodied affair. But it
On 5/17/07, Pei Wang [EMAIL PROTECTED] wrote:
I assuming you are not arguing that evolution is not the only
way to produce intelligence ...
Sorry, it should be "I assume you are not arguing that evolution is
the only way to produce intelligence."
Pei
-
This list is sponsored by AGIRI:
On 5/17/07, Pei Wang [EMAIL PROTECTED] wrote:
Sorry, it should be "I assume you are not arguing that evolution is
the only way to produce intelligence."
Definitely not. Though the results in my "elegant sequence prediction"
paper show that at some point math is of no further use due to
Ben and Shane,
I started this discussion with the hope of showing people that there are
actually different understandings (or call them "definitions") of
intelligence, each with its own intuitions and motivations, and they lead
to different destinations and serve different purposes. These goals
cannot
Pei,
I think it all comes out in the wash, really ;-)
You are talking about "insufficient knowledge and resources" and my
discussion of efficiency only pertains to the "insufficient resources"
part.
But I think insufficient knowledge comes along automatically with
insufficient resources +
On 5/17/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Pei,
I think it all comes out in the wash, really ;-)
You are going beyond my English capability. ;-)
You are talking about "insufficient knowledge and resources" and my
discussion of efficiency only pertains to the insufficient
John G. Rose wrote:
I may be coming in from left field and haven't read a lot of these
discussions on defining intelligence, but defining intelligence verbally,
yes, it can have numerous descriptions and arguments. But I need something
concrete and measurable in the form of an equation. Is
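For what it's worth, one published answer to the request for "an equation" is the universal intelligence measure of Legg and Hutter, which (stated here from memory, so check their paper for the exact conditions) scores an agent pi by its expected reward across all computable environments mu, weighted by simplicity:

```latex
% Universal intelligence of agent \pi (Legg & Hutter):
% V^{\pi}_{\mu} is the expected cumulative reward of \pi in environment \mu,
% E is the set of computable reward-summable environments,
% K(\mu) is the Kolmogorov complexity of \mu.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

It is measurable only in principle, since K is uncomputable, and, like AIXI itself, it is deliberately blind to the resource consumption that the efficient-intelligence side of this thread cares about.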
Pei Wang wrote:
On 5/17/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Pei,
I think it all comes out in the wash, really ;-)
You are going beyond my English capability. ;-)
Translation: It doesn't matter one way or the other. ;-)
Richard Loosemore.
Mark, it seems that you're missing the point. We as humans aren't
ABSOLUTELY CERTAIN of anything. But we are perfectly capable of operating
on the fine line between assumed certainty and uncertainty. We KNOW that
molecules are made up of bonded atoms, but past a certain point, we can't
say
--- Pei Wang [EMAIL PROTECTED] wrote:
To me, traditional computer science (CS) studies what is the best
solution to a problem if the system has SUFFICIENT knowledge and
resources, and AI is about what is the best solution to a problem if
the system has INSUFFICIENT knowledge and resources. I
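One generic way to make "best solution under insufficient resources" operational, an anytime-search pattern rather than anything specific to Pei's own system, is a solver that can be cut off at any budget and still returns its best answer so far:

```python
# Illustrative anytime-search pattern (a generic sketch, not NARS itself):
# the solver may be stopped at any evaluation budget and still answers
# with the best candidate seen so far, degrading gracefully.

import random

def anytime_best(f, candidates, budget, seed=0):
    """Examine at most `budget` candidates, in a fixed shuffled order,
    and return the best seen so far; never fails, only degrades."""
    order = list(candidates)
    random.Random(seed).shuffle(order)
    best = order[0]
    for x in order[1:budget]:
        if f(x) > f(best):
            best = x
    return best

f = lambda x: -(x - 1234) ** 2   # optimum at x = 1234, value 0
space = range(10_000)

# Same fixed order each time, so a larger budget can only improve the answer.
for b in (10, 100, 10_000):
    print(b, f(anytime_best(f, space, b)))
```

With the full budget the answer is exact; with less, the solver still answers with whatever its resources allowed, which is the "insufficient resources" half of the distinction.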
On 5/17/07, Mike Tintner [EMAIL PROTECTED] wrote:
Pei: AI is about what is the best solution to a problem if
the system has INSUFFICIENT knowledge and resources.
Just so. I have just spent the last hour thinking about this area, and you
have spoken the line I allotted to you almost perfectly.
Richard,
Thanks!
But to me, it is 差之毫厘,谬以千里 --- a Chinese idiom meaning "An error the
breadth of a single hair (in working definitions) can lead you a
thousand miles astray (in research results)" --- of course, the words
in parentheses are mine ;-)
Pei
On 5/17/07, Richard Loosemore [EMAIL
On 5/17/07, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Pei Wang [EMAIL PROTECTED] wrote:
To me, traditional computer science (CS) studies what is the best
solution to a problem if the system has SUFFICIENT knowledge and
resources, and AI is about what is the best solution to a problem if
the
Pei,
I don't think these distinctions between terms really matter in the final
analysis - right, optimal etc. What I'm assuming, however you define it,
is that you are saying that AI can find one solution that is better than
others under conditions of insufficient knowledge/uncertainty - and
Josh: Any well-designed AI system should not have the masturbatory tendencies
to take unjustified risks.
Josh,
Jeez, you guys will not face reality. MOST of the problems we deal with involve
risks (and uncertainty). That's what human intelligence does most of the time -
that's what any
*Won't somebody actually deal with the problem - how will your AGI system
decide to invest or not to invest $10,000 in a Chinese mutual fund tomorrow?
(You guys are supposed to be in the problem-solving business).*
Look, a Novamente-based AGI system could confront this problem in 1's of
On Thursday 17 May 2007 03:36:36 pm Matt Mahoney wrote:
What if you had sufficient computing power? Then how would you solve AGI?
This is actually the basis of my approach. I just assume the brain has on the
order of 1K times more processing power than I have to experiment with, so I
look
On Thursday 17 May 2007 04:42:33 pm Mike Tintner wrote:
Won't somebody actually deal with the problem - how will your AGI system
decide to invest or not to invest $10,000 in a Chinese mutual fund tomorrow?
(You guys are supposed to be in the problem-solving business).
Au contraire. Mainstream
Ben,
Why are you still encouraging an obvious troll?
- Original Message -
From: Benjamin Goertzel
To: agi@v2.listbox.com
Sent: Thursday, May 17, 2007 4:47 PM
Subject: Re: [agi] Intelligence vs Efficient Intelligence
Won't somebody actually deal with the problem -
On Thursday 17 May 2007 05:36:17 pm Mike Tintner wrote:
You don't start a creative process with the solution, or the kind of
solution you reckon you need, i.e. in this case, the kind of architectures
that you reckon will bring about AGI.
Wrong. Technological innovations are quite frequently
Yeah, Mark, you have a good point.
Mike Tintner, I'm going to once again make an effort to
stop succumbing to the childish urge
to reply to your messages, when we obviously are not communicating
in a useful way in this context... ;-)
-- Ben
On 5/17/07, Mark Waser [EMAIL PROTECTED] wrote:
In fact, I'll be offline for the next couple days, which will make it easy!
On 5/17/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Yeah, Mark, you have a good point.
Mike Tintner, I'm going to once again make an effort to
stop succumbing to the childish urge
to reply to your messages, when
Intelligence - we're talking about storing and flipping bits -
minimalistically that's it. How many variables will it take to come up with
an equation? 6? 7? Some of the variables are specific and some may be
general. One may be a measurement of complexity, one a vector set
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
John G. Rose wrote:
Intelligence - we're talking about storing and flipping bits -
minimalistically that's it. How many variables will it take to come up with
an equation? 6? 7? Some of the variables are specific and some may be