Matt,
Shane Legg's definition of universal intelligence requires (I believe)
complexity but not adaptability. In a universal intelligence test the agent
never knows what environment it is facing. It can only try to learn from
experience and adapt in order to perform well. This means
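For reference, and from memory so treat the details as approximate: the Legg/Hutter universal intelligence measure weights an agent π's expected reward V in each computable environment μ by that environment's Kolmogorov complexity K(μ):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Nothing in the formula itself mentions adaptability; it only rewards performing well across a complexity-weighted set of environments, which is why adaptation shows up as a means to score well rather than as part of the definition.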
Richard,
I agree with you that intelligence currently has no
classical/objective/true/formal definition.
However, I hope your opinion (given the title of the post) won't be
understood as you can take intelligence to mean whatever you want,
and since the term has no definition, all attempts
Richard,
It seems that the major difference between you and me is not on the
definition of intelligence, but on the definition of definition.
:)
Pei
Subject: Re: There is no definition of intelligence [WAS Re: [agi]
Intelligence vs Efficient Intelligence]
I'm probably not answering your question but have been thinking more on all
this. There's the usual thermodynamics stuff and relativistic physics that
is going on with intelligence and flipping bits within this universe, versus
the no-friction universe or Newtonian setup.
To: agi@v2.listbox.com
Subject: RE: [agi] Intelligence vs Efficient Intelligence
--- John G. Rose [EMAIL PROTECTED] wrote:
But what I've been thinking, and this is probably just reiterating what
someone else has worked through, is that basically a large part of
intelligence is chaos control, chaos feedback loops, operating within
complexity. Intelligence is some sort of delicate
Matt Mahoney wrote:
I think there is a different role for chaos theory. Richard Loosemore
describes a system as intelligent if it is complex and adaptive.
NO, no no no no!
I already denied this.
Misunderstanding: I do not say that a system is intelligent if it is
complex and adaptive.
Well I'm going into conjecture area because my technical knowledge of some
of these disciplines is weak, but I'll keep going just for grins.
Take an example of an entity existing in a higher level of consciousness - a
Buddha who has achieved enlightenment. What is going on there? Versus an
ant
OK I get it - there's a super infinite intelligence and then an efficient
intelligence that is represented and operates within our physical universe,
restricted by thermodynamics and such? Sounds reasonable.
So what's all the hubbub about definitions of intelligence? Sounds pretty
straightforward to me.
--- John G. Rose [EMAIL PROTECTED] wrote:
So what's all the hubbub about definitions of intelligence? Sounds pretty
straightforward to me.
I guess people want intelligence to be useful, not just complex :-)
This raises a question. Suppose you had a very large program consisting of
random
--- John G. Rose [EMAIL PROTECTED] wrote:
Did you arrive at some sort of unit for intelligence? Typically
measurements are constructed of combinations of basic units for example 1
watt = 1 kg * m^2/s^3. Or is it not a unit but a set of units?
It is a unitless number. It is measured in
Time, entropy, bits, What else?
-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Friday, May 18, 2007 9:14 AM
To: agi@v2.listbox.com
Subject: RE: [agi] Intelligence vs Efficient Intelligence
Time has to be included maybe?
and MFLOPS on various benchmarks...
have a cognition engine it operates over time and it will have units.
--- John G. Rose [EMAIL PROTECTED] wrote:
There's Newtonian and relativistic intelligence. Probably you can model
intelligence formulas after physics, because without physics there are no
bits, so time needs to be in there as well. Intelligence is affected by the
speed of light as data
Pretty good calculations :)
Some thoughts on the topic of units and equations, some may be obvious or
redundant -
If something was extremely intelligent it would have an exact copy, bit for
bit, of the whole universe in its head. Maybe that's saying that the
universe is 100% intelligent
According to my view,
-- raw intelligence would be measured in bits
-- efficient intelligence would ultimately be measured in terms such as
bits / (4D volume of a region of spacetime)
As noted, the Bekenstein bound thus places an upper limit on efficient
intelligence according to current
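For a rough sense of scale, the Bekenstein bound is easy to compute directly. The numbers below (a roughly brain-sized system, R ~ 0.1 m, m ~ 1.5 kg) are my own illustrative assumptions, not figures from this thread:

```python
import math

# Bekenstein bound: I <= 2*pi*R*E / (hbar * c * ln 2) bits, for a system
# enclosed in radius R (meters) with total energy E (joules).
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def bekenstein_bits(radius_m, mass_kg):
    energy = mass_kg * C ** 2  # total mass-energy of the system
    return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

# Assumed brain-scale values: R ~ 0.1 m, m ~ 1.5 kg.
brain_limit = bekenstein_bits(0.1, 1.5)
print(f"{brain_limit:.2e} bits")  # on the order of 10**42 bits
```

The bound is astronomically far above anything thermodynamically realizable, but it is finite, which is the point about efficient intelligence having a ceiling.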
Ben,
According to this distinction, AIXI and evolution have high intelligence
but low efficient intelligence.
Yes, and in the case of AIXI it is presumably zero, given that the resource
consumption is infinite. Evolution, on the other hand, is just efficient
enough that when implemented on a
Nevertheless, it is still the end-product raw intelligence generated by the
system that really excites me, rather than statistics on its internal
efficiency.
Shane
Yeah, I agree with that. But like I said, the question is whether in the
real world, efficiency needs to be considered as
Ben and Shane,
I started this discussion in the hope of showing people that there are
actually different understandings (or call them "definitions") of
intelligence, each with its own intuitions and motivations, and they lead
to different destinations and serve different purposes. These goals
cannot
Pei,
I think it all comes out in the wash, really ;-)
You are talking about "insufficient knowledge and resources", and my
discussion of efficiency only pertains to the "insufficient resources"
part. But I think insufficient knowledge comes along automatically with
insufficient resources +
Pei Wang wrote:
On 5/17/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Pei,
I think it all comes out in the wash, really ;-)
You are going beyond my English capability. ;-)
Translation: It doesn't matter one way or the other. ;-)
Richard Loosemore.
Mark, it seems that you're missing the point. We as humans aren't
ABSOLUTELY CERTAIN of anything. But we are perfectly capable of operating
on the fine line between assumed certainty and uncertainty. We KNOW that
molecules are made up of bonded atoms, but past a certain point, we can't
say
--- Pei Wang [EMAIL PROTECTED] wrote:
To me, traditional computer science (CS) studies what is the best
solution to a problem if the system has SUFFICIENT knowledge and
resources, and AI is about what is the best solution to a problem if
the system has INSUFFICIENT knowledge and resources. I
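One toy way to make that contrast concrete (my own sketch, not anything from NARS or from Pei): an "anytime" optimizer that always has a best-so-far answer available and simply improves it for as long as resources last, rather than demanding enough resources to run to completion:

```python
import random

def anytime_minimize(f, sample, budget, checkpoints=()):
    # Best-so-far search: an answer exists at every step, and it can
    # only improve as more of the resource budget is spent.
    best = float("inf")
    snapshots = {}
    for step in range(1, budget + 1):
        best = min(best, f(sample()))
        if step in checkpoints:
            snapshots[step] = best
    return best, snapshots

rng = random.Random(0)
f = lambda x: (x - 3.7) ** 2           # toy objective, minimum at x = 3.7
sample = lambda: rng.uniform(-10, 10)  # "knowledge" here is blind sampling
final, snaps = anytime_minimize(f, sample, 10_000, checkpoints=(10,))
# The answer after 10,000 samples is at least as good as the one that
# was already available after 10.
```

The classical-CS framing would ask for the provably optimal x; the insufficient-resources framing asks only for the best answer obtainable before the budget runs out.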
On 5/17/07, Mike Tintner [EMAIL PROTECTED] wrote:
Pei: AI is about what is the best solution to a problem if
the system has INSUFFICIENT knowledge and resources.
Just so. I have just spent the last hour thinking about this area, and you
have spoken the line I allotted to you almost perfectly.
Richard,
Thanks!
But to me, it is 差之毫厘,谬以千里 --- a Chinese idiom meaning "an error the
breadth of a single hair (in working definitions) can lead you a
thousand miles astray (in research results)" --- of course, the words
in parentheses are mine ;-)
Pei
Pei,
I don't think these distinctions between terms really matter in the final
analysis - "right", "optimal", etc. What I'm assuming, however you define
it, is that you are saying that AI can find one solution that is better
than others under conditions of insufficient knowledge/uncertainty - and
Josh: Any well-designed AI system should not have the masturbatory tendencies
to take unjustified risks.
Josh,
Jeez, you guys will not face reality. MOST of the problems we deal with involve
risks (and uncertainty). That's what human intelligence does most of the time -
that's what any
*Won't somebody actually deal with the problem - how will your AGI system
decide to invest or not to invest $10,000 in a Chinese mutual fund tomorrow?
(You guys are supposed to be in the problem-solving business).*
Look, a Novamente-based AGI system could confront this problem in 1's of
On Thursday 17 May 2007 03:36:36 pm Matt Mahoney wrote:
What if you had sufficient computing power? Then how would you solve AGI?
This is actually the basis of my approach. I just assume the brain has on the
order of 1K times more processing power than I have to experiment with, so I
look
On Thursday 17 May 2007 04:42:33 pm Mike Tintner wrote:
Won't somebody actually deal with the problem - how will your AGI system
decide to invest or not to invest $10,000 in a Chinese mutual fund tomorrow?
(You guys are supposed to be in the problem-solving business).
Au contraire. Mainstream
Ben,
Why are you still encouraging an obvious troll?
On Thursday 17 May 2007 05:36:17 pm Mike Tintner wrote:
You don't start a creative process with the solution, or the kind of
solution you reckon you need, i.e. in this case, the kind of architectures
that you reckon will bring about AGI.
Wrong. Technological innovations are quite frequently