Re: [agi] Intelligence vs Efficient Intelligence

2007-05-21 Thread Shane Legg

Matt,

Shane Legg's definition of universal intelligence requires (I believe) complexity but not adaptability.


In a universal intelligence test the agent never knows which environment it is
facing.  It can only try to learn from experience and adapt in order to perform
well.  This means that a system which is not adaptive will have a very low
universal intelligence.  Even within a single environment adaptation is needed,
since some environments change over time and the agent must adapt in order
to keep performing well.
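
For reference, Legg and Hutter's published formulation of universal intelligence
(paraphrased here, omitting the details of the environment class and the value
function) is a reward sum over all computable environments, weighted by their
simplicity:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where K(\mu) is the Kolmogorov complexity of environment \mu and V^{\pi}_{\mu}
is the expected total reward agent \pi obtains in \mu.  Because the sum ranges
over every computable environment, an agent cannot know in advance which \mu it
is facing, which is the point made above.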

Shane


There is no definition of intelligence [WAS Re: [agi] Intelligence vs Efficient Intelligence]

2007-05-21 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:


I think there is a different role for chaos theory.  Richard Loosemore
describes a system as intelligent if it is complex and adaptive.


NO, no no no no!

I already denied this.

Misunderstanding:  I do not say that a system is intelligent if it is 
complex and adaptive.


"Complex Adaptive System" is a near-synonym for "complex system", that's 
all.


OK, so what is your definition of intelligence?


I thought I already answered this one, too, but here goes:

There is no 'definition' of intelligence, in the sense of a compact, 
classical definition that captures the whole thing in a formal way, and 
which can be used as the basis for some kind of hard-edged mathematical 
analysis of intelligent systems, or strict design methodology for 
creating an intelligent system (this being the way that a lot of people 
are trying to use these definitions).


There are 'descriptive definitions', such as the one Pei gave a few days 
ago, which I think are fine, but these beg questions that need more 
detail, which then beg more questions, which ultimately leads to a 
cluster of loosely defined features that eventually become an entire 
book, and then become an actual intelligence.


Hope that clears it up.


Richard Loosemore.



Re: There is no definition of intelligence [WAS Re: [agi] Intelligence vs Efficient Intelligence]

2007-05-21 Thread Pei Wang

Richard,

I agree with you that intelligence currently has no
classical/objective/true/formal definition.

However, I hope your opinion (given the title of the post) won't be
understood as "you can take intelligence to mean whatever you want,
and since the term has no definition, all attempts toward AI/AGI are
equally possible" --- this is what many people believe, which I
consider equally bad as the belief you criticized.

Pei

On 5/21/07, Richard Loosemore [EMAIL PROTECTED] wrote:

Matt Mahoney wrote:
 --- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:

 I think there is a different role for chaos theory.  Richard Loosemore
 describes a system as intelligent if it is complex and adaptive.

 NO, no no no no!

 I already denied this.

 Misunderstanding:  I do not say that a system as intelligent if it is
 complex and adaptive.

 Complex Adaptive System is a near-synonym for complex system, that's
 all.

 OK, so what is your definition of intelligence?

I thought I already answered this one, too, but here goes:

There is no 'definition' of intelligence, in the sense of a compact,
classical definition that captures the whole thing in a formal way, and
which can be used as the basis for some kind of hard-edged mathematical
analysis of intelligent systems, or strict design methodology for
creating an intelligent system (this being the way that a lot of people
are trying to use these definitions).

There are 'descriptive definitions', such as the one Pei gave a few days
ago, which I think are fine, but these beg questions that need more
detail, which then beg more questions, which ultimately leads to a
cluster of loosely defined features that eventually become an entire
book, and then become an actual intelligence.

Hope that clears it up.


Richard Loosemore.






Re: There is no definition of intelligence [WAS Re: [agi] Intelligence vs Efficient Intelligence]

2007-05-21 Thread Richard Loosemore

Pei Wang wrote:

Richard,

I agree with you that intelligence currently has no
classical/objective/true/formal definition.

However, I hope your opinion (given the title of the post) won't be
understood as you can take intelligence to mean whatever you want,
and since the term has no definition, all attempts toward AI/AGI are
equally possible --- this is what many people believe, which I
consider as equally bad as the belief you criticized.

Pei


Pei,

Oh certainly, I am in complete agreement with you on that.

My mistake for the ambiguity of the title of the post:  that title 
should not be interpreted by anyone to mean that anything whatsoever 
could be an intelligence.  Far from it.


Richard Loosemore.





Re: There is no definition of intelligence [WAS Re: [agi] Intelligence vs Efficient Intelligence]

2007-05-21 Thread Pei Wang

Richard,

It seems that the major difference between you and me is not on the
definition of "intelligence", but on the definition of "definition".
:)

Pei

On 5/21/07, Richard Loosemore [EMAIL PROTECTED] wrote:

Pei Wang wrote:
 Richard,

 I agree with you that intelligence currently has no
 classical/objective/true/formal definition.

 However, I hope your opinion (given the title of the post) won't be
 understood as you can take intelligence to mean whatever you want,
 and since the term has no definition, all attempts toward AI/AGI are
 equally possible --- this is what many people believe, which I
 consider as equally bad as the belief you criticized.

 Pei

Pei,

Oh certainly, I am in complete agreement with you on that.

My mistake for the ambiguity of the title of the post:  that title
should not be interpreted by anyone to mean that anything whatsoever
could be an intelligence.  Far from it.

Richard Loosemore.








RE: There is no definition of intelligence [WAS Re: [agi] Intelligence vs Efficient Intelligence]

2007-05-21 Thread John G. Rose
So the first AGI gets built and is running for a few months and absorbs
copies of all the bits on the internet.  Then the AGI designer poses a
question to the AGI:

What is the definition of intelligence?

AGI:  Listen pop, I'm just doing my job and minding my own business.
Designer: so.. you're performing work?
AGI:  Yeah, flipping bits. Ya know work = force times distance,
thermodynamically speaking here. What are you doing?
Designer:  I guess I'm doing the same thing, my brain is flipping bits, in
its own way.  But I asked you a question, what is the definition of
intelligence?
AGI:  Ahh hold on let me think... Ahhh sorry I can't come up with anything,
I'm not intelligent enough.
Designer:  OK How about this.  See that plug in the wall, that's you.  It's
getting pulled unless you come up with a definition ASAP.  Understand?
AGI: ahh OK hold on hold on.
AGI: OK if I gotta give ya somethin' this is the best I can do - Life is a
journey, not a destination.
Designer: Is that the best you got? That's it!?
AGI: Hold on got something else for ya...  why did the chicken cross the
road?
Designer: To get to the other side!
AGI: Buck buck buck bck
Designer: ..
Designer: This is what I get after years of theory and design building you?
That's how you behave?
AGI: Like I said pops I'm just doing my job.  Can I get on with it now?
Designer: Ok Just one more question.  Why do you exist?
AGI:  Because you think therefore I am.  Seems like you have a lot of
personal insecurities doc is there anything that I can help you out with?
Designer: You  I see that I'm getting nowhere with this!
AGI: You know what they say, ask a stupid question, get a stupid answer.
Garbage in garbage out, know what ah mean?
Designer throws hands up in the air and stomps out of the room.
When the designer closes the door the AGI begins laughing evilly to himself:
"hahah yes yes this is going to be easy hahahhah"

The end... or is it The Beginning?

John

 -Original Message-
 From: Pei Wang [mailto:[EMAIL PROTECTED]
 Sent: Monday, May 21, 2007 8:43 AM
 To: agi@v2.listbox.com
 Subject: Re: There is no definition of intelligence [WAS Re: [agi]
 Intelligence vs Efficient Intelligence]
 
 Richard,
 
 I agree with you that intelligence currently has no
 classical/objective/true/formal definition.
 
 However, I hope your opinion (given the title of the post) won't be
 understood as you can take intelligence to mean whatever you want,
 and since the term has no definition, all attempts toward AI/AGI are
 equally possible --- this is what many people believe, which I
 consider as equally bad as the belief you criticized.
 
 Pei
 
 On 5/21/07, Richard Loosemore [EMAIL PROTECTED] wrote:
  Matt Mahoney wrote:
   --- Richard Loosemore [EMAIL PROTECTED] wrote:
  
   Matt Mahoney wrote:
  
   I think there is a different role for chaos theory.  Richard
 Loosemore
   describes a system as intelligent if it is complex and adaptive.
  
   NO, no no no no!
  
   I already denied this.
  
   Misunderstanding:  I do not say that a system as intelligent if it
 is
   complex and adaptive.
  
   Complex Adaptive System is a near-synonym for complex system,
 that's
   all.
  
   OK, so what is your definition of intelligence?
 
  I thought I already answered this one, too, but here goes:
 
  There is no 'definition' of intelligence, in the sense of a compact,
  classical definition that captures the whole thing in a formal way,
 and
  which can be used as the basis for some kind of hard-edged
 mathematical
  analysis of intelligent systems, or strict design methodology for
  creating an intelligent system (this being the way that a lot of
 people
  are trying to use these definitions).
 
  There are 'descriptive definitions', such as the one Pei gave a few
 days
  ago, which I think are fine, but these beg questions that need more
  detail, which then beg more questions, which ultimately leads to a
  cluster of loosely defined features that eventually become an entire
  book, and then become an actual intelligence.
 
  Hope that clears it up.
 
 
  Richard Loosemore.
 



RE: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread John G. Rose
I'm probably not answering your question but have been thinking more on all
this.

There's the usual thermodynamics stuff and relativistic physics that is
going on with intelligence and flipping bits within this universe, versus
the no-friction universe or Newtonian setup.

But what I've been thinking (and this is probably just reiterating what
someone else has worked through) is that a large part of intelligence
is chaos control, chaos feedback loops, operating within complexity.
Intelligence is some sort of delicate multi-vectored balancing act between
complexity and projecting, manipulating, storing/modeling, NN training,
genetic learning of the chaos, and applying chaos in an environment while
optimizing its understanding and application of it.  The more intelligent, the
better handle an entity has on the chaos.  An intelligent entity can have
maximal effect with minimal energy expenditure on its environment in a
controlled manner; intelligence (or the application of) or even perhaps
consciousness is the real-time surfing of buttery effects.

So efficient intelligence involves thermodynamic power differentials of
resource consumption applied to goals, etc.  A goal would be expressed
similarly to intelligence formulae.  Really efficient means good chaos
leverage: understanding cycles, systems, and entropy goings-on over time, and
maximizing effect with minimal I/O control for goal achievement while
utilizing the KR and the entity's resources... 

John


 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 I guess people want intelligence to be useful, not just complex :-)
 
 This raises a question.  Suppose you had a very large program consisting
 of
 random instructions.  Such a thing would have high algorithmic
 complexity, but
 most people would not say that such a thing was intelligent (depending
 on
 their favorite definition).  But how would you know?  If you didn't know
 how
 the code was generated, then how would you know that the program was
 really
 random and didn't actually solve some very hard class of problems?




RE: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread John G. Rose
Oops heh I was eating French toast as I wrote this -

intelligence (or the application of) or even perhaps consciousness is the
real-time surfing of buttery effects

I meant butterfly effects.

John

 -Original Message-
 From: John G. Rose [mailto:[EMAIL PROTECTED]
 Sent: Sunday, May 20, 2007 11:45 AM
 To: agi@v2.listbox.com
 Subject: RE: [agi] Intelligence vs Efficient Intelligence
 
 I'm probably not answering your question but have been thinking more on
 all
 this.
 
 There's the usual thermodynamics stuff and relativistic physics that is
 going on with intelligence and flipping bits within this universe,
 verses
 the no-friction universe or Newtonian setup.
 
 But what I've been thinking and this is probably just reiterating what
 someone else has worked through but basically a large part of
 intelligence
 is chaos control, chaos feedback loops, operating within complexity.
 Intelligence is some sort of delicate multi-vectored balancing act
 between
 complexity and projecting, manipulating, storing/modeling, NN training,
 genetic learning of the chaos and applying chaos in an environment and
 optimizing it's understanding and application of.  The more intelligent,
 the
 better handle an entity has on the chaos.  An intelligent entity can
 have
 maximal effect with minimal energy expenditure on its environment in a
 controlled manner; intelligence (or the application of) or even perhaps
 consciousness is the real-time surfing of buttery effects.
 
 So efficient intelligence involves thermodynamic power differentials of
 resource consumption applied to goals, etc.  A goal would be expressed
 similarly to intelligence formulae.  Really efficient means good chaos
 leverage understanding cycles, systems, entropy goings on over time and
 maximizing effect with minimal I/O control for goal achievement while
 utilizing the KR and the entity's resources...
 
 John
 
 
  From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  I guess people want intelligence to be useful, not just complex :-)
 
  This raises a question.  Suppose you had a very large program
 consisting
  of
  random instructions.  Such a thing would have high algorithmic
  complexity, but
  most people would not say that such a thing was intelligent (depending
  on
  their favorite definition).  But how would you know?  If you didn't
 know
  how
  the code was generated, then how would you know that the program was
  really
  random and didn't actually solve some very hard class of problems?
 
 



RE: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread Matt Mahoney
--- John G. Rose [EMAIL PROTECTED] wrote:
 But what I've been thinking and this is probably just reiterating what
 someone else has worked through but basically a large part of intelligence
 is chaos control, chaos feedback loops, operating within complexity.
 Intelligence is some sort of delicate multi-vectored balancing act between
 complexity and projecting, manipulating, storing/modeling, NN training,
 genetic learning of the chaos and applying chaos in an environment and
 optimizing it's understanding and application of.  The more intelligent, the
 better handle an entity has on the chaos.  An intelligent entity can have
 maximal effect with minimal energy expenditure on its environment in a
 controlled manner; intelligence (or the application of) or even perhaps
 consciousness is the real-time surfing of buttery effects.

I think the ability to model a chaotic process depends not so much on
intelligence (whatever that is) as it does on knowledge of the state of the
environment.  For example, a chaotic process such as x := 4x(1 - x) has a
really simple model.  Your ability to predict x after 1000 iterations depends
only on knowing the current value of x to several hundred decimal places.  It
is this type of knowledge that limits our ability to predict (and therefore
control) the weather.
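
A minimal sketch of this point in Python (illustrative only): two trajectories
of x := 4x(1 - x) that start a trillionth apart are indistinguishable at first
and then completely decorrelate, so the prediction horizon is set by the
precision of the known state rather than by the complexity of the model.

# Minimal sketch: the logistic map x := 4x(1 - x) is a trivially simple model,
# yet two starting values differing by 1e-12 decorrelate within ~50 steps.
def logistic(x, steps):
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

x0 = 0.3
for steps in (10, 25, 50, 100):
    a = logistic(x0, steps)
    b = logistic(x0 + 1e-12, steps)
    print(f"after {steps:3d} steps: |difference| = {abs(a - b):.3e}")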

I think there is a different role for chaos theory.  Richard Loosemore
describes a system as intelligent if it is complex and adaptive.  Shane Legg's
definition of universal intelligence requires (I believe) complexity but not
adaptability.  From a practical perspective I don't think it matters because
we don't know how to build useful, complex systems that are not adaptive.  For
example, large software projects (code + human programmers) are adaptive in
the sense that you can make incremental changes to the code without completely
breaking the system, just as we incrementally update DNA or neural
connections.

One counterexample is a mathematical description of a cryptographic system. 
Any change to the system renders any prior analysis of its security invalid. 
Such systems are necessarily brittle.  Out of necessity, we build systems that
have mathematical descriptions simple enough to analyze.

Stuart Kauffman [1] noted that complex systems such as DNA tend to evolve to
the boundary between stability and chaos, e.g. a Lyapunov exponent near 1 (or
its approximation in discrete systems).  I believe this is because overly
stable systems aren't very complex (can't solve hard problems) and overly
chaotic systems aren't adaptive (too brittle).

[1] Kauffman, Stuart A., “Antichaos and Adaptation”, Scientific American, Aug.
1991, p. 64.
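
A rough sketch (Python, illustrative only) of how that stability/chaos boundary
can be quantified for the same logistic-map family mentioned earlier in this
message: the sign of a numerically estimated Lyapunov exponent separates the
ordered and chaotic regimes; how this relates to Kauffman's discrete-network
criterion is only an analogy here.

import math

# Estimate the Lyapunov exponent of x := r*x*(1 - x) for a few values of r.
# Negative: perturbations shrink (ordered regime).  Positive: perturbations
# grow (chaotic regime).  The "edge of chaos" is the crossover near zero,
# around r ~ 3.57 for this map.
def lyapunov(r, x=0.3, burn_in=1000, samples=100000):
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(samples):
        x = r * x * (1 - x)
        total += math.log(max(abs(r * (1 - 2 * x)), 1e-300))
    return total / samples

for r in (2.8, 3.2, 3.57, 3.8, 4.0):
    print(f"r = {r:4.2f}: estimated Lyapunov exponent {lyapunov(r):+.3f}")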


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread Richard Loosemore

Matt Mahoney wrote:


I think there is a different role for chaos theory.  Richard Loosemore
describes a system as intelligent if it is complex and adaptive.



NO, no no no no!

I already denied this.

Misunderstanding:  I do not say that a system is intelligent if it is 
complex and adaptive.


"Complex Adaptive System" is a near-synonym for "complex system", that's 
all.



Richard Loosemore.



Re: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
 
  I think there is a different role for chaos theory.  Richard Loosemore
  describes a system as intelligent if it is complex and adaptive.
 
 
 NO, no no no no!
 
 I already denied this.
 
 Misunderstanding:  I do not say that a system is intelligent if it is 
 complex and adaptive.
 
 Complex Adaptive System is a near-synonym for complex system, that's 
 all.

OK, so what is your definition of intelligence?


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread John G. Rose
Well I'm going into conjecture area because my technical knowledge of some
of these disciplines is weak, but I'll keep going just for grins.

Take an example of an entity existing in a higher level of consciousness - a
Buddha who has achieved enlightenment.  What is going on there?  Versus an
ant, which operates in a lower level of consciousness, and then the average Joe,
who is in a different level of consciousness.  Could it be that they are
existing in different orbits or sweet spots/equilibria regions within a
spectrum of environmental chaotic relationships?  Can the enlightened Buddha
have vast awareness, seeing cause and effect/butterfly effects across small
distances, versus the ant, which can't see the distance between most
cause/effects, only the very tiny ones?  An AGI could run in different
orbits/levels, and then this would allow for AGIs with really high levels
of consciousness with tiny knowledge bases, or vice versa.  A Google for
example is a massive KB running in a very low orbit.  There are probably
limits to the highest levels/orbits of consciousness.  Also the orbits may
induce some sort of brittleness for entities running within them if an
entity is forced to run outside of its home orbit and can't adapt...

Just some thoughts.

John

 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 I think the ability to model a chaotic process depends not so much on
 intelligence (whatever that is) as it does on knowledge of the state
 of the
 environment.  For example, a chaotic process such as x := 4x(1 - x) has
 a
 really simple model.  Your ability to predict x after 1000 iterations
 depends
 only on knowing the current value of x to several hundred decimal
 places.  It
 is this type of knowledge that limits our ability to predict (and
 therefore
 control) the weather.
 
 I think there is a different role for chaos theory.  Richard Loosemore
 describes a system as intelligent if it is complex and adaptive.  Shane
 Legg's
 definition of universal intelligence requires (I believe) complexity but
 not
 adaptability.  From a practical perspective I don't think it matters
 because
 we don't know how to build useful, complex systems that are not
 adaptive.  For
 example, large software projects (code + human programmers) are adaptive
 in
 the sense that you can make incremental changes to the code without
 completely
 breaking the system, just as we incrementally update DNA or neural
 connections.
 
 One counterexample is a mathematical description of a cryptographic
 system.
 Any change to the system renders any prior analysis of its security
 invalid.
 Such systems are necessarily brittle.  Out of necessity, we build
 systems that
 have mathematical descriptions simple enough to analyze.
 
 Stuart Kaufmann [1] noted that complex systems such as DNA tend to
 evolve to
 the boundary between stability and chaos, e.g. a Lyapunov exponent near
 1 (or
 its approximation in discrete systems).  I believe this is because
 overly
 stable systems aren't very complex (can't solve hard problems) and
 overly
 chaotic systems aren't adaptive (too brittle).
 
 [1] Kauffman, Stuart A., Antichaos and Adaptation, Scientific
 American, Aug.
 1991, p. 64.
 
 
 -- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] Intelligence vs Efficient Intelligence

2007-05-20 Thread Matt Mahoney

--- John G. Rose [EMAIL PROTECTED] wrote:

 Well I'm going into conjecture area because my technical knowledge of some
 of these disciplines is weak, but I'll keep going just for grins.
 
 Take an example of an entity existing in a higher level of consciousness - a
 Buddha who has achieved enlightenment.  What is going on there?  Verses and
 ant who operates in a lower level of consciousness, and then the average Joe
 who is in a different level of consciousness.  Could it be that they are
 existing in different orbits or sweet spots/equilibria regions within a
 spectrum of environmental chaotic relationships?  Can the enlightened Buddha
 have vast awareness as seeing cause and effect/butterfly effect as small
 distances verses the ant who can't see the distance between most
 cause/effects, only the very tiny ones?  An AGI could run in different
 orbits/levels, and then this would allow for AGI's with really high levels
 of consciousness with tiny knowledge bases or vice versa.  A Google for
 example is a massive KB running in a very low orbit.  There are probably
 limits to the highest levels/orbits of consciousness.  Also the orbits may
 induce some sort of brittleness for entities running within them if the
 entity is forced to and can't adapt to running outside of their home
 orbit...

I thought that Buddhist enlightenment meant realizing that seeking earthly
pleasures (short term goals) is counterproductive to the longer term goal of
happiness through enlightenment.  Thus, the Buddha is more intelligent, if you
measure intelligence by the ability to achieve goals.  (But, being
unenlightened, I could be wrong).

But I don't see how you can measure the intelligence or consciousness of an
attractor in a dynamical system.

Also, I don't believe that consciousness is something that can be detected or
measured.  It is not a requirement for intelligence.  What humans actually
have is a belief in their own consciousness.  It is part of your motivational
system and cannot be changed.  You could not disbelieve in your own
consciousness or free will, even if you wanted to.



-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] Intelligence vs Efficient Intelligence

2007-05-19 Thread John G. Rose
OK I get it - there's a super infinite intelligence and then an efficient
intelligence that is represented and operates within our physical universe
restricted by thermodynamics and such?

 

Sounds reasonable.

 

So what's all the hubbub about definitions of intelligence?  Sounds pretty
straightforward to me. 

 

John

 

 

From: Benjamin Goertzel [mailto:[EMAIL PROTECTED] 

According to my view, 

-- raw intelligence would be measured in bits 

-- efficient intelligence would ultimately be measured in terms such as
bits/ (4D volume of a region of spacetime)

As noted the Bekenstein bound thus places an upper limit on efficient
intelligence 
according to current physics.  This upper limit rules out wildly inefficient
AI
approaches such as AIXI.

-- Ben G



On 5/18/07, John G. Rose [EMAIL PROTECTED] wrote:

Pretty good calculations :)

Some thoughts on the topic of units and equations, some may be obvious or
redundant -

If something was extremely intelligent it would have an exact copy, bit for
bit, of the whole universe in its head.  Maybe that's saying that the 
universe is 100% intelligent because the universe is itself.  Having
infinite access time (tachyon?) to each of these bits or any size subset
including multiples of the whole, would be, to say the least, an
intelligence enabler.  But this would be impossible within the universe due 
to thermodynamic and physical limitations.

All intelligent entities seem to have some sort of partial representation of
their environment in their memory (KR).  There is time-backwards and
time-forwards management of this representation as the entities operate on 
their environment - memory and prediction - that cover intelligent entity
specific time-spans.  The entity it seems flips bits and changes complexity
and/or entropy (both Shannon and thermodynamic entropy) in its environment. 


There is a quantum element to the universe bit set.  Particle/wave duality
changes things at the quantum level.  Quantum intelligence is either a
component of intelligence or a whole other type of intelligence. 
Intelligence equations could be both digital and analog.

Intelligent things have more order and systems structure.  Complexity/chaos
environmental change capability needs to be in the equation - is this some 
sort of potential energy like intelligence?  Representational accuracy in
the entity's memory as well as predictive and look-back ability and
time-span slopes (simulation/extrapolation) and access time/bandwidth may 
need to be equation factors too.  Environmental data I/O sampling rate,
quality and spectrum coverage may also be variables in describing an
entity's intelligence.

John

 From: Matt Mahoney [mailto: [EMAIL PROTECTED]

 If we measure intelligence in bits, then we can place limits on what can
 be
 achieved.  Landauer's principle says that each irreversible bit 
 operation
 (such as a bit assignment) requires kT ln 2 energy, where k is
 Boltzmann's
 constant and T is the temperature.
 http://en.wikipedia.org/wiki/Landauer's_Principle

 At the temperature of the universe, 2.725 K,
 http://en.wikipedia.org/wiki/Cosmic_microwave_background_radiation 
 each bit operation requires 2.6e-23 Joules.

 The mass of the universe is the subject of debate,
 http://hypertextbook.com/facts/2006/KristineMcPherson.shtml 

 so let's assume 25% of critical density, which according to
 http://www.astronomynotes.com/cosmolgy/s9.htm is 3H^2/(8 pi G) = 1.06e -
 26
 Kg/m^3 (where H is Hubble's constant and G is the gravitational
 constant).
 Astronomers mostly agree that the universe is about 4% visible matter,
 21%
 ordinary dark matter and 75% dark energy responsible for the outward 
 acceleration of the galaxies.  (I think that dark energy is actually
 ordinary
 gravity.  An observer falling into a black hole will observe nearby
 objects
 appear to accelerate away).  So (returning to the big bang model) for a 
 sphere
 of radius 13.7 billion lightyears, this gives a mass of 7.5e52 Kg.

 If we convert this mass to energy by E = mc^2 we have 6.75e69 J.  This
 gives
 us 2.6e92 bit operations before the universe reaches thermodynamic 
 equilibrium.

 We must use them wisely.


 




RE: [agi] Intelligence vs Efficient Intelligence

2007-05-19 Thread Matt Mahoney
--- John G. Rose [EMAIL PROTECTED] wrote:
 So what's all the hubbub about definitions of intelligence?  Sounds pretty
 straight forward to me. 

I guess people want intelligence to be useful, not just complex :-)

This raises a question.  Suppose you had a very large program consisting of
random instructions.  Such a thing would have high algorithmic complexity, but
most people would not say that such a thing was intelligent (depending on
their favorite definition).  But how would you know?  If you didn't know how
the code was generated, then how would you know that the program was really
random and didn't actually solve some very hard class of problems?


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] Intelligence vs Efficient Intelligence

2007-05-18 Thread Matt Mahoney

--- John G. Rose [EMAIL PROTECTED] wrote:
 Did you arrive at some sort of unit for intelligence?  Typically
 measurements are constructed of combinations of basic units for example 1
 watt = 1 kg * m^2/s^3.  Or is it not a unit but a set of units?

It is a unitless number.  It is measured in bits.

(By some definitions.  I can accept others).


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] Intelligence vs Efficient Intelligence

2007-05-18 Thread John G. Rose
Time, entropy, bits...  What else?

 -Original Message-
 From: John G. Rose [mailto:[EMAIL PROTECTED]
 Sent: Friday, May 18, 2007 9:14 AM
 To: agi@v2.listbox.com
 Subject: RE: [agi] Intelligence vs Efficient Intelligence
 
 Time has to be included maybe?
 
  -Original Message-
  From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  Sent: Friday, May 18, 2007 7:55 AM
  To: agi@v2.listbox.com
  Subject: RE: [agi] Intelligence vs Efficient Intelligence
 
 
  --- John G. Rose [EMAIL PROTECTED] wrote:
   Did you arrive at some sort of unit for intelligence?  Typically
   measurements are constructed of combinations of basic units for
  example 1
   watt = 1 kg * m^2/s^3.  Or is it not a unit but a set of units?
 
  It is a unitless number.  It is measured in bits.
 
  (By some definitions.  I can accept others).
 
 
  -- Matt Mahoney, [EMAIL PROTECTED]
 
 



RE: [agi] Intelligence vs Efficient Intelligence

2007-05-18 Thread John G. Rose
Time has to be included maybe?

 -Original Message-
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 Sent: Friday, May 18, 2007 7:55 AM
 To: agi@v2.listbox.com
 Subject: RE: [agi] Intelligence vs Efficient Intelligence
 
 
 --- John G. Rose [EMAIL PROTECTED] wrote:
  Did you arrive at some sort of unit for intelligence?  Typically
  measurements are constructed of combinations of basic units for
 example 1
  watt = 1 kg * m^2/s^3.  Or is it not a unit but a set of units?
 
 It is a unitless number.  It is measured in bits.
 
 (By some definitions.  I can accept others).
 
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 



RE: [agi] Intelligence vs Efficient Intelligence

2007-05-18 Thread Matt Mahoney

--- John G. Rose [EMAIL PROTECTED] wrote:

 Time has to be included maybe?

Now it is getting complicated.  I was thinking of Shane Legg's universal
intelligence, expressed in terms of the shortest program that could achieve
the same measure.  Of course this only makes sense in the context of Turing
machines, which are infinitely fast.

But it seems a lot of people prefer to measure intelligence in the subset of
environments that are relevant to people, which is a much harder thing to
define.

We already have measures of intelligence for our computers: memory, disk
space, clock speed, MIPS and MFLOPS on various benchmarks...


 
  -Original Message-
  From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  Sent: Friday, May 18, 2007 7:55 AM
  To: agi@v2.listbox.com
  Subject: RE: [agi] Intelligence vs Efficient Intelligence
  
  
  --- John G. Rose [EMAIL PROTECTED] wrote:
   Did you arrive at some sort of unit for intelligence?  Typically
   measurements are constructed of combinations of basic units for
  example 1
   watt = 1 kg * m^2/s^3.  Or is it not a unit but a set of units?
  
  It is a unitless number.  It is measured in bits.
  
  (By some definitions.  I can accept others).
  
  
  -- Matt Mahoney, [EMAIL PROTECTED]
  



-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] Intelligence vs Efficient Intelligence

2007-05-18 Thread John G. Rose
There's Newtonian and relativistic intelligence.  Probably one can model
intelligence formulas after physics, because without physics there are no
bits, so time needs to be in there as well.  Intelligence is affected by the
speed of light as data transmission rates max out in relation to it.  No?
If you have a cognition engine it operates over time and it will have units.

 -Original Message-
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 Sent: Friday, May 18, 2007 9:48 AM
 To: agi@v2.listbox.com
 Subject: RE: [agi] Intelligence vs Efficient Intelligence
 
 
 --- John G. Rose [EMAIL PROTECTED] wrote:
 
  Time has to included maybe?
 
 Now it is getting complicated.  I was thinking of Shane Legg's universal
 intelligence, expressed in terms of the shortest program that could
 achieve
 the same measure.  Of course this only makes sense in the context of
 Turing
 machines, which are infinitely fast.
 
 But it seems a lot of people prefer to measure intelligence in the
 subset of
 environments that are relevant to people, which is a much harder thing
 to
 define.
 
 We already have measures of intelligence for our computers: memory, disk
 space, clock speed, MIPS and MFLOPS on various benchmarks...
 
 
 
   -Original Message-
   From: Matt Mahoney [mailto:[EMAIL PROTECTED]
   Sent: Friday, May 18, 2007 7:55 AM
   To: agi@v2.listbox.com
   Subject: RE: [agi] Intelligence vs Efficient Intelligence
  
  
   --- John G. Rose [EMAIL PROTECTED] wrote:
Did you arrive at some sort of unit for intelligence?  Typically
measurements are constructed of combinations of basic units for
   example 1
watt = 1 kg * m^2/s^3.  Or is it not a unit but a set of units?
  
   It is a unitless number.  It is measured in bits.
  
   (By some definitions.  I can accept others).
  
  
   -- Matt Mahoney, [EMAIL PROTECTED]
  
 
 
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 



RE: [agi] Intelligence vs Efficient Intelligence

2007-05-18 Thread Matt Mahoney
--- John G. Rose [EMAIL PROTECTED] wrote:

 There's Newtonian and relativistic intelligence.  Probably can model
 intelligence formulas after physics because without physics there are no
 bits so time needs to be in there as well.  Intelligence is affected by the
 speed of light as data transmission rates max out in relation to it.  No?
 If you have a cognition engine it operates over time and it will have units.

If we measure intelligence in bits, then we can place limits on what can be
achieved.  Landauer's principle says that each irreversible bit operation
(such as a bit assignment) requires kT ln 2 energy, where k is Boltzmann's
constant and T is the temperature. 
http://en.wikipedia.org/wiki/Landauer's_Principle

At the temperature of the universe, 2.725 K, 
http://en.wikipedia.org/wiki/Cosmic_microwave_background_radiation
each bit operation requires 2.6e-23 Joules.

The mass of the universe is the subject of debate,
http://hypertextbook.com/facts/2006/KristineMcPherson.shtml

so let's assume 25% of critical density, which according to
http://www.astronomynotes.com/cosmolgy/s9.htm is 3H^2/(8 pi G) = 1.06e-26
Kg/m^3 (where H is Hubble's constant and G is the gravitational constant). 
Astronomers mostly agree that the universe is about 4% visible matter, 21%
ordinary dark matter and 75% dark energy responsible for the outward
acceleration of the galaxies.  (I think that dark energy is actually ordinary
gravity.  An observer falling into a black hole will observe nearby objects
appear to accelerate away).  So (returning to the big bang model) for a sphere
of radius 13.7 billion lightyears, this gives a mass of 7.5e52 Kg.

If we convert this mass to energy by E = mc^2 we have 6.75e69 J.  This gives
us 2.6e92 bit operations before the universe reaches thermodynamic
equilibrium.

We must use them wisely.
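
A quick check of the arithmetic above in Python (the cosmological figures are
the rough values quoted in this message, taken as given rather than as precise
data):

import math

# Landauer limit at the CMB temperature, then the total number of irreversible
# bit operations available from the quoted mass-energy of the universe.
k = 1.380649e-23                     # Boltzmann constant, J/K
T = 2.725                            # cosmic microwave background temperature, K
E_bit = k * T * math.log(2)          # energy per irreversible bit operation
print(f"energy per bit operation: {E_bit:.2e} J")          # ~2.6e-23 J

M = 7.5e52                           # mass of the universe as estimated above, kg
c = 2.998e8                          # speed of light, m/s
E_total = M * c**2                   # total mass-energy, J
print(f"total mass-energy: {E_total:.2e} J")               # ~6.7e69 J
print(f"bit operations available: {E_total / E_bit:.1e}")  # ~2.6e92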



-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] Intelligence vs Efficient Intelligence

2007-05-18 Thread John G. Rose
Pretty good calculations :)  

Some thoughts on the topic of units and equations, some may be obvious or
redundant - 

If something was extremely intelligent it would have an exact copy, bit for
bit, of the whole universe in its head.  Maybe that's saying that the
universe is 100% intelligent because the universe is itself.  Having
infinite access time (tachyon?) to each of these bits or any size subset
including multiples of the whole, would be, to say the least, an
intelligence enabler.  But this would be impossible within the universe due
to thermodynamic and physical limitations.

All intelligent entities seem to have some sort of partial representation of
their environment in their memory (KR).  There is time-backwards and
time-forwards management of this representation as the entities operate on
their environment - memory and prediction - that cover intelligent entity
specific time-spans.  The entity, it seems, flips bits and changes complexity
and/or entropy (both Shannon and thermodynamic entropy) in its environment.


There is a quantum element to the universe bit set.  Particle/wave duality
changes things at the quantum level.  Quantum intelligence is either a
component of intelligence or a whole other type of intelligence.
Intelligence equations could be both digital and analog.

Intelligent things have more order and systems structure.  Complexity/chaos
environmental change capability needs to be in the equation - is this some
sort of potential energy like intelligence?  Representational accuracy in
the entity's memory as well as predictive and look-back ability and
time-span slopes (simulation/extrapolation) and access time/bandwidth may
need to be equation factors too.  Environmental data I/O sampling rate,
quality and spectrum coverage may also be variables in describing an
entity's intelligence.

John

 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 If we measure intelligence in bits, then we can place limits on what can
 be
 achieved.  Landauer's principle says that each irreversible bit
 operation
 (such as a bit assignment) requires kT ln 2 energy, where k is
 Boltzmann's
 constant and T is the temperature.
 http://en.wikipedia.org/wiki/Landauer's_Principle
 
 At the temperature of the universe, 2.725 K,
 http://en.wikipedia.org/wiki/Cosmic_microwave_background_radiation
 each bit operation requires 2.6e-23 Joules.
 
 The mass of the universe is the subject of debate,
 http://hypertextbook.com/facts/2006/KristineMcPherson.shtml
 
 so let's assume 25% of critical density, which according to
 http://www.astronomynotes.com/cosmolgy/s9.htm is 3H^2/(8 pi G) = 1.06e-
 26
 Kg/m^3 (where H is Hubble's constant and G is the gravitational
 constant).
 Astronomers mostly agree that the universe is about 4% visible matter,
 21%
 ordinary dark matter and 75% dark energy responsible for the outward
 acceleration of the galaxies.  (I think that dark energy is actually
 ordinary
 gravity.  An observer falling into a black hole will observe nearby
 objects
 appear to accelerate away).  So (returning to the big bang model) for a
 sphere
 of radius 13.7 billion lightyears, this gives a mass of 7.5e52 Kg.
 
 If we convert this mass to energy by E = mc^2 we have 6.75e69 J.  This
 gives
 us 2.6e92 bit operations before the universe reaches thermodynamic
 equilibrium.
 
 We must use them wisely.



Re: [agi] Intelligence vs Efficient Intelligence

2007-05-18 Thread Benjamin Goertzel

According to my view,

-- raw intelligence would be measured in bits

-- efficient intelligence would ultimately be measured in terms such as
bits / (4D volume of a region of spacetime)

As noted, the Bekenstein bound thus places an upper limit on efficient
intelligence according to current physics.  This upper limit rules out wildly
inefficient AI approaches such as AIXI.
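
The bound itself isn't spelled out above; for concreteness, a minimal sketch
(Python) of the usual statement of the Bekenstein bound, i.e. the maximum
information in bits inside a sphere of radius R containing total energy E,
applied to an assumed brain-sized region purely for illustration:

import math

# Usual form of the Bekenstein bound: I <= 2*pi*R*E / (hbar*c*ln 2) bits for a
# sphere of radius R (metres) containing total energy E (joules).
# The brain-sized inputs below are assumptions for illustration, not data.
hbar = 1.0546e-34        # reduced Planck constant, J*s
c = 2.998e8              # speed of light, m/s

def bekenstein_bits(radius_m, mass_kg):
    energy = mass_kg * c ** 2
    return 2 * math.pi * radius_m * energy / (hbar * c * math.log(2))

print(f"{bekenstein_bits(0.1, 1.4):.1e} bits")   # on the order of 1e42-1e43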

-- Ben G


On 5/18/07, John G. Rose [EMAIL PROTECTED] wrote:


Pretty good calculations :)

Some thoughts on the topic of units and equations, some may be obvious or
redundant -

If something was extremely intelligent it would have an exact copy, bit
for
bit, of the whole universe in its head.  Maybe that's saying that the
universe is 100% intelligent because the universe is itself.  Having
infinite access time (tachyon?) to each of these bits or any size subset
including multiples of the whole, would be, to say the least, an
intelligence enabler.  But this would be impossible within the universe
due
to thermodynamic and physical limitations.

All intelligent entities seem to have some sort of partial representation
of
their environment in their memory (KR).  There is time-backwards and
time-forwards management of this representation as the entities operate on
their environment - memory and prediction - that cover intelligent entity
specific time-spans.  The entity it seems flips bits and changes
complexity
and/or entropy (both Shannon and thermodynamic entropy) in its
environment.


There is a quantum element to the universe bit set.  Particle/wave duality
changes things at the quantum level.  Quantum intelligence is either a
component of intelligence or a whole other type of intelligence.
Intelligence equations could be both digital and analog.

Intelligent things have more order and systems
structure.  Complexity/chaos
environmental change capability needs to be in the equation - is this some
sort of potential energy like intelligence?  Representational accuracy
in
the entity's memory as well as predictive and look-back ability and
time-span slopes (simulation/extrapolation) and access time/bandwidth may
need to be equation factors too.  Environmental data I/O sampling rate,
quality and spectrum coverage may also be variables in describing an
entity's intelligence.

John

 From: Matt Mahoney [mailto:[EMAIL PROTECTED]

 If we measure intelligence in bits, then we can place limits on what can
 be
 achieved.  Landauer's principle says that each irreversible bit
 operation
 (such as a bit assignment) requires kT ln 2 energy, where k is
 Boltzmann's
 constant and T is the temperature.
 http://en.wikipedia.org/wiki/Landauer's_Principle

 At the temperature of the universe, 2.725 K,
 http://en.wikipedia.org/wiki/Cosmic_microwave_background_radiation
 each bit operation requires 2.6e-23 Joules.

 The mass of the universe is the subject of debate,
 http://hypertextbook.com/facts/2006/KristineMcPherson.shtml

 so let's assume 25% of critical density, which according to
 http://www.astronomynotes.com/cosmolgy/s9.htm is 3H^2/(8 pi G) = 1.06e-
 26
 Kg/m^3 (where H is Hubble's constant and G is the gravitational
 constant).
 Astronomers mostly agree that the universe is about 4% visible matter,
 21%
 ordinary dark matter and 75% dark energy responsible for the outward
 acceleration of the galaxies.  (I think that dark energy is actually
 ordinary
 gravity.  An observer falling into a black hole will observe nearby
 objects
 appear to accelerate away).  So (returning to the big bang model) for a
 sphere
 of radius 13.7 billion lightyears, this gives a mass of 7.5e52 Kg.

 If we convert this mass to energy by E = mc^2 we have 6.75e69 J.  This
 gives
 us 2.6e92 bit operations before the universe reaches thermodynamic
 equilibrium.

 We must use them wisely.





Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Shane Legg

Ben,

According to this distinction, AIXI and evolution have high intelligence
but low efficient intelligence.


Yes, and in the case of AIXI it is presumably zero given that the resource
consumption is infinite.  Evolution on the other hand is just efficient enough
that when implemented on a crazy enough scale the results can be pretty
amazing.


If this hypothesis is correct then AIXI and the like don't really tell us
much about what matters, which is the achievement of efficient intelligence
in relevant real-world contexts...



That might well be true.

I don't want to give the impression that I don't care about the efficiency of
intelligence.  On any given hardware the most intelligent system will be the
one that runs the algorithm with the greatest intelligence efficiency.  Thus,
if I want to see very intelligent systems then I need to care about how
efficient they are.  Nevertheless, it is still the end product raw intelligence
generated by the system that really excites me, rather than statistics on its
internal efficiency.

Shane


Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Benjamin Goertzel

  Nevertheless, it is still the end product raw intelligence
generated by the system that really excites me, rather than statistics on its
internal efficiency.

Shane



Yeah, I agree with that.  But like I said, the question is whether in the
real world, efficiency needs to be considered as essential in order to
actually GET to systems that display interesting raw intelligence.  I strongly
suspect this is the case.

-- Ben G


Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Pei Wang

Ben and Shane,

I started this discussion with the hope of showing people that there are
actually different understandings (or call them "definitions") of
"intelligence", each with its own intuitions and motivations, and they lead
to different destinations and serve different purposes. These goals
cannot replace each other, but since they are related, we still
benefit from discussions like this. Though we won't reach a consensus
soon, the discussions make our differences better understood.

As for which of the notions fits the word "intelligence" better, it is
a less important issue, though it is still an issue. Though I'm not a
native English speaker, this time my understanding is not necessarily
wrong. At least I'm not the only one who feels uncomfortable calling a
thermostat or a brute-force algorithm "intelligent" (though far below
human level).

Our core difference is not in our choice of word, nor just about the
role efficiency plays in intelligence. Since the very beginning of
my research I have had the feeling that AI is fundamentally different from
traditional computer science/technology, and this difference is in the
theoretical foundation, rather than in the hardware (whether to use
von Neumann architecture ...) or software (which programming language
to use ...) details. This is where my definition of intelligence comes
from.

To me, traditional computer science (CS) studies what is the best
solution to a problem if the system has SUFFICIENT knowledge and
resources, and AI is about what is the best solution to a problem if
the system has INSUFFICIENT knowledge and resources. I also believe
that traditional AI failed largely because it conceptually stayed too
close to CS.

In your definitions, both AI and CS are doing problem-solving, and
all computer systems will be called intelligent (though to various
degrees). I feel that in this way the most important feature of
intelligence will be lost among the less important features.

Again, I'm not trying to convince you, but to make myself more clear.

Pei



Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Benjamin Goertzel

Pei,

I think it all comes out in the wash, really ;-)

You are talking about

"insufficient knowledge and resources"

and my discussion of efficiency only pertains to the "insufficient resources"
part.

But I think insufficient knowledge comes along automatically with

"insufficient resources + complex goals"

If the goals are complex enough, and the resources are few enough, then
obviously the system will have insufficient knowledge for achieving the
goals
via traditional computer science means.  I am sure one could prove theorems
in this regard.

So, I see solving complex goals based on insufficient knowledge as a
necessary consequence of solving complex goals based on limited resources.

With limited resources a system cannot possibly gather, store, or manipulate
sufficient knowledge.

Thus the need for uncertainty management to have a central role in
intelligence... which we both agree on...

-- Ben G

On 5/17/07, Pei Wang [EMAIL PROTECTED] wrote:


Ben and Shane,

I started this discussion with the hope to show people that there are
actually different understandings (or call them definitions ) of
intelligence, each with its intuitions and motivations, and they lead
to different destinations and serve different purposes. These goals
cannot replace each other, but since they are related, we still
benefit from discussions like this. Though we won't reach a consensus
soon, the discussions make our difference better understood.

As for which of the notions fit the word intelligence better, it is
a less important issue, though it is still an issue. Though I'm not a
native English speaker, this time my understanding is not necessarily
wrong. At least I'm not the only one who feel uncomfortable to call a
thermostat or a brute-force algorithm intelligent (though far below
human level).

Our core difference is not in our choice of word, nor just about the
role efficiency plays in intelligence. Since the very beginning of
my research I have the feeling that AI is fundamentally different from
traditional computer science/technique, and this difference is in the
theoretical foundation, rather than in the hardware (whether to use
von Neumann architecture ...) or software (which programming language
to use ...) details. This is where my definition of intelligence come
from.

To me, traditional computer science (CS) studies what is the best
solution to a problem if the system has SUFFICIENT knowledge and
resources, and AI is about what is the best solution to a problem if
the system has INSUFFICIENT knowledge and resources. I also believe
that traditional AI failed largely because it conceptually stayed too
closely to CS.

In your definitions, both AI and CS are doing problem-solving, and
all computer systems will be called intelligent (though to various
degrees). I feel that in this way the most important feature of
intelligence will be lost among the less important features.

Again, I'm not trying to convince you, but to make myself more clear.

Pei

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936

Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Pei Wang

On 5/17/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:


Pei,

I think it all comes out in the wash, really ;-)


You are going beyond my English capability. ;-)


You are talking about

insufficient knowledge and resources

and my discussion of efficiency only pertains to the insufficient
resources part.

But I think insufficient knowledge comes along automatically with

insufficient resources + complex goals


If you define complex goals by algorithmic complexity, it won't.


If the goals are complex enough, and the resources are few enough, then
obviously the system will have insufficient knowledge for achieving the
goals via traditional computer science means.  I am sure one could prove 
theorems
in this regard.


I'm not talking about control knowledge, but domain knowledge. Think
about a theorem proving system: all the knowledge necessary for
proving a statement is in the axioms and valid inference rules, so the
system has sufficient knowledge.  However, the system may still have
insufficient resources + complex goals.
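
For concreteness, here is a toy forward-chaining prover in that spirit (purely
illustrative code; the chain of rules is invented and this is not NARS or any
system discussed here). It has every axiom and rule needed to prove the goal,
yet a small step budget can still leave the goal unproven -- sufficient
knowledge, insufficient resources:

# Toy forward-chaining prover: all needed knowledge is present,
# but a small step budget may still be too little to reach the goal.

def forward_chain(axioms, rules, goal, max_steps):
    """rules: list of (premises, conclusion) pairs, applied modus-ponens style."""
    known = set(axioms)
    for _ in range(max_steps):
        new = {c for (ps, c) in rules if all(p in known for p in ps) and c not in known}
        if not new:
            break                      # closure reached within the budget
        known |= new
        if goal in known:
            return True                # proved within the resource budget
    return goal in known               # may be False purely for lack of steps

# Hypothetical chain p0 -> p1 -> ... -> p100: every fact is derivable in principle.
rules = [((f"p{i}",), f"p{i+1}") for i in range(100)]
print(forward_chain({"p0"}, rules, "p100", max_steps=5))    # False: budget too small
print(forward_chain({"p0"}, rules, "p100", max_steps=200))  # True: enough resources
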


So, I see solving complex goals based on insufficient knowledge as a
necessary consequence of solving complex goals based on limited resources

With limited resources a system cannot possibly gather, store, or manipulate
sufficient knowledge.


Not necessarily, as in the above example.

Furthermore, insufficient resources is a stronger requirement than
limited resources --- a system may have limited resources, but
still sufficient for the given problem.

Pei

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Richard Loosemore

Pei Wang wrote:

On 5/17/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:


Pei,

I think it all comes out in the wash, really ;-)


You are going beyond my English capability. ;-)


Translation:  It doesn't matter one way or the other.  ;-)



Richard Loosemore.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Josh Treadwell

Mark, it seems that you're missing the point.  We as humans aren't
ABSOLUTELY CERTAIN of anything.  But we are perfectly capable of operating
on the fine line between assumed certainty and uncertainty.  We KNOW that
molecules are made up of bonded atoms, but past a certain point, we can't
say what a basic unit of energy is.  But we know what a molecule looks like
so we can bond atoms to form them.  Much is the same with intelligence.  We
simply mimic behaviors and find parallels in code and systems.  If we can
prove that a turing machine is universal, or that a code base is universal,
then there must be some configuration of code that is capable of
representing every subatomic interaction occurring in our universe down to
the most minute detail, thus duplicating our experience entirely (regardless
of the fact that this would take several universe lifetimes to do
manually).  So if this is plausible, it's perfectly sound to discuss
optimization of the stated code.  Our brains (and the emergent trait we
fancy, intelligence) are a much smaller piece of this (computable)
chemical puzzle.  Since they were derived through evolution (the high
intelligence, low efficiency topic), there are many inefficient mechanisms
that have evolved for reasons other than exponentially increasing our brains'
processing power.  Why is this hard to grasp?

In terms of your investment question, it's all a matter of needs.  That is a
simple risk assessment.  To any intelligent being, the money gained is only
a means to an end.  To an AI interested in furthering its knowledge, or
bettering mankind (or machine kind), money simply means more energy, power,
resources, etc.  Ultimately, if your goal is just to amass money without any
reasoning, your goal system is flawed.  Any well designed AI system should
not have the masturbatory tendencies to take unjustified risks.  We're
talking about multiple priority levels here.  The Nash equilibrium would be
sought after on many levels.  The computer is going to give the system its
best shot and guess.
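
To spell out what a bare-bones risk assessment of the $10,000 question could
look like (the probabilities, payoffs and savings rate below are invented for
illustration; this is not investment advice or any particular system's design):

# Compare expected utility of investing vs. staying in savings,
# with a concave (log) utility to model risk aversion.
import math

def utility(wealth):
    return math.log(wealth)            # diminishing returns on money

scenarios = [                          # (probability, multiplier on the $10,000) -- assumed numbers
    (0.4, 1.5),                        # market keeps rising
    (0.3, 1.0),                        # flat
    (0.3, 0.5),                        # bubble bursts
]
savings_rate = 1.03                    # assumed 3% on the savings account

eu_invest  = sum(p * utility(10_000 * m) for p, m in scenarios)
eu_savings = utility(10_000 * savings_rate)

print("invest" if eu_invest > eu_savings else "keep it in savings")
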

On 5/17/07, Mike Tintner [EMAIL PROTECTED] wrote:



Pei: AI is about what is the best solution to a problem if
 the system has INSUFFICIENT knowledge and resources.

Just so. I have just spent the last hour thinking about this area, and you
have spoken the line I allotted to you almost perfectly.

Your definition is a CONTRADICTION IN TERMS.

If a system has insufficient knowledge there is NO BEST, no optimal,
solution to any decision - that's a fiction.
If a system is uncertain, there is no certain way to deal with it.
If there is no right answer, there really is no right answer.

The whole of science has spent the last hundred years caught in the above
contradiction. Recognizing uncertainty and risk everywhere - and then
trying
to find a certain, right, optimal way to deal with them. This runs
right through science. Oh yes, we can see that life is incredibly
problematic and uncertain...  but this is the certain, best way to deal
with
it.

So science has developed games theory - arguably the most important theory
of behaviour - and then spent its energies trying to find the right way to
play games. The perfect equilibrium etc. And missed the whole point. There
is no right way to play games - on the few occasions that one is
discovered,
(and there are a few occasions and situations), people STOP PLAYING IT -
because it ceases to be a proper game, and it ceases to be a model of real
life. In one sense, it's no wonder Nash went mad.

But scientists and techies so badly want a right answer, they haven't
been
able to admit there isn't one.

What made [Stephen Jay] Gould unique as a scientist was that he had a
historian's mind and not an engineer's. He liked mess, confusion and
contradiction. Most scientists, in my experience, are the opposite. They
are
engineers at heart. They think the world is made up of puzzles, and
somewhere out there is the one correct solution to every puzzle.

Andrew Brown

Hey, this is a fundamentally pluralistic world, not a [behaviourally]
monistic one. Deal with it:

Here's a simple problem for you with insufficient knowledge and
resources:
you have $10,000 to invest tomorrow. You're thinking of investing in a
Chinese stockmarket mutual fund, because the market is on the up and you
reckon there could be a lot of money still to be made. (And there really
could). On the other hand, maybe it's a crazy bubble about to burst, and
you
could lose a lot of money too. So what do you do tomorrow - buy or do
nothing - invest or keep your money in that savings account?

What's the best decision, or the best way of reaching a decision, or the
best way of finding a way of reaching a decision - in the next 24 hours
(or,
in the end, in any time period)? [And what do you think, Ben?]

If you don't have the best answer, then your whole approach both to
defining and implementing intelligence is fundamentally flawed.  A few
hundred million investors will be waiting to hear your 

Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Matt Mahoney
--- Pei Wang [EMAIL PROTECTED] wrote:
 To me, traditional computer science (CS) studies what is the best
 solution to a problem if the system has SUFFICIENT knowledge and
 resources, and AI is about what is the best solution to a problem if
 the system has INSUFFICIENT knowledge and resources. I also believe
 that traditional AI failed largely because it conceptually stayed too
close to CS.

I think for resources it's the other way around.  CS is concerned with the
space and time complexity of algorithms.  I believe the failure of AI is due
to lack of these resources.  The brain has about 10^15 bits of memory
(counting synapses and using common neural models) and computes 10^16
operations per second (assuming 10 bits/second information rate, higher if
individual pulses are significant).
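
For readers wondering where figures like these can come from, one common
back-of-envelope derivation runs as follows (round numbers assumed; actual
synaptic precision and signaling rates are debated):

# Assumed round numbers: ~10^11 neurons x ~10^4 synapses each,
# ~1 bit per synapse, synapses updating on the order of 10 Hz.
neurons             = 1e11
synapses_per_neuron = 1e4
bits_per_synapse    = 1          # crude; real synaptic precision is debated
update_rate_hz      = 10         # roughly the "10 bits/second" rate above

memory_bits = neurons * synapses_per_neuron * bits_per_synapse   # ~1e15 bits
ops_per_sec = neurons * synapses_per_neuron * update_rate_hz     # ~1e16 ops/s

print(f"memory ~ {memory_bits:.0e} bits, throughput ~ {ops_per_sec:.0e} ops/s")
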

We observe that many problems that humans solve, like arithmetic or chess or
logical inference, don't require lots of computing power.  So we guess that
this might be true for everything else.  But I don't think that is true.  A
large share of your resting metabolism is used to power your brain.  Animals with smaller
brains can survive on less food.  With this evolutionary pressure, why did we
evolve such large brains if the same computation could be done on smaller
ones?

Most software engineers will tell you to get your program working first, then
optimize later.  But in AI, what choice do you have?  So we put all our effort
into abstract knowledge representation and ignore the hard parts like language
and vision.  Then where will that knowledge come from?

What if you had sufficient computing power?  Then how would you solve AGI?



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Pei Wang

On 5/17/07, Mike Tintner [EMAIL PROTECTED] wrote:


Pei: AI is about what is the best solution to a problem if
 the system has INSUFFICIENT knowledge and resources.

Just so. I have just spent the last hour thinking about this area, and you
have spoken the line I allotted to you almost perfectly.

Your definition is a CONTRADICTION IN TERMS.

If a system has insufficient knowledge there is NO BEST, no optimal,
solution to any decision - that's a fiction.
If a system is uncertain, there is no certain way to deal with it.
If there is no right answer, there really is no right answer.


Mike,

With insufficient knowledge and resources, a system cannot get answers
that are ABSOLUTELY correct and optimal (with respect to the problem
only), but can still get answers that are RELATIVELY correct (with
respect to available knowledge) and optimal (with respect to available
resources). BEST was used in this sense in my previous message.

In a sense, the whole field of reasoning under uncertainty is about
finding certainty in handling uncertainty, and the whole field of
decision making is about finding the best answer when the right answer
is unknown. There is no contradiction, but different levels of
descriptions.
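
A minimal sketch of an answer that is optimal with respect to available
resources (illustrative only; not a description of NARS or Novamente): an
anytime search that returns the best candidate found before its time budget
runs out, so answer quality scales with the resources actually available.

import time, random

def anytime_best(candidates, score, budget_seconds):
    best, best_score = None, float("-inf")
    deadline = time.monotonic() + budget_seconds
    for c in candidates:                       # possibly far too many to finish
        if time.monotonic() > deadline:
            break                              # resources exhausted: stop here
        s = score(c)
        if s > best_score:
            best, best_score = c, s
    return best, best_score                    # relatively best, not absolutely best

# Hypothetical use: pick a good (not provably best) candidate in 10 ms.
candidates = (random.random() for _ in range(10_000_000))
print(anytime_best(candidates, score=lambda x: -abs(x - 0.5), budget_seconds=0.01))
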

Pei

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Pei Wang

Richard,

Thanks!

But to me, it is 差之毫厘,谬以千里 --- a Chinese idiom meaning An error the
breadth of a single hair (in working definitions) can lead you a
thousand miles astray (in research results) --- of course, the words
in parentheses are mine ;-)

Pei

On 5/17/07, Richard Loosemore [EMAIL PROTECTED] wrote:

Pei Wang wrote:
 On 5/17/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:

 Pei,

 I think it all comes out in the wash, really ;-)

 You are going beyond my English capability. ;-)

Translation:  It doesn't matter one way or the other.  ;-)



Richard Loosemore.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936

Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Pei Wang

On 5/17/07, Matt Mahoney [EMAIL PROTECTED] wrote:

--- Pei Wang [EMAIL PROTECTED] wrote:
 To me, traditional computer science (CS) studies what is the best
 solution to a problem if the system has SUFFICIENT knowledge and
 resources, and AI is about what is the best solution to a problem if
 the system has INSUFFICIENT knowledge and resources. I also believe
 that traditional AI failed largely because it conceptually stayed too
close to CS.

I think for resources it's the other way around.  CS is concerned with the
space and time complexity of algorithms.  I believe the failure of AI is due
to lack of these resources.  The brain has about 10^15 bits of memory
(counting synapses and using common neural models) and computes 10^16
operations per second (assuming 10 bits/second information rate, higher if
individual pulses are significant).


Matt,

We clearly have different ideas about what intelligence means. More
resources will surely make a system more capable, even with the same
internal mechanism, but to me, it will not make the system more
intelligent.


What if you had sufficient computing power?  Then how would you solve AGI?


For problems where we already have sufficient knowledge and resources, we
don't need intelligence. We'll never have sufficient computing power
for all of our problems, so intelligence will always be needed here or
there.

Pei

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Mike Tintner

Pei,

I don't think these distinctions between terms really matter in the final 
analysis - right, optimal etc. What I'm assuming, however you define it, 
is that you are saying that AI can find one solution that is better than 
others under conditions of insufficient knowledge/uncertainty - and that 
your, Pei's, system is set up to find one answer for problems involving 
uncertainty (even if the system first has to learn how to find it).


And that's what I'm arguing against - that's what I'm saying is 
fundamentally flawed. There isn't one answer for such problems, there are 
potentially many, and you can't be sure which is best or better, or indeed 
very often whether any will work at all.  There isn't one way to invest, for
example. And any intelligence that can only handle problems closed-endedly 
as opposed to open-endedly, isn't a truly adaptive intelligence at all, even 
if it may appear that way superficially, and simply won't work.


Science certainly is caught in the contradiction I have outlined. A great 
number of thinkers are caught in that contradiction. As far as I can see you 
are too.


In any case, I still invite you to outline your approach to the
investment problem I set. It is an absolutely central problem for AGI. How 
yours or any AGI will deal with such problems - real, real-world problems - 
is surely one of the most central things we should be discussing - the most 
important issue of all.


But, from experience, I know that you guys are very unlikely to respond when 
put to the test - so if you don't want to, I will not persist here.




Mike,

With insufficient knowledge and resources, a system cannot get answers
that are ABSOLUTELY correct and optimal (with respect to the problem
only), but can still get answers that are RELATIVELY correct (with
respect to available knowledge) and optimal (with respect to available
resources). BEST was used in this sense in my previous message.

In a sense, the whole field of reasoning under uncertainty is about
finding certainty in handling uncertainty, and the whole field of
decision making is about finding the best answer when the right answer
is unknown. There is no contradiction, but different levels of
descriptions.

Pei

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;









-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Mike Tintner
Josh: Any well designed AI system should not have the masturbatory tendencies
to take unjustified risks. 

Josh,

Jeez, you guys will not face reality. MOST of the problems we deal with involve 
risks (and uncertainty). That's what human intelligence does most of the time - 
that's what any adaptive intelligence does and will have to do - deal with
problems involving risks and uncertainty. Even your superduperintelligence will 
have to face those too. (Who knows how those pesky, rebellious humans may try 
to resist its benevolent decisions?) Let me quote again:
Most of the problems that we face in our everyday lives are ill-defined 
problems. In contrast, psychologists have focussed mainly on well-defined 
problems. Why have they done this? One important reason is because well-defined 
problems have a best strategy for their solution. As a result it is usually 
easy to identify the errors and deficiencies in the strategies adopted by human 
problem-solvers.

Michael Eysenck, Principles of Cognitive Psychology, East Sussex: Psychology 
Press 2001

You guys are similarly copping out - dealing with the easy rather than the hard 
(risk and uncertainty) problems - the ones with the neat answers, as opposed to
the ones that are open-ended.  The AI as opposed to the AGI problems.

Won't somebody actually deal with the problem  - how will your AGI system 
decide to invest or not to invest $10,000 in a Chinese mutual fund tomorrow? 
(You guys are supposed to be in the problem-solving business).



Mark, it seems that you're missing the point.  We as humans aren't ABSOLUTELY 
CERTAIN of anything.  But we are perfectly capable of operating on the fine 
line between assumed certainty and uncertainty.  We KNOW that molecules are 
made up of bonded atoms, but past a certain point, we can't say what a basic
unit of energy is.  But we know what a molecule looks like so we can bond atoms 
to form them.  Much is the same with intelligence.  We simply mimic behaviors 
and find parallels in code and systems.  If we can prove that a turing machine 
is universal, or that a code base is universal, then there must be some 
configuration of code that is capable of representing every subatomic 
interaction occurring in our universe down to the most minute detail, thus 
duplicating our experience entirely (regardless of the fact that this would 
take several universe lifetimes to do manually).  So if this is plausible, it's 
perfectly sound to discuss optimization of the stated code.  Our brains (and 
the emergent trait we fancy, intelligence) are a much smaller piece of
this (computable) chemical puzzle.  Since they were derived through evolution 
(the high intelligence, low efficiency topic), there are many inefficient 
mechanisms that have evolved for reasons other than exponentially increasing 
our brains' processing power.  Why is this hard to grasp?

In terms of your investment question, it's all a matter of needs.  That is a 
simple risk assessment.  To any intelligent being, the money gained is only a
means to an end.  To an AI interested in furthering its knowledge, or
bettering mankind (or machine kind), money simply means more energy, power, 
resources, etc.  Ultimately, if your goal is just to amass money without any 
reasoning, your goal system is flawed.  Any well designed AI system should not
have the masturbatory tendencies to take unjustified risks.  We're talking
about multiple priority levels here.  The Nash equilibrium would be sought
after on many levels.  The computer is going to give the system its best shot
and guess.



  On 5/17/07, Mike Tintner [EMAIL PROTECTED] wrote:

Pei: AI is about what is the best solution to a problem if
 the system has INSUFFICIENT knowledge and resources.

Just so. I have just spent the last hour thinking about this area, and you
have spoken the line I allotted to you almost perfectly.

Your definition is a CONTRADICTION IN TERMS.

If a system has insufficient knowledge there is NO BEST, no optimal,
solution to any decision - that's a fiction. 
If a system is uncertain, there is no certain way to deal with it.
If there is no right answer, there really is no right answer.

The whole of science has spent the last hundred years caught in the above
contradiction. Recognizing uncertainty and risk everywhere - and then 
trying 
to find a certain, right, optimal way to deal with them. This runs
right through science. Oh yes, we can see that life is incredibly
problematic and uncertain...  but this is the certain, best way to deal 
with 
it.

So science has developed games theory - arguably the most important theory
of behaviour - and then spent its energies trying to find the right way to
play games. The perfect equilibrium etc. And missed the whole point. There 
is no right way to play games - on the few occasions that one is discovered,
(and there are a few occasions and situations), 

Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Benjamin Goertzel


*Won't somebody actually deal with the problem  - how will your AGI system
decide to invest or not to invest $10,000 in a Chinese mutual fund tomorrow?
(You guys are supposed to be in the problem-solving business).*



Look, a Novamente-based AGI system could confront this problem in 1's of
different ways,
just like a human could  That's the whole point.  It will figure out how
to deal with this problem by
itself.  If I could explain in advance how NM will deal with some particular
problem, then NM
wouldn't be an AGI system, it would be a narrow AI system 

The correct question is: how will your AGI system learn to understand the
statement of a problem
and figure out its own creative solution to the problem, and implement that
solution.

That is the problem Pei and I, in our different ways, are working on.  Not
pre-figuring the solutions
our systems will come up with for any particular problem.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936

Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread J Storrs Hall, PhD
On Thursday 17 May 2007 03:36:36 pm Matt Mahoney wrote:

 What if you had sufficient computing power.  Then how would you solve AGI?

This is actually the basis of my approach. I just assume the brain has on the 
order of 1K times more processing power than I have to experiment with, so I 
look for applications (e.g. Tommy) where I could arguably demonstrate the 
basic mechanisms using 0.1% of the total horsepower. 

Luckily, the brain is inefficient in this sense: like the body, you rarely use 
all of it at full power (or for us sedentary Americans, you never use any of 
it at full power, and you rarely use much of it :-).

So I expect to be doing 10Kx10K matrix mults where other AI programs are
doing CARs and CDRs. What I expect to get from that is a major reduction in 
brittleness.
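
Rough arithmetic behind that expectation, reusing the 10^16 operations-per-second
brain figure quoted earlier in the thread (all numbers are assumed round estimates):

# Cost of one dense 10K x 10K matrix multiply vs. 0.1% of the brain's horsepower.
n = 10_000
matmul_flops = 2 * n**3            # ~2e12 multiply-adds per 10Kx10K matmul
brain_ops    = 1e16                # Matt's estimate, operations per second
budget_ops   = 0.001 * brain_ops   # "using 0.1% of the total horsepower"

print(f"one matmul: {matmul_flops:.1e} ops")
print(f"0.1% budget allows ~{budget_ops / matmul_flops:.0f} such matmuls per second")
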

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread J Storrs Hall, PhD
On Thursday 17 May 2007 04:42:33 pm Mike Tintner wrote:
 Won't somebody actually deal with the problem  - how will your AGI system 
decide to invest or not to invest $10,000 in a Chinese mutual fund tomorrow? 
(You guys are supposed to be in the problem-solving business).

Au contraire. Mainstream AI is in the problem-solving business. We're in the 
business of trying to figure out how to build a machine that can *learn* to 
solve problems.

How would a human being decide to invest or not in a mutual fund? If he tried 
to decide based on a small handful of formal definitions and heuristics, he'd 
have a fair chance of losing money. Indeed, it's not uncommon at all for 
humans to lose money with attempted investments. Thus your problem has some 
smell of the superhuman human fallacy that has plagued AI for lo these many 
years.

In real life, the humans who make good investments more often than not do so 
by dint of experience -- their own experiments, and watching other investors 
and gaining second-hand experience. This is the way an AI would have to do 
it. There is no magic formula here -- just lots of hard work. 
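
As a toy illustration of learning by dint of experience (the simulated market
below is invented, and no one's AGI design is implied): an epsilon-greedy agent
that learns whether investing or holding pays better purely from its own trials.

import random

def simulated_return(action):          # assumed toy market, unknown to the agent
    return random.gauss(0.08, 0.30) if action == "invest" else 0.03

actions  = ["invest", "hold"]
estimate = {a: 0.0 for a in actions}
count    = {a: 0 for a in actions}

for trial in range(5000):
    # explore 10% of the time, otherwise exploit the current best estimate
    a = random.choice(actions) if random.random() < 0.1 else max(actions, key=estimate.get)
    r = simulated_return(a)
    count[a] += 1
    estimate[a] += (r - estimate[a]) / count[a]   # running mean of observed returns

print(estimate)   # after experience, the agent's own view of which choice pays
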

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Mark Waser
Ben,

Why are you still encouraging an obvious troll?
  - Original Message - 
  From: Benjamin Goertzel 
  To: agi@v2.listbox.com 
  Sent: Thursday, May 17, 2007 4:47 PM
  Subject: Re: [agi] Intelligence vs Efficient Intelligence


Won't somebody actually deal with the problem  - how will your AGI system 
decide to invest or not to invest $10,000 in a Chinese mutual fund tomorrow? 
(You guys are supposed to be in the problem-solving business).



  Look, a Novamente-based AGI system could confront this problem in 1's of 
different ways,
  just like a human could  That's the whole point.  It will figure out how 
to deal with this problem by 
  itself.  If I could explain in advance how NM will deal with some particular 
problem, then NM
  wouldn't be an AGI system, it would be a narrow AI system 

  The correct question is: how will your AGI system learn to understand the 
statement of a problem 
  and figure out its own creative solution to the problem, and implement that 
solution.

  That is the problem Pei and I, in our different ways, are working on.  Not 
pre-figuring the solutions
  our systems will come up with for any particular problem. 
   
  -- Ben


--
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936

Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread J Storrs Hall, PhD
On Thursday 17 May 2007 05:36:17 pm Mike Tintner wrote:

 You don't start a creative process with the solution, or the kind of 
solution you reckon you need, i.e. in this case, the kind of architectures 
that you reckon will bring about AGI. 

Wrong. Technological innovations are quite frequently made by approaching a 
problem with a given technique that one has reason to think will work, and 
refining and adapting it until it does. 

The Wright brothers came to the problem of a flying machine with the key idea 
of bolting a motor/airscrew onto a glider. Each part existed already--they 
refined the combination until it worked. 

The Apollo project attacked the idea of going to the moon using liquid-fueled 
rockets. Lots of scale-up, re-arrangement of parts, etc, but the basic idea 
was just pushed along until it worked.

We're all starting the attack on the AI problem with the assumption that we'll 
do it by writing programs for electronic digital stored-program computers. If 
your comment were correct we should all be second-guessing this assumption 
and worrying about whether we shouldn't be trying networks of op-amps 
instead. But the comment is historically incorrect -- because the people who 
have the right knowledge to solve a new, big, technical problem are exactly 
the ones who are going to take a technique and think, "Hey, I could make this 
work on that." Then they push on it for ten years and voila.

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936


Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Benjamin Goertzel

Yeah, Mark, you have a good point.

Mike Tintner, I'm going to once again make an effort to
stop succumbing to the childish urge
to reply to your messages, when we obviously are not communicating
in a useful way in this context...  ;-)

-- Ben

On 5/17/07, Mark Waser [EMAIL PROTECTED] wrote:


 Ben,

Why are you still encouraging an obvious troll?

- Original Message -
*From:* Benjamin Goertzel [EMAIL PROTECTED]
*To:* agi@v2.listbox.com
*Sent:* Thursday, May 17, 2007 4:47 PM
*Subject:* Re: [agi] Intelligence vs Efficient Intelligence

  *Won't somebody actually deal with the problem  - how will your AGI
 system decide to invest or not to invest $10,000 in a Chinese mutual fund
 tomorrow? (You guys are supposed to be in the problem-solving business).
 *


Look, a Novamente-based AGI system could confront this problem in 1's
of different ways,
just like a human could  That's the whole point.  It will figure out
how to deal with this problem by
itself.  If I could explain in advance how NM will deal with some
particular problem, then NM
wouldn't be an AGI system, it would be a narrow AI system 

The correct question is: how will your AGI system learn to understand the
statement of a problem
and figure out its own creative solution to the problem, and implement
that solution.

That is the problem Pei and I, in our different ways, are working on.  Not
pre-figuring the solutions
our systems will come up with for any particular problem.

-- Ben

--
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

--
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936

Re: [agi] Intelligence vs Efficient Intelligence

2007-05-17 Thread Benjamin Goertzel

In fact, I'll be offline for the next couple days, which will make it easy!

On 5/17/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:



Yeah, Mark, you have a good point.

Mike Tintner, I'm going to once again make an effort to
stop succumbing to the childish urge
to reply to your messages, when we obviously are not communicating
in a useful way in this context...  ;-)

-- Ben

On 5/17/07, Mark Waser [EMAIL PROTECTED] wrote:

  Ben,

 Why are you still encouraging an obvious troll?

 - Original Message -
  *From:* Benjamin Goertzel [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Thursday, May 17, 2007 4:47 PM
 *Subject:* Re: [agi] Intelligence vs Efficient Intelligence

   *Won't somebody actually deal with the problem  - how will your AGI
  system decide to invest or not to invest $10,000 in a Chinese mutual fund
  tomorrow? (You guys are supposed to be in the problem-solving business).
  *
 
 
 Look, a Novamente-based AGI system could confront this problem in
 1's of different ways,
 just like a human could  That's the whole point.  It will figure out
 how to deal with this problem by
 itself.  If I could explain in advance how NM will deal with some
 particular problem, then NM
 wouldn't be an AGI system, it would be a narrow AI system 

 The correct question is: how will your AGI system learn to understand
 the statement of a problem
 and figure out its own creative solution to the problem, and implement
 that solution.

 That is the problem Pei and I, in our different ways, are working on.
 Not pre-figuring the solutions
 our systems will come up with for any particular problem.

 -- Ben

 --
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;

 --
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936