[agi] Numenta: article on Jeff Hawkins' AGI approach

2006-06-02 Thread Ben Goertzel

Check out this paper...

http://www.numenta.com/Numenta_HTM_Concepts.pdf


I think it's a good article.

It seems to fairly fully reveal the scope and nature of their current
scientific activities, though it says nothing about their plans for
commercialization or other practical application.

What they have is

-- a hierarchical representation of
patterns-among-patterns-among-patterns, arranged as a tree (or
sometimes a directed acyclic graph)

-- a Bayes net type belief updating function helping to determine
which patterns are present in a given situation

-- a scheme for recognizing temporal patterns in data, used at each
node in the memory tree, based on a simple greedy learning algorithm
that conceptually resembles Hopfield net learning but is implemented
differently
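
Very roughly, and just to convey the shape of the thing (this is my own
toy sketch in Python, not Numenta's code; the node structure, the
grouping rule, and all the names are my guesses):

# Toy sketch of an HTM-style hierarchy: a tree of nodes, each doing
# (1) spatial pattern memory, (2) crude temporal grouping, (3) a belief vector passed upward.
class Node:
    def __init__(self, children=()):
        self.children = list(children)
        self.groups = []      # lists of patterns that tend to follow one another in time
        self.prev = None

    def learn(self, raw):
        pattern = self._pattern(raw)
        for g in self.groups:
            if self.prev in g:            # greedy: a successor joins its predecessor's group
                if pattern not in g:
                    g.append(pattern)
                break
        else:
            self.groups.append([pattern])
        self.prev = pattern

    def infer(self, raw):
        pattern = self._pattern(raw)
        # belief over temporal groups; a real node would do Bayesian belief propagation here
        return tuple(1.0 if pattern in g else 0.0 for g in self.groups)

    def _pattern(self, raw):
        # a leaf sees raw input; an interior node sees the tuple of its children's beliefs
        if not self.children:
            return raw
        return tuple(child.infer(part) for child, part in zip(self.children, raw))

# a two-leaf, one-parent "tree"; real networks are much deeper and wider
leaves = [Node(), Node()]
root = Node(leaves)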

This is very nice but please note what is not here, for example

-- any kind of cognitive architecture for integrating action,
perception and cognition toward the achievement of goals in an
environment

-- any way of pragmatically representing abstract knowledge rather
than relatively simple repetitive patterns in data streams

-- any way of learning complex coordinated procedures, plans, etc.

etc.

The theoretical presumption here is that once you've solved the problem
of recognizing moderately complex patterns in perceptual data streams,
then you're essentially done with the AGI problem and the rest is just
some wrappers placed around your perception code.  I don't think
so...  I think they are building a nice perceptual pattern
recognition module, and waving their hands around arguing that it
actually is just an exemplar of an approach that can be made more general.

I agree that **philosophically** their approach can be extended beyond
perception, in the sense that the combination of hierarchy,
probabilistic belief propagation, and pattern recognition is critical
to all aspects of intelligence.  But the particular ways they've
implemented these themes seem to me to generalize hardly at all
beyond the domain of perceptual pattern recognition.  (I note that
Novamente also embodies all these themes, but in a different way,
oriented more toward cognition than perception.)

-- Ben



Re: [agi] AGI bottlenecks

2006-06-02 Thread William Pearson

On 01/06/06, Richard Loosemore [EMAIL PROTECTED] wrote:


I had similar feelings about William Pearson's recent message about
systems that use reinforcement learning:


 A reinforcement scenario, from wikipedia is defined as

 Formally, the basic reinforcement learning model consists of:

  1. a set of environment states S;
  2. a set of actions A; and
  3. a set of scalar rewards in the Reals.
 

Here is my standard response to Behaviorism (which is what the above
reinforcement learning model actually is):  Who decides when the rewards
should come, and who chooses what are the relevant states and actions?


The rewards I don't deal with: I am interested in external brain
add-ons rather than autonomous systems, so the reward system will be
closely coupled to a human in some fashion.

In the rest of the post I was trying to outline a system that could
alter what it considered actions and states (and biases, learning
algorithms, etc.).  The RL definition was just there as an example to
work against.
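
For concreteness, the whole of that formal model amounts to a loop like
the following (a bare sketch of my own; the method names are
placeholders, and everything interesting lives inside env and agent,
which is exactly the point):

# The bare reinforcement learning scenario: states S, actions A, scalar rewards.
# Who defines S and A, and who supplies the reward signal, is outside this loop.
def run_episode(env, agent, steps=100):
    state = env.reset()                    # a state drawn from S
    for _ in range(steps):
        action = agent.act(state)          # an action drawn from A
        state, reward = env.step(action)   # reward is a real-valued scalar
        agent.observe(state, reward)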


If you find out what is doing *that* work, you have found your
intelligent system.  And it will probably turn out to be so enormously
complex, relative to the reinforcement learning part shown above, that
the above formalism (assuming it has not been discarded by then) will be
almost irrelevant.


The internals of the system will be enormously more complex compared
to the reinforcement part I described.  But that won't make it
irrelevant.  What goes on inside a PC is vastly more complex than the
system that governs the permissions of what each *nix program can do.
This doesn't mean the permission-governing system is irrelevant.

Like the permissions system in *nix, the reinforcement system is
only supposed to govern who is allowed to do what, not what actually
happens.  Unlike the permissions system, it is supposed to derive that
from the effect of the programs on the environment.  Without it, both
sorts of systems would be highly unstable.

I see it as a necessity for complete modular flexibility. If you get
one of the bits that does the work wrong, or wrong for the current
environment, how do you allow it to change?
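
To caricature what I mean in code (purely illustrative; the names and
details are mine): the arbiter does none of the work itself, it only
books the reward earned while each internal program was in control and
uses that to decide who gets to run next.

# Illustrative sketch: a "permission" layer over internal programs.
# It does none of the actual work; it only grants control based on reward.
class Arbiter:
    def __init__(self, programs):
        self.credit = {p: 0.0 for p in programs}

    def choose(self):
        # hand control to the program that has earned the most reward so far
        return max(self.credit, key=self.credit.get)

    def settle(self, program, reward):
        # the reward itself comes from outside (here, ultimately from a human)
        self.credit[program] += reward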


Just my deux centimes' worth.



Appreciated.



On a more positive note, I do think it is possible for AGI researchers
to work together within a common formalism.  My presentation at the
AGIRI workshop was about that, and when I get the paper version of the
talk finalized I will post it somewhere.



I'll be interested, but sceptical.

 Will



[agi] Universal Test for AI?...... AGI bottlenecks

2006-06-02 Thread DGoe

What is the universal test of the ability of any given AI system
to perceive, reason, and act?

Is there such a test? 

What is the closest test known to date? 

Dan Goe





From : William Pearson [EMAIL PROTECTED]
To : agi@v2.listbox.com
Subject : Re: [agi] AGI bottlenecks
Date : Fri, 2 Jun 2006 14:30:20 +0100
snip


Re: [agi] Numenta: article on Jeff Hawkins' AGI approach

2006-06-02 Thread Mike Ross

The theoretical presumption here is that once you've solved the problem
of recognizing moderately complex patterns in perceptual data streams,
then you're essentially done with the AGI problem and the rest is just
some wrappers placed around your perception code.  I don't think
so...  I think they are building a nice perceptual pattern
recognition module, and waving their hands around arguing that it
actually is just an exemplar of an approach that can be made more general.


Some parts of the article definitely overemphasize the potential for
perceptual pattern recognition to account for a large number of
cognitive processes.  But I think that, ultimately, Hawkins et al
probably agree with your characterization of perception.  For
instance, they spend some time discussing the need to hook up an
external episodic memory module in order to get more powerful
behavior.  So surely, from an AGI perspective, they believe that HTM
would be just one (albeit important) element in a more complex system.
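
Just to make that division of labour concrete (my own toy picture, not
anything from the paper): the HTM's job would be to reduce the raw
stream to a stable top-level belief, and the episodic memory would
simply be a separate store keyed on that belief.

# Toy sketch of an external episodic memory sitting beside an HTM -- illustrative only.
class EpisodicMemory:
    def __init__(self):
        self.episodes = []

    def store(self, belief, event):
        # record what happened, keyed on the HTM's current top-level belief
        self.episodes.append((belief, event))

    def recall(self, belief):
        # retrieve events recorded in contexts that looked like the current one
        return [event for b, event in self.episodes if b == belief]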

Mike



[agi] Other approaches to Seed AI? .... Numenta: article on Jeff Hawkins' AGI approach

2006-06-02 Thread DGoe
What other approaches to Seed AI are there? 
Dan Goe


From : Mike Ross [EMAIL PROTECTED]
To : agi@v2.listbox.com
Subject : Re: [agi] Numenta: article on Jeff Hawkins' AGI approach
Date : Fri, 2 Jun 2006 10:54:31 -0400
snip


Re: [agi] Numenta: article on Jeff Hawkins' AGI approach

2006-06-02 Thread Scott Brown
Hi all,

The way I've read Hawkins' and company's work so far is that they view
HTM as a cognitive engine that, while perceptually based, would
essentially drive other cognitive functions, including behavior.  I
think you're right that they would agree that these additional
cognitive functions would likely need extensions to the architecture.

I think that perception has gotten short shrift in AI for a long time,
so I'm very happy to see that they're taking this approach (I am
biased, however, being a Master's student under Stan Franklin at the
University of Memphis working on -- you guessed it -- the perception
module for Stan's LIDA system).

-- Scott

On 6/2/06, Mike Ross [EMAIL PROTECTED] wrote:
snip





Re: [agi] Numenta: article on Jeff Hawkins' AGI approach

2006-06-02 Thread Mike Ross

One of the more interesting ideas the Numenta people have is of how a
perceptual system could be used in a motor-control system by hooking
up expectations to actual commands.  I think it's fair to say that
Numenta is pushing towards AGI from the animalistic perspective.  Once
they hook up some memory and tie it in with a control system, it seems
they have a good chance of getting something that's about as smart as
some dumb animals.  To imagine how animals think, I always like to
imagine the part of my consciousness that is driving a car while I'm
driving and having a conversation.  The conversation control is the
"human" part of me.  The car control is the "animal mind."  I'm
guessing that if Numenta makes a lot of progress, they can get that
animal mind.  But the work described in that paper doesn't seem to have
much to do with the human aspect of mind.
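
Crudely, and purely as my own cartoon of the idea (not Numenta's
design): the hierarchy predicts what should happen next, and the motor
layer issues whichever command has most often made that prediction
come true.

from collections import defaultdict

# Cartoon of "expectations hooked up to commands": the motor layer keeps count of
# which command has led to which perceived outcome, then acts so as to fulfil the
# hierarchy's current expectation.  Illustrative only.
class MotorLayer:
    def __init__(self):
        self.outcomes = defaultdict(lambda: defaultdict(int))   # command -> outcome -> count

    def learn(self, command, outcome):
        self.outcomes[command][outcome] += 1

    def act(self, expectation):
        best, best_count = None, -1
        for command, seen in self.outcomes.items():
            if seen[expectation] > best_count:
                best, best_count = command, seen[expectation]
        return best   # None until something has been learned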

Mike

On 6/2/06, Mike Ross [EMAIL PROTECTED] wrote:

snip





Re: [agi] Numenta: article on Jeff Hawkins' AGI approach

2006-06-02 Thread Scott Brown
 ... they have a good chance of getting something that's about as
smart as some dumb animals ...

I agree, Mike, and it seems to me that, from an AGI perspective (as
opposed to an AI perspective), this is an excellent goal to have.
On 6/2/06, Mike Ross [EMAIL PROTECTED] wrote:
snip




Re: [agi] AGI bottlenecks

2006-06-02 Thread Richard Loosemore


Will,

Comments taken, but the direction of my critique may have gotten lost in 
the details:


Suppose I proposed a solution to the problem of unifying quantum 
mechanics and gravity, and suppose that solution said the unified 
theory involved (a) a specific interface to quantum theory, which I 
spell out in great detail, (b) ditto for an interface with 
geometrodynamics, and (c) a linkage component, to be specified.


Physicists would laugh at this.  "What linkage component?!" they would 
say.  "And what makes you *believe* that once you sorted out the linkage 
component, the two interfaces you just specified would play any role 
whatsoever in it?"  They would point out that my linkage component was 
the meat of the theory, and yet I had referred to it in such a way that 
it seemed as though it was just an extra, to be sorted out later.


This is exactly what happened to Behaviorism, and the idea of 
Reinforcement Learning.  The one difference was that they did not 
explicitly specify an equivalent of my (c) item above:  it was for the 
cognitive psychologists to come along later and point out that 
Reinforcement Learning implicitly assumed that something in the brain 
would do the job of deciding when to give rewards, and the job of 
deciding what the patterns actually were ... and that that something 
was the part doing all the real work.  In the case of all the 
experiments in the behaviorist literature, the experimenter substituted 
for those components, making them less than obvious.


Exactly the same critique bears on anyone who suggests that 
Reinforcement Learning could be the basis for an AGI.  I do not believe 
there is, as yet, any reply to that critique.


Richard Loosemore





William Pearson wrote:

snip






Re: [agi] Numenta: article on Jeff Hawkins' AGI approach

2006-06-02 Thread BillK

On 6/2/06, Ben Goertzel wrote:

Mike

You note that Numenta's approach seems oriented toward implementing an
animal-level mind...

I agree, and I do think this is a fascinating project, and an approach
that can ultimately succeed...  but I think that for it to succeed
Hawkins will have to introduce a LOT of deep concepts that he is
currently ignoring in his approach.  Most critically he ignores the
complex, chaotic dynamics of brain systems...

I suppose part of the motivation for starting with animal mind is that
the human mind is just a minor adjustment to the animal mind, which is
sorta true genetically and evolutionarily

But on the other hand, just because animal brains evolved into human
brains, doesn't mean that every system with animal-brain functionality
has similar evolve-into-human-brain potentiality


snip

Just from a computer systems design perspective, I think this project
is admirable.

I think it is safe to claim that all the big computer design disasters
occurred because they tried to do too much all at once: "We want it
all, and we want it now!"

Ben may be correct in claiming that major elements are being omitted,
but if they even get an animal level intelligence running, this will
be a remarkable achievement. They will be world leaders and will learn
a lot about designing such systems.

Even if it cannot progress to higher levels of intelligence, the
experience gained will set their technicians well on the road to the
next generation design.

BillK



Re: [agi] procedural vs declarative knowledge

2006-06-02 Thread Mike Dougherty
On 6/2/06, Charles D Hixson [EMAIL PROTECTED] wrote:
Rule of thumb: First get it working, doing what you want.  Then
optimize.  When optimizing, first check your algorithms, then check to
see where time is actually spent.  Apply extensive optimization only to
the most used 10% (or less) of the code.  If you need to optimize more
than that, then you need to either redesign from the base, or get a
faster machine.

Expect that you will need to redesign pieces so often while in
development that it's better to choose the form of code that's easiest
to understand, redesign, and fix than to optimize it.  Only when
development is essentially complete is it time to give optimization for
speed or size serious consideration.
That said, do you agree that some applications call for a 'ground up' build mentality? For example, adding security after an application is nearly finished is usually a terrible approach (despite being incredibly common)




[agi] Limits to Size and Resources...Procedural vs declarative knowledge

2006-06-02 Thread DGoe
The question remains of the limits on size (MB/GB) and resource 
requirements, and on the execution time for a given process. 
There are limits...

Does anyone have any idea of the size of the executable code of a fully 
developed AI system versus a seed AI system? 

Dan Goe

From : Mike Dougherty [EMAIL PROTECTED]
To : agi@v2.listbox.com
Subject : Re: [agi] procedural vs declarative knowledge
Date : Fri, 2 Jun 2006 15:51:34 -0400
snip


Re: [agi] procedural vs declarative knowledge

2006-06-02 Thread Charles D Hixson
Mike Dougherty wrote:
 On 6/2/06, Charles D Hixson [EMAIL PROTECTED] wrote:


 snip



 That said, do you agree that some applications call for a 'ground up'
 build mentality?  For example, adding security after an application
 is nearly finished is usually a terrible approach (despite being
 incredibly common)
That's not a ground up build.  That's getting the design right before
you commit to anything final. 

I do agree that getting the security right should be done early.   I
would assert that it needs to be designed into the system rather than
being patched on.  The problem is that "right" is incredibly picky
here.  You need it not to be so cumbersome that the system is
overwhelming either to compute or to use.  And for any general-purpose
system it needs to be adaptable to a wide variety of circumstances,
where you won't necessarily be able to predict in advance, e.g., what
peripherals are available.  (E.g., you can't depend on visual
recognition in low-light environments, or if there might not be cameras
available.)
