====Zahn===>

 If you are suggesting that concept formation is a (perhaps stochastic)
generate-and-test procedure, that seems like an okay idea but the issues are
then redescribed as: what is the generation procedure, what causes it to be
invoked, what the test procedure is, and so on.
 
These questions cannot be answered outside the context of a particular
system; they are just the things I'd like to understand exactly how they
would happen in Novamente or Texai or whatever, with all handwaving removed.
 
To get back to the original question of this thread, these are some of the
many missing conceptual pieces TO ME because I cannot see the specific nuts
and bolts solution for any proposed system.  It may in fact be that for any
non-toy example the mechanisms and data are going to be too complicated for
such analysis... that is, my brain is too puny and ineffective to understand
(in a clear and relatively complete way) the inner workings of a general
intelligence.  In that case, all I can do is hope for proof by performance.

 

====Porter===>

With regard to the generate part of generate-and-test --- there are multiple
ways to generate patterns and concepts.  I think a lot can be achieved just
by recording significant parts of the hierarchical memory activation states
(such as the most attended parts of the most attended state), and then
generalizing over such states, as described in my recent posts regarding
hierarchical memory.  But many feel this relatively direct recording of
experience at multiple generalizational and compositional levels is not good
enough.  Novamente uses an evolutionary learning process in addition to its
more standard record-and-generalize type of learning.  Wlodzislaw Duch at
AGI 2008 told me that one of the current theories of cortical columns in the
brain is that they receive input patterns and project them into a much
higher dimensional space (i.e., making from a given input pattern many
output patterns), making available for learning a larger number of
representations of patterns, some of which may be more helpful for finding
the valuable commonalities and valuable distinctions between patterns.  This
is somewhat like the way the kernel trick, in effect, projects data
from a lower dimensional space into a higher dimensional space so support
vectors can better distinguish between classes.
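The projection idea above can be sketched in a few lines: expand each input through a fixed random nonlinear feature map (a hypothetical stand-in for a column producing many output patterns from one input), so that XOR-like patterns, which no linear rule can separate in 2-D, become linearly separable in the higher-dimensional space.  The dimensions and constants here are illustrative, not from any particular system.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR-like patterns: not linearly separable in the original 2-D space.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 1, 1, 0])

# Fixed random nonlinear expansion from 2 dimensions to 64 ---
# one input pattern becomes many output features.
W = rng.normal(size=(2, 64))
b = rng.uniform(0, 2 * np.pi, size=64)
Z = np.cos(X @ W + b)          # 4 x 64 expanded representation

# In the expanded space a simple linear rule (least squares here,
# standing in for a linear separator) can split the two classes.
w, *_ = np.linalg.lstsq(Z, 2.0 * y - 1.0, rcond=None)
pred = (Z @ w > 0).astype(int)
print(pred)                    # recovers y
```

The same effect is what kernel methods get implicitly: the classes were always distinguishable, but only after the representation was made richer.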

 

With regard to the test part of generate-and-test --- reinforcement learning
has proved to be a very powerful form of machine learning.  It provides a
good model for how a network representation can properly allocate scores
reflecting the value its various states and transition links have
contributed toward obtaining some desired reward, by distributing value back
from the rewarded state to the transitions and states through which it was
reached in a particular experience.  A similar type of projection of value
back from rewarded states into the patterns that prove useful in achieving
those states could be used in a Novamente-type system.
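The reward-backup scheme just described can be illustrated with a minimal tabular TD(0)-style sketch (the state names, learning rate, and discount are illustrative, not taken from Novamente or any other system): each pass over an experienced trajectory pulls value one step further back from the rewarded state.

```python
def backup(values, trajectory, reward, alpha=0.5, gamma=0.9):
    """Update state values along one experienced trajectory.

    trajectory: ordered list of states ending in the rewarded state.
    Credit flows backward: each state's value moves toward the
    discounted value of its successor, and the final state's value
    moves toward the reward itself.
    """
    values = dict(values)
    for i, s in enumerate(trajectory):
        if i + 1 < len(trajectory):
            target = gamma * values.get(trajectory[i + 1], 0.0)
        else:
            target = reward
        v = values.get(s, 0.0)
        values[s] = v + alpha * (target - v)
    return values

values = {}
# Repeating the same experience lets value propagate further back
# each pass, so earlier states acquire progressively discounted value.
for _ in range(20):
    values = backup(values, ["a", "b", "c", "goal"], reward=1.0)
print(values)
```

After enough passes the values fall off geometrically with distance from the reward, which is exactly the kind of score allocation the paragraph above describes.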

 

Your statement about the brain being too puny to totally understand a
powerful AGI system is true for all humans.  This is analogous to the fact
that one of the major things speeding brain science today is computer
simulation, which is allowing simulated neural network circuits to indicate
how various hypothesized circuits in the brain would work at a level of
model complexity at which human minds would find it virtually impossible to
make accurate predictions.

 

If you spend enough time reading about AI and AGI architectures and brain
science, you should be able to develop a feeling for how you might expect
one or more AGI systems to work.  But it is impossible to actually imagine
the full complexity of a roughly human-level AGI.  We can have feelings that
certain types of architectures should behave in certain ways, some based on
evidence of similar systems, and some based on intuition and partial
simulations in our own minds.  But for complex things --- like the meaning
of all the patterns in a human-level, automatically created memory
hierarchy, or like the most efficient behaviors and parameters for tuning
spreading activation and implication --- I think at this stage we will have
to build such systems to learn how to do this well.  Any simulation of them
is far beyond the human mind --- and would probably be as complex to do by
computer simulation as by building a real AGI computer system itself.

 

 

 

-----Original Message-----
From: Derek Zahn [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 21, 2008 3:46 PM
To: agi@v2.listbox.com
Subject: RE: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent
input and responses

 

Vladimir Nesov writes:

> Generating "concepts" out of thin air is no big deal, if only a
> resource-hungry process. You can create a dozen for each episode, for
> example.
 
If I am not certain of the appropriate mechanism and circumstances for
generating one concept, it doesn't help to suggest that a dozen get
generated instead... now I have twelve times as many things to explain.  If
you are suggesting that concept formation is a (perhaps stochastic)
generate-and-test procedure, that seems like an okay idea but the issues are
then redescribed as: what is the generation procedure, what causes it to be
invoked, what the test procedure is, and so on.
 
These questions cannot be answered outside the context of a particular
system; they are just the things I'd like to understand exactly how they
would happen in Novamente or Texai or whatever, with all handwaving removed.
 
To get back to the original question of this thread, these are some of the
many missing conceptual pieces TO ME because I cannot see the specific nuts
and bolts solution for any proposed system.  It may in fact be that for any
non-toy example the mechanisms and data are going to be too complicated for
such analysis... that is, my brain is too puny and ineffective to understand
(in a clear and relatively complete way) the inner workings of a general
intelligence.  In that case, all I can do is hope for proof by performance.
 
