>> I was using the term "episodic" in the standard sense of "episodic memory" 
>> from cog psych, in which episodic memory is differentiated from procedural 
>> and declarative memory. 

I understood that.  The problem is that procedural and declarative memory are 
*not* as simple as is often purported.  If you can't rapidly realize when and 
why your previously reliable procedural and declarative knowledge is suddenly 
no longer valid . . . . 

>> The main point is, we have specialized indices to make memory access 
>> efficient for knowledge involving (certain and uncertain) logical 
>> relationships, associations, spatial and temporal relationships, and 
>> procedures

Indices are important, but compactness of data storage is also important, as 
are ways to have what is effectively indexed derivation of knowledge.  
Obviously my knowledge of Novamente is becoming dated but, unless you've opened 
some really new areas, there is a lot of work that could be done in this area 
that you're not focusing on.  (Note: please don't infer that by compactness of 
data storage I mean that disk size is important -- we're long past those days.  
Assume I mean the computational cost of manipulating data that is not stored in 
an efficient form.)
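To make the point concrete, here is a minimal sketch (illustrative Python, not Novamente code) of why storage layout is a computational-cost issue rather than a disk-size issue: the same hundred lookups against the same data, stored flat versus indexed.

```python
import random
import time

# Illustrative only: "compactness" here is measured as lookup work, not bytes.
N = 100_000
keys = list(range(N))
random.shuffle(keys)

# Unindexed storage: a flat list of (key, value) pairs -> O(n) per lookup.
flat = [(k, k * 2) for k in keys]

# Indexed storage: a hash map over the same data -> O(1) expected per lookup.
indexed = dict(flat)

probe = random.sample(keys, 100)

t0 = time.perf_counter()
flat_hits = [next(v for k, v in flat if k == p) for p in probe]
t_flat = time.perf_counter() - t0

t0 = time.perf_counter()
idx_hits = [indexed[p] for p in probe]
t_idx = time.perf_counter() - t0

assert flat_hits == idx_hits
print(f"linear scan: {t_flat:.4f}s, indexed: {t_idx:.6f}s")
```

Same data, same answers, orders of magnitude apart in cost -- that gap, not disk footprint, is the sense of "compactness" intended above.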

>> Research project 1.  How do you find analogies between neural networks, 
>> enzyme kinetics and the formation of galaxies (hint:  think Boltzmann)? 
> That is a question most humans couldn't answer, and is only suitable for 
> testing an AGI that is already very advanced.

In your opinion.  I don't believe that an AGI is going to get far at all 
without having at least a partial handle on this.
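For what it's worth, one way to unpack the Boltzmann hint (my reading; function names are illustrative): all three domains weight states by the same exp(-E/kT) factor -- Boltzmann machine unit activation, Arrhenius enzyme kinetics, and the density profile of a self-gravitating isothermal system are all instances of it.

```python
import math

def boltzmann_factor(energy, temperature, k=1.0):
    """The shared form: relative weight exp(-E / kT)."""
    return math.exp(-energy / (k * temperature))

# Neural networks: a Boltzmann machine unit is "on" with probability
# 1 / (1 + exp(-dE/T)), a ratio of Boltzmann factors over the two states.
def unit_on_probability(delta_energy, temperature):
    return 1.0 / (1.0 + math.exp(-delta_energy / temperature))

# Enzyme kinetics: the Arrhenius rate A * exp(-Ea / RT) is a Boltzmann
# factor over the activation-energy barrier.
def arrhenius_rate(prefactor, activation_energy, temperature, gas_constant=8.314):
    return prefactor * boltzmann_factor(activation_energy, temperature, gas_constant)

# Self-gravitating systems: an isothermal density profile falls off as
# exp(-phi / sigma^2), a Boltzmann factor with the gravitational potential
# playing the role of energy.
def isothermal_density(rho0, potential, velocity_dispersion_sq):
    return rho0 * boltzmann_factor(potential, velocity_dispersion_sq)
```

An AGI that can spot the shared functional form across those three surface-dissimilar domains has at least a partial handle on the analogy problem.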

>> Research project 2.  How do you recognize and package up all of the data 
>> that represents horse and expose only that which is useful at a given time? 
> That is covered quite adequately in the NM design, IMO.  We are actually 
> doing a commercial project right now (w/ delivery in 2008) that will showcase 
> our ability to solve this problem.  Details are confidential unfortunately, 
> due to the customer's preference. 

I'm afraid that I have to snort at this.  Either you didn't understand the full 
implications of what I'm saying or you're snowing me (OK, I'll give you a 0.1% 
chance of actually having solved it).

>> That is what is called "map encapsulation" in the Novamente design.

Yes, yes, I saw it in the design . . . . "a miracle happens here".
Which, granted, is better than not realizing that the area exists . . . . but 
still . . . .

>> I do not think the design has any huge gaps.  But much further R&D work is 
>> required, and I agree there may be a simpler approach; but I am not 
>> convinced that you have one. 

These are two *very* different issues (with a really spurious statement tacked 
onto the end).

Of course you don't think the design has any gaps -- you would have filled them 
if you saw them.

There is no reason to be convinced that *I* have a simpler approach because I 
haven't put one forth.  I may or may not be working on one    :-) but if I am, 
I certainly haven't got to the point where I feel that I can defend it.    :-)

  ----- Original Message ----- 
  From: Benjamin Goertzel 
  To: [email protected] 
  Sent: Monday, November 12, 2007 11:45 AM
  Subject: Re: [agi] What best evidence for fast AI?





  On Nov 12, 2007 11:36 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

    >> I am extremely confident of Novamente's memory design regarding 
declarative and procedural knowledge.  Tweaking the system for optimal 
representation of episodic knowledge may require some more thought. 

        Granted -- the memory design is very generic and will handle virtually 
anything.  The question is -- is it in a reasonably optimal form for retrieval 
and other operations (i.e. optimal enough that it won't end up being impossibly 
slow once you get a realistic amount of data/knowledge)?  Your caveat on 
episodic knowledge proves very informative since *all* knowledge is effectively 
episodic.

  I was using the term "episodic" in the standard sense of "episodic memory" 
from cog psych, in which episodic memory is differentiated from procedural and 
declarative memory. 

  The main point is, we have specialized indices to make memory access 
efficient for knowledge involving (certain and uncertain) logical 
relationships, associations, spatial and temporal relationships, and procedures 
... but we haven't put much work into creating specialized indices to make 
access of stories/narratives efficient.  Though this may not wind up being 
necessary since the AtomTable now has the capability to create new indices on 
the fly, based on the statistics of the data contained therein. 
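The on-the-fly indexing idea can be sketched as follows (illustrative code, not the actual AtomTable): the table watches its own query statistics and builds a per-field index once lookups against that field become frequent enough to pay for it.

```python
from collections import defaultdict

# Illustrative sketch, not the AtomTable implementation: a table that creates
# a per-field index on the fly once queries on that field pass a threshold.
class AdaptiveTable:
    def __init__(self, create_index_after=3):
        self.rows = []
        self.indexes = {}                    # field -> {value: [rows]}
        self.query_counts = defaultdict(int)
        self.create_index_after = create_index_after

    def insert(self, row):
        self.rows.append(row)
        # Keep any existing indexes up to date.
        for field, idx in self.indexes.items():
            idx[row.get(field)].append(row)

    def query(self, field, value):
        self.query_counts[field] += 1
        if field in self.indexes:            # fast path: indexed lookup
            return list(self.indexes[field].get(value, []))
        if self.query_counts[field] >= self.create_index_after:
            idx = defaultdict(list)          # build the index lazily
            for row in self.rows:
                idx[row.get(field)].append(row)
            self.indexes[field] = idx
            return list(idx.get(value, []))
        # cold path: linear scan until the field proves worth indexing
        return [r for r in self.rows if r.get(field) == value]
```

The same principle -- index creation driven by the observed statistics of access, rather than fixed in advance -- would apply to a story/narrative index as readily as to the existing ones.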

   

    >> I have no idea what you mean by "scale invariance of knowledge" and have 
only a weak understanding of what you mean by "ways of determining and 
exploiting encapsulation and modularity of knowledge without killing useful 
'leaky' abstractions."

        Research project 1.  How do you find analogies between neural networks, 
enzyme kinetics and the formation of galaxies (hint:  think Boltzmann)? 

  That is a question most humans couldn't answer, and is only suitable for 
testing an AGI that is already very advanced.
   
    Research project 2.  How do you recognize and package up all of the data 
that represents horse and expose only that which is useful at a given time? 


  That is covered quite adequately in the NM design, IMO.  We are actually 
doing a commercial project right now (w/ delivery in 2008) that will showcase 
our ability to solve this problem.  Details are confidential unfortunately, due 
to the customer's preference. 
   
    >> In terms of determining modularity of knowledge, NM seeks to do this via 
various mechanisms, e.g.
    >> -- pattern-mining using PLN, MOSES and clustering on the AtomTable, to 
identify modularity of declarative knowledge within the existing knowledge base 
    >> -- some specific program-tree-reduction heuristics to identify 
modularity w/in populations of program trees ... i.e. mechanisms which focus on 
procedural knowledge

    These are all designed to operate on densely packed data of the same type, 
not widely flung association networks.  

  Not true.  So long as you can create SimilarityLinks amongst a set of Nodes 
or Links (regardless of how these SimLinks are created) you can apply 
pattern-mining methods to search for modular groupings within the graph of 
SimLinks.  There are specific methods in the NM design (and code, for that 
matter) for finding SimLinks btw Nodes/Links of heterogeneous type and 
representing heterogeneous sorts of content. 
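The basic idea can be sketched as follows (illustrative code, not the NM implementation): once similarity links exist among nodes of whatever type, modular groupings fall out as connected components of the thresholded similarity graph.

```python
from collections import defaultdict

def modules_from_simlinks(simlinks, threshold=0.5):
    """simlinks: iterable of (node_a, node_b, strength) triples, where the
    nodes may be of heterogeneous type.  Returns modules (sets of nodes) as
    connected components of the graph of links at or above threshold."""
    adj = defaultdict(set)
    nodes = set()
    for a, b, strength in simlinks:
        nodes.update((a, b))
        if strength >= threshold:
            adj[a].add(b)
            adj[b].add(a)
    seen, modules = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:                      # depth-first flood fill
            n = stack.pop()
            if n in component:
                continue
            component.add(n)
            stack.extend(adj[n] - component)
        seen |= component
        modules.append(component)
    return modules

# Hypothetical links among heterogeneous items:
links = [("horse", "zebra", 0.9), ("horse", "saddle", 0.6),
         ("enzyme", "catalyst", 0.8), ("zebra", "enzyme", 0.1)]
print(modules_from_simlinks(links))
```

Real pattern-mining would use something subtler than a hard threshold and connected components, but the point stands: nothing in the procedure cares what type the nodes are.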

   


    >> Exploiting modularity of knowledge once it's identified is easier, 
because modules once recognized may be explicitly represented as Atoms in the 
AtomTable and as Combo nodes w/ in Combo program trees. 

    The "once it's identified" makes the point moot.  The question is -- how do 
you recognize modules (see research project 2 above)?


  That is what is called "map encapsulation" in the Novamente design.

   

    Novamente is great for what it does -- but I don't think that it's got the 
full area of AGI covered yet.

  I do not think the design has any huge gaps.  But much further R&D work is 
required, and I agree there may be a simpler approach; but I am not convinced 
that you have one. 

  -- Ben



------------------------------------------------------------------------------
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?&;
