Moshe and Ben,

 

I feel I understand much of Novamente, even parts that I haven't heard
explained, because it is quite similar to ideas I had developed before ever
hearing about Novamente.  

 

But I have never quite understood the role of MOSES in Novamente.

 

I do not question the power of genetic programming.  Ever since I attended a
1999 lecture by Koza on what he had managed to accomplish with genetic
programming --- and then spent about half an hour talking with him after the
lecture and at other times during the supercomputing conference at which it
occurred --- I have been very aware of GP's potential power.  I am quite
certain he claimed that in roughly a tera-op of computation (which could be
done in one thousandth of a second on the current fastest computer) his
system derived a band-pass filter design that had taken humanity almost
three decades (until the 1940s) to develop after the appearance of the
earliest band-pass filters.

 

But I haven't been able to figure out exactly how MOSES is used in the
Novamente environment.  For example, Koza said a key to the success of his
genetic programming was having a task for which there was an appropriate
fitness function.  He said that in his experiments using GP to design new
electronic circuits to operate as relatively optimal band-pass filters, the
fitness function used to determine how well each proposed solution worked
was the electronic circuit simulator SPICE.  He estimated that roughly 99%
of his network's compute time (i.e., the above-mentioned tera-op) was spent
evaluating this fitness function.
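To make the loop Koza described concrete, here is my own toy Python sketch --- not his setup, and with a trivial arithmetic stand-in where the SPICE simulation would sit --- showing how each generation's cost is dominated by calls to the fitness function:

```python
import random

def evaluate_fitness(program):
    # Toy stand-in for an expensive simulator such as SPICE:
    # score a list of integers by closeness of its sum to a target of 10.
    # (In a real GP run, this call is where nearly all compute time goes.)
    return -abs(sum(program) - 10)

def mutate(program):
    # Replace one randomly chosen gene with a fresh random value.
    p = list(program)
    p[random.randrange(len(p))] = random.randint(0, 9)
    return p

def gp_step(population):
    # Evaluate every candidate (the dominant cost), keep the best half,
    # and refill the population with mutants of the survivors.
    scored = sorted(population, key=evaluate_fitness, reverse=True)
    survivors = scored[: len(scored) // 2]
    return survivors + [mutate(random.choice(survivors)) for _ in survivors]

random.seed(0)
pop = [[random.randint(0, 9) for _ in range(4)] for _ in range(20)]
for _ in range(30):
    pop = gp_step(pop)
best = max(pop, key=evaluate_fitness)
```

Swapping the one-line arithmetic score for a full circuit simulation is all it would take to reproduce Koza's 99%-in-the-fitness-function profile.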

 

From a recent re-reading of all the portions of a January 2007 version of
Ben's Novamente book (an earlier version of the OpenCog documentation) that
related to MOSES, I don't remember any clear explanation of what fitness
function would be used to evaluate the presumably many thousands, or
millions, of combo programs that would be generated in an attempt to solve a
single problem.

 

For example, in a pet brain, the pet presumably would not get a chance to
try out each of the thousands of individual combo programs on a human user,
to see which received proper feedback from the user, without thoroughly
exhausting that user.

 

(?1) So what fitness function would be used to select combo programs or
direct the probabilistic distributions that are used to tune their spawning?
(If this is knowledge you intend to be in the public domain.) 

 

Also, I don't understand the relationship of combo programs to the
hypergraph.  Combo uses a functional language which presumably seeks to do
away with, or greatly restrict, side effects (obviously a plus if you are
somewhat blindly cutting and pasting program fragments together) --- whereas
it seems spreading activation in a hypergraph is largely all about side
effects.  (The 1,000 to 10,000 synapses per neuron, plus the electromagnetic
field effects caused by neurons in the brain, sure sound like side effects
up the wazoo to me.)
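To illustrate why purity matters for that kind of blind cutting and pasting, here is my own toy Python example (not combo syntax): pure fragments compose into valid programs no matter how they are spliced, while a side-effecting fragment's behavior depends on hidden state.

```python
# Pure fragments: output depends only on the input.
def pure_a(x):
    return x + 1

def pure_b(x):
    return x * 2

# Any splice of pure fragments is itself a deterministic program.
composed = lambda x: pure_b(pure_a(x))
assert composed(3) == 8

# A side-effecting fragment reads and writes hidden state, so the same
# fragment pasted into different contexts behaves differently.
state = {"total": 0}
def impure(x):
    state["total"] += x   # hidden side effect
    return state["total"]

assert impure(3) == 3
assert impure(3) == 6     # same input, different output
```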

 

I understand (a) that a combo program could be associated with individual
nodes and be computed when they are activated, (b) that hypergraph node or
edge values can be variables in a combo expression, (c) that some subset of
hypergraph spreading-activation inferencing (I didn't understand exactly
which) can be used as functions in combo expressions, and (d) that the
hypergraph can be used to record and generalize information about a combo
program, enabling inference in the hypergraph to appropriately activate the
combo programs associated with particular hypergraph nodes.
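Here is a minimal sketch of how I picture (a)-(d) fitting together --- every name in it (Node, ComboProgram, record-keeping via a history list) is my own hypothetical invention for illustration, not Novamente's actual API:

```python
class ComboProgram:
    def __init__(self, fn):
        self.fn = fn
        self.history = []          # (d) record outcomes for later inference

    def run(self, *args):
        result = self.fn(*args)    # (a) computed when its node is activated
        self.history.append((args, result))
        return result

class Node:
    def __init__(self, name, value=0.0, program=None):
        self.name = name
        self.value = value         # (b) node values usable as combo variables
        self.program = program

    def activate(self, graph):
        if self.program:
            # (c) the program may read other nodes' values as its inputs
            inputs = [graph[n].value for n in graph if n != self.name]
            self.value = self.program.run(*inputs)
        return self.value

graph = {
    "a": Node("a", 2.0),
    "b": Node("b", 3.0),
    "sum": Node("sum", program=ComboProgram(lambda *xs: sum(xs))),
}
graph["sum"].activate(graph)   # the sum node computes 5.0 from a and b
```

In this toy picture, the recorded history on each program is what the rest of the system could generalize over when deciding which programs to activate, combine, or modify.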

 

(?2) Am I correct in understanding that item (d) just listed could be quite
important as a general concept in AGI learning how to program automatically,
because it would allow the non-combo aspects of Novamente to model combo
programs and to use probabilistic inference and attention focusing to reason
about when they should be used, combined, fed what input, or perhaps even
modified?

 

 

(?3) Am I also correct in guessing that (b) and (c) would seem to enable
combo programs to, in effect, create and try out (given a proper fitness
function) novel hypergraph nodes, which would function in a manner similar
to non-combo nodes, largely through spreading activation?

 

(?4) Other than what is explained above, how are combo programs and
hypergraphs synergistically used in Novamente?

 

Moshe and Ben, you are both very bright --- and you both place a lot of
importance on incorporating MOSES into Novamente --- so I assume there is
something important I am missing.  

 

I would appreciate it very much if you could tell me what it is that I am
missing.

 

Ed Porter

 

-----Original Message-----
From: Moshe Looks [mailto:[email protected]] 
Sent: Monday, December 15, 2008 1:33 PM
To: [email protected]
Subject: [agi] internship opportunity at Google (Mountain View, CA)

 

Hi,

 

I am seeking an intern to work on the open-source probabilistic learning of
programs project over Summer 2009 at Google in Mountain View, CA.
Probabilistic learning of programs (plop) is a Common Lisp framework for
experimenting with meta-optimizing semantic evolutionary search (MOSES) and
related approaches to learning with probability distributions over program
spaces. Possible research topics to focus on include:

 * Learning procedural abstractions
 * Adapting estimation-of-distribution algorithms to program evolution
 * Applying plop to various interesting data sets
 * Adapting plop to do natural language processing or image processing
 * Better mechanisms for exploiting background knowledge in program evolution

 

This position is open to all students currently pursuing a BS, MS or PhD in
computer science or a related technical field. It is probably better suited
to a grad student, but I'm open to considering an advanced undergrad as
well. The only hard-and-fast requirements for consideration are a strong
programming background (any language(s)) and some experience in AI and/or
machine learning. Some pluses:

 * Functional programming experience (esp. Lisp, but ML, Haskell, or even
   the functional style of C++ count too)
 * Experience with evolutionary computation or stochastic local search
   (esp. estimation-of-distribution algorithms and/or genetic programming)
 * Open-source contributor

More info on plop at http://code.google.com/p/plop/, more info on the
Google internship program at: http://www.google.com/jobs/students

Please contact me directly (off-list) if you are interested.

 

Thanks!

Moshe Looks

P.S. Disclaimer: I can't promise anyone an internship; you have to go
through the standard Google application & interview process for interns,
yada yada ...

 

 

-------------------------------------------

agi

Archives: https://www.listbox.com/member/archive/303/=now

RSS Feed: https://www.listbox.com/member/archive/rss/303/

Modify Your Subscription:
https://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com



