Kindly advise: which component is "The Magic Happens Here"? I didn't
recognize it on Ben's diagram or mine.
https://www.facebook.com/photo.php?fbid=460040220684646&set=a.217364258285578.55073.203359906352680&type=1&theater
 
Please advise.
~PM.


From: [email protected]
Date: Tue, 25 Sep 2012 20:44:11 -0500
Subject: [agi] Magic Happens Here [was Re: [opencog-dev] Uber big scary monster OpenCog diagram]
To: [email protected]
CC: [email protected]; [email protected]



On 12 September 2012 15:36, Ben Goertzel <[email protected]> wrote:




http://goertzel.org/WholeBigOpenCogDiagram.pdf


So I got to thinking about the "Magic Happens Here" part of the diagram, and I
think maybe it got left out.


So, for example, I was recently thinking that I could use MOSES to
automatically learn new link-grammar parse relations. It doesn't necessarily
have to be MOSES that does the learning; I suppose that many modern,
reasonably competent learning systems might do the trick: they need only be
able to do some basic modelling of the innards of a black box. And it doesn't
have to be link-grammar either: some other reasonably flexible NLP system
would do.
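To make the "basic modelling of the innards of a black box" concrete, here is a minimal, self-contained sketch of that shape of problem: sample a black box's input/output behaviour, then search a small space of candidate programs for one that reproduces it. This is only an illustration of the idea; MOSES searches a vastly larger program space via evolution, and `black_box`, `CANDIDATES`, and `fit` are hypothetical names, not OpenCog or link-grammar APIs.

```python
import itertools

def black_box(a, b, c):
    # Stand-in for the system being modelled (e.g. "does the parser
    # emit link type X for this word pair?").  The learner never sees
    # this definition, only the input/output pairs.
    return (a and b) or not c

# Candidate model space: a few small boolean formulas over the inputs.
CANDIDATES = [
    ("a and b",            lambda a, b, c: a and b),
    ("a or c",             lambda a, b, c: a or c),
    ("(a and b) or not c", lambda a, b, c: (a and b) or not c),
    ("not a",              lambda a, b, c: not a),
]

def fit(box):
    """Return the first candidate formula that agrees with the box
    on every sampled input, or None if no candidate matches."""
    samples = list(itertools.product([False, True], repeat=3))
    for name, fn in CANDIDATES:
        if all(fn(*s) == box(*s) for s in samples):
            return name
    return None

print(fit(black_box))  # -> (a and b) or not c
```

The point of the sketch is that the learner needs nothing but query access to the box, which is why, as noted above, many competent learning systems could stand in for MOSES here.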


As it became clear how to hook these two up so as to learn new rules, I also
thought about what the hookup couldn't do ... and realized that I'd need some
other variants and modifications to handle those cases. So my plan went from
hooking up the two things to realizing that I would need to hook them up in
three distinct ways. Each way handles a certain generic kind of learning
problem, but with a different focus and a different output/outcome. And I
started grasping that, in fact, there may be 4 or 5 or 6 other situations to
deal with.


And at that point, it gets out of hand -- all of a sudden, I need to manually
handle a bunch of different learning tasks. Each task may take months to code
up and make operational. And what if there are more than 6? That's when I
realized that, in fact, I have a meta-learning problem. That is, what I really
need is a system that can learn how to learn specific learning tasks. *That*
is the "Magic Happens Here" bubble.


At the moment, I have no clue how to do this meta-learning. But I do know
where to start: experimental trial and error. Go ahead, hook up MOSES to
link-grammar. See what happens. Plug the holes. See if a general pattern or
paradigm emerges. If, after much work, some meta-pattern becomes apparent,
then, well, that was the magic part. After that ... who knows. But one
cannot find out until one starts trying to build these things.
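The "plug the holes" step of that loop can at least be given bookkeeping shape: record which learning tasks each known hookup strategy handles, and look for tasks nothing covers. A toy sketch, where every task and strategy name is a hypothetical placeholder (none of these are real OpenCog components):

```python
# Learning tasks we'd like some MOSES-to-parser hookup to handle
# (illustrative names only).
TASKS = ["new-link-types", "disjunct-costs", "unknown-words", "idioms"]

# Which tasks each known hookup strategy covers (again, placeholders).
STRATEGIES = {
    "supervised-over-parses": {"new-link-types", "disjunct-costs"},
    "cluster-unparsed-text":  {"unknown-words"},
}

def find_holes(tasks, strategies):
    """Return the tasks no current strategy covers -- the 'holes'
    that trial and error still has to plug."""
    covered = set().union(*strategies.values())
    return [t for t in tasks if t not in covered]

print(find_holes(TASKS, STRATEGIES))  # -> ['idioms']
```

The meta-learning question above is precisely whether the growing list of strategies has a pattern a system could exploit, rather than each entry being hand-coded over months.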


--linas



  
    
      

      
    
  

                                          


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
