Re: [agi] CopyCat
Vladimir Nesov wrote:
> On Wed, Dec 17, 2008 at 6:03 PM, Ben Goertzel b...@goertzel.org wrote:
>> I happened to use CopyCat in a university AI class I taught years ago, so
>> I got some experience with it. It was **great** as a teaching tool, but I
>> wouldn't say it shows anything about what can or can't work for AGI,
>> really...
>
> CopyCat gives a general feel for self-assembling representations and for
> operations performed at a reflexive level. It captures intuitions about
> high-level perception better than any other self-contained description
> I've seen (which is rather sad, especially given that CopyCat only touches
> on using hand-made, shallow multilevel representations, without inventing
> them and without learning). Some of the things happening in my model of
> high-level representation (as descriptions of what's happening, not as
> elements of the model itself) can be naturally described using the lexicon
> from CopyCat (slippages, temperature, salience, structural analogy), even
> though the algorithm at the low level is different.

I agree with your sentiments about CopyCat (and its cousins). It is not so much that it delivers specific performance by itself as that it offers a different way to think about how to do such things: an inspiration for a whole class of models. It is certainly part of the inspiration for my system.

Sounded to me like Ben's initial disparaging remarks about CopyCat were mostly the result of a BHDE (a Bad Hair Day Event). It *really* is not that useless.

Richard Loosemore

---
agi Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com
[agi] CopyCat
[repost from the opencog list; I should've posted it on AGI in the first place, instead of opencog]

On Wed, Dec 17, 2008 at 7:01 AM, Ben Goertzel b...@goertzel.org wrote:
> First thing: CopyCat doesn't work. Not just in the sense that it's not
> AGI ... in the sense that it can hardly solve any of the simple, narrow
> analogy problems it was designed to solve. It's basically a
> non-operational thought experiment. Run the code yourself and see; there
> are some online versions. It occasionally solves some simple problem, but
> most of Hofstadter's simple analogy problems it will just never solve...
> And there is no coherent theory backing up why a CopyCat-like system
> would ever work.

Do you mean that the examples Hofstadter/Mitchell used in their papers on CopyCat did not in fact work on their codebase? I remember downloading a second CopyCat implementation (in Java, IIRC), and it seemed to be working. Besides, they don't claim anything grandiose for this model, and it seems like it shouldn't be too hard to make it work. Another story is that it's not obvious how to extend this style of algorithm to anything interesting: too much gets projected into manually specified parameters and a narrow domain.

--
Vladimir Nesov robot...@gmail.com
http://causalityrelay.wordpress.com/
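[Editorial illustration of the domain under discussion. The letter-string analogy problems mentioned above ("abc changes to abd; what does ijk change to?") are from Hofstadter's Metamagical Themas; the code below is a hedged toy sketch of a *naive* literal matcher, not CopyCat's actual algorithm, and the function names are invented for illustration.]

```python
# Toy sketch of the letter-string analogy domain (NOT CopyCat's algorithm):
# read "abc -> abd" as "replace the last letter with its successor" and
# apply that literal rule to a new probe string.

def successor(ch: str) -> str:
    """Alphabetic successor, with no wraparound past 'z'."""
    return chr(ord(ch) + 1) if ch != "z" else "z"

def naive_analogy(source: str, target: str, probe: str) -> str:
    """If target differs from source only by the last letter becoming its
    successor, apply the same literal rule to the probe."""
    if target == source[:-1] + successor(source[-1]):
        return probe[:-1] + successor(probe[-1])
    raise ValueError("rule not recognized by this naive matcher")

print(naive_analogy("abc", "abd", "ijk"))  # ijl
# The rigid rule gives the dull answer "xyz" for the probe "xyz" (successor
# of 'z' is undefined here), whereas Hofstadter's celebrated answer "wyz"
# requires a conceptual slippage -- exactly what this sketch cannot do.
print(naive_analogy("abc", "abd", "xyz"))  # xyz
```

The point of the sketch is negative: a literal rule matcher handles the easy case and fails exactly where the interesting analogical pressure (slippage) begins.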
Re: [agi] CopyCat
I happened to use CopyCat in a university AI class I taught years ago, so I got some experience with it. It was **great** as a teaching tool, but I wouldn't say it shows anything about what can or can't work for AGI, really...

ben

On Wed, Dec 17, 2008 at 10:02 AM, Ben Goertzel b...@goertzel.org wrote:
>> Do you mean that the examples Hofstadter/Mitchell used in their papers on
>> CopyCat did not in fact work on their codebase? I remember downloading a
>> second CopyCat implementation (in Java, IIRC), and it seemed to be
>> working. Besides, they don't claim anything grandiose for this model, and
>> it seems like it shouldn't be too hard to make it work.
>
> Those examples work, but if you take random examples of letter-string
> analogy problems from Metamagical Themas, you can't get CopyCat to handle
> them...
>
> ben

--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

"I intend to live forever, or die trying." -- Groucho Marx
Re: [agi] CopyCat
On Wed, Dec 17, 2008 at 6:03 PM, Ben Goertzel b...@goertzel.org wrote:
> I happened to use CopyCat in a university AI class I taught years ago, so
> I got some experience with it. It was **great** as a teaching tool, but I
> wouldn't say it shows anything about what can or can't work for AGI,
> really...

CopyCat gives a general feel for self-assembling representations and for operations performed at a reflexive level. It captures intuitions about high-level perception better than any other self-contained description I've seen (which is rather sad, especially given that CopyCat only touches on using hand-made, shallow multilevel representations, without inventing them and without learning). Some of the things happening in my model of high-level representation (as descriptions of what's happening, not as elements of the model itself) can be naturally described using the lexicon from CopyCat (slippages, temperature, salience, structural analogy), even though the algorithm at the low level is different.

--
Vladimir Nesov robot...@gmail.com
http://causalityrelay.wordpress.com/
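[Editorial illustration. The CopyCat vocabulary Nesov invokes — temperature and salience — can be sketched in a few lines: temperature modulates how strongly salience scores bias a stochastic choice among competing structures. This is a hedged toy sketch under assumed names (`temperature_choice`, the example candidate labels), not CopyCat's actual codelet machinery.]

```python
import math
import random

# Toy sketch of temperature-modulated choice (not CopyCat's real code):
# each candidate structure has a salience; high temperature flattens the
# distribution (more exploration), low temperature sharpens it (commit to
# the strongest structure).

def temperature_choice(saliences, temperature, rng):
    """Sample a candidate with probability proportional to
    exp(salience / temperature) -- a softmax over saliences."""
    weights = {k: math.exp(v / temperature) for k, v in saliences.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for k, w in weights.items():
        r -= w
        if r <= 0:
            return k
    return k  # guard against floating-point residue

rng = random.Random(0)
saliences = {"successor-group": 3.0, "literal-copy": 1.0}
hot = [temperature_choice(saliences, 10.0, rng) for _ in range(1000)]
cold = [temperature_choice(saliences, 0.2, rng) for _ in range(1000)]
# Hot runs pick the two structures almost evenly; cold runs almost always
# commit to the high-salience "successor-group".
print(hot.count("successor-group"), cold.count("successor-group"))
```

In CopyCat proper, temperature is itself a function of how coherent the current workspace is, closing a feedback loop; the sketch only shows the sampling half of that loop.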
Re: [agi] CopyCat
> Do you mean that the examples Hofstadter/Mitchell used in their papers on
> CopyCat did not in fact work on their codebase? I remember downloading a
> second CopyCat implementation (in Java, IIRC), and it seemed to be
> working. Besides, they don't claim anything grandiose for this model, and
> it seems like it shouldn't be too hard to make it work.

Those examples work, but if you take random examples of letter-string analogy problems from Metamagical Themas, you can't get CopyCat to handle them...

ben