On Wed, Dec 17, 2008 at 6:03 PM, Ben Goertzel <b...@goertzel.org> wrote:
>
> I happened to use CopyCat in a university AI class I taught years ago, so I
> got some experience with it
>
> It was **great** as a teaching tool, but I wouldn't say it shows anything
> about what can or can't work for AGI, really...
>

CopyCat gives a general feel for "self-assembling" representations and
for operations performed at a reflexive level.  It captures intuitions
about high-level perception better than any other self-contained
description I've seen (which is rather sad, especially given that
CopyCat only touches on using hand-made shallow multilevel
representations, without inventing them, without learning). Some of
the things happening in my model of high-level representation (as
descriptions of what is happening, not as elements of the model
itself) can be naturally described using CopyCat's lexicon (slippages,
temperature, salience, structural analogy), even though the low-level
algorithm is different.
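For readers who don't know the CopyCat terms above: temperature is the one easiest to sketch in code. The idea is that when the system's overall structure is weak (high temperature), choices among competing structures are close to random, and as coherent structure builds up (temperature drops) the system commits to the strongest candidates. The snippet below is my own illustrative sketch of that idea, not CopyCat's actual formula; the function name and the temperature-to-exponent mapping are made up for illustration.

```python
import random

def temperature_weighted_choice(candidates, temperature, rng=random):
    """Pick a candidate name, with randomness controlled by temperature.

    candidates:  list of (name, strength) pairs; strengths are positive.
    temperature: 0..100, following Copycat's convention (100 = nearly
                 random choice, 0 = strongly favor the best candidate).
    This mapping from temperature to a weight exponent is a hypothetical
    stand-in for the real formula, chosen only to show the qualitative
    behavior.
    """
    # Cold system -> large exponent (greedy); hot system -> small
    # exponent (weights flatten toward uniform).
    exponent = (100.0 - temperature) / 30.0 + 0.5
    weights = [strength ** exponent for _, strength in candidates]
    total = sum(weights)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for (name, _), w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return name
    return candidates[-1][0]
```

At temperature 0 the stronger structure wins almost every time; at temperature 100 the weaker one still gets picked a substantial fraction of the time, which is what lets slippages happen early in a run.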

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
