Well... some of the rhetoric in the review paper is a bit much, but the long technical report delivers on the details.... I don't think autoconceptors solve the symbol grounding problem in a philosophical sense, but they do provide a new species of NN whose trajectory space can be naturally divided into discrete symbolic tokens, which seems like it could be pretty useful for building either NN systems capable of symbolic reasoning, or hybrid systems gluing NNs together with explicit symbolic reasoning subsystems
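For anyone who wants to play with the construction itself: per the arXiv paper, a matrix conceptor is computed from recorded network states X by forming the state correlation matrix R = X X^T / L and taking C = R (R + aperture^-2 I)^-1, so C literally encodes the "shape" of the state cloud. A minimal NumPy sketch -- the network size, sample count, aperture value, and random stand-in states here are just illustrative assumptions, not numbers from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: a real conceptor network would record actual
# reservoir states here; we just use random vectors of the right shape.
N, L = 3, 500                     # state dimension, number of samples
X = rng.standard_normal((N, L))   # columns = recorded network states

R = X @ X.T / L                   # state correlation matrix
aperture = 2.0                    # illustrative aperture parameter
C = R @ np.linalg.inv(R + aperture ** -2 * np.eye(N))

# C is symmetric with singular values in [0, 1): it softly selects the
# directions in state space where the driven state cloud has variance.
svals = np.linalg.svd(C, compute_uv=False)
print(np.all(svals >= 0) and np.all(svals < 1))   # True
```

The singular values are what make the "state cloud shape" reading concrete: a direction with large variance in R gets a conceptor singular value near 1, a direction the trajectory never visits gets one near 0.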
-- Ben G

On Wed, Jun 25, 2014 at 11:43 AM, Mike Archbold <[email protected]> wrote:
> I read the new paper (after reading your paper as well, which I did
> look at a few years ago) up to page 24. It does look pretty
> interesting. In general I like it.
>
> He does make some lofty-sounding claims on page 23, saying the
> framework of "institutions" (which I have yet to read about; I think
> it is presented later -- it is a 100+ page paper) PROVIDES A UNIFIED
> VIEW ON THE MULTITUDE OF EXISTING "LOGICS". (Quotes are his.)
> Comments, anyone?
>
> Also he says:
>
> "In mathematical logics the semantics (“meaning”) of a symbol or
> operator is formalized as its extension."
>
> and then
>
> "Both in mathematical logic and cognitive science, extensions need
> not be confined to physical objects; the modeler may also define
> extensions in terms of mathematical structures, sensory perceptions,
> hypothetical worlds, ideas or facts."
>
> "But at any rate, there is an ontological difference between the two
> ends of the semantic relationship. This ontological gap dissolves in
> the case of conceptors. The natural account of the “meaning” of a
> matrix conceptor C is the shape of the neural state cloud it is
> derived from."
>
> SO -- he seems to be saying the difference between a symbol and its
> meaning disappears under his scheme. I'm not clear how you are left
> with much other than the infamous Chinese Room if you have to resort
> to the "neural state cloud" for meaning. Look, I mean, it sounds like
> he's saying he's solved grounding/meaning and logic all in one long
> paper. I don't know! Need to read more....
>
> Mike A
>
> On 6/16/14, Ben Goertzel via AGI <[email protected]> wrote:
>> On Mon, Jun 16, 2014 at 7:03 PM, Ben Goertzel <[email protected]> wrote:
>>> Hi all,
>>>
>>> This new variant of recurrent NN (conceptor networks) looks interesting,
>>>
>>> http://arxiv.org/abs/1403.3369
>>>
>>> In fact it looks like a better-realized variant of the idea of "glocal
>>> neural nets" that my colleagues and I experimented with a few years
>>> ago,
>>>
>>> http://www.sciencedirect.com/science/article/pii/S0925231210002808
>>
>> For those without access to that site, the paper is also here
>>
>> http://goertzel.org/glocal_memory_paper.pdf
>>
>> ;)
>> ben
>>
>> -------------------------------------------
>> AGI
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed: https://www.listbox.com/member/archive/rss/303/11943661-d9279dae
>> Modify Your Subscription: https://www.listbox.com/member/?&
>> Powered by Listbox: http://www.listbox.com

--
Ben Goertzel, PhD
http://goertzel.org

"In an insane world, the sane man must appear to be insane". -- Capt. James T. Kirk

"Emancipate yourself from mental slavery / None but ourselves can free our minds" -- Robert Nesta Marley
