On Tue, Aug 28, 2012 at 10:20 AM, Mike Tintner <[email protected]> wrote:
> Ben,
>
> Thanks for reply.
>
> What then is the idea[s] for grounding this system?
The two papers

  "Perception Processing for General Intelligence, Part I: Representationally Transparent Deep Learning"
  "Perception Processing for General Intelligence, Part II: Bridging the Symbolic/Subsymbolic Gap"

linked here: http://wp.goertzel.org/?p=404

explain how we are intending to link DeSTIN in with OpenCog, to serve as OpenCog's visual cortex...

> When the OpenCog brain says "pick up the box" to the robot, what will "box"
> and "pick up" connect to in the robot system?

"Box" is a fuzzy set of attractor patterns in the DeSTIN hierarchy (I know you often interpret the word "pattern" in an unusual way, but I'll continue to use it in the usual way anyway ;p)

"Pick up" is also a fuzzy set of attractor patterns in DeSTIN (because actions can also be perceived), but also a fuzzy set of paths through configuration-space, represented as such in the action hierarchy.

"Box" and "pick up" are also Atoms in OpenCog's Atomspace, with multiple semantic links indicating/constituting their semantic content...

The DeSTIN attractors may be represented as Atoms in OpenCog, and the (learned) links between the DeSTIN centroids involved in the attractors and the corresponding Atoms in OpenCog constitute the concrete means of the "grounding".

We have not built this yet, though; at the moment we're just working with virtual-world agents, and doing the first steps of tweaking DeSTIN to make it OpenCog-compatible.

> And what will the robot system have for "pick up"? Again some kind of
> variable movement routine?

A fuzzy set of paths through C-space.

> Is this sort of thing BTW discussed much here/elsewhere? I can't recall many
> if any discussions about the exact principles for grounding - correct me.
>
> We certainly should be discussing ideas for all this a lot.

It has been much discussed privately in the OpenCog HK team, and privately between me and David Hanson and Itamar, etc.
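To make the grounding idea concrete, here is a minimal toy sketch in Python of what "a symbol grounded in a fuzzy set of attractor patterns" could look like. All the names here (Atom, the grounding dictionary, the centroid layout) are illustrative assumptions for this email, not the actual OpenCog Atomspace or DeSTIN APIs: a symbolic Atom carries learned weights over perceptual centroids, and the degree to which an observation instantiates the symbol is read off via the nearest centroid.

```python
# Hypothetical sketch of symbol grounding via fuzzy sets of DeSTIN-style
# centroids. Names and structures are illustrative, NOT real OpenCog/DeSTIN code.
import math


class Atom:
    """A symbolic node, standing in for an OpenCog Atomspace concept."""

    def __init__(self, name):
        self.name = name
        # Learned grounding links: centroid_id -> fuzzy membership in [0, 1]
        self.grounding = {}


def nearest_centroid(obs, centroids):
    """Return the id of the centroid closest (Euclidean) to the observation."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda cid: dist(obs, centroids[cid]))


def degree_of_match(atom, obs, centroids):
    """Fuzzy degree to which an observation instantiates the symbol:
    the grounding weight of the winning centroid (0.0 if ungrounded)."""
    return atom.grounding.get(nearest_centroid(obs, centroids), 0.0)


# Toy perceptual layer: two learned centroids in a 2-D feature space.
centroids = {"c_box": [1.0, 1.0], "c_ball": [-1.0, -1.0]}

# The symbol "box", grounded mostly in the box-like centroid.
box = Atom("box")
box.grounding = {"c_box": 0.9, "c_ball": 0.1}

print(degree_of_match(box, [0.9, 1.1], centroids))  # observation near c_box
```

In the real system the centroids would be DeSTIN's learned states, the weights would themselves be learned, and the grounding would be stored as links in the Atomspace rather than a Python dict; the sketch only shows the shape of the mapping being described above.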
It seems that discussions of deep issues on this list rarely go very far, because the diversity of perspectives is so great that few of the vocal list members agree even on the right way to think about the deep AGI issues at a high level... Whereas in discussions within a team focused on a particular approach, one can plunge much deeper into specific issues like this...

-- Ben G

-------------------------------------------
AGI Archives: https://www.listbox.com/member/archive/303/=now
