And the robot needs to identify a cup to pick up. Generalizing the command is very difficult, and the different parts of knowledge become entwined with the simple direction. This is another example of the fact that it takes additional concepts to understand (or act on) a simple concept, and an example of the confounding complexity and relativism of insight.

Jim Bromer

From: [email protected]
To: [email protected]
Subject: RE: [agi] A Very Simple AGI Project
Date: Fri, 2 Aug 2013 11:20:18 -0700

Add to that the fact that PICK UP THE CUP needs preconditions and terminating conditions. In order to PICK UP THE CUP we must know that we are not holding THE CUP. We must also recognize when we are HOLDING THE CUP, or at least when PICK UP THE CUP becomes true in the forward, hypothetical, or actual situation. When you GO TO THE KITCHEN you must have a terminating condition of BEING AT THE KITCHEN. Terminating-condition recognition may get you out of having to store all those historical episodes or possible solutions.

~PM
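Concretely, the kind of precondition/termination bookkeeping PM describes might look something like the minimal sketch below: a toy Python world model where an action may start only when its precondition holds and ends when its terminating condition is recognized. WorldState, Action, and every other name here are illustrative inventions, not anything from the thread.

from dataclasses import dataclass
from typing import Callable

@dataclass
class WorldState:
    holding_cup: bool = False
    location: str = "hallway"

@dataclass
class Action:
    name: str
    precondition: Callable[[WorldState], bool]  # may the action start?
    terminated: Callable[[WorldState], bool]    # has it succeeded?
    step: Callable[[WorldState], WorldState]    # one increment of effort

    def run(self, state: WorldState) -> WorldState:
        if not self.precondition(state):
            raise ValueError(self.name + ": precondition not met")
        while not self.terminated(state):   # recognize the terminating condition
            state = self.step(state)        # instead of replaying stored episodes
        return state

# PICK UP THE CUP: requires NOT HOLDING THE CUP, ends on HOLDING THE CUP.
pick_up_the_cup = Action(
    name="PICK UP THE CUP",
    precondition=lambda s: not s.holding_cup,
    terminated=lambda s: s.holding_cup,
    step=lambda s: WorldState(holding_cup=True, location=s.location),
)

# GO TO THE KITCHEN: terminating condition is BEING AT THE KITCHEN.
go_to_the_kitchen = Action(
    name="GO TO THE KITCHEN",
    precondition=lambda s: s.location != "kitchen",
    terminated=lambda s: s.location == "kitchen",
    step=lambda s: WorldState(holding_cup=s.holding_cup, location="kitchen"),
)

state = go_to_the_kitchen.run(pick_up_the_cup.run(WorldState()))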
Date: Fri, 2 Aug 2013 14:07:51 +0100
Subject: Re: [agi] A Very Simple AGI Project
From: [email protected]
To: [email protected]

JB: So, for an example, an action may be distinctly modeled but we still might suggest that there are variations to the action model that we want to assign to the same action name. 'Picking up a cup' is not a single action consisting of only one sequence of actions but it has to consist of numerous variations.

PICK UP THE CUP presents more or less the same basic problem as GO TO THE KITCHEN. The cup can be in an infinite diversity of positions and situations relative to you, the agent, in a given room or field, and can require an infinite diversity of hand/arm routes to pick it up. Similarly, the route to the kitchen can take an infinite diversity of forms and require an infinite diversity of forms of travel. Pray tell how you are going to model the infinite diversity of forms required by PICK UP THE CUP with your verbal database. What are the core elements/words?
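One way to make the "infinite diversity" objection concrete: a verbal label can at best name a family of motions, with the variation pushed into continuous parameters rather than stored as an enumerated list of sequences. A sketch under that assumption, in a flat 2-D world, with invented names (plan_reach is not anyone's proposal from the thread):

import math
import random

def plan_reach(cup_x: float, cup_y: float, approach_deg: float):
    """A few hand waypoints toward the cup from one of infinitely many approach angles."""
    rad = math.radians(approach_deg)
    # start 0.3 units from the cup along the chosen approach direction
    sx, sy = cup_x - 0.3 * math.cos(rad), cup_y - 0.3 * math.sin(rad)
    return [(sx, sy), ((sx + cup_x) / 2, (sy + cup_y) / 2), (cup_x, cup_y)]

# Each draw of the parameters is a different concrete "picking up";
# the verbal database stores the one name, not the continuum of routes.
for _ in range(3):
    print(plan_reach(random.uniform(0, 1), random.uniform(0, 1), random.uniform(0, 360)))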
On 2 August 2013 13:33, Jim Bromer <[email protected]> wrote:

My simple AGI project is based on a somewhat complicated database management program that I started to write 10 years ago. I found a stripped-down version of it and have been trying to get it going, but it has been very difficult since it was so heavily stripped and I can only work on it a few hours a day. When I came across an annoying problem with Microsoft's compiler programming model, I started looking for my old db management program and found a more complete version. So I should be able to use the more complete version to get working on my AGI theories very soon.

My current theory is that AGI may be simpler than we think. My theories about conceptual structure and reason-based reasoning are somewhat vague and perhaps simplistic. However, I don't recall anyone talking about this with me, and that suggests the theory is slightly different from what you'll find in the major paradigms going around these days. So even though (I believe) the conceptual structure theory has to be fundamental to any AGI program, I am now thinking that because I am looking at it as a distinct idea I might be on to something. If I am right, I should be able to get some crude results in the next few months. If I actually got something working I would want to talk about it, but I think I would be less inclined to try to discuss it with people who are not actually interested (since I would have something more interesting to do with my spare time). And of course many of the armchair critics who are already convinced that I am clueless will not be interested even if I did get some interesting results.

The conceptual structure theory is very simple. When we refer to an idea (directly or implicitly) we are usually referring to complex combinations of methods. These subcomponents of ideas, like a particular application of a general method, can themselves be studied more carefully, so we are left with a relativistic model of a concept or an idea. There are no absolute fundamentals of a concept. So, for an example, an action may be distinctly modeled but we still might suggest that there are variations to the action model that we want to assign to the same action name. 'Picking up a cup' is not a single action consisting of only one sequence of actions but has to consist of numerous variations. And we can continue to analyze any particular action in novel ways. For instance, we might want to define the action with greater reference to the background of the action. I say the same goes for any simple concept. And we can only think about a concept in terms of other concepts. So there is no such thing as an elemental concept, although we can store distinct models as prototypes, and there is no way to represent a concept except by injecting other concepts into the representation.

From this reasoning I have concluded that it takes a great deal of knowledge to know one simple thing. So even if most of what you know is wrong, there are still some core ideas that you can take out of that knowledge. (However, that core knowledge is not elemental even if it is foundational to your thinking.)

Jim Bromer
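The conceptual-structure claim can also be read as a data-structure claim: a concept is nothing but a named bundle of references to other concepts, plus distinct stored prototypes. A minimal sketch along those lines, with all names illustrative; the depth cut-off stands in for the fact that on this view the regress has no elemental bottom, only places where we stop analyzing:

class Concept:
    def __init__(self, name, parts=(), prototypes=()):
        self.name = name
        self.parts = list(parts)            # other concepts it is built from
        self.prototypes = list(prototypes)  # distinct stored models, not elements

    def unpack(self, depth=2):
        # Unpacking a concept only ever yields more concepts.
        if depth == 0 or not self.parts:
            return self.name
        return {self.name: [p.unpack(depth - 1) for p in self.parts]}

cup = Concept("cup", parts=[Concept("container"), Concept("handle")])
pick_up_the_cup = Concept(
    "pick up the cup",
    parts=[Concept("reach"), Concept("grasp"), cup],
    prototypes=["side grasp", "top grasp"],  # variations under one action name
)
print(pick_up_the_cup.unpack())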
