My comment about an action was not intended to be interpreted as something that might occur in a text-only AGI program like the one I hope to write soon. What I was saying is that there are numerous possible variations to a simple action like one that a robot might need to learn.

By the way, Mike, as a non-programmer you might not be aware that a robot would not actually need an infinite diversity of forms to be able to explore new ways to pick up a cup (for example, while the cup is full, or while the robot is standing upside down, and so on). The number of variations would be too great for the robot to discretely note in a database, so it would be given some leeway, or more accurately a degree of variation in its movements, which can be expressed numerically. These variations could provide the robot with a potential range of movements covering uncountable numbers of variations.

I was using the action analogy in the hope of getting the programmer to recognize that something very similar is true for language. I was not making a direct correspondence between the action 'pick up a cup' and the linguistic reference "pick up a cup"; rather, I was trying to say that there are as many different ways to say something simple as there are variations to a simple action. (My conclusion that it takes a great many things to know one simple thing could even be related to this rather simple analogy.) Notably, the numerical value of variation that might be assigned could be expressed as a weight, which is a range of variation. So when I argue against weighted reasoning I am not arguing that weighted reasoning should never be used; I am only saying that it is not in itself the solution to what ails AGI.

Now suppose that I was trying to use words to tell someone how to walk from one place to another place that I knew well, where the landscape was not simple and I had no visual input to see where he was.
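To make the range-of-variation idea concrete, here is a minimal sketch of how a single named action could carry numeric ranges per parameter instead of enumerating discrete variants in a database. This is my own illustration, not anyone's actual design; the class, parameter names, and numbers are all hypothetical.

```python
import random

class ActionModel:
    """A named action whose parameters are ranges, not fixed values.

    Storing a (low, high) range per parameter gives the action an
    effectively unbounded space of concrete variations without
    recording each one discretely.
    """

    def __init__(self, name, parameter_ranges):
        self.name = name
        self.parameter_ranges = parameter_ranges  # param -> (low, high)

    def sample_variation(self, rng):
        # Draw one concrete execution of the action from the ranges.
        return {p: rng.uniform(lo, hi)
                for p, (lo, hi) in self.parameter_ranges.items()}

    def covers(self, observed):
        # An observed movement matches this action if every parameter
        # falls inside its allowed range of variation.
        return all(lo <= observed[p] <= hi
                   for p, (lo, hi) in self.parameter_ranges.items())

# Hypothetical parameters for 'pick up a cup'
pick_up_cup = ActionModel("pick up a cup", {
    "grip_force_newtons": (0.5, 4.0),
    "approach_angle_degrees": (-30.0, 30.0),
    "lift_speed_m_per_s": (0.05, 0.3),
})

rng = random.Random(0)
variation = pick_up_cup.sample_variation(rng)
print(pick_up_cup.covers(variation))  # a sampled variation is always covered
```

The point of the sketch is only that a "weight" here is a range, not a score: three numeric ranges already cover an uncountable continuum of concrete movements.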
In this case the problem would not be from a lack of words, but from a simple failure of memory - and of communication. In other words, if I could not figure out where he was based on his description, then I would not be able to give him good directions. This has happened to many of us because we live in an age of cell phones. It is usually resolved when we both can see a uniquely numbered marker somewhere. Now suppose that he was on a city block. I could tell him to look for a numbered building that I knew well, and even if I could not see the building, the problem would start to resolve itself there. Of course in real life you have to keep asking the pedestrian to make sure that he is on the street he thinks he is on until he actually confirms it with a street sign of some kind. If I were drawing pictures, would they be any better than words? No, because the problem is my memory, his ability to accurately identify landmarks that I happen to know, and problems of communication, which you will find in any medium of communication.

Jim Bromer

Date: Fri, 2 Aug 2013 14:07:51 +0100 Subject: Re: [agi] A Very Simple AGI Project From: [email protected] To: [email protected]
JB: So, for an example, an action may be distinctly modeled but we still might suggest that there are variations to the action model that we want to assign to the same action name. 'Picking up a cup' is not a single action consisting of only one sequence of actions but it has to consist of numerous variations.

PICK UP THE CUP presents more or less the same basic problem as GO TO THE KITCHEN. The cup can be in an infinite diversity of positions and situations relative to you, the agent, in a given room or field - and require an infinite diversity of hand/arm routes to pick it up. Similarly, the route to the kitchen can take/require an infinite diversity of forms and require an infinite diversity of forms of travel. Pray tell how you are going to model the infinite diversity of forms required by PICK UP THE CUP with your verbal database. What are the core elements/words?

On 2 August 2013 13:33, Jim Bromer <[email protected]> wrote:

My simple AGI project is based on a somewhat complicated database management program that I started to write 10 years ago. I found a stripped down version of it and I have been trying to get it going, but it has been very difficult since it was so heavily stripped and I can only work on it a few hours a day. When I came across an annoying problem with Microsoft's compiler programming model, I started looking for my old db management program and I found a more complete version. So I should be able to use the more complete version to get working on my AGI theories very soon. My current theory is that AGI may be simpler than we think. My theories about conceptual structure and reason-based reasoning are somewhat vague and perhaps simplistic. However, I don't recall anyone talking about this with me, and that suggests that the theory is slightly different than you'll find in the major paradigms that are going around these days.
So even though (I believe) the conceptual structure theory has to be fundamental to any AGI program, I am now thinking that because I am looking at it as a distinct idea I might be on to something. If I am right, I should be able to get some crude results in the next few months. If I actually got something working I would want to talk about it, but I think I would be less inclined to try to discuss it with people who are not actually interested (since I would have something more interesting to do with my spare time). And of course many of the armchair critics who are already convinced that I am clueless will not be interested even if I did get some interesting results.

The conceptual structure theory is very simple. When we refer to an idea (directly or implicitly) we are usually referring to complex combinations of methods. These subcomponents of ideas, like a particular application of a general method, can themselves be studied more carefully, so we are left with a relativistic model of a concept or an idea. There are no absolute fundamentals of a concept. So, for an example, an action may be distinctly modeled but we still might suggest that there are variations to the action model that we want to assign to the same action name. 'Picking up a cup' is not a single action consisting of only one sequence of actions but it has to consist of numerous variations. And we can continue to analyze any particular action in novel ways. For instance we might want to define the action with greater reference to the background of the action. I say the same thing goes for any simple concept. And we can only think about a concept in terms of other concepts. So there is no such thing as an elemental concept, although we can store distinct models as prototypes, and there is no way to represent a concept except by injecting other concepts into the representation. From this reasoning I have concluded that it takes a great deal of knowledge to know one simple thing.
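The relativistic model described above - concepts representable only in terms of other concepts, with stored prototypes but no elemental ground level - could be sketched roughly like this. This is my own hypothetical illustration, not Jim's design; all class and field names are assumptions.

```python
class Concept:
    """A concept defined only by links to other concepts.

    There are no elemental concepts: asking what a concept 'is' only
    unfolds into more concepts. Distinct prototypes can still be
    stored as convenient reference points, not as definitions.
    """

    def __init__(self, name):
        self.name = name
        self.related = {}     # relation label -> list of other Concepts
        self.prototypes = []  # stored example models, not definitions

    def relate(self, label, other):
        self.related.setdefault(label, []).append(other)

    def unfold(self, depth=1):
        # Expanding a concept only ever yields other concepts.
        if depth == 0:
            return {self.name}
        found = {self.name}
        for others in self.related.values():
            for other in others:
                found |= other.unfold(depth - 1)
        return found

cup = Concept("cup")
grasp = Concept("grasping")
container = Concept("container")
cup.relate("is-a", container)
cup.relate("affords", grasp)
grasp.relate("applies-to", cup)  # circular by design: no ground level

print(sorted(cup.unfold(depth=2)))  # → ['container', 'cup', 'grasping']
```

Notice that the graph is deliberately circular: 'cup' affords 'grasping', which applies to 'cup'. There is no depth at which unfolding bottoms out in something non-conceptual, which is the sense in which it takes a great deal of knowledge to know one simple thing.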
So even if most of what you know is wrong, there are still some core ideas that you can take out of that knowledge. (However, that core knowledge is not elemental even if it is foundational to your thinking.)

Jim Bromer

AGI | Archives | Modify Your Subscription
