Piaget Modeler said: Take for example your assertion that a goal is a generalization. I have a different structure in mind for a goal, which looks like this:

(prototype Intention
  :Goal
  :Urgency
  :Priority
  :Depth
  ...
)

In my mental structure a goal is simply a slot of an intention. Anything can fill that slot. Anything.
------------------------------------------
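(For concreteness, PM's prototype could be transcribed into a runnable slot structure roughly like the sketch below. This is only my reading of it: the numeric slot types and zero defaults are assumptions, since the prototype leaves them open; the one thing it does specify is that the Goal slot is unconstrained.)

```python
from dataclasses import dataclass
from typing import Any

# A sketch of PM's (prototype Intention :Goal :Urgency :Priority :Depth ...)
# frame.  The numeric types and defaults here are assumptions; PM's sketch
# leaves them open.
@dataclass
class Intention:
    goal: Any                # "Anything can fill that slot. Anything."
    urgency: float = 0.0
    priority: float = 0.0
    depth: int = 0

plan = Intention(goal="open the door", urgency=0.5)
nested = Intention(goal=plan, depth=1)   # the slot can even hold another intention
```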
I am not sure what I should say. 'Anything can fill that slot' sounds like the definition of an ultimate generalization to me. Every feature you defined for the goal ('urgency', 'priority', 'depth') is a conceptual generalization. Presumably you have decided to specify the characteristic types you defined and can ignore that they are generalizations, but they are still narrow computational generalizations (numerical value types), and they were derived via generalizations of human thought.

My best guess is that you are defining these types as numerical values. But where do the values come from? Priority presumably starts out at 0, but what causes it to change? (This is a rhetorical question, but I am trying to explain something.) The level of the sub-goals? The order of the steps toward the goal? That is fine, but is it really enough to exhibit intelligence, or to define how the priority of goals might be managed in a higher intelligence? Whatever is going to change the values is either going to come from the IO referential world, from the user, or it is going to be defined as some sort of narrow abstract computational process.

What I am saying is that the traditional methods aren't enough. You have to go beyond the narrow AI methods in some way. The program has to be able to prioritize its goals (that is, some of its goals) according to the consequences of its actions and the apparent 'behavior' of the objects it 'perceives' in the IO data environment. The problem with treating these 'goals' as foundational computational objects is that they can't be easily applied to the referential world except in narrow ways. They will work some of the time, but not often enough to gain real traction. The recognition that certain sub-goals had to be ordered to achieve a goal was an important step in early AI; we have to go beyond that now.

One other thing.
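(To make the objection concrete: the sketch below is exactly the kind of narrow abstract computational process described above, a priority value nudged by feedback from the IO environment. The update rule and its rate are illustrative assumptions, not anything proposed in this thread; the point being argued is that a rule like this, on its own, is not enough.)

```python
# A deliberately narrow sketch of priority adjustment.  The update rule
# and the rate 0.1 are illustrative assumptions, not a proposal from the
# thread.
def update_priority(priority: float, consequence: float, rate: float = 0.1) -> float:
    """Nudge a goal's priority toward the observed consequence of pursuing it."""
    return priority + rate * (consequence - priority)

# Feedback values standing in for the 'IO referential world'.
p = 0.0
for outcome in [1.0, 1.0, 0.5]:
    p = update_priority(p, outcome)
# p drifts away from its initial 0 only because of these observations;
# nothing here decides which consequences matter, or why.
```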
When it can be seen that a program has achieved something like AGI, one might say that since it used computational processes, the program probably could have been written using traditional computational structures. Just as you think I see goals as a simple generalization (as in the taxonomies of old AI), I think you see goal prioritization as a simple numerical evaluation. That is just not going to be good enough. I am not trying to be dismissive of what you are talking about; I wanted to be helpful.

Jim Bromer

On Thu, Oct 9, 2014 at 9:37 AM, Piaget Modeler via AGI <[email protected]> wrote:
> Part of the problem too is that language is imprecise; it might be better
> to communicate in logic. And also that words are placeholders for
> a constellation of meanings. This confounds matters greatly. Lastly,
> concepts get ossified in people's minds, and everyone's mind has a
> different collection of meanings.
>
> Take for example your assertion that a goal is a generalization.
>
> I have a different structure in mind for a goal, which looks like this:
>
> (prototype Intention
>   :Goal
>   :Urgency
>   :Priority
>   :Depth
>   ...
> )
>
> In my mental structure a goal is simply a slot of an intention. Anything
> can fill that slot. Anything. I believe you have a different structure in
> mind for goals. Please describe it if you can.
>
> One thing I do agree on is that we have to make a mould for software,
> and pour our ideas into this mould. A computer program which specifies
> algorithms and data structures is a mental mould, into which we pour
> the data, the cognitive stuff.
>
> ~PM.
>
> > Date: Thu, 9 Oct 2014 04:44:45 -0400
> > Subject: Re: [agi] Are all goals created equal?
> > From: [email protected]
> > To: [email protected]
> > CC: [email protected]
> >
> > We have to use concepts to think about other concepts.
> > The concepts that we use while thinking about a subject group of
> > concepts will illuminate the subject and shape the subject concepts
> > that we are thinking about. Someone once actually argued against this
> > idea, but I had no idea what he was thinking about since he did not
> > give any illustration of his ideas or provide any conversational
> > reasons supporting his views.
> >
> > Another dismissal of this point of view was that AI had always been
> > aiming at solving these kinds of problems. In other words, there is no
> > reason to think about them because it would be the product of
> > intelligence anyway. (!?!) (At least that was my best guess about
> > their point of view, since they were unwilling to discuss the problem
> > or even acknowledge it in an insightful way.)
> >
> > When I can't get people to discuss this I start to wonder if I have a
> > piece of knowledge that they don't. Because if most other people are
> > missing something this simple and this basic, then I might have a
> > strategic advantage that I don't realize I possess.
> >
> > And all these things seem to be composed of generalizations. A goal is
> > a generalization of different kinds of things. Measuring a goal using
> > something about the sub-goals is one strategy that may work in some
> > cases, but again it is a generalization that does not define how it
> > might be used in anything that has some practical use. For example,
> > are the sub-goals objectives that seem as if they might be correlated
> > to the goal, or are they definitive steps to the goal? Are the
> > sub-goals methods that can be used to examine progress toward the
> > goal, or are they objectives or steps toward the completion of the
> > goal? We can think about the strategy only by thinking about relatively
> > more specifics of the generalization, by examining the way they
> > interrelate and can be varied when used by other methods, and by
> > looking at various alternatives.
> >
> > Jim Bromer
> >
> > On Tue, Oct 7, 2014 at 6:50 PM, Mike Archbold <[email protected]> wrote:
> > > Jim, I think about the issue you emphasize of no 'independent
> > > concepts' frequently. It plays a role in my latest approximate
> > > design. Mike A
> > >
> > > On Tuesday, October 7, 2014, Jim Bromer via AGI <[email protected]> wrote:
> > >>
> > >> Some years ago I kept mentioning my idea that concepts are
> > >> relativistic, hoping that someone would discuss the effects of this
> > >> relativism with me. Eventually someone who was willing to talk to me
> > >> once in a while became a little exasperated with me for repeating
> > >> this over and over, and he explained that a textbook on Cognitive
> > >> Science he had read, written by two authors, had pointed out that
> > >> concepts were relativistic back in 1972. (Implying that my idea was
> > >> not new or particularly interesting.) I wondered if that was
> > >> possibly true, so I wrote a reply and told him that I would make a
> > >> point of reading that book. I made a note to get a copy the next
> > >> time I was in the state university library. A few months later I
> > >> found a reference in Wikipedia to the authors he had mentioned, and
> > >> it was quite clear that they frequently emphasized in their
> > >> textbooks the point that concepts were related.
> > >>
> > >> Yes, of course concepts are related. But my choice of the term
> > >> "relativistic" was not drawn from my cornucopia of grammatical
> > >> errors, or because I wanted to pretentiously use a term from
> > >> physics, but because I was trying to get the idea across that
> > >> concepts are not only related - they are relativistic.
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
