I may be a little optimistic about this version of my project, but let me summarize and add a couple of points.
It is reasonable to start out with a simple program and then improve on it. However, most AGI designs start with definitions that are meant to avoid the complications completely. I think it makes better sense to simplify some of the complicating factors of a hoped-for AGI program and see if you can solve some of those problems at the start. The problem of conceptual relativity is such a problem. While conceptual relativity can produce many different kinds of effects, it is possible to start with simplified definitions of various kinds of conceptual relativism, so I think that if I can solve some basic problems early in the design of the program, then subsequent development could be based on some real innovation. This is a change from the typical process of design seen in the development of would-be AGI (and semi-strong AI) programs. But in order to make this change you have to have enough insight into some of the complexities of higher mental processes to simplify them and build them into your initial design. I believe my thoughts about conceptual relativity have finally come to the stage where I can start to do that.

As I noted, I did find that Putnam mentioned conceptual relativity in 1988, but my use of the term goes beyond that. For example, since knowledge can be defined in terms of literal knowledge and working knowledge, the manifestations of conceptual relativity can be defined to produce effects in both kinds of knowledge in a simple would-be AGI program. And, for another example, I believe that when you apply different Concepts to analyze other Concepts they can produce different results. If you knew all the reasons for those differences you could say that an idea like conceptual relativity might not be necessary; but since you can't, dismissing the idea itself is not completely reasonable at this stage of the development of AGI programs.
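To make the point about different Concepts producing different results concrete, here is a minimal sketch, under my own assumed representation (the `Concept` class, its `features` set standing in for literal knowledge, and the `analyzer` function standing in for working knowledge are all illustrative inventions, not part of any existing design):

```python
# A minimal, hypothetical sketch of conceptual relativity: two analyzing
# Concepts applied to the same target Concept produce different results.

class Concept:
    def __init__(self, name, features, analyzer=None):
        self.name = name
        self.features = set(features)   # 'literal knowledge': stored facts
        self.analyzer = analyzer        # 'working knowledge': how it is applied

    def analyze(self, other):
        """Apply this Concept's working knowledge to another Concept."""
        if self.analyzer is None:
            return None
        return self.analyzer(self, other)

# Two analyzing Concepts: one measures shared features with the target,
# the other measures the target's overall feature count.
by_overlap = Concept("by-overlap", {"shape", "color"},
                     analyzer=lambda self, o: len(self.features & o.features))
by_size = Concept("by-size", {"shape"},
                  analyzer=lambda self, o: len(o.features))

target = Concept("ball", {"shape", "color", "bounce"})

print(by_overlap.analyze(target))  # 2: features shared with the target
print(by_size.analyze(target))     # 3: the target's total feature count
```

The same target yields different verdicts depending on which Concept does the analyzing, which is the simplified effect I would want an early program to exhibit and manage.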
I mentioned that by defining some 'types' for the conjecture-experiment process, the program would be able to avoid a severely degenerate tendency in which any kind of result could be declared a success for the conjecture. However, there are times when the experiments used to analyze a conjecture should be changed. Without knowing the difference beforehand, one has to hope to find other indicators of success or reasonability in the application of variations to the conjecture-experiment process. By assigning types to the program's process of conjecture-and-experiment, the program will be able to keep a record of the process in abstract meta symbols. These meta symbols can then be used in the design of subsequent variations of the conjecture-and-experiment process. Of course there are complications to the use of this method, but the worst of those complications are exactly the kind of thing that I am hoping I can solve at an early stage of development.

One other problem that I had not been able to solve is the general definition of 'types' of Conceptual 'roles'. You must use Concepts to apply, analyze, and shape Concepts. In the componential model of intelligence these different components will tend to be separate Concepts. (For example, a variety of how-to Concepts are needed when applying previously learned know-how to solve a new kind of problem.) I was never able to figure out how I should 'type' Concepts, since Concepts need to be so varied. The problem is that the 'types' of Concepts should not be fully predefined, because the program can be expected to need variations of abstractions about the application of Concepts to various situations. But I finally figured it out: they could be derived by building categorical collections of similarities, just as the analysis of data-events from the IO Data Environment would be derived. So here again is a relatively simple answer to a question that I had not been able to answer for a long time.
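The typed record of the conjecture-experiment process might be sketched roughly as follows. Everything here is an illustrative assumption on my part: the type names, the `Trial` class, and the particular degeneracy check are placeholders for whatever the real program would use, not a finished design:

```python
# A hypothetical sketch of 'typing' the conjecture-experiment process:
# each step is recorded as an abstract meta symbol, and the record can
# be inspected before a conjecture is allowed to be declared a success.

CONJECTURE, EXPERIMENT, RESULT, GOAL_REDEFINED = "CONJ", "EXP", "RES", "GOAL*"

class Trial:
    def __init__(self, conjecture):
        self.trace = [(CONJECTURE, conjecture)]  # meta-symbol record

    def run_experiment(self, name, outcome):
        self.trace.append((EXPERIMENT, name))
        self.trace.append((RESULT, outcome))

    def redefine_goal(self, new_goal):
        self.trace.append((GOAL_REDEFINED, new_goal))

    def looks_degenerate(self):
        """Flag the degenerate tendency: the goal was redefined even
        though the experimental results varied a great deal."""
        goals = sum(1 for tag, _ in self.trace if tag == GOAL_REDEFINED)
        results = {value for tag, value in self.trace if tag == RESULT}
        return goals > 0 and len(results) > 1

t = Trial("fragment A predicts fragment B")
t.run_experiment("corpus-1", outcome="hit")
t.run_experiment("corpus-2", outcome="miss")
t.redefine_goal("fragment A is sometimes near fragment B")
print(t.looks_degenerate())  # True: success should not simply be declared
```

The same trace could later feed the design of variations of the conjecture-and-experiment process, which is the point of keeping the record in abstract meta symbols rather than discarding it.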
Jim Bromer

On Sat, Oct 11, 2014 at 6:39 PM, Jim Bromer <[email protected]> wrote:

> It should be easy to start testing simple versions of these ideas out. Of course, initially a greater proportion of the program will be predefined than in a later version, if I ever get to that stage. So at first the program will have tasks that are programmed into it, and you won't be able to see it getting much traction choosing its own path of inquiry unless my early programs work and I am able to solve some of the problems of (what I call) conceptual relativism. These tasks, however, can be defined within a simplified vision of conceptual relativism.
>
> The design of a program is not typically characterized in concrete terms unless it is a design for a conventional program. I could, for example, use terms to define a simple word processor program that would not be contested, because designing a simple word processor program is not an unsolved problem.
>
> I am going to rewrite my program to be a text-based AI program. I don't need to use more technical terminology because you already know how a simple word processor might work. The program will initially be designed to pick up fragments of the text and try to relate them to other fragments, in order to see if some combination of parts of the text can be used to navigate and steer the human user to 'discuss' the subject the program thinks may be associated with the fragments it is 'considering'. Initially this 'discussion' and this 'subject' will just concern fragments of text. I expect that it will find numerous fragments which are insignificantly associated (coincidentally associated) and some fragments which have some correlation, though the correlation will not be very reliable. However, the hope here is that it would find some correlation (even a low correlation) between fragments of text which have some kind of meaning.
> Once the program finds something like that, a sympathetic user could pick up on it and establish a primitive method of communication. As the program tries to work with different fragments of text it will be able to build systems of possible categorical relations. From there, it can explore other possibilities based on inferences derived from these categorical systems.
>
> This isn't going to be a stock producer of corpora of correlations between fragments of text. It will be designed to dynamically use these collections and their possible relations to discover paths (for example) of conversation (highly simplified conversation). It will treat these possible relations as conjectures, and initially the program will characterize the parts of the conjectures and the experimental processes with 'objects' that I will define for the conjecture-and-experiment process. Since I am starting with something that is extremely simple, I will initially design the program to assign the parts of the conjecture or theory, and the steps in the experimental process, to these types. Then if, for example, the program tries to redefine a goal in order to make it match the results of experiments which varied a great deal, it could note all this in descriptors of the simple assigned types (the types of the conjecture-experiment process). This means that it could be programmed to avoid declaring a conjecture valid by simply substituting more dubious experiments.
>
> So not only would these conjectures be used in a trial-and-error process, but a (simple) meta-symbol descriptor of this process could be used in the management and subsequent direction of similar applications of the conjecture-experiment process.
>
> Jim Bromer
>
> On Fri, Oct 10, 2014 at 9:16 PM, Jim Bromer <[email protected]> wrote:
>
>> I just read that Putnam used the term "Conceptual Relativity".
>> From http://www.u.arizona.edu/~thorgan/papers/eminee/ConceptualRelativity.htm :
>>
>> "One of the key ideas of conceptual relativity is that certain concepts including such fundamental concepts as object, entity, and existence have a multiplicity of different and incompatible uses (Putnam 1987, p. 19; 1988, pp. 110-14)."
>>
>> My idea of Conceptual Relativity goes further than this, although I have talked about things like the integration of incommensurate data objects (or references) and things like that.
>>
>> But to get to what I was saying recently in another message: the nature of conceptual relativity, as it relates to AGI projects, demands that we consider the effects of such things in our most fundamental definitions of the data objects that an AGI program would use. We have to use concepts in order to examine and use concepts. An illustration of Conceptual Relativity, then, is the case where the concepts that we use to shape a group of target subject concepts might themselves be shaped by the process. As I suggested, this is not a wacky theory but the expected experience of intelligent thought.
>>
>> And the concepts that are used in thinking might be described as playing different kinds of roles in these uses. These roles are significant because they can be used to further generalize and categorize the interaction of concepts. They are also significant because their use makes sense.
>>
>> This system of interrelated concepts does not have to be fully defined at the very start of a computational investigation of its nature. This is something that I have been looking for, because we can't just jump in with a full-fledged AGI project. We have to start off with something simple, and over-reliance on conventional programming objects has not demonstrated any real traction in AGI-type programs.
>> By starting with some simple definitions of how systems of interrelated concepts might develop and play different roles, I believe that another step toward creating better AGI programs may be made. We have to figure out how to manage these 'concepts' or concept-like data objects so that they do not quickly lose traction when they are applied to references which do not act according to some conventional plan. The only way this can be done is by defining these systems so that they can exhibit the flexibility of conceptual relativity and then creating the management strategies that will tend to handle new referential complexities as they are discovered.
>>
>> Jim Bromer
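The text-fragment plan quoted above could be sketched in a few lines. This is only a toy under assumptions of my own choosing: fragments are reduced to plain words, co-occurrence within a sentence stands in for association, and the `min_count` threshold is an arbitrary stand-in for a real reliability test; none of these choices come from the actual design:

```python
# A hypothetical sketch of the text-fragment plan: collect fragments
# (here, single words), count which pairs co-occur in the same sentence,
# and keep only pairs seen together often enough as candidate conjectures.

from collections import Counter
from itertools import combinations

def fragment_pairs(text):
    """Count co-occurrences of word pairs within each sentence."""
    pairs = Counter()
    for sentence in text.lower().split("."):
        words = sorted(set(sentence.split()))
        for a, b in combinations(words, 2):
            pairs[(a, b)] += 1
    return pairs

def candidate_relations(text, min_count=2):
    """Pairs co-occurring at least min_count times: weak correlations
    worth treating as conjectures, not as established relations."""
    return {p: c for p, c in fragment_pairs(text).items() if c >= min_count}

sample = ("the dog chased the ball. the dog caught the ball. "
          "the cat slept.")
print(candidate_relations(sample))
```

Most surviving pairs will be coincidental or low-value (the stop-word pairs in this toy output illustrate that), which is exactly why the quoted plan treats them as conjectures to be tested rather than as reliable relations.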
