Ah, that makes it a lot clearer. So really it's about experience and context (including other concepts and your own personal usage) modifying concepts on the fly. A dictionary always defines words in terms of other words, and so as the meaning of one word shifts, so do the meanings of all those connected to it. An AI can manage this sort of difficulty if it is built on the same principles. That's why I prefer semantic nets. They represent a data structure that corresponds directly to this sort of shifting web of concepts. Each concept receives its own node, and is defined entirely in terms of its connections to other nodes, which are updated constantly as new experiences and thoughts accrue.
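To make that concrete, here is a toy sketch of the kind of structure I have in mind. Everything here (class name, the co-occurrence update rule, the learning rate) is illustrative, not a committed design — just the minimal shape of "a concept is nothing but its links, and the links drift with usage":

```python
# Toy semantic net: each concept is a node defined entirely by its weighted
# links to other nodes, and every new experience nudges those weights.
# The update rule (move each co-occurrence link a step toward 1.0) is
# purely illustrative.
from collections import defaultdict

class SemanticNet:
    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        # links[a][b] = current strength of association from concept a to b
        self.links = defaultdict(lambda: defaultdict(float))

    def observe(self, concepts):
        """Record one experience: strengthen links among co-occurring concepts."""
        for a in concepts:
            for b in concepts:
                if a != b:
                    old = self.links[a][b]
                    # Step the link strength toward 1.0.
                    self.links[a][b] = old + self.learning_rate * (1.0 - old)

    def meaning(self, concept):
        """A concept's 'meaning' is just its current link profile."""
        return dict(self.links[concept])

net = SemanticNet()
net.observe(["pattern", "symmetry", "image"])
net.observe(["pattern", "repetition", "music"])  # usage in a new context
print(net.meaning("pattern"))
```

The point of the sketch is that "pattern" has no definition stored anywhere; asking for its meaning just reads off whatever links the accumulated contexts have built, so the answer shifts every time a new context is observed.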
On Thu, Aug 23, 2012 at 9:09 PM, Jim Bromer <[email protected]> wrote:

> Aaron Hosford asked me if I could give an example of Conceptual
> Relativism.
>
> No. I mean that everything is an example, and conceptual relativization
> is so omnipresent that we are constantly adapting around it.
>
> Let's say that you want to understand what a word means. You might start
> by using it in some sentences. But every time you use it in a particular
> sentence, your sense of the meaning of the word naturally becomes more
> strongly associated with that situation. So you try to define it in more
> sentences (that is, you try to better understand the concept-word by
> using the concept in another situation). It then takes on the
> characteristics of its specialized usage as it relates to that
> situation. Furthermore, you realize that some language-concepts can only
> be defined by systems of words, and the meanings of many of the words
> have to be fitted to the specialized system of the sentence or phrase.
> The same thing is true for most any concept that you want to think
> about.
>
> OK, here is an example:
>
> What does "pattern" mean? Well, maybe someone might start by saying it
> is a symmetric image that is repeated. Then you start to question what a
> pattern is and look it up in a dictionary. It is not necessarily
> symmetric. By including the concept of a meta-pattern in your
> definition, you realize that a pattern is not necessarily graphic. Then
> you realize that there can be different definitions of meta-patterns.
> You realize that there are different -types- of patterns and
> meta-patterns. And on and on. At some point you realize that your new
> definitions of a pattern may have taken you beyond the boundaries of
> what you would intuitively call a pattern. So then you have to ask
> yourself if the new definitions are valid. If you start by asking
> whether they are useful or interesting, the new variations will become
> more acceptable because they are interesting.
> If you find that there is a better term to apply to some of the new
> variations, you can learn to accept the fact that even if they are
> related to the concept of a pattern, it might be easier to get other
> people to know what you are talking about by using the better term. But
> then you realize that in some conversations the term 'pattern' can help
> people relate to the greater context of some particular situation that
> you are trying to describe.
>
> I think this is a pretty good example, although I did not present it as
> well as it could be. It is not just a language thing; it is a
> subject-of-thought thing. In fact, you can sometimes rely on the social
> conventions of language to tone down linguistic relativism, but because
> conceptualization tends to include more systems of related thought than
> you can describe using language, the social conventions may get in the
> way of the effort to better understand an idea.
>
> It is my opinion that our knowledge is permeated with conceptual
> relativization, and as a result the effort to limit the use of
> components-of-knowledge in thought becomes complicated. In fact, the
> components-of-thought model becomes very tangled, and when we use
> components-of-thought they become more like indexes into a range of
> variations on how the concept is used with other concepts.
>
> I think the most efficient computational methods are like combinatorial
> component systems. I would want to use combinations of component
> concepts because that seems like the more efficient method for AGI. But
> because of conceptual relativism, this system cannot be logically
> constrained according to universal principles. Of course, the concept of
> 'universal' in logic is relative.
>
> Jim Bromer
>
> On Thu, Aug 23, 2012 at 8:16 PM, Aaron Hosford <[email protected]>
> wrote:
>
> OK, that gives me a partial grasp. Can you give me an example?
> On Thu, Aug 23, 2012 at 7:05 PM, Jim Bromer <[email protected]> wrote:
>
> Conceptual relativism is the idea that concepts must be used to think
> about other concepts, and when that happens, the concepts that are used
> in an expression or study of the subject concept can often affect the
> "meaning" of the subject concept. So concepts are not only relative and
> relational, they are also relativistic.

-------------------------------------------
AGI Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
