I am using semantic nets in a very non-standard way, which has probably led
to a lot of the confusion. Here's how I would represent an exceptionally
simple sentence, "Trees grow.":

  (noun: tree)                                  (verb: grow)
        |                                             |
     [means]                                       [means]
        |                                             |
        v                                             v
      (kind)                                       (kind)
        ^                                             ^
        |                                             |
    [has kind]                                    [has kind]
        |                                             |
(instance/plural) <--[has subject]-- (instance/plural/present/statement)


I hope it's readable on your end. In actuality this is a simplification:
where I show hard links, there are competing soft links, and there are
multiple kind nodes acting as meanings of the same word (and possibly of
other words as well), which may compete with each other to be the kind of
each instance. For example, the "tree" as data structure and the "tree" as
plant might both be kinds connected to the same "tree" word node, and it's
the system's job to figure out (or assume until proven wrong) which of the
two is appropriate based on context and familiarity, or to leave the
question unanswered in the absence of sufficient information.
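To make that concrete, here is a minimal Python sketch of how competing
soft "means" links might be scored. The class names, weights, and the
simple pick-the-heaviest rule are my invention for illustration, not the
actual implementation:

```python
class Node:
    def __init__(self, label):
        self.label = label
        self.links = []  # outgoing Link objects

class Link(Node):
    # As described above, Link inherits from Node, so links can
    # themselves be referenced as concepts.
    def __init__(self, relation, source, target, weight=1.0):
        super().__init__(relation)
        self.source, self.target, self.weight = source, target, weight
        source.links.append(self)

def best_meaning(word_node, relation="means"):
    """Pick the strongest competing soft link's target, or None if there
    are no candidates (leaving the question unanswered)."""
    candidates = [l for l in word_node.links if l.label == relation]
    return max(candidates, key=lambda l: l.weight).target if candidates else None

tree_word = Node("noun: tree")
tree_plant = Node("kind: tree (plant)")
tree_struct = Node("kind: tree (data structure)")
Link("means", tree_word, tree_plant, weight=0.8)   # more familiar sense
Link("means", tree_word, tree_struct, weight=0.3)

print(best_meaning(tree_word).label)  # prints: kind: tree (plant)
```

In a real run the weights would of course come from context and
familiarity rather than being hard-coded.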

Kinds as yet are not connected into any sort of hierarchy, since I'm fully
aware that penguins don't fly just because they're birds, even if birds as
a group do indeed fly, and so something more subtle than a simple hierarchy
is needed.
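For what it's worth, one standard way (not what I've built) to get
something more subtle than a plain hierarchy is defaults with per-kind
exceptions, so a penguin can override what it would inherit from bird.
All names and tables here are illustrative:

```python
# Default properties attached to kinds, with per-kind exceptions that
# override anything inherited (the classic penguin case).
defaults = {"bird": {"flies": True}}
exceptions = {"penguin": {"flies": False}}
parents = {"penguin": "bird", "robin": "bird"}

def property_of(kind, prop):
    """Walk up the parent chain; an exception on a kind beats defaults."""
    while kind is not None:
        if kind in exceptions and prop in exceptions[kind]:
            return exceptions[kind][prop]
        if kind in defaults and prop in defaults[kind]:
            return defaults[kind][prop]
        kind = parents.get(kind)
    return None  # insufficient information

print(property_of("robin", "flies"))    # prints: True
print(property_of("penguin", "flies"))  # prints: False
```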

Assuming your meaning for the word "concept" corresponds to the nodes in
the above net, then words, the kinds those words correspond to, the
instances of those kinds, and even the links connecting those nodes (since
Link inherits from Node) can all act as concepts. If you say, "Growing up
is hard," the phrase "growing up" is an instance of a kind corresponding
to the verb "grow", modified by the particle "up" so that it in fact
represents an instance of a different kind than the "grow" in the first
sentence, with distinct connotations of its own based on previously
observed use of "grow up" as opposed to "grow" by itself. In this new
sentence, you have an action acting as an object, in the role of the
subject of a sentence, and you have a phrase, "grow up", whose meaning
differs from that of its individual constituents, "grow" and "up".
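As a toy illustration of that phrase lookup (the mapping contents are
invented): the longest match starting at a token wins, so "grow up" gets
its own kind rather than composing the kinds of "grow" and "up":

```python
# Hypothetical phrase-to-kind table; tokens are assumed already lemmatized.
phrase_kinds = {
    ("grow",): "kind: grow (become larger)",
    ("grow", "up"): "kind: grow up (mature)",
}

def kind_for(tokens, start):
    """Return the longest known phrase starting at `start` and its kind."""
    best = None
    for end in range(start + 1, len(tokens) + 1):
        key = tuple(tokens[start:end])
        if key in phrase_kinds:
            best = (key, phrase_kinds[key])
    return best

print(kind_for(["grow", "up", "is", "hard"], 0))
# prints: (('grow', 'up'), 'kind: grow up (mature)')
```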

The meanings of sentences (or fragments), once extracted, are then combined
together to form a larger network that represents the stream of
conversation. A fragment can be incorporated into the network despite its
incompleteness because context is considered, so when someone asks, "Did
you do the dishes?" a "yes" or "no" can be interpreted without difficulty.
Likewise for other types of fragments, in other contexts.
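Here is a crude sketch of how a bare "yes" or "no" might resolve against
the most recent pending question in the conversation stream. The
dictionary representation of meanings is just for illustration:

```python
context = []  # meanings extracted so far, newest last

def interpret(utterance):
    """Turn an utterance into a meaning, resolving bare yes/no fragments
    against the most recent pending yes/no question in context."""
    if utterance.lower() in ("yes", "no"):
        for meaning in reversed(context):
            if meaning.get("type") == "yes-no-question":
                resolved = {"type": "statement",
                            "content": meaning["content"],
                            "polarity": utterance.lower() == "yes"}
                context.append(resolved)
                return resolved
        return {"type": "unresolved", "content": utterance}
    meaning = {"type": "yes-no-question" if utterance.endswith("?")
               else "statement",
               "content": utterance.rstrip("?.")}
    context.append(meaning)
    return meaning

interpret("Did you do the dishes?")
print(interpret("yes"))
# prints: {'type': 'statement', 'content': 'Did you do the dishes', 'polarity': True}
```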

So, within the same design, I have:
  * multiple concepts (kind nodes) corresponding to the same words (or
phrases),
  * a determination of concept meanings based on context and experience,
  * a rejection of hierarchy as anything more than a sometimes convenient
tool for organizing knowledge,
  * phrases representing individual units of meaning which cannot be
derived from the composition of the meanings of their constituent words,
  * sentence fragments and single words with interpretable meaning based
on context,
  * and even the ability to use words or phrases as "variables" (these are
the "instances" in the network).

Furthermore, I can connect an instance node to an arbitrary other node (a
word, kind, instance, or whatever) via a "same as" link, allowing the
system to literally talk/think about its own internal knowledge of words
or concepts independently of actually using them.
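Here is a toy version of the "same as" idea: an instance node that refers
to a word node itself, so a statement can be about the word rather than
about trees. The flat graph encoding is invented for the example:

```python
graph = {}  # node label -> dict of relation -> target label or value

def add(label, **relations):
    graph[label] = dict(relations)

add("noun: tree")
# The instance denotes the word node itself, not a tree in the world.
add("instance-1", **{"same as": "noun: tree"})
add("statement-1", subject="instance-1", predicate="has length", value=4)

def referent(label):
    """Follow 'same as' links to the node an instance actually denotes."""
    while "same as" in graph.get(label, {}):
        label = graph[label]["same as"]
    return label

print(referent(graph["statement-1"]["subject"]))  # prints: noun: tree
```

So "statement-1" ends up saying something about the word "tree" ("it has
four letters") rather than about any tree, which is the kind of
self-reference the "same as" link buys.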

Some of this functionality is already implemented, some of it is in the
works, and some of it is still being designed. But at every stage, I do my
best to grab difficult, real-world examples and bring them to bear to gain
insight into the best approach. I intentionally branch out from the
expected use cases and try to throw a monkey wrench into the works, to
make sure my system can understand anything people say. The whole goal of
this system is to represent meaning in a way that flexes with the user's
intent, rather than in the stiff, formal (and utterly inadequate) way that
often appears in logic-based approaches to reasoning. Is this conceptual
relativism, or did I miss the boat again?


On Wed, Oct 24, 2012 at 5:26 PM, Jim Bromer <[email protected]> wrote:

> Aaron,
>
> You told me that you are using a semantic net.  I looked semantic net up
> in Wikipedia to make sure that I understood it.  A contemporary semantic
> net would be a network of related 'meanings' of words, if we take the
> phrase literally.  The value of a traditional semantic net was that it
> combined a word with a characterization of that word, and when you
> started mentioning grammatical categories I assumed that was a good
> indication of how you were going to use the semantic net in your AGI
> program.
>
>
>
> So I came up with a few characterizations of -what I call- conceptual
> relativity to describe some problems that you might relate to.
>
> 1. One concept (or word-concept) can have many different meanings.  That
> is a well known reference to ambiguity when it refers to the word-concepts
> of human language.
>
> 2.  Different words, phrases, sentences or accumulations of sentence
> fragments can play a role which is similar to a grammatical phrase.  Or,
> more insightfully, they can contain information which should lead to a
> direct inference of a relationship or characterization which is similar
> to or the same as a grammatical word-concept.
>
> 3. The characterization of a word-concept in a grammatical semantic net is
> itself a concept.
>
>
>
> These are all examples of the problems of conceptual relativity.  They
> are expressed as generalities and I did not provide more explicit detailed
> examples (except for one).
>
>
>
> 4.  Another problem of conceptual relativity is that the level of
> generality or specificity is relative.  So to go to the example of the
> example that I just mentioned, these characteristics of conceptual
> relativity may not seem like examples to some people because it is clear
> that I should be able to find many specific examples relative to the
> characteristics (1 through 4) that I have given.  But they are examples
> regardless of your feelings about them.
>
>
>
> One example that I did give you Aaron was to explain that you could find
> relative positions from phrases, sentences and accumulations of fragments
> of sentences.  That was an example of 2, and it was related to how the
> traditional semantic net which used a characterization of prepositions
> would not be able to identify preposition-like information in more
> complicated sentences.  Yes, I could give you a more detailed example to
> support my example, but nevertheless this is an example of the second
> characterization of conceptual relativity.
>
>
>
> So to try to get this idea across:
>
> 5. An example may be abstract (relatively abstract) and still be an
> example of something that would (usually) be more abstract or more general.
> So this (ie the previous sentence) is another example of conceptual
> relativity.  But wait a minute.  This last example (example 5) started
> out as an example of 4 (example 4) above!  That makes perfect sense
> because:
>
> 6. A hierarchy may not be hierarchical when examined from a slightly
> different perspective because a hierarchy is conceptually relative.
>
>
>
> Even in traditional hierarchies an example of a generality which is a
> subclass of a higher generality can be taken as an example of the higher
> hierarchy.  The difference is that the defining characteristics of the
> hierarchy may fail at crucial moments of examination of a concept.  This
> is so commonplace that I refuse to number it as another example of the
> characteristics of conceptual relativity.
>
>
>
> I am not saying that nothing is real and that hierarchies are impossible.
> But the only way you can characterize something with some certainty is to
> put some conceptual boundaries around the subject.
>
> 7. Conceptual Boundaries are not necessarily absolute and inviolable.  But
> we can still examine the nature of absolute conceptual boundaries using the
> same kinds of tools of thought that we use to think about other things.
>
> Jim Bromer
>
>
> On Tue, Oct 23, 2012 at 8:16 PM, Aaron Hosford <[email protected]> wrote:
>
>> Is there a paper or document I can read on conceptual relativism? I think
>> what I'm lacking is simply a full-bodied description of the idea. Either
>> (1) I've already implemented it or (2) I'm just not getting it. A pair of
>> words just isn't much to go by when trying to capture a concept that
>> operates at this level of abstraction. When you say them, I immediately
>> think of a long list of capabilities my system has, but you've rejected
>> most of the ones I've named off.
>>
>> My system:
>>   * is able to very flexibly deal with concepts, abstractions, and
>> "truth".
>>   * is built on a data structure, the semantic network, which is capable
>> of modeling any other data structure. (Is there a class of data structures
>> that is analogous to the class of Turing-complete languages? I think it's
>> obvious, but I've never seen a paper on it. Someone ought to give it a go.
>> I would, but it's not a priority for me given my limited schedule.)
>>   * has a representational scheme within this semantic network that is
>> capable of fully capturing the meaning of an arbitrary sentence,
>> verifiably so because meanings can be converted back into sentences
>> without loss of information (or will shortly, when I've finished writing
>> the code for it).
>>   * is fully capable of representing meta meanings, meanings that
>> correspond to statements about other meanings or statements.
>>   * doesn't define kinds or classes of things by an explicit definition,
>> but rather through context, usage, and experience, allowing the meaning of
>> concepts to shift over time.
>>
>> I just don't see what's missing. Which of these comes closest to the mark
>> on capturing the functionality you are getting at? Reading up on this, the
>> ideas of paradigm shifts and reality filters come to mind. My personal
>> motto has for some time been, "Expectation skews perception," which fits
>> right in with what I see in the literature I have come across on this
>> topic. Given this, I would expect the last of the points above, that my
>> system doesn't work off explicit definitions but rather through
>> context/usage/experience, captures what it is you've been trying to convey.
>> But then again, maybe I'm just filtering my perception of reality with my
>> own expectations.
>>
>>
>> On Tue, Oct 23, 2012 at 6:10 PM, Jim Bromer <[email protected]> wrote:
>>
>>> Aaron,
>>> You can't pin conceptual relativism down by providing an example of
>>> a 'thought' that someone's unimplemented AGI program would be unable to
>>> consider without resorting to a profoundly imaginary example and abstract
>>> language.
>>>
>>> I have been saying that (many) old fashioned AI programs would be
>>> capable of dealing with conceptual relativism if they were given the
>>> capability to do so.  So I haven't been saying that your AGI model is
>>> definitely incapable of producing or even dealing with conceptual
>>> relativism. I am saying that you have to deal with it sooner or later,
>>> and sooner is better than later.
>>>
>>> You cannot pin conceptual relativism down with concrete examples
>>> because once you do, you can come up with a way to "explain" the effect
>>> away (because you are capable of intelligent thought), if that is your
>>> primary motivation.
>>>
>>> I cannot understand why this isn't obvious.
>>>
>>> The first step is to deal with conceptual relativism in your own
>>> thinking.
>>>
>>> My goal, for some time now, has been to get the core elements of AGI and
>>> then start with a really simple model that I can then test and develop in a
>>> cyclical fashion.  So by starting with a system that enables conceptual
>>> relativism in an extremely simple model I can start working with the system
>>> sooner. It is a little like trying to make a semantic network (or perhaps I
>>> should call it a flexible definitive conceptual network) as simple as a
>>> neural network.  But because it is a definitive network, I can easily
>>> modify it to make it more sophisticated which is something that is almost
>>> impossible to do with a neural network.
>>>
>>> Jim Bromer
>>>
>>>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
