Aaron,

You told me that you are using a semantic net.  I looked up semantic net in
Wikipedia to make sure that I understood it.  A contemporary semantic net,
if we take the phrase literally, would be a network of related 'meanings'
of words.  The value of a traditional semantic net was that it combined a
word with a characterization of that word, and when you started mentioning
grammatical categories I took that as a good indication of how you were
going to use the semantic net in your AGI program.



So I came up with a few characterizations of what I call conceptual
relativity, to describe some problems that you might relate to.

1. One concept (or word-concept) can have many different meanings.  That is
the well-known problem of ambiguity when it refers to the word-concepts of
human language.

2. Different words, phrases, sentences or accumulations of sentence
fragments can play a role similar to that of a grammatical phrase.  Or,
more insightfully, they can contain information which should lead to a
direct inference of a relationship or characterization that is similar to,
or the same as, a grammatical word-concept.

3. The characterization of a word-concept in a grammatical semantic net is
itself a concept.
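A minimal sketch in Python may make the first and third characterizations more concrete.  The node names and relation labels here are my own illustrations, not a description of your system: one word-concept carries several meanings, and the grammatical characterization ("noun") is itself just another concept node in the net.

```python
# Minimal semantic net: nodes are concepts, edges are labeled relations.
# All names and relation labels are illustrative only.

class Node:
    def __init__(self, name):
        self.name = name
        self.edges = []  # list of (relation, Node) pairs

    def relate(self, relation, other):
        self.edges.append((relation, other))

    def related(self, relation):
        return [n for r, n in self.edges if r == relation]

# 1. One word-concept, many different meanings (ambiguity).
bank = Node("bank")
bank.relate("meaning", Node("bank/river-edge"))
bank.relate("meaning", Node("bank/financial-institution"))

# 3. The characterization ("noun") is itself a concept node,
#    so it can be related and examined like any other node.
noun = Node("noun")
bank.relate("category", noun)
noun.relate("category", Node("grammatical-category"))

print([n.name for n in bank.related("meaning")])
# ['bank/river-edge', 'bank/financial-institution']
```

Because the characterization is an ordinary node, nothing in the structure prevents it from having multiple meanings of its own, which is where the relativity comes in.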



These are all examples of the problems of conceptual relativity.  They are
expressed as generalities, and I did not provide more explicit, detailed
examples (except for one).



4. Another problem of conceptual relativity is that the level of
generality or specificity is itself relative.  So, to return to the point I
just made, these characteristics of conceptual relativity may not seem like
examples to some people, because it is clear that I should be able to find
many specific examples relative to the characteristics (1 through 4) that I
have given.  But they are examples regardless of your feelings about them.



One example that I did give you, Aaron, was to explain that you could find
relative positions from phrases, sentences and accumulations of sentence
fragments.  That was an example of 2, and it was related to how a
traditional semantic net that used a characterization of prepositions
would not be able to identify preposition-like information in more
complicated sentences.  Yes, I could give you a more detailed example to
support my example, but nevertheless this is an example of the second
characterization of conceptual relativity.
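A toy sketch of that point (the word list and sentences are my own illustrations, not taken from any actual system): an extractor that only recognizes an explicit, pre-characterized preposition list will miss the same relative-position information when it is carried by a verb instead.

```python
# A toy extractor that only recognizes explicitly characterized
# prepositions.  The word list and sentences are illustrative only.

PREPOSITIONS = {"on", "under", "beside", "above"}

def find_preposition(sentence):
    """Return the first characterized preposition in the sentence, or None."""
    words = sentence.lower().rstrip(".").split()
    return next((w for w in words if w in PREPOSITIONS), None)

print(find_preposition("The lamp is on the table."))     # 'on' -- recognized
print(find_preposition("The table supports the lamp."))  # None -- the same
# relative position is implied by the verb, so the characterization misses it
```

The second sentence carries preposition-like information ("supports" implies the lamp is on the table), but a net whose characterization of relative position is tied to the word class of prepositions never sees it.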



So to try to get this idea across:

5. An example may be abstract (relatively abstract) and still be an example
of something that would (usually) be more abstract or more general.  So
this (i.e. the previous sentence) is another example of conceptual relativity.
But wait a minute.  This last example (example 5) started out as an example
of 4 (example 4) above!  That makes perfect sense because:

6. A hierarchy may not be hierarchical when examined from a slightly
different perspective, because a hierarchy is conceptually relative.
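A small sketch of 6, using the standard bird/penguin illustration (my choice of example, not your model): a strict is-a hierarchy whose defining characteristic fails at one of its own nodes.

```python
# A strict is-a hierarchy with inherited properties, plus one exception
# that makes the hierarchy's defining characteristic fail at a node.
# All concept and property names are illustrative only.

hierarchy = {"canary": "bird", "penguin": "bird", "bird": "animal"}
properties = {"bird": {"flies"}, "animal": {"breathes"}}
exceptions = {"penguin": {"flies"}}  # a penguin is a bird, yet does not fly

def inherited(concept):
    """Collect properties up the is-a chain, minus this concept's exceptions."""
    props = set()
    c = concept
    while c is not None:
        props |= properties.get(c, set())
        c = hierarchy.get(c)
    return props - exceptions.get(concept, set())

print(sorted(inherited("canary")))   # ['breathes', 'flies']
print(sorted(inherited("penguin")))  # ['breathes'] -- the defining trait fails
```

Viewed from the is-a links the structure is a clean hierarchy; viewed from the property "flies" it is not, which is the conceptual relativity of the hierarchy in miniature.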



Even in traditional hierarchies, an example of a generality which is a
subclass of a higher generality can be taken as an example of the higher
level of the hierarchy.  The difference is that the defining characteristics
of the hierarchy may fail at crucial moments in the examination of a
concept.  This is so commonplace that I refuse to number it as another
characteristic of conceptual relativity.



I am not saying that nothing is real and that hierarchies are impossible.  But
the only way you can characterize something with any certainty is to put
some conceptual boundaries around the subject.

7. Conceptual boundaries are not necessarily absolute and inviolable.  But
we can still examine the nature of absolute conceptual boundaries using the
same kinds of tools of thought that we use to think about other things.

Jim Bromer


On Tue, Oct 23, 2012 at 8:16 PM, Aaron Hosford <[email protected]> wrote:

> Is there a paper or document I can read on conceptual relativism? I think
> what I'm lacking is simply a full-bodied description of the idea. Either
> (1) I've already implemented it or (2) I'm just not getting it. A pair of
> words just isn't much to go by when trying to capture a concept that
> operates at this level of abstraction. When you say them, I immediately
> think of a long list of capabilities my system has, but you've rejected
> most of the ones I've named off.
>
> My system:
>   * is able to very flexibly deal with concepts, abstractions, and "truth".
>   * is built on a data structure, the semantic network, which is capable
> of modeling any other data structure. (Is there a class of data structures
> that is analogous to the class of Turing-complete languages? I think it's
> obvious, but I've never seen a paper on it. Someone ought to give it a go.
> I would, but it's not a priority for me given my limited schedule.)
>   * has a representational scheme within this semantic network that is capable
> of fully capturing the meaning of an arbitrary sentence, verifiably so
> because meanings can be converted back into sentences without loss of
> information (or will shortly, when I've finished writing the code for it).
>   * is fully capable of representing meta meanings, meanings that
> correspond to statements about other meanings or statements.
>   * doesn't define kinds or classes of things by an explicit definition,
> but rather through context, usage, and experience, allowing the meaning of
> concepts to shift over time.
>
> I just don't see what's missing. Which of these comes closest to the mark
> on capturing the functionality you are getting at? Reading up on this, the
> ideas of paradigm shifts and reality filters come to mind. My personal
> motto has for some time been, "Expectation skews perception," which fits
> right in with what I see in the literature I have come across on this
> topic. Given this, I would expect the last of the points above, that my
> system doesn't work off explicit definitions but rather through
> context/usage/experience, captures what it is you've been trying to convey.
> But then again, maybe I'm just filtering my perception of reality with my
> own expectations.
>
>
> On Tue, Oct 23, 2012 at 6:10 PM, Jim Bromer <[email protected]> wrote:
>
>> Aaron,
>> You can't pin conceptual relativism down by providing an example of
>> a 'thought' that someone's unimplemented AGI program would be unable to
>> consider without resorting to a profoundly imaginary example and abstract
>> language.
>>
>> I have been saying that (many) old fashioned AI programs would be capable
>> of dealing with conceptual relativism if they were given the capability to
>> do so.  So I haven't been saying that your AGI model is definitely
>> incapable of producing or even dealing with conceptual relativism. I am
>> saying that you have to deal with it sooner or later, and sooner is better than
>> later.
>>
>> You cannot pin conceptual relativism down with concrete examples, because
>> once you do, you can come up with a way to "explain" the effect away
>> (because you are capable of intelligent thought) if that is your primary
>> motivation.
>>
>> I cannot understand why this isn't obvious.
>>
>> The first step is to deal with conceptual relativism in your own thinking.
>>
>> My goal, for some time now, has been to get the core elements of AGI and
>> then start with a really simple model that I can then test and develop in a
>> cyclical fashion.  So by starting with a system that enables conceptual
>> relativism in an extremely simple model I can start working with the system
>> sooner. It is a little like trying to make a semantic network (or perhaps I
>> should call it a flexible definitive conceptual network) as simple as a
>> neural network.  But because it is a definitive network, I can easily
>> modify it to make it more sophisticated which is something that is almost
>> impossible to do with a neural network.
>>
>> Jim Bromer
>>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now