I would love to know where you're going with this. I can see you have an
interesting insight. I don't think it's my faculties at fault, nor is it
your communication skills. Communicating concepts at this level of
abstraction is inherently difficult. I'm just looking for a clear,
detailed explanation.

I find it a little funny that you've grouped me in with Mike, considering
he is nay-saying the possibility while I am busy building it, albeit not
according to your liking, apparently. Also unlike Mike, I'm quite willing
(eager!) to listen to other views. I recently said on this list that I like
to learn about unfamiliar, orthogonal approaches because the more I learn
about them, the more robust my design becomes.

Maybe a few emails talking at a high level simply aren't enough for either
of us to fully communicate our ideas. Obviously we've both put years into
formulating our views, and to think that we can communicate the sum total
of those insights in such a short time is pretty ambitious. I get the
feeling you and I are mostly on the same page, unlike many of the others on
this list, but that I haven't convinced you of it yet because I've told you
precious little about the actual design of my system and I've made some
simplifications for the purpose of clarity.

I've considered phrases as variables already, and they are built into my
system. However, "variable" is an oversimplification, because there is a
degree of uncertainty involved in anaphora resolution. My system tracks
multiple "values" for a "bound" phrase (think of a mathematical constant),
keeping a certainty level for each. This means it can handle puns, not just
single meanings. This is how non-quantifying pronouns and determiners are
handled. On the other hand, quantifiers are going to be treated more like
true variables, where an entire compound phrase or clause can match against
objects and events recorded in the system's perceptual memory, or even
other phrases or clauses, generating new information about the matched
entities in the form of new phrases/clauses describing them. This part has
not yet been implemented and is the next thing on my list.
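To make that a bit more concrete, here is a rough sketch (in Python, with invented names -- this is not my actual code) of what I mean by tracking multiple certainty-weighted values for a single bound phrase:

```python
# Illustrative sketch only: a "bound" phrase tracks several candidate
# referents, each with its own certainty, instead of one hard binding.

class BoundPhrase:
    def __init__(self, text):
        self.text = text
        self.candidates = {}  # referent -> certainty in [0, 1]

    def bind(self, referent, certainty):
        # Keep the highest certainty seen for each candidate referent.
        self.candidates[referent] = max(
            certainty, self.candidates.get(referent, 0.0))

    def best(self):
        # The preferred reading: the candidate with the highest certainty.
        return max(self.candidates, key=self.candidates.get)

    def readings(self, threshold=0.2):
        # All plausible readings above a threshold: several meanings can
        # stay "live" at once, which is what allows a pun to be handled.
        return [r for r, c in self.candidates.items() if c >= threshold]


it = BoundPhrase("it")
it.bind("the box", 0.8)      # most recently used concrete noun
it.bind("the letter", 0.5)   # older, but still plausible
it.bind("the weather", 0.1)  # barely plausible

print(it.best())      # -> "the box"
print(it.readings())  # both "the box" and "the letter" remain live
```

The point is that the system never commits to a single referent prematurely; lower-certainty readings remain available for later reasoning to promote or demote.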

So what do you mean by using a sentence fragment as a category, if not
this? Can you give a (relatively) concrete example?

On Mon, Oct 22, 2012 at 4:45 PM, Jim Bromer <[email protected]> wrote:

>  [email protected] <[email protected]> wrote
>  As for recognizing a definite set of prepositions, you act as though I
> claimed the same preposition is treated the same way, regardless of
> context. If "in" means something different when talking about sets
> (containership, as in "it's in the box") than it does when talking about
> money (possession, as in "we're in the money")
>
> ------------------------
> I said that word-concepts can be used in different ways in different
> contexts and you understood that.
> I also was saying that word-concepts, sentence fragments, sentences
> and collections of sentences and or sentence fragments can be used as
> categories or categorical definitions and you weren't sure about what I was
> saying.
> I said that while it is probably true that there are only a few words
> which are grammatical prepositions, there are uncountable numbers of
> sentences or sentence fragments from which relative positions might be
> inferred and you did not react to it.
> Finally I've been pointing out that a word-phrase or sentence fragments or
> sentences or concepts can be used as variable-like things and again you did
> not react to it - as if you are not ready to deal with the implications.
>
> I am not saying that you don't understand what I am saying only that you
> choose not to go there for some reason. You reacted to one thing that I
> have been saying but you don't seem to get the central things.  You are not
> reacting to the same pieces of information that Mike Tintner is not
> reacting to.
>
> Jim Bromer
>
>
>
>
>
>
> On Mon, Oct 22, 2012 at 9:31 AM, [email protected] 
> <[email protected]>wrote:
>
>> You're shadow boxing, Jim. It's refreshing to see that someone else
>> besides myself has noticed the difference between the truth (what we
>> confidently believe) vs. The Truth (which is something we can never be sure
>> we've discovered). I built this into my system through soft, non-Boolean
>> truth values.
>>
>> As for recognizing a definite set of prepositions, you act as though I
>> claimed the same preposition is treated the same way, regardless of
>> context. If "in" means something different when talking about sets
>> (containership, as in "it's in the box") than it does when talking about
>> money (possession, as in "we're in the money"), why not recognize the
>> different meanings of "in" based on what it connects, instead of assuming a
>> universal set of inference rules for "in" relationships? I see no reason to
>> duplicate that information and complicate the implementation by making the
>> information explicit in the form of additional, more specialized link types.
>>
>> Aside from that, I used the same mechanism for prepositions as I did for
>> "kind" words like nouns and verbs: the word node is related to multiple
>> kind/class nodes, and the preposition itself is an instance of one of these
>> kinds. This means that I can easily differentiate between different
>> meanings of "in", by preferring one kind over another in the network's
>> structure. Thus prepositions are not among the list of hardcoded links in
>> my system. Hardcoded links consist primarily of grammatical relations such
>> as the link types corresponding to "is a modifier of" and "is the
>> complement of" with which I string together the three-part
>> object/verb-preposition-complement prepositional relation, as I tried to
>> demonstrate with my ASCII diagram.
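[Annotating my own quoted text while replying: here is roughly what I mean by preferring one kind node over another when disambiguating "in". The kind labels, the `KIND_NODES` table, and the scoring are illustrative stand-ins, not the actual network structure.]

```python
# Illustrative sketch: one word node ("in") related to several kind nodes,
# with a particular use of the word resolved to one kind based on what it
# connects. The typical-complement sets below are invented examples.

KIND_NODES = {
    "in": {
        "in (containership)": {"box", "jar", "room"},
        "in (possession)":    {"money", "debt", "luck"},
    },
}

def disambiguate(word, complement):
    """Pick the kind of `word` whose typical complements match best."""
    kinds = KIND_NODES[word]
    scores = {kind: (1.0 if complement in typical else 0.0)
              for kind, typical in kinds.items()}
    return max(scores, key=scores.get)

print(disambiguate("in", "box"))    # -> "in (containership)"
print(disambiguate("in", "money"))  # -> "in (possession)"
```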
>>
>>
>>
>>
>> -- Sent from my Palm Pre
>>
>> ------------------------------
>>  On Oct 22, 2012 7:19 AM, Jim Bromer <[email protected]> wrote:
>>
>> [email protected] <[email protected]> wrote:
>>  there are certain grammatical categories that have a very small set of
>> words that doesn't expand, and other categories where new words can be
>> easily created. Prepositions, determiners, and particles belong to the
>> first group...
>> --------
>>
>> This is a mistake that you should not have made.  When I talk about
>> conceptual relativity I don't mean that there are no fundamental truths.  I
>> mean that we have such limited capacities that we cannot walk around
>> thinking that we can declare something to be a fundamental truth that we
>> completely understand.
>>
>> Look at prepositions...
>> A preposition (usually) refers to relative position of objects (objects
>> of thought.)  So anytime you mention position a preposition might be
>> inferred.  Since many things from our discourse about the real
>> world-universe exist or take place somewhere that means that position may
>> be inferred or even explicitly detailed in much of our comments.  Many
>> ideas which take place in our minds are stories that deal with place.  And
>> (this is where the trap door opens underneath your authoritative
>> declaration) since we can use metaphor to describe ideas that means that
>> most of our ideas contain references that can be inferred or which are
>> explicitly implied to refer to relative positioning.  So while the list of
>> prepositions may be conveniently finite, the number of possible inferences
>> or implications of the character (or category) of relative position are
>> uncountable.  (According to my metaphor your authoritative declaration
>> about the small unexpanding set of words that constitute the category of
>> prepositions has been de-elevated.  Even if it is technically true that
>> there are only a few prepositions the number of phrases and sentences that
>> can refer to the character that a preposition defines are uncountable.)
>>
>> Jim Bromer
>>
>>
>>
>> On Sat, Oct 20, 2012 at 3:28 PM, [email protected] <[email protected]
>> > wrote:
>>
>>> Please expand a bit on how "a word-object can also become an abstraction
>>> of a relation or part of the definition of a process of abstraction". I'm
>>> not sure I follow you.
>>>
>>> As for the simplification process, I don't see why that's necessary.
>>> Using 3 link base-level types -- "source of link", "sink of link", and
>>> "type of link" -- I can easily represent an expanding set of link labels at
>>> a higher level of abstraction. The link and its label/type become nodes of
>>> their own. (Call them "meta links" if you like.) This means I can even
>>> relate the links or link types to each other. Forgive the awful ASCII art,
>>> but here's a visual representation:
>>>
>>> (source node)
>>>       |
>>> [source of link]
>>>       |
>>>       V
>>> (link node)<--[type of link]--(link type node)
>>>       ^
>>>       |
>>> [sink of link]
>>>       |
>>> (sink node)
>>>
>>> Nodes are in parens, (), and base level link labels are in square
>>> brackets, [], in case my diagramming skills make that less than obvious.
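[Sketching the quoted scheme inline: the three base-level link types can reify any labeled link into a node of its own. Everything here -- the `Node` class, the triple store, the sample sentence -- is an illustrative stand-in, not the actual implementation.]

```python
# Illustrative sketch of link reification ("meta links"): every link is
# itself a node, attached via just three base-level link types, matching
# the ASCII diagram above (all three base links point into the link node).

SOURCE, SINK, TYPE = "source of link", "sink of link", "type of link"

class Node:
    def __init__(self, label):
        self.label = label

base_links = []  # (base link label, from node, to node)

def link(source, sink, link_type):
    """Reify a link: create a link node and attach it with the 3 base types."""
    link_node = Node(f"{source.label}-{link_type.label}-{sink.label}")
    base_links.append((SOURCE, source, link_node))     # source --> link node
    base_links.append((SINK, sink, link_node))         # sink   --> link node
    base_links.append((TYPE, link_type, link_node))    # type   --> link node
    return link_node

# "the cat is in the box": the link type is an ordinary node, so new link
# labels can be added at runtime without new hardcoded link types.
cat, box = Node("the cat"), Node("the box")
in_spatial = Node("in (containership)")
l = link(cat, box, in_spatial)

# Because l is itself a node, links can be related to other nodes or links:
meta = link(l, Node("spatial relation"), Node("is an instance of"))
```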
>>>
>>> Looking at human language, there are certain grammatical categories that
>>> have a very small set of words that doesn't expand, and other categories
>>> where new words can be easily created. Prepositions, determiners, and
>>> particles belong to the first group, and nouns, verbs, adjectives, and
>>> adverbs belong to the second. I would argue that prepositions represent
>>> built in relationships that the human mind recognizes, which correspond to
>>> standard link types in the semantic net. Determiners and particles become
>>> predefined properties or labels that get applied to nodes. And the nouns,
>>> verbs, etc. correspond to "kind" nodes, to which "instance" nodes can be
>>> connected. These instance nodes are then labeled and linked according to
>>> the limited set that the human mind recognizes.
>>>
>>> In my implementation, I use meta links for prepositions, but base level
>>> links for direct grammatical relationships like "is the subject of", "is
>>> the complement of", etc. There's no reason (aside from efficiency) why I
>>> couldn't switch to strictly using meta links.
>>>
>>>
>>>
>>>
>>> -- Sent from my Palm Pre
>>>
>>> ------------------------------
>>>  On Oct 20, 2012 8:25 AM, Jim Bromer <[email protected]> wrote:
>>>
>>>  Aaron Hosford <[email protected]> wrote:
>>> I do think that reasoning and learning should always be running in
>>> parallel to the behavioral and perceptual processes, and should be able to
>>> step in and make adjustments when appropriate. That's the reason for going
>>> with a universal format for all information processed by the system, namely
>>> semantic nets.
>>>
>>> I think that the data representation has to be simple because AGI is
>>> going to be so complicated.  However, I totally disagree with the
>>> -conventional notion of a semantic net-.  The idea of a semantic net is
>>> that of a network based on a simplification of the categorization of
>>> relations of the word-objects of the network using a few 'kinds' of
>>> abstractions to characterize those relations.  Now you might say that the
>>> idea of a semantic net could be improved on to make it capable of
>>> representing potentially more profound insights, but my view is that it
>>> cannot be made to fully accommodate the full extent of the meaning of words
>>> because if it did it would not be what we typically think of when we think
>>> of a semantic net.  I don't think you are *just* talking about a "universal
>>> format", but of a heavy simplification process.
>>>
>>> So whereas I do think that a simplifying process is necessary and I do
>>> think that a universal format and something like a semantic net is a
>>> good way to go, I am not talking about a traditional kind of semantic net
>>> in which the relationship between words is found by a single abstraction
>>> or by a handful of abstractions of the relations between words and
>>> referential objects of the words and sentences.  This kind of semantic
>>> net was based on a superficial analysis that indicated that the relations
>>> between word-objects might be simplified using a concise list
>>> of abstractions.  I am thinking of a relativistic semantic net where a
>>> word-object can also become an abstraction of a relation or part of the
>>> definition of a process of abstraction.
>>>
>>> Jim Bromer
>>>
>>>
>>>
>>> On Thu, Oct 18, 2012 at 10:27 AM, [email protected] <
>>> [email protected]> wrote:
>>>
>>>> Co-occurrence was really the wrong word. I forget it has the
>>>> bag-of-words connotation. I imagine an efficient lookup could be designed
>>>> by using a hash table with hash values based on a bag-of-words approach,
>>>> but actual recognition would have to be based on the structure of the
>>>> sentence, as you say.
>>>>
>>>> Anaphora resolution is designed into the system. The system doesn't
>>>> pick a single object that can be matched by a pronoun. It picks a list of
>>>> them based on recency of use, and links the pronoun to each of them via
>>>> links with strength based on recency. It then performs higher-level
>>>> analysis based on the object attributes indicated by the pronoun and the
>>>> context in which the pronoun is used. Reasoning, which is as yet
>>>> unimplemented, will be able to step in and further modify these link
>>>> strengths based on additional information garnered from inference.
>>>>
>>>> This approach does produce some combinatorics, but with a reasonable
>>>> upper bound dictated by the size of the recency list, which can be set to
>>>> something comparable to the limits of human pronominal references and still
>>>> be well within the computational constraints of the system.
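[Inline sketch of the quoted mechanism: a fixed-size recency list bounds the candidate set, and link strength decays with recency. The class name, decay constant, and list size below are illustrative only.]

```python
# Illustrative sketch: a pronoun is linked to the N most recently used
# objects, with link strength decaying by recency. N bounds the
# combinatorics, as described above.

from collections import deque

class RecencyList:
    def __init__(self, max_size=5):
        self.items = deque(maxlen=max_size)  # oldest entries fall off

    def mention(self, obj):
        # Re-mentioning an object moves it back to the most recent slot.
        if obj in self.items:
            self.items.remove(obj)
        self.items.append(obj)

    def candidates(self, decay=0.7):
        # Most recent candidate gets strength 1.0; each older one decays.
        recent_first = list(reversed(self.items))
        return [(obj, decay ** i) for i, obj in enumerate(recent_first)]

recency = RecencyList(max_size=3)
for obj in ["the dog", "the ball", "the fence", "the yard"]:
    recency.mention(obj)

# "the dog" has fallen off the list (max_size=3); "the yard" is strongest.
print(recency.candidates())
```

Reasoning can later adjust these initial recency-based strengths, so the decay here is just a starting estimate, not a final answer.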
>>>>
>>>> Interesting that you mention higher-level structure to the conversation
>>>> being important to understanding. I recently read an article about a
>>>> research team building a system that does exactly that, using a
>>>> template-based approach. I am probably wildly wrong, but I *think* it was a
>>>> fellow named Wilson and the system was named GENESYS. I'll look it back up
>>>> and get you something definite here in a bit.
>>>>
>>>> I do think that reasoning and learning should always be running in
>>>> parallel to the behavioral and perceptual processes, and should be able to
>>>> step in and make adjustments when appropriate. That's the reason for going
>>>> with a universal format for all information processed by the system, namely
>>>> semantic nets.
>>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
