You're shadow boxing, Jim. It's refreshing to see that someone else besides 
myself has noticed the difference between the truth (what we confidently 
believe) vs. The Truth (which is something we can never be sure we've 
discovered). I built this into my system through soft, non-Boolean truth values.

As for recognizing a definite set of prepositions, you act as though I claimed 
the same preposition is treated the same way, regardless of context. If "in" 
means something different when talking about sets (containership, as in "it's 
in the box") than it does when talking about money (possession, as in "we're in 
the money"), why not recognize the different meanings of "in" based on what it 
connects, instead of assuming a universal set of inference rules for "in" 
relationships? I see no reason to duplicate that information and complicate the 
implementation by making the information explicit in the form of additional, 
more specialized link types.

Aside from that, I used the same mechanism for prepositions as I did for "kind" 
words like nouns and verbs: the word node is related to multiple kind/class 
nodes, and the preposition itself is an instance of one of these kinds. This 
means that I can easily differentiate between different meanings of "in", by 
preferring one kind over another in the network's structure. Thus prepositions 
are not among the hardcoded links in my system. Hardcoded links consist 
primarily of grammatical relations, such as the link types corresponding to "is 
a modifier of" and "is the complement of", with which I string together the 
three-part object/verb-preposition-complement prepositional relation, as I 
tried to demonstrate with my ASCII diagram.
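Roughly, picking a sense of "in" from what it connects can be sketched like this. The sense names and the tiny complement lists are illustrative assumptions, not the actual contents of my network:

```python
# A rough sketch of selecting a preposition's sense from what it connects,
# instead of applying a universal rule set for "in".  Each sense of "in" is
# a kind node; the preposition instance is linked to whichever kind fits its
# complement.  The sense names and word lists are toy examples.

SENSES_OF_IN = {
    "containment": {"box", "room", "container"},   # "it's in the box"
    "possession":  {"money", "luck", "debt"},      # "we're in the money"
}

def sense_of(preposition_complement):
    """Prefer the kind whose known complements include this word."""
    for sense, complements in SENSES_OF_IN.items():
        if preposition_complement in complements:
            return sense
    return "unknown"

print(sense_of("box"))    # containment
print(sense_of("money"))  # possession
```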



-- Sent from my Palm Pre
On Oct 22, 2012 7:19 AM, Jim Bromer <[email protected]> wrote: 

[email protected] <[email protected]> wrote:
 there are certain grammatical categories that have a very small set of 
words that doesn't expand, and other categories where new words can be easily 
created. Prepositions, determiners, and particles belong to the first group...
-------- This is a mistake that you should not have made.  When I 
talk about conceptual relativity I don't mean that there are no fundamental 
truths.  I mean that we have such limited capacities that we cannot walk 
around thinking that we can declare something to be a fundamental truth that we 
completely understand.
 Look at prepositions... A preposition (usually) refers to the relative position 
of objects (objects of thought).  So anytime you mention position a 
preposition might be inferred.  Since many things from our discourse 
about the real world-universe exist or take place somewhere that means that 
position may be inferred or even explicitly detailed in much of our 
comments.  Many ideas which take place in our minds are stories that deal 
with place.  And (this is where the trap door opens underneath your 
authoritative declaration) since we can use metaphor to describe ideas that 
means that most of our ideas contain references that can be inferred or which 
are explicitly implied to refer to relative positioning.  So while the 
list of prepositions may be conveniently finite, the number of possible 
inferences or implications of the character (or category) of relative 
position are uncountable.  (According to my metaphor your 
authoritative declaration about the small unexpanding set of words that 
constitute the category of prepositions has been de-elevated.  Even if it 
is technically true that there are only a few prepositions the number of 
phrases and sentences that can refer to the character that a preposition 
defines are uncountable.)
 Jim Bromer 
 On Sat, Oct 20, 2012 at 3:28 PM, [email protected] 
<[email protected]> wrote:

Please expand a bit on how "a word-object can also become an abstraction of a 
relation or part of the definition of a process of abstraction". I'm not sure I 
follow you.


As for the simplification process, I don't see why that's necessary. Using three 
base-level link types -- "source of link", "sink of link", and "type of link" 
-- I can easily represent an expanding set of link labels at a higher level of 
abstraction. The link and its label/type become nodes of their own. (Call them 
"meta links" if you like.) This means I can even relate the links or link types 
to each other. Forgive the awful ASCII art, but here's a visual representation:


(source node)
      |
[source of link]
      |
      V
(link node)<--[type of link]--(link type node)
      ^
      |
[sink of link]
      |
(sink node)

Nodes are in parens, (), and base level link labels are in square brackets, [], 
in case my diagramming skills make that less than obvious.
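In code, the scheme in the diagram comes out something like the sketch below. The class and function names are my own illustration; the essential point is that all three base links point at the reified link node, matching the diagram's arrows:

```python
# A self-contained sketch of the reification scheme in the diagram above:
# every link becomes a node of its own, attached to its endpoints and its
# type by just three base-level link labels.  All arrows in the diagram
# point at the link node, and the wiring below follows that.

SOURCE_OF, SINK_OF, TYPE_OF = "source of link", "sink of link", "type of link"

class Node:
    def __init__(self, name):
        self.name = name
        self.links = []          # (base_label, other_node) pairs

def base_link(label, a, b):
    a.links.append((label, b))

def reify_link(source, sink, link_type):
    """Create a link node wired up as in the ASCII diagram."""
    link = Node(f"{source.name}-{link_type.name}-{sink.name}")
    base_link(SOURCE_OF, source, link)     # (source node) --[source of link]--> (link node)
    base_link(SINK_OF, sink, link)         # (sink node) --[sink of link]--> (link node)
    base_link(TYPE_OF, link_type, link)    # (link type node) --[type of link]--> (link node)
    return link

# Because the link is itself a node, links can now point at other links:
cat, mat, on = Node("cat"), Node("mat"), Node("on")
sits = reify_link(cat, mat, on)            # "cat on mat"
alice, believes = Node("alice"), Node("believes")
meta = reify_link(alice, sits, believes)   # a link about a link
print(meta.name)  # alice-believes-cat-on-mat
```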


Looking at human language, there are certain grammatical categories that have a 
very small set of words that doesn't expand, and other categories where new 
words can be easily created. Prepositions, determiners, and particles belong to 
the first group, and nouns, verbs, adjectives, and adverbs belong to the 
second. I would argue that prepositions represent built-in relationships that 
the human mind recognizes, which correspond to standard link types in the 
semantic net. Determiners and particles become predefined properties or labels 
that get applied to nodes. And the nouns, verbs, etc. correspond to "kind" 
nodes, to which "instance" nodes can be connected. These instance nodes are 
then labeled and linked according to the limited set that the human mind 
recognizes.
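The category-to-representation mapping just described can be summed up in a small table. The category memberships here are a toy sample, not a full lexicon:

```python
# An illustrative table of the mapping above: closed grammatical classes
# become fixed machinery of the net, while open classes become a growing
# population of kind/instance nodes.  Membership lists are toy samples.

CLOSED_CLASSES = {
    "preposition": "standard link type",        # e.g. "in", "on", "of"
    "determiner":  "predefined node property",  # e.g. "the", "a", "every"
    "particle":    "predefined node label",     # e.g. "not", "to"
}

OPEN_CLASSES = {"noun", "verb", "adjective", "adverb"}  # map to "kind" nodes

def representation_for(category):
    if category in CLOSED_CLASSES:
        return CLOSED_CLASSES[category]
    if category in OPEN_CLASSES:
        return "kind node (instances linked beneath it)"
    return "unknown category"

print(representation_for("preposition"))  # standard link type
print(representation_for("noun"))         # kind node (instances linked beneath it)
```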


In my implementation, I use meta links for prepositions, but base level links 
for direct grammatical relationships like "is the subject of", "is the 
complement of", etc. There's no reason (aside from efficiency) why I couldn't 
switch to strictly using meta links.




-- Sent from my Palm Pre

On Oct 20, 2012 8:25 AM, Jim Bromer <[email protected]> wrote: 

Aaron Hosford wrote:

I do think that reasoning and learning should always be running in parallel to 
the behavioral and perceptual processes, and should be able to step in and make 
adjustments when appropriate. That's the reason for going with a universal 
format for all information processed by the system, namely semantic nets.

   I think that the data representation has to be simple because AGI is 
going to be so complicated.  However, I totally disagree with the 
-conventional notion of a semantic net-.  The idea of a semantic net is 
that of a network based on a simplification of the categorization of relations 
of the word-objects of the network using a few 'kinds' of 
abstractions to characterize those relations.  Now you might say 
that the idea of a semantic net could be improved on to make it capable of 
representing potentially more profound insights, but my view is that it cannot 
be made to fully accommodate the full extent of the meaning of words because if 
it did it would not be what we typically think of when we think of a semantic 
net.  I don't think you are *just* talking about a "universal 
format", but of a heavy simplification process. 

 So whereas I do think that a simplifying process is necessary and I do 
think that a universal format and something like a semantic net is a 
good way to go, I am not talking about a traditional kind of semantic 
net in which the relationship between words is found by 
a single abstraction or by a handful of abstractions 
of the relations between words and referential objects of 
the words and sentences.  This kind of semantic 
net was based on a superficial analysis that indicated that the 
relations between word-objects might be simplified using a concise list 
of abstractions.  I am thinking of a relativistic semantic net 
where a word-object can also become an abstraction of a 
relation or part of the definition of a process of 
abstraction.  

 Jim Bromer  
 On Thu, Oct 18, 2012 at 10:27 AM, [email protected] 
<[email protected]> wrote:


Co-occurrence was really the wrong word. I forget it has the bag-of-words 
connotation. I imagine an efficient lookup could be designed by using a hash 
table with hash values based on a bag-of-words approach, but actual recognition 
would have to be based on the structure of the sentence, as you say.



Anaphora resolution is designed into the system. The system doesn't pick a 
single object that can be matched by a pronoun. It picks a list of them based 
on recency of use, and links the pronoun to each of them via links with 
strength based on recency. It then performs higher-level analysis based on the 
object attributes indicated by the pronoun and the context in which the pronoun 
is used. Reasoning, which is as yet unimplemented, will be able to step in and 
further modify these link strengths based on additional information garnered 
from inference.



This approach does produce some combinatorics, but with a reasonable upper 
bound dictated by the size of the recency list, which can be set to something 
comparable to the limits of human pronominal reference and still be well within 
the computational constraints of the system.
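A minimal sketch of that recency scheme, with the list size and decay rate as illustrative parameters rather than the system's actual settings:

```python
# A minimal sketch of recency-based anaphora candidates: the pronoun gets
# linked to every recently mentioned object, with link strength decaying by
# recency.  RECENCY_LIMIT bounds the combinatorics; DECAY is illustrative.

RECENCY_LIMIT = 5      # upper bound on candidate antecedents
DECAY = 0.5            # each step back in the discourse halves the strength

def candidate_links(pronoun, recent_objects):
    """Return (object, strength) pairs, most recent first.

    The pronoun itself is unused here; in the full system its attributes
    (gender, number, etc.) would further filter and reweight candidates.
    """
    links = []
    for age, obj in enumerate(recent_objects[:RECENCY_LIMIT]):
        links.append((obj, DECAY ** age))
    return links

# "it" after mentioning the mat, then the cat (cat most recent):
print(candidate_links("it", ["cat", "mat"]))
# [('cat', 1.0), ('mat', 0.5)]
```

Later reasoning can then rescale those strengths as inference turns up more evidence, without ever having committed to a single antecedent up front.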



Interesting that you mention higher-level structure to the conversation being 
important to understanding. I recently read an article about a research team 
building a system that does exactly that, using a template-based approach. I am 
probably wildly wrong, but I *think* it was a fellow named Wilson and the 
system was named GENESYS. I'll look it back up and get you something definite 
here in a bit.



I do think that reasoning and learning should always be running in parallel to 
the behavioral and perceptual processes, and should be able to step in and make 
adjustments when appropriate. That's the reason for going with a universal 
format for all information processed by the system, namely semantic nets.


     



  
    
      
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
