I concur with your approach. Cheers!
~PM.

Date: Thu, 18 Oct 2012 09:27:45 -0500
From: [email protected]
To: [email protected]
Subject: Re: [agi] Re: Superficiality Produces Misunderstanding - Not Good 
Enough

Co-occurrence was really the wrong word. I forget it has the bag-of-words 
connotation. I imagine an efficient lookup could be designed by using a hash 
table with hash values based on a bag-of-words approach, but actual recognition 
would have to be based on the structure of the sentence, as you say.
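Roughly the kind of lookup I have in mind (all names and table contents here are just placeholders, not working code from my system):

```python
# Hypothetical sketch: a bag-of-words hash key gives fast candidate
# lookup; structural matching would still have to confirm each candidate.

def bow_key(words):
    """Order-insensitive key over content-word lemmas."""
    return frozenset(w.lower() for w in words)

# Toy idiom table keyed by bag-of-words; values are candidate idiom ids.
idiom_table = {
    bow_key(["kick", "bucket"]): "DIE",
    bow_key(["spill", "bean"]): "REVEAL_SECRET",
}

def candidate_idioms(lemmas):
    """Return idioms whose full word bag appears among the sentence's lemmas."""
    bag = bow_key(lemmas)
    return [idiom for key, idiom in idiom_table.items() if key <= bag]
```

The subset test (`key <= bag`) is what makes the lookup insensitive to word order and intervening words; the structural check then weeds out false positives.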

Anaphora resolution is designed into the system. The system doesn't pick a 
single object that can be matched by a pronoun. It picks a list of them based 
on recency of use, and links the pronoun to each of them via links with 
strength based on recency. It then performs higher-level analysis based on the 
object attributes indicated by the pronoun and the context in which the pronoun 
is used. Reasoning, which is as yet unimplemented, will be able to step in and 
further modify these link strengths based on additional information garnered 
from inference.

This approach does produce some combinatorics, but with a reasonable upper 
bound dictated by the size of the recency list, which can be set to something 
comparable to the limits of human pronominal references and still be well within 
the computational constraints of the system.
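In skeleton form, the recency mechanism looks something like this (the list size and decay factor are placeholder values, not tuned numbers from the system):

```python
# Hypothetical sketch: link a pronoun to a bounded list of recent
# referents, with link strengths decaying by recency.
from collections import deque

MAX_RECENT = 4  # upper bound on candidate referents (assumed value)

recent = deque(maxlen=MAX_RECENT)  # most recently mentioned entity last

def mention(entity):
    """Record a mention, moving the entity to the front of the recency list."""
    if entity in recent:
        recent.remove(entity)
    recent.append(entity)

def pronoun_links(decay=0.5):
    """Candidate referents for a pronoun, strengths decaying by recency."""
    links = {}
    strength = 1.0
    for entity in reversed(recent):  # most recent first
        links[entity] = strength
        strength *= decay
    return links
```

Higher-level analysis (and eventually reasoning) would then adjust these initial strengths based on the attributes the pronoun implies and the context it appears in.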

Interesting that you mention higher-level structure to the conversation being 
important to understanding. I recently read an article about a research team 
building a system that does exactly that, using a template-based approach. I am 
probably wildly wrong, but I *think* it was a fellow named Wilson and the 
system was named GENESYS. I'll look it back up and get you something definite 
here in a bit.

I do think that reasoning and learning should always be running in parallel to 
the behavioral and perceptual processes, and should be able to step in and make 
adjustments when appropriate. That's the reason for going with a universal 
format for all information processed by the system, namely semantic nets.


-- Sent from my Palm Pre
On Oct 18, 2012 5:20 AM, Jim Bromer <[email protected]> wrote: 

I think an idiom has to be recognized as a phrase.  Using co-occurrence is an 
interesting idea.  However, this is on the edge of the area where I feel a 
little uncomfortable with the over-reliance on a conventional superficial 
method like co-occurrence.
 Someone mentioned anaphora recently.  While your semantic parser might pick 
out some variable-like words (especially superficial variables like "he" or 
"it") it is going to miss a phrase which refers to someone or something that 
had been mentioned or will be mentioned in a subsequent remark.  While it is 
easy to figure a scheme to deal with this, the problem is that it could lead to 
a substantial increase in combinatorial complexity.  For example, even if you 
looked at possible relationships to and from a noun-phrase to other 
noun-phrases you might still miss the relationships that form a broader 
subject. (For example, a number of sentences might be used to build a structure 
of a subject being discussed by using a diverse set of examples, and your 
relational searches might miss this or lead to misleading conclusions).
 One simple explanation (but not necessarily a solution to the programming 
problems) is that intelligence is always an active process of learning.  So 
even when your mind is not working hard it is always figuring out relations.  
In one sense contemporary AGI programs are designed this way but in another 
sense they are not insightfully designed this way.  So (for example) while the 
underlying programming that searches for possible relations might be actively 
searching for these relations, the nature of these searches may not be foremost 
in the programmer's mind as he writes his code.  So the search might be 
constrained or broadened by the kinds of concepts that it is considering, but 
the search process itself is not being looked at by the programmer, or it is 
only being looked at in a superficial conventional way. I am saying that it 
might be possible to create something a little novel for the search process 
itself, as well as relying on the more conventional kinds of searches, which 
are essentially designed around a function that accepts a group of parameters.
 Jim Bromer

On Wed, Oct 17, 2012 at 10:00 PM, Aaron Hosford <[email protected]> wrote:

I agree fully with regards to handling multiple possible meanings to a 
sentence. My parser doesn't stop with a single interpretation. It builds a long 
list of possible interpretations, along with scores based on the parser's 
estimation of the likelihood of the interpretation. The client code then 
iterates over the list of interpretations in order of score, rejecting 
interpretations that aren't meaningful or don't fit the context. The parser 
modifies its scoring according to which interpretation was accepted. So it's 
possible to handle puns and other double entendres by accepting more than one 
meaning.
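In outline, the loop between parser and client works like this (a simplified sketch; the names and the score adjustment are placeholders):

```python
# Hypothetical sketch: the parser yields scored interpretations; the
# client walks them best-first, rejecting readings that don't fit the
# context, and the parser reinforces whichever reading was accepted.

def choose(interpretations, fits_context):
    """interpretations: list of [score, interp] pairs, mutated on acceptance."""
    for entry in sorted(interpretations, key=lambda e: e[0], reverse=True):
        score, interp = entry
        if fits_context(interp):
            entry[0] = score + 0.1  # reinforce the accepted reading
            return interp
    return None
```

For a pun, the client would simply accept more than one entry from the list rather than stopping at the first.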


The client, which converts the parse tree into a semantic net, doesn't restrict 
itself to a single interpretation of a given parse tree, either. Links are 
given "soft" truth values, with frequency and certainty values inspired by Pei 
Wang's NARS system. This means that the same network can simultaneously 
represent multiple, competing interpretations through lowered certainty levels. 
As new information becomes available, the system updates the links' truth 
values and certainty levels until a clear winner emerges. I think this ability 
to revise interpretation based on contextual information received both before 
and after the sentence is parsed, as well as to weigh competing or 
contradictory information, is key to the sort of flexible thinking human beings 
exhibit.
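The revision step is loosely after NARS; my version boils down to something like the following (the evidential horizon K is an assumed constant, and this is a simplification of what Wang actually defines):

```python
# Hypothetical sketch of revising "soft" truth values, inspired by Pei
# Wang's NARS: truth = (frequency, confidence); confidence maps to an
# evidence weight w = K*c/(1-c), and revision pools the evidence.

K = 1.0  # evidential horizon (assumed constant)

def to_evidence(freq, conf):
    """Convert (frequency, confidence) to (positive evidence, total evidence)."""
    w = K * conf / (1.0 - conf)
    return freq * w, w

def revise(t1, t2):
    """Merge two independent truth values for the same link."""
    p1, w1 = to_evidence(*t1)
    p2, w2 = to_evidence(*t2)
    p, w = p1 + p2, w1 + w2
    return p / w, w / (w + K)  # back to (frequency, confidence)
```

Two equally confident but contradictory observations revise to an uncertain middle ground, which is exactly the behavior I want while competing interpretations are still unresolved.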


One thing which I have not yet implemented and am still undecided on is the 
handling of idioms. I suspect this will end up being some sort of lookup based 
on word or concept co-occurrence, i.e. a connected subnet of the network 
representing the sentence will be encoded and used as a key into a table of 
idioms. Identifying the idiom rapidly through this or another approach is the 
hard part. Once an idiom is recognized, it's simply a matter of applying a 
graph rewrite to generate an alternate meaning for the sentence, and using 
links with soft truth values to identify the new, competing interpretation. The 
system can then resolve between the competing interpretations using the same 
mechanisms it would for more subtly different interpretations. I think many 
idioms originate as crystallized, highly re-usable analogies. Maybe that can 
provide some additional insight into the design when the time comes.
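The rewrite step itself is the easy part; in sketch form (edge triples and certainty numbers here are invented for illustration):

```python
# Hypothetical sketch: once an idiom is recognized, rewrite the matched
# subnet into a competing literal-vs-idiomatic pair, marking the new
# reading with a softer (lower-certainty) truth value.

def apply_idiom_rewrite(net, matched_edges, idiom_edges, certainty=0.6):
    """net: dict mapping (subj, rel, obj) -> certainty. Returns a new net
    in which the literal edges are weakened and the idiomatic reading is
    added, so both interpretations coexist until context resolves them."""
    new_net = dict(net)
    for edge in matched_edges:
        new_net[edge] = new_net[edge] * (1.0 - certainty)  # weaken literal
    for edge in idiom_edges:
        new_net[edge] = certainty                          # add idiomatic
    return new_net
```

From there the ordinary resolution machinery takes over, treating the literal and idiomatic readings as just another pair of competing interpretations.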



On Wed, Oct 17, 2012 at 5:20 PM, Jim Bromer <[email protected]> wrote:


Aaron Hosford wrote:
I get the impression that you're saying (both here & in your previous emails on 
Algorithmic Synthesis) that claiming two things are associated isn't enough -- 
that the *kind* of association is important too.


 -Yes I do feel that way although I probably wasn't thinking of that when I 
wrote my message.  An association may belong to many categories of a *kind*.  
This is important because we can usually abstract or generalize from an 'idea' 
or 'ideas' in many different ways and these different 'kinds' of abstractions 
are things that can become concepts of their own (referring to the nature of 
the abstraction).



Aaron Hosford wrote:
Roger Schank has provided quite a bit of inspiration to me, 
based on how he represents meaning as semantic links connecting basic concepts 
together. From the natural language perspective, it is relatively easy to see 
how this can be implemented. I'm not alone in having successfully built a 
parser that extracts a semantic network from a sentence which represents that 
sentence's meaning with a fair degree of accuracy.


 -I really liked Schank's work as well.  I think that old world semantic 
networks simplified the potential for meaning too much. So while you might get 
closer to a single constrained meaning of a sentence, you would also lose many 
of the undertones, highlights and colors of the sentence that can help to make 
the sentence meaningful.  So here it is again, it is not enough to take the 
shallow level of meaning that might be derived from a tightly constraining 
analysis of the superficial sentence.  You have to be looking for other kinds 
of meanings to see if the words of the sentence (or 'ideas' of the sentences) 
can be better interpreted in other ways. 


 I will have more to say about this. Jim Bromer      
  On Tue, Oct 16, 2012 at 3:34 PM, Aaron Hosford <[email protected]> wrote:

Well, I'm not really clear what you're getting at, mainly because when talking 
about intelligence & thinking, all the terms we have to  use are so versatile & 
loosely defined that to narrow down what's being communicated to a sufficiently 
narrow set of interpretations, we have to say so much that the key point 
becomes a needle in a haystack of contextual information. I'm sure what you're 
saying here makes perfect sense to you, but the words you're using aren't 
sufficiently grounded for me (or are grounded differently for you than for me), 
so I don't follow.




I get the impression that you're saying (both here & in your previous emails on 
Algorithmic Synthesis) that claiming two things are associated isn't enough -- 
that the *kind* of association is important too. I agree with you here. It's 
not enough to say, these are the parts and they go together; how things connect 
must be considered to have productive thoughts about them. This is directly 
analogous to the treatment of sentences as bags of words: It's not enough to 
just look at the set of words to determine the sentence's meaning; the way they 
connect to each other matters. This is where I'm starting from in my system's 
design.




#1: Figure out how the human mind represents meaning.
#2: Figure out how to work with meaning to produce intelligent thought.
#2 cannot proceed until #1 is effectively implemented. Roger Schank has 
provided quite a bit of inspiration to me, based on how he represents meaning 
as semantic links connecting basic concepts together. From the natural language 
perspective, it is relatively easy to see how this can be implemented. I'm not 
alone in having successfully built a parser that extracts a semantic network 
from a sentence which represents that sentence's meaning with a fair degree of 
accuracy.




From the perceptual perspective, it is also fairly easy to see how semantic 
networks can be used to represent information. The visual field can be broken 
into chunks or fields, each representing an object or a part of an object. The 
objects are semantically connected to each other according to the spatial or 
behavioral interactions they are participating in, and the parts of objects 
are semantically linked to the objects and other parts according to their 
arrangement. Nodes representing objects and parts generated at a particular 
time can then be interconnected across multiple time frames, resulting in a 
narrative description of the field of vision as a sequence of events unfolds. 
Other senses can be integrated directly with vision in the same manner.
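As a toy sketch of the data structure I have in mind (node and relation names invented for illustration):

```python
# Hypothetical sketch: per-frame scene nodes with spatial links, plus
# cross-frame identity links yielding a narrative sequence.

def add_frame(net, t, objects, relations):
    """objects: names visible at time t; relations: (a, rel, b) triples."""
    for name in objects:
        net.setdefault((name, t), set())
    for a, rel, b in relations:
        net[(a, t)].add((rel, (b, t)))

def link_frames(net, t0, t1):
    """Connect the same object to itself across consecutive time frames."""
    for name, t in list(net):
        if t == t0 and (name, t1) in net:
            net[(name, t0)].add(("same_object", (name, t1)))
```

Following the "same_object" links through time is what turns a stack of static scene descriptions into a narrative.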




Higher levels of abstraction can be generated by looking at patterns in objects 
(just as objects are generated by looking at patterns of parts) and adding 
additional nodes which serve to group together the lower level nodes into 
patterns based on link types. Memory stores only these higher-level nodes 
(parts, objects, & upward), not the lower levels which served in their 
construction, and memory fades from the lowest levels upward, causing us to 
lose detail but not gist.




Language (or rather the semantic nets which represent meaning) can then be 
treated as predicates which match the upper levels of the perceptual network, 
acquiring a non-Boolean or fuzzy truth value based on how well they match 
perceptual information retrieved from memory. Thinking is implemented at this 
level, as well. Thinking processes serve to generate truthful predicates based 
both on direct observation of higher-level perceptual subnets, and indirect 
reasoning based on observed patterns in these perceptual subnets. Reasoning can 
reach as far down the hierarchy of nodes as was stored in memory, but starts 
from the top-most level and does not reach down to these lower levels except 
when higher-level abstractions indicate that additional or finer-grained detail 
is needed. (This is how we avoid the combinatorial bottleneck.) Predicates 
generated by observation or reasoning can be directly read off and converted to 
natural language using the same mechanisms as the semantic parser, but in 
reverse. (I've got much of this mechanism working, too.)
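The matching itself can be quite simple; a minimal sketch of the fuzzy scoring (edge representation invented for illustration):

```python
# Hypothetical sketch: score a linguistic predicate (a small semantic
# net) against perceptual memory by the fraction of its links found,
# giving a fuzzy rather than Boolean truth value.

def fuzzy_match(predicate_edges, perceptual_edges):
    """predicate_edges: list of (subj, rel, obj); perceptual_edges: set of same."""
    if not predicate_edges:
        return 0.0
    hits = sum(1 for e in predicate_edges if e in perceptual_edges)
    return hits / len(predicate_edges)
```

A real version would weight the links by their own certainty values rather than counting them equally, but the principle is the same.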




I have yet to start work on the perceptual systems, but the semantic 
representation of meanings/predicates is rolling along nicely. Perception is 
going to take a lot more work, because there's a lot more data to process, but 
I'm watching the research as it unfolds, and I see a lot being done in the 
direction of object detection. Even if we create a perceptual system that isn't 
as detailed in representation as human perception (i.e. it represents objects 
and their interactions, but not their parts or lower level abstractions), it 
should be possible to start work on a reasoning system that handles 
higher-level abstractions and is able to communicate its thoughts verbally or 
in text. This is the key point at which artificial general intelligence gains 
traction as a technology worthy of financial investment.





On Sat, Oct 13, 2012 at 9:21 PM, Jim Bromer <[email protected]> wrote:




Well I just remembered why people have been so distracted by the analysis of 
superficial data.  Because it is easy.  Because it is easy for an automated 
program to analyze the superficial features of the input media and how the data 
environment of the medium is affected by the program's output but it is hard to 
figure out how the program would analyze hidden meaning.  But, most of the 
people in this group talk as if their ideas would be powerful enough to 
discover underlying meaning or underlying relations in the data environment.  
So then what started as a first response to a problem description simply became 
the dogma.  (Yes that is really what happened.  Does anyone disagree? (No?!.)) 




  So while the rehashing of the first step may have seemed like it was an 
important primitive to explain to the inexperts, as it became the reigning 
focus of all such conventions of presentation it became the dogma of the genre. 
 Because people somehow found a rationalization to avoid taking the next step 
(to explain how deeper relations between ideas, concepts or operations in the 
IO data environment could be integrated and discerned) it became a blocking 
dogma.  In order to join the club, so to speak, you had to start by avoiding 
the next question.  




 You often feel that you have already thought about an idea because you have 
examined a high-level concept which might be a categorizing principle of the 
idea.  For instance I was interested in 'associations' so when I encountered 
the word 'correlation' I simply felt that I had already considered the concept 
as a kind of association.  A correlation can be considered as a type of 
association so it seemed like I had already had handled that relation.  
However, it just is not the same thing.  A correlation may be a kind of 
association, but it is bound with another association as well: the concept that 
defines the nature of the correlation. So a correlation is not -just- an 
association.




 You have to take it to the next level and it has to start in your mind. Jim 
Bromer 

 



On Sat, Oct 13, 2012 at 8:39 PM, Jim Bromer <[email protected]> wrote:

I think that misunderstandings can occur when one person presents an idea which 
possesses some features which resemble features of another idea that a listener 
has already considered. If the resemblance is somewhat superficial, especially 
if the superficial resemblances lie at a shallow underlying level, a person who 
is listening to the idea may feel certain that he was totally familiar with the 
idea even though he might not really get what the speaker was saying. The 
listener may casually miscategorize the presented idea by thinking that it was 
the same as the similar idea he had already considered. Good ideas are often 
unoriginal or unsurprising, and this vague familiarity can have the strangely 
non-intuitive effect of furthering a misunderstanding.  The reason that this can 
occur is that ideas sometimes need to be emphasized or 'formalized' in some way 
in order for them to be fully appreciated.  






 I for one would like to be able to understand why people who should be 
interested in something I have said aren't. The answer to this question has 
always been somewhat elusive.  A primary characteristic that can produce this 
kind of misunderstanding is superficiality in the listener.  (Of course the new 
idea may be poorly presented and we all make a variety of mistakes, but I am 
often confronted with the experience where I have repeatedly presented a 
commonsense idea and I can't find anyone who acts like they understand what I 
am talking about).






 But is there any way you can verify (at least for yourself) that someone who 
should be reacting intelligently to what you are saying is actually reacting at 
a too-superficial level?  I have found that there is a way in this group 
because we are constantly talking about artificial means of creating 
"personality" traits.  If someone repeatedly emphasizes superficiality of 
association as a presumption for the basis of intelligence then there is a 
chance that he might unintentionally be describing a method that commonly 
underlies his own thought processes.   






 For example, I have described a process of synthesis where a new idea is 
formed from the association of two pre-existing ideas based on a reason.  The 
reasons can be superficial, like a superficial co-occurrence (of time or 
position) or a superficial similarity, but then I also emphasized that ideas 
may be related by complementary conceptual roles.  Furthermore, I have 
emphasized the importance of conceptual structure which is a term I use to 
stress that there may be a greater complexity to putting ideas together than 
just relying on one superficial feature.






 So now, if after expressing this and pointing out that the purpose of 
combining ideas is to create some semantic or operational structure, I see 
someone restating the insight that co-occurrence and similarity are the basis 
of correlation and association I will have some substantial evidence that my 
ideas were not appreciated by that person because he tends to be over-reliant 
on superficial methods of thought.  Co-occurrence, similarity, simple 
association and analogy are all examples of relations between ideas that are 
typically shallow. The superficiality may not be at the surface level, but it 
is usually not going to be that deep.  The declarations of these relations are 
all OK, but I feel that if the presenter is going to explain how intelligence 
works then he needs to take it to a deeper level.






 Of course, misunderstanding can also occur when a phrase is taken to refer to 
a superficial aspect of thought even though the speaker intended it to refer to 
deeper relations as well.  But I think the declaration that the basis of 
correlation is co-occurrence, similarity and associativity has just been too 
over-used to still be considered sufficient as a presentation of the basis of 
thinking. Thinking gets a little deeper than that.






 Jim Bromer





AGI | Archives | Modify Your Subscription


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com
