Jim, I haven't been following this thread closely, but if you look at what we've been talking about in the OpenAI PR stunt thread, at one level I think it comes to much the same thing you are talking about.
My old vector parser demo linked in that thread does something like this. You can see it happen. The meaning of a word is selected by its context. More interesting, and more relevant to the rest of that thread, is what happens if you then substitute bits of a sentence into each other on the same basis. A new combination of words causes a new exact set of substitutions, with a new exact set of contexts, and thus new meaning.

But you don't want to do it with vectors. That was my mistake back in the day. Vectors throw away some of the context information. You want to leave the words together with their contexts in a network. Finding shared contexts, for meaning or substitution, will then correspond to a kind of clique in the network. You can imagine a sheaf, so to speak, of links fanning out from shared contexts to the words that share them.

If meaning were context free it would all just reduce to a grammar and GOFAI would work. So, viewed through this lens, the problem historically has been that we've assumed the system is context free, linear, when it is not.

-Rob

On Fri, Feb 22, 2019 at 12:19 PM Jim Bromer <[email protected]> wrote:

> One more thing. I would not try to limit the meaning of a symbol within a
> context. I would like to be able to find the best meaning for the symbol, or
> the best referential use of that symbol, during interpretation or
> understanding, but this is not the same as trying to limit the meaning of
> symbols beforehand. Well, we would like to use previous learning to
> interpret a symbol sub-net (such as a string), but even here it is not true
> that I want to limit the meaning of the symbol. For example, we want to be
> able to interpret new applications of a symbol (like a word) in sentences
> which we have never seen before. Someone might say that this definition of
> how I would like to use symbols is only semantically different from what
> nano was saying, that he actually meant the same kind of thing that I am
> talking about. But I don't agree.
> The subtlety comes from the many
> variations of possible meaning that we intend with our words, but that
> does not necessarily indicate that we were saying the same thing.
>
> Jim Bromer

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/Tcc0e554e7141c02f-M4cee8199c3edc864691b8026
Delivery options: https://agi.topicbox.com/groups/agi/subscription
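P.S. A minimal sketch in Python of the word-and-context network idea above, keeping words linked to their contexts rather than collapsing them into vectors. The toy sentences, and the choice of a word's left/right neighbour pair as its "context", are my own illustrative assumptions, not anything from the demo in the other thread:

```python
from collections import defaultdict

# Toy corpus (illustrative only).
sentences = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat ran on the mat",
]

# Keep the full word<->context network instead of collapsing to
# vectors, so no context information is thrown away.
contexts_of = defaultdict(set)   # word    -> contexts it occurs in
words_in = defaultdict(set)      # context -> words that share it

for s in sentences:
    toks = s.split()
    for i in range(1, len(toks) - 1):
        ctx = (toks[i - 1], toks[i + 1])  # left/right neighbour pair
        contexts_of[toks[i]].add(ctx)
        words_in[ctx].add(toks[i])

# Two words can substitute for each other where they share contexts.
# The links fanning out from a shared context to the words sharing it
# are the clique-like structure described above.
def shared_contexts(w1, w2):
    return contexts_of[w1] & contexts_of[w2]

print(shared_contexts("cat", "dog"))  # prints {('the', 'sat')}
```

Here "cat" and "dog" share the context ("the", "sat"), so on this basis one could substitute for the other there; a genuinely new sentence would produce a new exact set of contexts, and hence new substitutions.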
