What you suggested is a reasoning system working on a conceptual structure, described in a knowledge representation language. Many AI/AGI systems have such a language. It surely shares certain properties with natural languages like English, but the focus of such a design is not NLP, in the sense that phrase usually carries in AI. For example, this kind of "labeled graph" is used in NARS, Novamente, LIDA, SNePS, etc., to support various kinds of reasoning.
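To make the idea concrete, a labeled graph of the general kind mentioned above can be pictured as a set of (subject, relation, object) triples. This is only a minimal illustrative sketch; the node and edge labels are invented here and do not reflect the actual data structures of NARS, Novamente, LIDA, or SNePS:

```python
# Minimal sketch: a labeled graph as a set of (subject, relation, object)
# triples. All labels are illustrative, not from any actual system.
scene = {
    ("cat", "is-a", "animal"),
    ("cat", "on", "mat"),
    ("mat", "part-of", "scene-1"),
}

def edges_from(graph, node):
    """Return all outgoing (relation, object) edges of a node."""
    return {(rel, obj) for (subj, rel, obj) in graph if subj == node}

print(sorted(edges_from(scene, "cat")))  # [('is-a', 'animal'), ('on', 'mat')]
```

Reasoning rules in such systems then operate over these edges rather than over surface text.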
Pei

On 6/30/07, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
NLP is often regarded as some sort of peripheral I/O system, potentially allowing AGI to communicate, but not in itself part of AGI, and not even worth developing early on. But maybe NLP can be just an aspect of AGI reasoning, and can be taught as a natural part of AGI training?

My take on AGI is a system that can perform "reflexive reasoning" [Shastri 1993]. Given some form of scene description (and memory about context), in a single reflexive reasoning iteration the system can infer (in a limited, association-like way) many additional bits of information about the scene, including the categories of the objects present and the structure of the scene. By iterating reflexive reasoning steps, it can implement complex schemes. Combinatorial explosion is controlled by limiting the amount of information inferred as a result of each reflexive iteration.

For scene description I planned to use a collection of labeled graphs, interacting with each other and with episodic memory (also consisting of such graphs). Graph edges were to be labeled with (English) words, representing concepts. I even developed a tiny language to easily specify such graphs, in a form similar to plain English sentences (something whose absence in so many AI projects I wonder about).

My view of the reflexive step (minus learning, which I don't write about here anyway) happens to correlate with [Hofstadter 1995]. The main difference is that in Hofstadter's domains the scene description is a _string of symbols_ (numbers, whatever), but the reasoning step enables an equally complex structure of that scene to be inferred. The structure of NL phrases is not overly complex and can just as well be subject to this kind of inference. Even word boundaries may be left to be extracted by the scene-structure inference step. Words/concepts in this view are ordinary patterns, and the basics of their extraction from phrases can be a good test of an elementary AGI engine's pattern-extraction capabilities (as well as of effective teaching techniques to induce such pattern extraction).
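The iteration-with-a-budget idea described above might be sketched as follows: each reflexive step applies simple association rules to the scene's triples and keeps at most `budget` newly inferred facts, which is the combinatorial-explosion control mentioned. The rule, names, and representation here are all illustrative assumptions, not Shastri's actual connectionist model:

```python
# Sketch of one "reflexive reasoning" iteration over a labeled graph.
# Facts are (subject, relation, object) triples; rules are simple
# association templates. Everything here is an illustrative assumption.

def reflexive_step(facts, rules, budget):
    """Apply each rule to each fact, keeping at most `budget` new facts."""
    inferred = []
    for rule in rules:
        for fact in sorted(facts):            # deterministic order
            new = rule(fact, facts)
            if new and new not in facts and new not in inferred:
                inferred.append(new)
            if len(inferred) >= budget:       # the cap limits explosion
                return facts | set(inferred)
    return facts | set(inferred)

def isa_transitivity(fact, facts):
    """Toy rule: a limited, association-like "is-a" chaining step."""
    s, r, o = fact
    if r == "is-a":
        for (s2, r2, o2) in facts:
            if s2 == o and r2 == "is-a":
                return (s, "is-a", o2)
    return None

scene = {("cat", "is-a", "mammal"), ("mammal", "is-a", "animal")}
scene = reflexive_step(scene, [isa_transitivity], budget=1)
print(("cat", "is-a", "animal") in scene)  # True
```

Iterating `reflexive_step` then builds up scene structure gradually, with the budget keeping each step cheap, in the spirit of the limited per-iteration inference described above.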
Are there any papers along these lines? NLP projects operating on the character level, harbouring AGI ambitions (not language syntax extraction)? Did someone on the list try to take this road?

[Shastri 1993] Lokendra Shastri, Venkat Ajjanagadde. (1993). From simple associations to systematic reasoning.
[Hofstadter 1995] Douglas Hofstadter. (1995). Fluid Concepts and Creative Analogies.

--
Vladimir Nesov mailto:[EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?&
