On Jan 11, 2008 3:01 PM, William Pearson <[EMAIL PROTECTED]> wrote:
> Vladimir,
>
> > What do you mean by difference in processing here?
>
> I said the difference was after the initial processing. By processing
> I meant syntactic and semantic processing. After processing the
> syntax-related sentence, the realm of action is changing the system
> itself, rather than knowledge of how to act on the outside world. I'm
> fairly convinced that self-change/management/knowledge is the key
> thing that has been lacking in AI, which is why I find it different
> and interesting.
I fully agree with this sentiment, which is why I take it a step further. Instead of building explicit lexical and syntactic processing (however mutable), I propose handling textual input the same way all other semantics is handled. In other words, text isn't preprocessed before it reaches the semantic level; it's dumped there unchanged. The same processes that analyze semantics and extract high-level regularities would analyze sequences of symbols and extract words, syntactic structure, and so on. Because language processing then rests on the same, inevitably mutable knowledge representation, the problem of integrating a separate, mutable language-processing component doesn't arise.

> > I think that both
> > instructions can be perceived by AI in the same manner, using the same
> > kind of internal representations, if IO is implemented on a sufficiently
> > low level, for example as a stream of letters (or even their binary
> > codes). This way knowledge about spelling and syntax can work with
> > low-level concepts influencing little chunks of IO perception and
> > generation, and 'more semantic' knowledge can work with more
> > high-level aspects. It's less convenient for quick dialog system setup
> > or knowledge extraction from a text corpus, but it should provide
> > flexibility.
>
> I'm not quite sure of the representation or system you are describing,
> so I can't say what it can or cannot do.
>
> Would you expect it to be able to do the equivalent of switching to
> think in a different language?

Certainly, including mixing of languages. (I'm not sure thinking itself is very language-dependent.) That is why it might be useful to supply binary codes of letters instead of the letters themselves: that way any Unicode symbol can be fed in, so the system could learn new alphabets without needing a new, separate modality.
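To make the "binary codes instead of letters" idea concrete, here is a minimal sketch (the function name is my own; the only assumption is UTF-8 as the encoding): any Unicode text reduces to the same alphabet of 256 byte values, presented one value per time step, so a new script needs no new input modality.

```python
# Sketch: presenting text to the system as a raw byte stream,
# one value per time step, instead of as preprocessed tokens.

def to_input_stream(text: str):
    """Yield one byte value (0-255) per tick, in UTF-8 order."""
    for b in text.encode("utf-8"):
        yield b

latin = list(to_input_stream("word"))
cyrillic = list(to_input_stream("слово"))

print(latin)           # [119, 111, 114, 100]
print(len(cyrillic))   # 10 -- two bytes per Cyrillic letter in UTF-8
```

Latin and Cyrillic input arrive through the identical channel; only the learned statistics over byte sequences differ.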
The representation I'm talking about, if you omit learning for simplicity, is basically a production system that produces (activates) a set of unique symbols (concepts) each tact (time step), based on the sets produced in the previous k tacts. For IO there are special symbols: input corresponds to external activation of symbols, and output consists in detecting that special output symbols have been activated by the system. Streamed input corresponds to sequential activation of the letters of the input text, so that the first letter is externally activated at the first tact, the second letter at the second tact, and so on.

--
Vladimir Nesov
mailto:[EMAIL PROTECTED]
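A minimal sketch of the production cycle described above (the rule format, the particular symbols, and k = 2 are my own illustrative assumptions, not part of the proposal):

```python
# Each tact (tick) the system activates a set of symbols based on the
# sets active in the previous k tacts; input is the external activation
# of one letter symbol per tick.

from collections import deque

K = 2  # how many past tacts a production may condition on

# Hypothetical rule format: if `condition` is contained in the union of
# the recent active sets plus this tick's input, activate `result`.
productions = [
    (frozenset({"c", "a"}), "CA"),    # bigram detector
    (frozenset({"CA", "t"}), "CAT"),  # word detector built on the bigram
]

def run(input_letters):
    history = deque(maxlen=K)  # active sets of the last K tacts
    trace = []
    for letter in input_letters:
        context = set().union(*history)  # symbols from previous k tacts
        active = {letter}                # external activation of input
        for condition, result in productions:
            if condition <= context | active:
                active.add(result)
        history.append(active)
        trace.append(active)
    return trace

trace = run("cat")
print(trace)  # "CA" fires on the second tact, "CAT" on the third
```

Output symbols would work symmetrically: a designated subset of symbols whose activation the environment watches for, rather than drives.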
