> We are developing QA add-ons for our chat software; we need semi-automatic
> knowledge extraction. We have 1-2 years to build the QA stuff; imho it's
> AGI-ish. We use CLIPS, NARS ideas and MuliNet. It's slow: today,
> 30s/sentence on a 1 GHz PC.
At what stage do you reckon you'll have a system that will be

a) learning about language (how to comprehend & produce it)

b) learning about some nonlinguistic domain (the "grounding domain"), to
which some of the language it hears/uses pertains

c) using learning about the grounding domain to enhance its knowledge of
language production/comprehension pertinent to that domain

d) using some kind of cognition to transfer linguistic knowledge gathered
via the grounding domains, to language about other nonlinguistic domains

These ingredients seem to me to be basic to any "AGI-ish" NLP system...

Of course, I realize that's not the only way to think about it...

> > I have doubts about how "robust" you can make narrow-AI chat software.
> > Unlike narrow-AI chess software, I believe narrow-AI chat software is
> > always going to basically suck. However, it may serve some specific
> > narrowly-defined business needs well, in spite of its suckage judged
> > by the standards of flexible human conversation...
>
> Look into community chats and you'll see how real human conversation
> sucks. Look into a bestselling newspaper and it's not much better. With
> a chatter-bot we'll get a paid platform to experiment with narrow-AI and
> AGI-ish software.

Well, the grammar is very poor in many human chat groups. However, a lot
of meaning is sometimes being transmitted, using a common world-model
shared by (most of) the participants.

Even so, I agree, you could probably make a functional chat bot in the
community-chat domain with a clever combination of narrow-AI methods.
[Whether there's a scientific point in emulating idiotic human behavior is
another question. And whether there's a business point is yet another
question... presumably you've found that there is!]

A USA Today article is a whole different matter. You won't create a
narrow-AI system that will write a good USA Today article.
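To make ingredients (a)-(d) above a bit more concrete, here is a toy sketch
(my own illustration, not anything from the systems mentioned in this
thread; all class and method names are hypothetical): a learner that
acquires word meanings by cross-situational co-occurrence counting in one
grounding domain, then reuses those word-to-symbol mappings on sentences
about other situations.

```python
from collections import Counter, defaultdict

class GroundedLearner:
    """Toy illustration of ingredients (a)-(d): words grounded in scene symbols."""

    def __init__(self):
        # word -> Counter of co-occurring nonlinguistic symbols
        self.cooc = defaultdict(Counter)

    def observe(self, sentence, scene):
        # (a)+(b): hear a sentence while perceiving a scene in the
        # grounding domain; count every word/symbol pairing
        for word in sentence.split():
            for symbol in scene:
                self.cooc[word][symbol] += 1

    def meaning(self, word):
        # (c): the best-supported grounding sharpens comprehension of the word
        counts = self.cooc.get(word)
        return counts.most_common(1)[0][0] if counts else None

    def interpret(self, sentence):
        # (d): transfer the learned word->symbol map to new sentences,
        # beyond the particular scenes trained on
        return [self.meaning(w) for w in sentence.split()]

learner = GroundedLearner()
# training in a blocks-world grounding domain (lists keep tie-breaking stable)
learner.observe("red block", ["RED", "BLOCK"])
learner.observe("blue block", ["BLUE", "BLOCK"])
learner.observe("red ball", ["RED", "BALL"])
print(learner.meaning("red"))            # cross-situational stats isolate RED
print(learner.interpret("blue block"))   # apply the learned mappings
```

Of course, real systems need structured semantics and syntax, not bags of
co-occurrence counts; the point is only the shape of the grounding/transfer
loop.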
Maybe a news summary (tho current text-summarization software still falls
significantly short of human standards). The problem isn't grammaticality,
though current tech still can't do nearly as well as a human journalistic
writer. The problem is the synergy between grammatical structure, theme,
semantics & style... which all combine to give "meaning." I really doubt
this can be achieved except by a program that in some way "understands the
meaning" of what it's producing.

Could a narrow-AI program produce English translations of foreign news
articles that were worth reading? Probably, though no tech is there yet --
but that's an easier problem. Structural issues and choices of what to say
are carried over from language to language, and a good reader can ignore
occasional mistranslations & continual stylistic infelicities.

-- Ben G

-------
To unsubscribe, change your address, or temporarily deactivate your
subscription, please go to http://v2.listbox.com/member/
