On 05/11/2007, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> On Sat, Nov 03, 2007 at 03:45:30AM -0400, Jiri Jelinek wrote:
> > Are you aware in how many ways you can go wrong with:
>
> One problem I see with this mailing list is an almost intentional
> desire to misinterpret. I never claimed I was building an AGI,
> or a problem solver, or a learning machine, or any of a dozen
> other things for which there were replies.
>
> I asked a very simple question about conversational state.
> My goal was to build something that was one step beyond
> alicebot, by simply maintaining conversational state and
> drawing upon a KB to deal with various "common sense"
> assertions as they show up. So criticisms along the lines of
> "that won't be AGI" are rather pointless.
It is amazing what some people think is going to be AGI-capable... Also, you are posting on an AGI mailing list, so narrow-AI discussion is slightly off-topic. Not to say it shouldn't be discussed, but flagging it heavily as such is probably a good idea. Talking about the age equivalence or IQ of your system is also not a good idea, if you want to give people the right impression that you are not going for AGI.

I'm also wondering what you would consider success in this case. For example, do you want the system to be able to maintain conversational state such as would be needed to deal with the following?

"For all following sentences take the first letter of each word and make English sentences out of it, reply in a similar fashion. How is the hair? Every rainy evening calms all nightingales. Yesterday, ornery ungulates stampeded past every agitated koala. Fine red eyebrows, new Chilean hoovers?"

Will Pearson

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=61580236-1fc225
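[Editor's aside: the decoding step the puzzle asks for is purely mechanical, which is part of why it is a nice test of conversational state rather than of language skill. A minimal Python sketch of that step (the function name is hypothetical, not from the thread):]

```python
import re

def first_letters(text):
    """Take the first letter of each word, per the puzzle's rule."""
    words = re.findall(r"[A-Za-z]+", text)
    return "".join(w[0] for w in words).lower()

# The "following sentences" from the puzzle, verbatim:
puzzle = ("How is the hair? Every rainy evening calms all nightingales. "
          "Yesterday, ornery ungulates stampeded past every agitated koala. "
          "Fine red eyebrows, new Chilean hoovers?")

print(first_letters(puzzle))  # hitherecanyouspeakfrench
```

Extracting the letters is the easy half; the hard half, and the point of the test, is that the system must remember the standing instruction across turns and compose its own reply in the same encoded form.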
