If this had been posted to my "[EMAIL PROTECTED]" ML, I would
have declared it in violation of my NO CYC policy and put the sender on
notice. =\


[sender's name omitted to save him some embarrassment].
> At 15:56 17.11.02 -0500, Ben wrote:
> >...
> >a) learning about language (how to comprehend & produce it)

> We are using data from our human-to-human chat rooms; this data is used
> to train a hidden Markov model supertagger.
> We train two things: normal utterances and discourse.
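Not their actual system, of course, but for anyone unfamiliar with HMM tagging, here is a minimal sketch of the Viterbi decoding such a supertagger builds on; the tags, vocabulary, and probabilities are toy assumptions:

```python
# Toy HMM tagger: Viterbi decoding over a tiny hand-set model.
# Tags, vocabulary, and probabilities are invented for illustration.

def viterbi(words, tags, start_p, trans_p, emit_p):
    """Return the most likely tag sequence for `words`."""
    # V[i][t] = (best probability of reaching tag t at word i, best path)
    V = [{t: (start_p[t] * emit_p[t].get(words[0], 1e-6), [t]) for t in tags}]
    for w in words[1:]:
        row = {}
        for t in tags:
            best_prev = max(tags, key=lambda p: V[-1][p][0] * trans_p[p][t])
            prob = (V[-1][best_prev][0] * trans_p[best_prev][t]
                    * emit_p[t].get(w, 1e-6))
            row[t] = (prob, V[-1][best_prev][1] + [t])
        V.append(row)
    return max(V[-1].values())[1]

tags = ["GREET", "STMT"]
start_p = {"GREET": 0.6, "STMT": 0.4}
trans_p = {"GREET": {"GREET": 0.3, "STMT": 0.7},
           "STMT":  {"GREET": 0.1, "STMT": 0.9}}
emit_p = {"GREET": {"hello": 0.5, "hi": 0.4},
          "STMT":  {"i": 0.3, "like": 0.3, "horses": 0.3}}

print(viterbi(["hello", "i", "like", "horses"], tags, start_p, trans_p, emit_p))
# -> ['GREET', 'STMT', 'STMT', 'STMT']
```

A real supertagger would have far richer tags (lexical categories) learned from the chat-room data, but the decoding step is the same idea.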

> It learns about things a user tells it: his job, where he comes from, his
> mood, etc. We try to cluster this and find correlations, e.g. young girls
> like horses, and a horse has a name. The next time another girl talks
> about her horse, the bot asks for the horse's name ...
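The horse example suggests something like the following sketch: a per-user fact store plus follow-up patterns mined from earlier chats and reused on new users (all names and the rule format here are hypothetical):

```python
# Hypothetical sketch: remember facts per user, and reuse a follow-up
# pattern learned from one chat (girls with horses name them) on the
# next user who raises the same topic.

profiles = {}    # user -> {attribute: value}
followups = {}   # topic -> question template mined from earlier chats

def remember(user, attr, value):
    profiles.setdefault(user, {})[attr] = value

def learn_followup(topic, question):
    followups[topic] = question

def react(user, topic):
    """Ask the learned follow-up for this topic, if any."""
    return followups.get(topic, "Tell me more.")

remember("anna", "pet", "horse")
learn_followup("horse", "What is your horse's name?")   # mined correlation

remember("maria", "pet", "horse")
print(react("maria", "horse"))   # -> What is your horse's name?
```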

> We try to transfer simple things into a MultiNet database with
> probability info (like NARS). "A dolphin is a fish" becomes
> (dolphin SUB fish) + (* CTXT <folk theory>)
> It's easy to translate this MultiNet into other languages:
> Der Delphin ist ein Fisch. ("The dolphin is a fish.")
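Since the MultiNet relation is language-neutral, surface text in any language can be generated from the same triple. A rough sketch (the field names, templates, and confidence value are my own assumptions, not the actual MultiNet schema):

```python
# Hypothetical encoding of a MultiNet-style relation with a context tag,
# roughly in the spirit of NARS truth annotations.
from dataclasses import dataclass

@dataclass
class Relation:
    subj: str
    rel: str
    obj: str
    context: str = "default"
    confidence: float = 1.0

fact = Relation("dolphin", "SUB", "fish", context="folk theory", confidence=0.6)

# One triple, many surface languages:
templates = {"en": "The {s} is a {o}.", "de": "Der {s} ist ein {o}."}
lexicon_de = {"dolphin": "Delphin", "fish": "Fisch"}

print(templates["en"].format(s=fact.subj, o=fact.obj))
# -> The dolphin is a fish.
print(templates["de"].format(s=lexicon_de[fact.subj], o=lexicon_de[fact.obj]))
# -> Der Delphin ist ein Fisch.
```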

> We use the positive/negative feedback of the human chatter to rate the
> bot answers.
> (dolphin SUB fish) becomes
> (dolphin SUB fish) + (* CTXT <folk theory>)
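One way to read this: negative feedback does not delete the fact, it demotes it into a "folk theory" context. A toy sketch of such a feedback loop (thresholds, step sizes, and field names are assumptions for illustration):

```python
# Toy feedback loop: chatter reactions shift a fact's confidence; a fact
# that keeps being corrected is tagged with a context instead of deleted.

facts = {("dolphin", "SUB", "fish"): {"confidence": 0.9, "context": None}}

def feedback(triple, positive):
    f = facts[triple]
    step = 0.1 if positive else -0.2          # assumed step sizes
    f["confidence"] = round(min(1.0, max(0.0, f["confidence"] + step)), 2)
    if f["confidence"] < 0.5 and f["context"] is None:
        f["context"] = "folk theory"          # demote, don't delete

for _ in range(3):                            # three corrections by chatters
    feedback(("dolphin", "SUB", "fish"), positive=False)

print(facts[("dolphin", "SUB", "fish")])
# -> {'confidence': 0.3, 'context': 'folk theory'}
```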

> This works for chat small talk.

> We are cheating with parsers and AIML, because a bot learning without
> cheating is too boring for chatters. We try to reduce the cheating (a.k.a.
> narrow-AI or AGI-ish tools).

> We can feed easy texts for kids into the system; many sentences are
> parsed correctly, and a few are transferred correctly into the
> MultiNet DB. We try to get feedback from chatters; that's supervised
> learning by chatters.
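The pipeline being described, as I understand it, is: (cheating) parser turns a simple sentence into a triple, and a chatter confirms or rejects it before it enters the DB. A deliberately naive sketch:

```python
# Toy pipeline sketch: a pattern "parser" extracts triples from very
# simple sentences; a chatter's yes/no supervises what enters the DB.
import re

def parse(sentence):
    """Very naive 'X is a Y' pattern; real parsing is far harder."""
    m = re.match(r"(?:a|the)?\s*(\w+) is a (\w+)\.?", sentence.lower())
    return (m.group(1), "SUB", m.group(2)) if m else None

db = []
labeled = [("A dolphin is a fish.", True),
           ("Time is a thief.", False)]       # metaphor rejected by chatter

for sentence, chatter_says_ok in labeled:
    triple = parse(sentence)
    if triple and chatter_says_ok:            # supervised by the chatter
        db.append(triple)

print(db)   # -> [('dolphin', 'SUB', 'fish')]
```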
 
> For a new, similar language the bootstrap process is much easier.

[...]

-- 
pain (n): see Linux.
http://users.rcn.com/alangrimes/
