Hi all,

I ran some experiments on the IWSLT 2006 Chinese-English test set using Moses.
On the read-speech task, the BLEU score with 1-best input is 0.1778, while the
score with confusion-network input is 0.1257, so results become much worse with
the confusion network. Is this reasonable, or does it indicate that something
is wrong in my experiments?
I used the SRI lattice-tool to convert the word lattices into confusion networks,
used Moses's processPhraseTable program to build the binary phrase table, and
finally set -inputtype and -weight-i.
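For reference, a minimal sketch of the pipeline as I understand it, assuming SRILM's lattice-tool and the standard Moses tools; the file names, -posterior-scale value, and the -weight-i value are illustration only, not the settings I actually tuned:

```shell
# 1) Convert an SLF word lattice into a confusion network (word mesh) with SRILM:
lattice-tool -in-lattice asr.slf -write-mesh asr.cn -posterior-scale 1.0

# 2) Binarize the phrase table for use by the decoder:
processPhraseTable -ttable 0 0 phrase-table.gz -nscores 5 -out phrase-table.bin

# 3) Decode with confusion-network input (-inputtype 1) and an input-score weight:
moses -f moses.ini -inputtype 1 -weight-i 0.1 < asr.cn > output.txt
```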

I analyzed the decoding procedure and found that the confusion network carries
only emission probabilities (word posteriors).
As we know, when searching for the 1-best path in an ASR word lattice, both the
emission probabilities and the transition probabilities are considered.
Could the missing source-language transition probability be the reason for my
results? Is there a good way to integrate a source language model feature into
Moses's confusion-network decoding?
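To make the question concrete, here is a toy sketch (not Moses code) of what I mean by combining the two score types: a Viterbi search over confusion-network columns that scores each arc by its posterior ("emission") plus a weighted source-LM bigram score ("transition"). The bigram table, back-off score, and weight lam are invented illustration values:

```python
# Toy sketch: fold a source-LM transition score into CN decoding.
# Each CN column is a list of (word, log_posterior) pairs.

BIGRAM_LOGPROB = {          # made-up source bigram LM (log probabilities)
    ("<s>", "a"): -0.2, ("<s>", "the"): -0.9,
    ("a", "cat"): -0.3, ("the", "cat"): -0.4,
}
UNK = -2.0                  # back-off score for unseen bigrams

def decode(cn, lam=0.5):
    """Viterbi over CN columns; score = emission + lam * transition."""
    # each hypothesis: (score, last_word, path)
    beams = [(0.0, "<s>", [])]
    for column in cn:
        expanded = []
        for score, prev, path in beams:
            for word, emit in column:
                trans = BIGRAM_LOGPROB.get((prev, word), UNK)
                expanded.append((score + emit + lam * trans, word, path + [word]))
        # recombine: keep the best hypothesis per distinct last word
        best = {}
        for s, w, p in expanded:
            if w not in best or s > best[w][0]:
                best[w] = (s, w, p)
        beams = list(best.values())
    return max(beams)[2]

cn = [[("a", -0.1), ("the", -0.5)],
      [("cat", -0.2), ("cap", -0.9)]]
print(decode(cn))   # -> ['a', 'cat']
```

With lam=0 this reduces to what Moses seems to do now (posteriors only); with lam>0 the source LM can break ties between acoustically similar arcs.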

Thank you very much.

xiaoguang hu

_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
