Hi,

I'm now trying to use word lattice decoding.
I observed a performance drop (around 5% in BLEU) when I simply concatenate
the source-language tokens into a linear *word lattice*. I expected this to
give almost identical results to plain sentence decoding.
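For context, a minimal sketch of what I mean by "linear word lattice": each token becomes one node with a single outgoing arc. This assumes Moses' PLF (Python Lattice Format), where (as I understand the lattice documentation) each arc is a (label, score, distance-to-next-node) triple:

```python
# Sketch: turn a tokenized sentence into a linear lattice in PLF,
# assuming arcs of the form (label, score, distance). Each token
# gets one node with a single arc of score 1.0 spanning one position.
def sentence_to_plf(tokens):
    nodes = []
    for tok in tokens:
        # Escape characters that would break the Python-style literal.
        escaped = tok.replace("\\", "\\\\").replace("'", "\\'")
        nodes.append("(('%s', 1.0, 1),)" % escaped)
    return "(" + ", ".join(nodes) + ",)"

print(sentence_to_plf("das ist ein Haus".split()))
# ((('das', 1.0, 1),), (('ist', 1.0, 1),), (('ein', 1.0, 1),), (('Haus', 1.0, 1),),)
```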

Since I'm using this technique on some artificial language pairs, I'm
wondering whether this is normal. Do you also see this problem on natural
language pairs?

If possible, how can I improve the word lattice decoding result?

Thanks in advance!

Best,
Wei Qiu
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
