Hello Raj,
Usually, nplm is used in addition to a back-off LM for best results.
That being said, your results indicate that nplm is performing poorly.
If you have little training data, a smaller vocabulary size and more
training epochs may be appropriate. I would also advise providing a
development set to the nplm training program so that you can track
training progress and compare perplexity with back-off models.
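To make the combined setup concrete: in moses.ini you would have both
LMs as separate feature functions, roughly like this (paths, names and
weights below are placeholders, and the weights get re-tuned during
MERT/MIRA anyway):

  [feature]
  KENLM name=LM0 factor=0 order=5 path=/path/to/backoff.blm
  NeuralLM factor=0 order=5 path=/path/to/nplm.model

  [weight]
  LM0= 0.5
  NeuralLM0= 0.5

For the development set, run it through prepareNeuralLM the same way
as your training data and pass the result to the trainer, something
along these lines (option names from memory, please double-check
against trainNeuralNetwork --help and the Moses manual):

  trainNeuralNetwork --train_file train.ngrams \
      --validation_file dev.ngrams \
      --num_epochs 20 --model_prefix model.nplm

The validation perplexity reported after each epoch is what you can
compare against the perplexity that KenLM's query tool reports for
your back-off model on the same dev set.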
best wishes,
Rico
On 13/09/15 10:51, Rajnath Patel wrote:
Hi all,
I have tried the neural LM (nplm) with phrase-based English-Hindi SMT,
but the translation quality is noticeably worse than with the n-gram
LM (scores are given below). I have trained LMs of order 3 and 5 with
the default settings (as described on http://statmt.org/moses). Kindly
let me know if someone has tried the same English-Hindi SMT and got
improved results. What may be the probable cause of the degraded
results?
BLEU scores:
  n-gram (5-gram)    = 24.40
  neural-lm (5-gram) = 11.30
  neural-lm (3-gram) = 12.10
Thank you.
--
Regards:
Raj Nath Patel
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support