Dear all,
I have checked out the latest version of moses and nplm and compiled moses
successfully with the --with-nplm option.
I got a ton of warnings during compilation but in the end it all worked out
and all the desired binaries were created. Simply executing the moses
binary told me that the BilingualNPLM and NeuralLM features were available.

I trained an NPLM model based on the instructions here:
http://www.statmt.org/moses/?n=FactoredTraining.BuildingLanguageModel#ntoc33
The corpus I used was about 600k lines (Chinese-Japanese; the target side
is Japanese).

I then integrated the resulting language model (after 10 iterations) into
the decoding process via moses.ini.
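For reference, the relevant moses.ini entries look roughly like the
following (the path, feature name, and order shown here are illustrative
placeholders, not my exact values):

```ini
# [feature] section: register the neural LM feature function
[feature]
NeuralLM name=LM0 factor=0 order=5 path=/path/to/train.10.model.nplm

# [weight] section: initial weight for the feature, to be tuned
[weight]
LM0= 0.5
```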

I ran tuning with standard parameters and got no errors, which suggests
that the neural language model (NPLM) was recognized and queried
correctly.
I also ran tuning without a language model.

The strange thing is that the tuning and test BLEU scores for both these
cases are almost the same. I checked the weights and saw that the LM was
assigned a very low weight.

On the other hand, when I used KenLM on the same data, I got comparatively
higher BLEU scores.

Am I missing something? Am I using the NeuralLM in an incorrect way?

Thanks in advance.



-- 
Raj Dabre.
Doctoral Student,
Graduate School of Informatics,
Kyoto University.
CSE MTech, IITB., 2011-2014
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
