I tried the bilingual LM model on the German-English baseline dataset (wget
http://www.statmt.org/wmt13/training-parallel-nc-v8.tgz) and it did not
improve the scores: I obtained the same score of 0.2266 BLEU.
Thanks for your help.
Ergun
Hi Rico,
Thanks for the links. Accordingly, I tried decreasing the learning rate to
0.25 and started seeing numbers instead of nan in the log-likelihood.
Vocabulary files are not needed when using train_nplm.py.
I restarted tuning, and the 'nan' scores for the bilingual LM disappeared
from the N-best list as well.
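For reference, a sketch of the direct invocation with the lowered rate (the
--learning-rate flag name and all paths here are assumptions; check
train_nplm.py --help for your Moses version):

    # Sketch only: flag names and paths are assumptions, not verified.
    # With order 5 and source-window 4, the bilingual n-gram size is
    # 5 + (2*4+1) = 14 target plus source tokens.
    python Programs/mosesdecoder/scripts/training/bilingual-lm/train_nplm.py \
        --working-dir working/bilingual-lm \
        --corpus working/bilingual-lm/train \
        --nplm-home Programs/nplm \
        --ngram-size 14 \
        --learning-rate 0.25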
Hello Ergun,
We've had the 'nan' issue reported before (see
https://moses-support.mit.narkive.com/hs8LwsnT/blingual-neural-lm-log-likelihood-nan
https://moses-support.mit.narkive.com/fklzlBiW/bilingual-lm-nan-nan-nan).
You can follow Nick's recommendation of lowering the learning rate, or try ...
I found that training also produced 'nan' scores:
Training NCE log-likelihood: nan.
I used EMS training:
[LM:comb]
nplm-dir = "Programs/nplm/"
order = 5
source-window = 4
bilingual-lm = yes
bilingual-lm-settings = "--prune-source-vocab 10 --prune-target-vocab 10"
I am re-running train_nplm.py.
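While it re-runs, the training log can be watched for a recurrence (the log
path below is a placeholder for wherever EMS writes the nplm training log):

    # Check the nplm training log for 'nan' log-likelihood lines
    # (the path is a placeholder, not the actual EMS location).
    grep -i "log-likelihood" working/lm/comb.blm.log | grep -i nan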
Dear moses-support,
I tried the nplm model on the German-English baseline dataset (wget
http://www.statmt.org/wmt13/training-parallel-nc-v8.tgz) and it improved
the scores from 0.2266 to 0.2317 BLEU.
I tried the bilingual LM:
http://www.statmt.org/moses/?n=FactoredTraining.BuildingLanguageModel#