Strange. How many iterations does it do? I use MERT with hierarchical models
all the time and it works OK. This is the exact command from one of my runs:

/home/s0565741/workspace/github/mosesdecoder.hieu/scripts/training/mert-moses.pl \
    /home/s0565741/workspace/experiment/nc/de-en/tuning/input.lc.1 \
    /home/s0565741/workspace/experiment/nc/de-en/tuning/reference.lc.1 \
    /home/s0565741/workspace/github/mosesdecoder.hieu/bin/moses \
    /home/s0565741/workspace/experiment/nc/de-en/tuning/moses.filtered.ini.2 \
    --nbest 100 \
    --working-dir /home/s0565741/workspace/experiment/nc/de-en/tuning/tmp.2 \
    --decoder-flags "-threads 32 -v 0" \
    --rootdir /home/s0565741/workspace/github/mosesdecoder.hieu/scripts \
    -mertdir /home/s0565741/workspace/github/mosesdecoder.hieu/bin \
    -threads 32 \
    --no-filter-phrase-table
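One quick way to answer the "how many iterations" question is to look in the MERT working dir. This is only a sketch, assuming the default mert-moses.pl layout in which each completed pass writes a run<N>.moses.ini file there:

```shell
# Sketch: count completed MERT iterations from the working directory.
# Assumes the default mert-moses.pl layout (run1.moses.ini, run2.moses.ini, ...).
count_mert_iterations() {
  ls "$1"/run*.moses.ini 2>/dev/null | wc -l
}
```

For the run above you would call it on the --working-dir, e.g. `count_mert_iterations /home/s0565741/workspace/experiment/nc/de-en/tuning/tmp.2`. If it reports 0 or 1, MERT likely died before producing usable weights, which would explain an all-zero moses.ini.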


Hieu Hoang
Researcher
New York University, Abu Dhabi
http://www.hoang.co.uk/hieu

On 9 April 2015 at 17:08, Shiman Guo <[email protected]> wrote:

> Hi all,
>
> I was trying to train and tune a baseline hierarchical phrase-based model,
> following the baseline tutorial (with additional options to
> `train-model.pl`). While the training was efficient and successful, the
> tuning process always produced all-zero weights in the `moses.ini`.
>
> The command I used for tuning was:
>
> ${MOSES-ROOT}/scripts/training/mert-moses.pl \
>     ${CHN-DEVSET} ${ENG-DEVSET} \
>     ${MOSES-ROOT}/bin/moses ./model/moses.ini \
>     --mertdir ${MOSES-ROOT}/bin/
>
> It worked fine with a phrase-based model, so I think the installation is
> good. I also tried using moses_chart as the decoder, but that made no
> difference.
>
> Has anyone else run into this issue? Please advise.
>
>
> Best,
> Shiman
>
> _______________________________________________
> Moses-support mailing list
> [email protected]
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
>
