Re: [Moses-support] decoding a confusion network using Moses' API

2012-04-26 Thread Sylvain Raybaud
Wild guessing here: in TranslationTask::Run, I see there are many alternatives for processing the sentence, like doLatticeMBR etc., not just running Manager::ProcessSentence(). Maybe one of these alternatives must be run for processing confusion networks? Cheers, Sylvain On 26/04/12 15:53, Sylvain

[Moses-support] Higher BLEU/METEOR score than usual for EN-DE

2012-04-26 Thread Daniel Schaut
Hi all, I'm running some experiments for my thesis and I've been told by a more experienced user that the achieved scores for BLEU/METEOR of my MT engine were too good to be true. Since this is the very first MT engine I've ever made and I am not experienced with interpreting scores, I really

Re: [Moses-support] Higher BLEU/METEOR score than usual for EN-DE

2012-04-26 Thread Barry Haddow
Hi Daniel BLEU scores do vary according to test set, but the scores you report are much higher than usual. The most likely explanation is that some of your test set is included in your training set. Cheers - Barry On Thursday 26 April 2012 19:18:33 Daniel Schaut wrote: Hi all, I'm
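[A quick way to check for this kind of test-set contamination is to count how many test sentences occur verbatim in the training corpus. A minimal Python sketch; the file names are placeholders and both files are assumed to hold one sentence per line, tokenised the same way:

    # Check how many test sentences also appear in the training data.
    # "train.de" and "test.de" are hypothetical file names.
    train_file = "train.de"
    test_file = "test.de"

    with open(train_file, encoding="utf-8") as f:
        train_sentences = set(line.strip() for line in f)

    overlap = 0
    total = 0
    with open(test_file, encoding="utf-8") as f:
        for line in f:
            total += 1
            if line.strip() in train_sentences:
                overlap += 1

    print("%d of %d test sentences also occur in the training data (%.1f%%)"
          % (overlap, total, 100.0 * overlap / max(total, 1)))

Anything much above a handful of short, formulaic sentences overlapping is a sign the evaluation is compromised.]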

Re: [Moses-support] Higher BLEU/METEOR score than usual for EN-DE

2012-04-26 Thread Francis Tyers
On Thu 26/04/2012 at 20:18 +0200, Daniel Schaut wrote: Hi all, I’m running some experiments for my thesis and I’ve been told by a more experienced user that the achieved scores for BLEU/METEOR of my MT engine were too good to be true. Since this is the very first MT

Re: [Moses-support] Higher BLEU/METEOR score than usual for EN-DE

2012-04-26 Thread John D Burger
I =think= I recall that pairwise BLEU scores for human translators are usually around 0.50, so anything much better than that is indeed suspect. - JB On Apr 26, 2012, at 14:18, Daniel Schaut wrote: Hi all, I’m running some experiments for my thesis and I’ve been told by a more

Re: [Moses-support] Higher BLEU/METEOR score than usual for EN-DE

2012-04-26 Thread Miles Osborne
Very short sentences will give you high scores. Also, multiple references will boost them. Miles On Apr 26, 2012 8:13 PM, John D Burger j...@mitre.org wrote: I =think= I recall that pairwise BLEU scores for human translators are usually around 0.50, so anything much better than that is indeed
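[The effect of multiple references is easy to see with a toy example. The sketch below uses NLTK's sentence_bleu on invented German sentences (any tokenised strings would do): adding a second reference gives the hypothesis more n-grams to match, so the score goes up even though the hypothesis itself is unchanged.

    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    # Invented example sentences, purely for illustration.
    hyp = "der hund lief schnell ueber die strasse".split()
    ref1 = "der hund rannte schnell ueber die strasse".split()
    ref2 = "der hund lief rasch ueber die strasse".split()

    smooth = SmoothingFunction().method1
    print("1 reference :", sentence_bleu([ref1], hyp, smoothing_function=smooth))
    print("2 references:", sentence_bleu([ref1, ref2], hyp, smoothing_function=smooth))

This also explains why scores are not comparable across test sets: the number of references and the sentence lengths change what a "good" BLEU looks like.]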

Re: [Moses-support] Merging language models with IRSTLM..?

2012-04-26 Thread Marcello Federico
Hi, we are currently working on a project that includes incremental training of LMs. Hence, there are plans to introduce quick adaptation in IRSTLM, but not soon. The question is indeed how often you need to adapt the LM. If you are working with large news LMs, then it seems that adapting once
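[Until such adaptation lands in IRSTLM, one common stop-gap (not specific to IRSTLM, and sketched here only to illustrate the idea) is to interpolate a large background LM with a small in-domain LM, tuning the mixture weight on held-out data. A toy Python sketch with hard-coded unigram probabilities; real toolkits interpolate full n-gram models, but the arithmetic is the same:

    # LM adaptation by linear interpolation: the adapted probability is a
    # weighted mixture of a large background model and a small in-domain
    # model. The distributions below are made up for illustration.
    background = {"the": 0.05, "stocks": 0.0001, "market": 0.0005}
    in_domain  = {"the": 0.04, "stocks": 0.002,  "market": 0.003}

    lam = 0.3  # weight of the in-domain model, tuned on held-out data

    def adapted_prob(word, floor=1e-7):
        p_bg = background.get(word, floor)
        p_in = in_domain.get(word, floor)
        return (1.0 - lam) * p_bg + lam * p_in

    for w in ("the", "stocks", "market"):
        print(w, adapted_prob(w))

The appeal is that only the small in-domain component needs to be rebuilt when new data arrives, which is close in spirit to the incremental training discussed above.]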