wild guessing here: in TranslationTask::Run, I see there are many
alternatives for processing the sentence, like doLatticeMBR etc., not
just running Manager::ProcessSentence().
Maybe one of these alternatives must be run for processing confusion
networks?
cheers
Sylvain
On 26/04/12 15:53, Sylvain
Hi all,
I'm running some experiments for my thesis and I've been told by a more
experienced user that the achieved scores for BLEU/METEOR of my MT engine
were too good to be true. Since this is the very first MT engine I've ever
made and I am not experienced with interpreting scores, I really
Hi Daniel
BLEU scores do vary according to test set, but the scores you report are much
higher than usual.
The most likely explanation is that some of your test set is included in your
training set.
cheers - Barry
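Barry's suspicion above (test sentences leaking into the training data) can be checked with a few lines of Python. This is only a sketch of the idea: it treats a sentence as leaked when it appears verbatim in training, and the toy data below is made up for illustration, not from the thread.

```python
def overlap_fraction(train_sents, test_sents):
    """Fraction of test sentences that also appear verbatim in training."""
    train = {s.strip() for s in train_sents}
    hits = sum(1 for s in test_sents if s.strip() in train)
    return hits / len(test_sents)

# Toy data, invented for illustration; in practice read your
# train/test source files line by line instead.
train = ["the cat sat on the mat", "a dog barked"]
test = ["the cat sat on the mat", "birds fly south"]
print(overlap_fraction(train, test))  # 0.5: one of two test lines leaks
```

Even a small overlap fraction inflates BLEU noticeably, since the decoder has effectively memorised those sentences.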
On Thursday 26 April 2012 19:18:33 Daniel Schaut wrote:
Hi all,
I'm
On Thu, 26 Apr 2012 at 20:18 +0200, Daniel Schaut wrote:
Hi all,
I’m running some experiments for my thesis and I’ve been told by a
more experienced user that the achieved scores for BLEU/METEOR of my
MT engine were too good to be true. Since this is the very first MT
I =think= I recall that pairwise BLEU scores for human translators are usually
around 0.50, so anything much better than that is indeed suspect.
- JB
On Apr 26, 2012, at 14:18 , Daniel Schaut wrote:
Hi all,
I’m running some experiments for my thesis and I’ve been told by a more
Very short sentences will give you high scores.
Also, multiple references will boost them.
Miles
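The two effects Miles and John mention fall straight out of how BLEU is computed. The toy sentence-level implementation below (clipped n-gram precisions, brevity penalty, geometric mean) is a sketch for illustration only, not what mteval or any particular scorer does exactly; the example sentences are invented.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hyp, refs, max_n=4):
    """Toy sentence-level BLEU: clipped precisions + brevity penalty."""
    hyp = hyp.split()
    refs = [r.split() for r in refs]
    if not hyp:
        return 0.0
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = ngrams(hyp, n)
        if not hyp_counts:
            return 0.0
        # Clip each n-gram count by its maximum count over all references,
        # so extra references can only help the hypothesis.
        max_ref = Counter()
        for r in refs:
            for g, c in ngrams(r, n).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in hyp_counts.items())
        precisions.append(clipped / sum(hyp_counts.values()))
    if min(precisions) == 0:
        return 0.0
    closest = min(refs, key=lambda r: abs(len(r) - len(hyp)))
    bp = min(1.0, math.exp(1 - len(closest) / len(hyp)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

hyp = "the cat sat on the mat"
print(bleu(hyp, ["the cat sat on the mat"]))   # 1.0: exact match
print(bleu(hyp, ["a cat was on a mat"]))       # 0.0: no 4-gram overlap at all
print(bleu(hyp, ["a cat was on a mat",
                 "the cat sat on the mat"]))   # 1.0 again: the extra
                                               # reference can only raise it
```

Because the clipping takes a maximum over references, adding references never lowers the score, which is why multi-reference test sets report higher BLEU than single-reference ones.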
On Apr 26, 2012 8:13 PM, John D Burger j...@mitre.org wrote:
I =think= I recall that pairwise BLEU scores for human translators are
usually around 0.50, so anything much better than that is indeed
Hi,
we are currently working on a project that includes incremental training of
LMs.
Hence, there are plans to introduce quick adaptation in IRSTLM, but not soon.
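The general idea behind incremental LM training can be sketched as keeping raw counts that new data is folded into, with probabilities derived on demand. The toy bigram model below illustrates only that idea; it is not the IRSTLM API, and the add-one smoothing and example sentences are assumptions made for the sketch.

```python
from collections import Counter

class IncrementalBigramLM:
    """Toy add-one-smoothed bigram LM that absorbs new data in place.
    A sketch of incremental adaptation, not the IRSTLM API."""

    def __init__(self):
        self.bigrams = Counter()
        self.unigrams = Counter()
        self.vocab = set()

    def update(self, sentences):
        # Fold new counts into the existing model instead of retraining.
        for s in sentences:
            toks = ["<s>"] + s.split() + ["</s>"]
            self.vocab.update(toks)
            for a, b in zip(toks, toks[1:]):
                self.unigrams[a] += 1
                self.bigrams[(a, b)] += 1

    def prob(self, a, b):
        # Add-one smoothing keeps unseen bigrams at nonzero probability.
        return (self.bigrams[(a, b)] + 1) / (self.unigrams[a] + len(self.vocab))

lm = IncrementalBigramLM()
lm.update(["the market fell", "the market rose"])
p_before = lm.prob("market", "rallied")
lm.update(["the market rallied"])     # incremental adaptation step
p_after = lm.prob("market", "rallied")
print(p_before < p_after)             # True: new data shifted the model
```

How often such an update step is worth running is exactly the question raised below: for a large, slowly changing news LM, occasional adaptation may be enough.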
The question is indeed how often you need to adapt the LM. If you are working
with large news LMs then it seems that adapting once