Update: It still works with any number of language models for
factor 0. As soon as I add a single language model for factor 1, it fails.
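For reference, the kind of setup that fails might look like the moses.ini
fragment below (a minimal sketch in the newer feature-function syntax; the
names, paths, and orders are placeholders, not an actual configuration):

    [input-factors]
    0
    1

    [feature]
    # any number of LMs over factor 0 decodes fine ...
    KENLM name=LM0 factor=0 path=/path/to/lm.factor0.blm order=5
    # ... but adding one over factor 1 triggers the failure
    KENLM name=LM1 factor=1 path=/path/to/lm.factor1.blm order=5

    [weight]
    LM0= 0.5
    LM1= 0.5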
I think you're onto something here, Marcin. If I remove all my language
models and leave just the translation model, it works for me.
Just for testing, what happens if you remove the second phrase table
and add a language model for factor 1? Usually this kind of setup fails
Hi,
we also have an experiment on truecasing; see Table 1 in
http://www.statmt.org/wmt13/pdf/WMT08.pdf
What works best for us is relying on the casing as guessed by the lemmatizer.
(Our lemmatizer recognizes names as separate lemmas and keeps the lemma
upcased, which we then cast to the token
Hi,
If your system output is lowercase, you could try SRILM's `disambig`
tool for predicting the correct casing in a postprocessing step.
http://www.speech.sri.com/projects/srilm/manpages/disambig.1.html
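Something like the following could serve as the recasing step (a sketch,
assuming SRILM is installed; casing.map and truecase.lm are placeholder
files you would build yourself from cased training text):

    # casing.map lists each lowercase form with its observed cased
    # variants, one entry per line, e.g. "paris Paris" or "the the The THE";
    # truecase.lm is an n-gram LM trained on cased text.
    # -keep-unk passes words missing from the map through unchanged.
    disambig -text output.lowercased \
             -map casing.map \
             -lm truecase.lm \
             -order 2 \
             -keep-unk \
             > output.recased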
Cheers,
Matthias
Thank you all. Can you explain further what it means that MERT won't
know that the feature exists? Does that mean that the tuneable feature
weights are optimized assuming that all non-tuneable feature weights are
equal to zero?
In fact, in my understanding this should lead to a dramatic
Hi Vito
Yes, that's basically what happens, and you're right that
tuneable=false can be harmful to MERT - hence my warning. I've heard
of people trying to keep the weight of a language model fixed throughout
tuning, and this didn't work at all.
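If it helps to have it concrete, a tuneable=false declaration looks roughly
like this in moses.ini (a sketch; the name, path, and order are placeholders):

    [feature]
    # MERT will not see LM0 at all, so its weight below stays fixed
    KENLM name=LM0 factor=0 path=/path/to/lm.blm order=5 tuneable=false

    [weight]
    LM0= 0.5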
MERT (but not MIRA) also supports the option -o to
Hello,
I'm trying to do incremental training with Moses, training with multiple
translated files. I've been following the steps at
http://www.statmt.org/moses/?n=Advanced.Incremental , but I'm stuck
installing incremental GIZA (https://code.google.com/p/inc-giza-pp/).
I've been passing