Well, I installed Moses only a few months ago, so it should be the latest
version.
I find it really strange. I have tried everything: binarizing the tables
(which finishes without any problems), using the --no-filter-phrase-table
parameter, adding language models for all the factors I have (this one
Oh also, use a small -S argument to the interpolate program because it
doesn't quite budget memory properly yet.
On 06/28/2016 05:08 PM, Kenneth Heafield wrote:
Log-linear interpolation is in KenLM in the lm/interpolate directory.
You'll want to get KenLM from github.com/kpu/kenlm and compile with Eigen.
Tuning log-linear weights is super slow, but applying them is reasonably
fast. In total, the time for tuning plus applying the weights is comparable
to SRILM.
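For intuition, log-linear interpolation weights log-probabilities rather than probabilities, so the combined product has to be renormalized over the vocabulary, which is part of why tuning and applying the weights is expensive. Below is a minimal sketch over toy unigram distributions; it is not KenLM's implementation, and the dict-based representation is an assumption for illustration.

```python
import math

def log_linear_interpolate(dists, weights):
    """Combine distributions log-linearly: p(w) is proportional to
    the product over i of p_i(w) ** lambda_i.

    `dists` is a list of dicts mapping word -> probability over the
    same vocabulary; `weights` are the log-linear weights lambda_i.
    The product must be renormalized, which is what makes log-linear
    interpolation costlier than simple linear mixing.
    """
    vocab = dists[0].keys()
    unnorm = {
        w: math.exp(sum(lam * math.log(d[w]) for d, lam in zip(dists, weights)))
        for w in vocab
    }
    z = sum(unnorm.values())  # normalization constant over the vocabulary
    return {w: p / z for w, p in unnorm.items()}
```

With weights that sum to anything, the renormalization step guarantees the result is a proper distribution; a real LM has to do this per n-gram context, hence the cost.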
Hi all
I have trained several language models and would like to combine them with
interpolate-lm.perl:
https://github.com/moses-smt/mosesdecoder/blob/master/scripts/ems/support/interpolate-lm.perl
As the language model tool, I always use KenLM, but looking at the code of
interpolate-lm.perl, it
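For reference, interpolate-lm.perl performs linear interpolation, which mixes probabilities directly and needs no renormalization when the weights sum to one. A minimal sketch over toy unigram distributions (vocabulary-aligned dicts are an assumption for illustration, not the script's actual internals):

```python
def linear_interpolate(dists, weights):
    """Mix distributions linearly: p(w) = sum over i of lambda_i * p_i(w).

    If the lambda_i sum to 1 and each p_i is a proper distribution,
    the result is already normalized, so no extra pass is needed.
    """
    vocab = dists[0].keys()
    return {w: sum(lam * d[w] for d, lam in zip(dists, weights)) for w in vocab}
```

This no-renormalization property is the main computational difference from the log-linear combination discussed elsewhere in this thread.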
I have managed to track more precisely where the segfault occurs but I did
not understand why yet. It happens at some point during the for loop in the
function
void ChartHypothesis::GetOutputPhrase(Phrase &outPhrase) const
in the file moses/ChartHypothesis.cpp
Strangely, if the decoder is called in
Hi!
I have one question about pruning the translation table during EMS training.
Which method is better: SALM-based filtering, or pruning based on low scores
(described here: http://www.statmt.org/moses/?n=Advanced.RuleTables#ntoc5)?
SALM filtering takes considerably more time than score-based pruning during
model creation.
But I'm not
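To illustrate what score-based pruning does: each phrase-table line carries translation scores, and entries whose chosen score falls below a threshold are dropped. A toy sketch follows; the ` ||| `-separated layout matches the standard Moses text phrase table, but the threshold value and the choice of score index are illustrative assumptions, not the tool's defaults.

```python
def prune_phrase_table(lines, threshold=0.0001, score_index=2):
    """Keep phrase-table entries whose chosen score is >= threshold.

    Assumes the Moses text format `src ||| tgt ||| s1 s2 s3 s4 ...`.
    score_index=2 picks the third score, which in the standard
    four-score table is the direct phrase probability p(e|f);
    both defaults here are assumptions for illustration.
    """
    kept = []
    for line in lines:
        fields = line.split(" ||| ")
        scores = fields[2].split()
        if float(scores[score_index]) >= threshold:
            kept.append(line)
    return kept
```

A threshold pass like this is a single scan over the table, which is why score-based pruning tends to be cheaper than significance filtering, which must query the training corpus for each phrase pair.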