Hello,

I wonder why it takes a lot of time to do language modelling with KenLM and
SRILM when n goes beyond 6 (even on a relatively small dataset: 500 MB).
Is there a way to actually do high-order (6-, 7-, 8-gram) language
modelling with SRILM and KenLM on a laptop (12 GB RAM)? I assume there is a
flag somewhere that I need to set when creating the ARPA or binary file, or
during testing (computing perplexity, etc.).
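For reference, this is the kind of invocation I mean (a sketch only, assuming KenLM's lmplz and build_binary; the corpus name, the 80% figure, and the temp path are illustrative, not recommendations):

```shell
# Build a high-order model while capping memory use:
#   -o  sets the n-gram order,
#   -S  limits lmplz's memory (here, a percentage of physical RAM),
#   -T  directs sort spill files to disk instead of RAM.
lmplz -o 8 -S 80% -T /tmp < corpus.txt > model.arpa

# Reduce query-time memory with the trie data structure plus
# quantization (-q/-b: bits for probabilities and backoff weights).
build_binary -q 8 -b 8 trie model.arpa model.binary
```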

Thanks,
-K
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support