IRSTLM's support list is https://list.fbk.eu/sympa/subscribe/user-irstlm

Since your model appears to load fine with KenLM (which is what query 
uses), you can change your lmodel-file entry to "8 0 3" and it will work.
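
For reference, the relevant moses.ini section would then read something 
like the following (path taken from your log; the leading 8 selects the 
KenLM implementation, assuming your Moses build has KenLM compiled in):

```ini
[lmodel-file]
8 0 3 /mnt/minerva1/nlp/projects/mt_sk2/largelm2/binarised-model/prim5.blm.sk
```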

The problem appears to be a mismatch between your lmodel-file 
configuration (order 3 in "1 0 3") and your model (order 5?).
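
If in doubt about the order a model was actually built with, the \data\ 
header of the original (non-binarised) ARPA file tells you directly. A 
quick sketch using a toy header (the file name here is illustrative, not 
your actual path):

```shell
# Create a tiny ARPA-style header to demonstrate; a real model's
# \data\ section looks the same, just with larger counts.
cat > tiny.arpa <<'EOF'
\data\
ngram 1=4
ngram 2=3
ngram 3=2

\1-grams:
EOF

# The highest "ngram N=" line gives the model order:
grep -o '^ngram [0-9]*' tiny.arpa | awk '{print $2}' | sort -n | tail -1
# prints 3
```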

For the popular Kneser-Ney smoothing, it is incorrect to load a model 
with a different order than it was built with.  So I've always considered 
it a bug that Moses and SRILM require users to specify the order twice 
(once in the ARPA file, once in moses.ini).  An option would be fine for 
the few use cases, but it shouldn't be required.  KenLM always ignores 
the order you specify with lmodel-file.

Kenneth

On 04/13/13 14:31, Lukash Astalosh wrote:
> Hello, I have the following problem and I have not been able to find a
> proper solution so far.
> I am using a binarised language model, which I believe has something to
> do with the problem; when the model is not binarised, there is no problem.
> Do you have any suggestions about what could be wrong?
>
> Lukas
>
> DECODE OUTPUT:
>
> Defined parameters (per moses.ini or switch):
> config: ../model/moses-bin.ini
> distortion-file: 0-0 wbe-msd-bidirectional-fe-allff 6
> /mnt/minerva1/nlp/projects/mt_sk2/largelm2/binarised-model/reordering-table
> distortion-limit: 6
> input-factors: 0
> lmodel-file: 1 0 3
> /mnt/minerva1/nlp/projects/mt_sk2/largelm2/binarised-model/prim5.blm.sk
> mapping: 0 T 0
> ttable-file: 1 0 0 5
> /mnt/minerva1/nlp/projects/mt_sk2/largelm2/binarised-model/phrase-table
> ttable-limit: 20
> weight-d: 0.3 0.3 0.3 0.3 0.3 0.3 0.3
> weight-l: 0.5000
> weight-t: 0.20 0.20 0.20 0.20 0.20
> weight-w: -1
> ../../tools/moses/bin
> ScoreProducer: Distortion start: 0 end: 1
> ScoreProducer: WordPenalty start: 1 end: 2
> ScoreProducer: !UnknownWordPenalty start: 2 end: 3
> Loading lexical distortion models...have 1 models
> ScoreProducer: LexicalReordering_wbe-msd-bidirectional-fe-allff start: 3
> end: 9
> Creating lexical reordering...
> weights: 0.300 0.300 0.300 0.300 0.300 0.300
> binary file loaded, default OFF_T: -1
> Start loading LanguageModel
> /mnt/minerva1/nlp/projects/mt_sk2/largelm2/binarised-model/prim5.blm.sk : [2.104] seconds
> In LanguageModelIRST::Load: nGramOrder = 3
> Language Model Type of
> /mnt/minerva1/nlp/projects/mt_sk2/largelm2/binarised-model/prim5.blm.sk is 1
> Language Model Type is 1
> mmap
> loadtxt_ram()
> 6-grams: reading 0 entries
> done level 6
> 6-grams: reading 0 entries
> done level 6
> 2-grams: reading 0 entries
> done level 2
> 0-grams: reading 0 entries
> done level 0
> 6-grams: reading 0 entries
> done level 6
> 1-grams: reading 0 entries
> done level 1
> 9-grams: reading 0 entries
> done level 9
> 5-grams: reading 0 entries
> done level 5
> 5-grams: reading 0 entries
> done level 5
> 1-grams: reading 0 entries
> done level 1
> 2-grams: reading 0 entries
> done level 2
> 1-grams: reading 0 entries
> done level 1
> 3-grams: reading 0 entries
> done level 3
> 2-grams: reading 0 entries
> done level 2
> 7-grams: reading 0 entries
> done level 7
> 1-grams: reading 0 entries
> done level 1
> 91-grams: reading 0 entries
> done level 91
> 6-grams: reading 0 entries
> done level 6
> 4-grams: reading 0 entries
> done level 4
> 4-grams: reading 0 entries
> done level 4
> 7-grams: reading 0 entries
> done level 7
> 7-grams: reading 0 entries
> done level 7
> 3-grams: reading 0 entries
> done level 3
> 8-grams: reading 0 entries
> done level 8
> 6-grams: reading 0 entries
> done level 6
> 8-grams: reading 0 entries
> done level 8
> 7-grams: reading 0 entries
> done level 7
> 7-grams: reading 0 entries
> done level 7
> 7-grams: reading 0 entries
> done level 7
> 8-grams: reading 0 entries
> done level 8
> 0-grams: reading 0 entries
> done level 0
> 4-grams: reading 0 entries
> done level 4
> 8-grams: reading 0 entries
> done level 8
> 8-grams: reading 0 entries
> done level 8
> 8-grams: reading 0 entries
> done level 8
> 1-grams: reading 0 entries
> done level 1
> 5-grams: reading 0 entries
> done level 5
> 8-grams: reading 0 entries
> done level 8
> 99-grams: reading 0 entries
> done level 99
> 5-grams: reading 0 entries
> done level 5
> 7-grams: reading 0 entries
> done level 7
> 5-grams: reading 0 entries
> done level 5
> 2-grams: reading 0 entries
> done level 2
> 8-grams: reading 0 entries
> done level 8
> 4-grams: reading 0 entries
> done level 4
> 3-grams: reading 0 entries
> done level 3
> 2-grams: reading 0 entries
> done level 2
> 8-grams: reading 0 entries
> done level 8
> 5-grams: reading 0 entries
> done level 5
> 5-grams: reading 0 entries
> done level 5
> 2-grams: reading 0 entries
> done level 2
> 8-grams: reading 0 entries
> done level 8
> 7-grams: reading 0 entries
> done level 7
> 8-grams: reading 0 entries
> done level 8
> 8-grams: reading 0 entries
> done level 8
> 40-grams: reading 509707446 entries
> moses: util.cpp:289: int parseline(std::istream&, int, ngram&, float&,
> float&): Assertion `howmany == (Order+ 1) || howmany == (Order + 2)' failed.
>
>
> I checked my language model by querying it:
> Loading statistics:
> user    0
> sys     11.6287
> VmPeak:  7604228 kB
> VmRSS:   7590424 kB
> je=16 1 -2.08718  toto=3194 2 -3.12603  slovenská=9354 2 -5.49952  veta?=0 1 -4.02502  </s>=0 1 -2.31798  Total: -17.0557 OOV: 1
> After queries:
> user    0
> sys     11.6287
> VmPeak:  7604236 kB
> VmRSS:   7590424 kB
> Total time including destruction:
> user    0
> sys     13.1128
> VmPeak:  7604236 kB
> VmRSS:      1444 kB
>
>
> _______________________________________________
> Moses-support mailing list
> [email protected]
> http://mailman.mit.edu/mailman/listinfo/moses-support
>