It never throws an error, but it will run for days and make no progress past this point. By tweaking the LM and the tuning files slightly, I was finally able to get it to give me an error, but I'm not sure how to address it.
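My best guess is that the [/color] token in the exception below is leftover BBCode markup from the weblog data, and that Moses is parsing the square brackets as a non-terminal symbol. As a quick sanity check before retraining, I could scan the corpus for bracketed tokens and escape the special characters. This is just a sketch with helper names of my own; Moses itself ships scripts/tokenizer/escape-special-chars.perl, which I believe does the same entity-code escaping:

```python
import re

# Tokens in square brackets, e.g. leftover BBCode like [/color] or [b],
# which Moses can misparse as non-terminal symbols such as [X][X].
BRACKET_TOKEN = re.compile(r"\[[^\[\]]*\]")

def find_bracket_tokens(lines):
    """Return (line_number, token) pairs for every bracketed token found."""
    hits = []
    for i, line in enumerate(lines, start=1):
        for match in BRACKET_TOKEN.finditer(line):
            hits.append((i, match.group(0)))
    return hits

def escape_moses_specials(text):
    """Escape the characters Moses treats specially, in the same spirit as
    escape-special-chars.perl ('&' must be replaced first so the entity
    codes introduced afterwards are not themselves re-escaped)."""
    return (text.replace("&", "&amp;")
                .replace("|", "&#124;")
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace("[", "&#91;")
                .replace("]", "&#93;"))
```

Running find_bracket_tokens over the training and tuning files should confirm whether such tokens survived preprocessing; if so, escaping them before training ought to avoid the misparse.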
Loading table into memory...done.
Exception: moses/Phrase.cpp:214 in void Moses::Phrase::CreateFromString(Moses::FactorDirection, const std::vector<long unsigned int>&, const StringPiece&, Moses::Word**) threw util::Exception because `nextPos == string::npos'.
Incorrect formatting of non-terminal. Should have 2 non-terms, eg. [X][X]. Current string: [/color]
Exit code: 1
The decoder died. CONFIG WAS -weight-overwrite 'PhrasePenalty0= 0.043478 WordPenalty0= -0.217391 TranslationModel0= 0.043478 0.043478 0.043478 0.043478 Distortion0= 0.065217 LM0= 0.108696 LexicalReordering0= 0.065217 0.065217 0.065217 0.065217 0.065217 0.065217'
ERROR cannot open weight-ini '/scratch/ae1541/EMNLP/evaluation/silverPhrases/eng-lev/tuning/mert/moses.ini': No such file or directory at /share/apps/NYUAD/mosesdecoder/3.0/scripts/ems/support/substitute-weights.perl line 29.

On Mon, Mar 13, 2017 at 6:29 PM, Hieu Hoang <[email protected]> wrote:

> There don't seem to be any errors.
>
> * Looking for MT/NLP opportunities *
> Hieu Hoang
> http://moses-smt.org/
>
> On 13 March 2017 at 04:56, Alexander Erdmann <[email protected]> wrote:
>
>> Hi,
>>
>> I'm trying to do pivot translation between Arabic dialects via English.
>> The Egyptian-to-English side trains and tunes fine in about 4 hours, but
>> the English-to-Levantine side (although it is of comparable size and
>> preprocessed in exactly the same way) stalls before completing the first
>> run during the tuning step. Training finished without error, and the
>> tuning step never yields an error; it just never finishes the first run.
>>
>> There are about 175,000 sentences in the training corpus for English to
>> Levantine, all of it coming from weblogs used in the BOLT corpus. The LM
>> for Levantine consists mostly of the same data, with some additional news
>> commentary and Twitter data.
>> I tried tuning originally with 2000 sentences from BOLT, then 1000, and
>> finally 500, but I ran into the same stall each time, while the
>> Egyptian-to-English side never had an issue.
>>
>> Do you have any idea what is going on or how to resolve this?
>>
>> Here is the output file:
>>
>> run 1 start at Sun Mar 12 23:26:52 GST 2017
>> Parsing --decoder-flags: |-v 0|
>> Saving new config to: ./run1.moses.ini
>> (1) run decoder to produce n-best lists
>> params = -v 0
>> decoder_config = -weight-overwrite 'PhrasePenalty0= 0.043478 WordPenalty0= -0.217391 TranslationModel0= 0.043478 0.043478 0.043478 0.043478 Distortion0= 0.065217 LM0= 0.108696 LexicalReordering0= 0.065217 0.065217 0.065217 0.065217 0.065217 0.065217'
>>
>> and here is the error file:
>>
>> Loading module 'mosesdecoder/3.0'
>> Loading module 'mgiza/2015.01'
>> mkdir: cannot create directory
>> Using SCRIPTS_ROOTDIR: /share/apps/NYUAD3/mosesdecoder/3.0/scripts
>> Assuming the tables are already filtered, reusing filtered/moses.ini
>> Using cached features list: ./features.list
>>
>> MERT starting values and ranges for random generation:
>> LexicalReordering0 = 0.300 ( 0.00 .. 1.00)
>> LexicalReordering0 = 0.300 ( 0.00 .. 1.00)
>> LexicalReordering0 = 0.300 ( 0.00 .. 1.00)
>> LexicalReordering0 = 0.300 ( 0.00 .. 1.00)
>> LexicalReordering0 = 0.300 ( 0.00 .. 1.00)
>> LexicalReordering0 = 0.300 ( 0.00 .. 1.00)
>> Distortion0 = 0.300 ( 0.00 .. 1.00)
>> LM0 = 0.500 ( 0.00 .. 1.00)
>> WordPenalty0 = -1.000 ( 0.00 .. 1.00)
>> PhrasePenalty0 = 0.200 ( 0.00 .. 1.00)
>> TranslationModel0 = 0.200 ( 0.00 .. 1.00)
>> TranslationModel0 = 0.200 ( 0.00 .. 1.00)
>> TranslationModel0 = 0.200 ( 0.00 .. 1.00)
>> TranslationModel0 = 0.200 ( 0.00 .. 1.00)
>>
>> featlist: LexicalReordering0=0.300000
>> featlist: LexicalReordering0=0.300000
>> featlist: LexicalReordering0=0.300000
>> featlist: LexicalReordering0=0.300000
>> featlist: LexicalReordering0=0.300000
>> featlist: LexicalReordering0=0.300000
>> featlist: Distortion0=0.300000
>> featlist: LM0=0.500000
>> featlist: WordPenalty0=-1.000000
>> featlist: PhrasePenalty0=0.200000
>> featlist: TranslationModel0=0.200000
>> featlist: TranslationModel0=0.200000
>> featlist: TranslationModel0=0.200000
>> featlist: TranslationModel0=0.200000
>>
>> Saved: ./run1.moses.ini
>>
>> Normalizing lambdas: 0.300000 0.300000 0.300000 0.300000 0.300000 0.300000 0.300000 0.500000 -1.000000 0.200000 0.200000 0.200000 0.200000 0.200000
>>
>> DECODER_CFG = -weight-overwrite 'PhrasePenalty0= 0.043478 WordPenalty0= -0.217391 TranslationModel0= 0.043478 0.043478 0.043478 0.043478 Distortion0= 0.065217 LM0= 0.108696 LexicalReordering0= 0.065217 0.065217 0.065217 0.065217 0.065217 0.065217'
>>
>> Executing: /share/apps/NYUAD/mosesdecoder/3.0/bin/moses -v 0 -config filtered/moses.ini -weight-overwrite 'PhrasePenalty0= 0.043478 WordPenalty0= -0.217391 TranslationModel0= 0.043478 0.043478 0.043478 0.043478 Distortion0= 0.065217 LM0= 0.108696 LexicalReordering0= 0.065217 0.065217 0.065217 0.065217 0.065217 0.065217' -n-best-list run1.best100.out 100 distinct -input-file /scratch/ae1541/unComparableCorpora/evaluation/silverPhrases/eng-lev/train.bolt.lev.eng > run1.out
>>
>> Executing: /share/apps/NYUAD/mosesdecoder/3.0/bin/moses -v 0 -config filtered/moses.ini -weight-overwrite 'PhrasePenalty0= 0.043478 WordPenalty0= -0.217391 TranslationModel0= 0.043478 0.043478 0.043478 0.043478 Distortion0= 0.065217 LM0= 0.108696 LexicalReordering0= 0.065217 0.065217 0.065217 0.065217 0.065217 0.065217' -n-best-list run1.best100.out 100 distinct -input-file /scratch/ae1541/unComparableCorpora/evaluation/silverPhrases/eng-lev/train.bolt.lev.eng > run1.out
>>
>> Initializing LexicalReordering..
>> Loading table into memory...done.
>>
>> Thanks,
>>
>> --
>> Alex Erdmann
>> PhD Student in Linguistics at The Ohio State University
>> Visiting Scholar at NYU Abu Dhabi
>>
>> _______________________________________________
>> Moses-support mailing list
>> [email protected]
>> http://mailman.mit.edu/mailman/listinfo/moses-support

--
Alex Erdmann
PhD Student in Linguistics at The Ohio State University
Visiting Scholar at NYU Abu Dhabi
