Hieu Hoang http://www.hoang.co.uk/hieu
On 29 July 2016 at 18:57, Bogdan Vasilescu <[email protected]> wrote:
> Hi,
>
> I've trained a model and I'm trying to tune it using mert-moses.pl.
>
> I tried different size tuning corpora, and as soon as I exceed a
> certain size (this seems to vary between consecutive runs, as well as
> with other tuning parameters like --nbest), the process gets killed:

It should work with a tuning corpus of any size. The only thing I can
think of is that if the tuning corpus is very large (say 1,000,000
sentences) or the n-best list is very large (say 1,000,000), then the
decoder or the MERT script may use a lot of memory.

> Killed
> Exit code: 137
> The decoder died. CONFIG WAS -weight-overwrite ...
>
> Looking into the kernel logs in /var/log/kern.log suggests I'm running
> out of memory:
>
> kernel: [98464.080899] Out of memory: Kill process 15848 (moses) score
> 992 or sacrifice child
> kernel: [98464.080920] Killed process 15848 (moses)
> total-vm:414130312kB, anon-rss:194915316kB, file-rss:0kB
>
> Is there a way to perform tuning incrementally?
>
> I'm thinking:
> - tune on a sample of my original tuning corpora; this generates an
>   updated moses.ini, with "better" weights
> - use this moses.ini as input for a second tuning phase, on another
>   sample of my tuning corpora
> - repeat until there is convergence in the weights
>
> Would this work?
>
> Many thanks in advance,
> Bogdan
>
> --
> Bogdan (博格丹) Vasilescu
> Postdoctoral Researcher
> Davis Eclectic Computational Analytics Lab
> University of California, Davis
> http://bvasiles.github.io
> http://decallab.cs.ucdavis.edu/
> @b_vasilescu
>
> _______________________________________________
> Moses-support mailing list
> [email protected]
> http://mailman.mit.edu/mailman/listinfo/moses-support
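One practical way to act on the advice above is to draw a smaller random sample from the tuning set before running mert-moses.pl, keeping source and reference lines aligned. A minimal sketch with standard coreutils; the file names (dev.src, dev.ref), the sample size, and the toy data are placeholders, not anything specific to Bogdan's setup:

```shell
# Toy stand-ins for a large tuning set (in practice these would be
# your real, pre-tokenized dev.src / dev.ref files).
seq 1 10000 | sed 's/^/source sentence /'    > dev.src
seq 1 10000 | sed 's/^/reference sentence /' > dev.ref

# Pick a sample size small enough for tuning to fit in memory.
SAMPLE=2000

# paste glues each source line to its reference line, so shuffling
# the pasted file keeps the sentence pairs aligned.
paste dev.src dev.ref | shuf -n "$SAMPLE" > sampled.tsv
cut -f1 sampled.tsv > dev.sample.src
cut -f2 sampled.tsv > dev.sample.ref
```

dev.sample.src and dev.sample.ref can then be passed to mert-moses.pl in place of the full dev set; lowering the --nbest value, as hinted above, reduces memory use independently of this.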
