Hello
Would any of you know which of the available word aligners already implements a dump
method, so that I can train a model, save it, then load it and use it on new
text? Even if only for the simpler models.
Thanks a lot
Loic
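No specific aligner is named in the thread, so purely as an illustration of the train/save/load pattern being asked about, here is a minimal, self-contained sketch: IBM Model 1 trained by EM in plain Python, persisted with pickle, and reused on new text. All function names here are made up for the example; this is not any particular toolkit's API.

```python
import pickle
from collections import defaultdict

def train_ibm1(bitext, iterations=5):
    """Train IBM Model 1 translation probabilities t(f|e) by EM.

    bitext: list of (source_tokens, target_tokens) pairs.
    Returns a dict mapping (e, f) -> probability.
    """
    # Uniform initialization of t(f|e) over the target vocabulary.
    f_vocab = {f for _, fs in bitext for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))
    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(e, f)
        total = defaultdict(float)   # expected counts c(e)
        for es, fs in bitext:
            for f in fs:
                # Normalize over all source words in this sentence pair.
                z = sum(t[(e, f)] for e in es)
                for e in es:
                    delta = t[(e, f)] / z
                    count[(e, f)] += delta
                    total[e] += delta
        # M-step: re-estimate t(f|e) from the expected counts.
        t = defaultdict(float,
                        {(e, f): c / total[e] for (e, f), c in count.items()})
    return dict(t)

def align(t, es, fs):
    """Viterbi-style alignment: link each target word to its best source word."""
    return [(max(range(len(es)), key=lambda i: t.get((es[i], f), 0.0)), j)
            for j, f in enumerate(fs)]

# Train on toy data, dump the model, reload it, and align new text.
bitext = [("das haus".split(), "the house".split()),
          ("das buch".split(), "the book".split()),
          ("ein buch".split(), "a book".split())]
model = train_ibm1(bitext, iterations=10)
with open("ibm1.pickle", "wb") as fh:
    pickle.dump(model, fh)
with open("ibm1.pickle", "rb") as fh:
    reloaded = pickle.load(fh)
print(align(reloaded, "ein haus".split(), "a house".split()))  # → [(0, 0), (1, 1)]
```

Since the model is just a probability table, pickle is enough for save/load; a real aligner would also store things like alignment priors or fertility tables.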
___
Moses-support mailing list
Hello everyone. I have built an SMT system for English-Hindi using Moses. I use the
following command to run it:
~/mosesdecoder-RELEASE-3.0/bin/moses -f ~/working/train/model/moses.ini
After running the above command, my LM and TM start loading,
and then I type an English sentence and get the corresponding output in Hindi.
I have a branch, "unblockpt", in which those locks are gone and the caches are
thread-local. Hieu claims there is still no speedup.
On 08.10.2015 at 21:56, Kenneth Heafield wrote:
> Good point. I now blame this code from
> moses/TranslationModel/CompactPT/TargetPhraseCollectionCache.h
>
> Looks like a case for a concurrent fixed-size hash table.
There's a ton of object/malloc churn in creating Moses::TargetPhrase
objects, most of which are thrown away. If PhraseDictionaryMemory
(which creates and keeps the objects) scales better than CompactPT,
that's the first thing I'd optimize.
On 10/08/2015 08:30 PM, Marcin Junczys-Dowmunt wrote:
Good point. I now blame this code from
moses/TranslationModel/CompactPT/TargetPhraseCollectionCache.h
Looks like a case for a concurrent fixed-size hash table. Failing that,
banded locks instead of a single lock? Namely an array of hash tables,
each of which is independently locked.
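The banded-locks idea — an array of hash tables, each independently locked — can be sketched as follows. This is illustrative Python, not Moses code; the class and method names are made up, and a real implementation in the decoder would of course be C++:

```python
import threading

class StripedCache:
    """A cache split into independently locked shards ("banded locks"):
    an array of hash tables, each guarded by its own mutex, so threads
    touching different shards never contend on a single global lock."""

    def __init__(self, num_stripes=16):
        self._stripes = [({}, threading.Lock()) for _ in range(num_stripes)]

    def _shard(self, key):
        # Hash the key to pick a shard; each shard has its own lock.
        return self._stripes[hash(key) % len(self._stripes)]

    def get_or_compute(self, key, compute):
        table, lock = self._shard(key)
        with lock:  # only this shard is locked, not the whole cache
            if key not in table:
                table[key] = compute()
            return table[key]

cache = StripedCache(num_stripes=8)
print(cache.get_or_compute("phrase", lambda: len("phrase")))  # computed → 6
print(cache.get_or_compute("phrase", lambda: 0))              # cached → 6
```

With N stripes, two threads collide only when their keys hash to the same shard, so contention drops by roughly a factor of N compared with one global lock.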
How does probing-pt avoid the same problem, then?
On 08.10.2015 at 21:36, Kenneth Heafield wrote:
> There's a ton of object/malloc churn in creating Moses::TargetPhrase
> objects, most of which are thrown away. If PhraseDictionaryMemory
> (which creates and keeps the objects) scales better than CompactPT,
We did quite a bit of experimenting with that; usually there is hardly
any measurable quality loss until you get below 1000. Good enough for
deployment systems. It seems, however, that you can get up to a 0.4 BLEU
increase when going really high (about 5000 and beyond) with larger
distortion limits. But t
In my experience, below 400 I started to lose some BLEU slightly.
Another thing to tune is the score-settings when building the table:
score-settings = "--GoodTuring --MinScore 2:0.001"
The standard value is 0.0001; I did not notice any BLEU score decrease
with 0.001. But it might not be relevant with MMA.
Hi Vincent,
That definitely helps. I reran everything comparing the original 2000/2000
to your suggestion of 400/400. There isn't much difference for a single
multi-threaded instance, but there's about a 30% speedup when using all
single-threaded instances:
[results table truncated: pop limit & stack vs. procs]
Hi all,
I extended the multi_moses.py script to support multi-threaded moses
instances for cases where memory limits the number of decoders that can run
in parallel. The threads arg now takes the form "--threads P:T:E" to run P
processes using T threads each and an optional extra process running
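From the description, the "P:T:E" form might be parsed along these lines. This is a guess at the shape, not the actual multi_moses.py code, and the meaning of the optional E (thread count for the extra process) is inferred from the truncated sentence above:

```python
def parse_threads(spec):
    """Parse a "P[:T[:E]]" threads spec: P processes with T threads each,
    plus an optional extra process with E threads.
    Illustrative only; the real multi_moses.py may differ."""
    parts = spec.split(":")
    if not 1 <= len(parts) <= 3:
        raise ValueError("expected P, P:T, or P:T:E, got %r" % spec)
    procs = int(parts[0])
    threads = int(parts[1]) if len(parts) > 1 else 1   # default: 1 thread each
    extra = int(parts[2]) if len(parts) > 2 else 0     # default: no extra process
    return procs, threads, extra

print(parse_threads("4:8"))    # → (4, 8, 0)
print(parse_threads("4:8:2"))  # → (4, 8, 2)
```

The extra process is useful when the test set does not divide evenly across P equally sized decoders.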
Hi Vincent,
I'm using cube pruning with the following options for all data points:
[search-algorithm]
1
[cube-pruning-deterministic-search]
true
[cube-pruning-pop-limit]
2000
[stack]
2000
Best,
Michael
Michael,
What score-settings do you use to achieve these results?
If search-algorithm = 1, what cube-pruning pop limit?
On 08/10/2015 at 19:05, Michael Denkowski wrote:
Hi all,
I extended the multi_moses.py script to support multi-threaded moses
instances for cases where memory limits the number of dec
Oh, I forgot to attach the results.
procs       1          5          10         15         20         25         30         35
Current master:
real   1m50.835s  0m24.373s  0m14.991s  0m12.999s  0m11.012s  0m10.012s  0m10.108s  0m11.226s
user   1m48.409s  1m51.587s  2m6.720s   2m37.313s  2m42.219s  [remaining columns truncated]
I knew you would ask for results and hoped to unearth my notes but no luck.
I know I don't have models archived. What I remember:
- Moses command line
- Measured total time to translate test set
- Compared .91 (or whatever the long-term stable version was then) to 2.11
- Big enterprise server (4 x
Thanks for all your comments. It looks like we'll keep both
multi-process and multi-thread for the time being. There may be uses for
both further down the line.
Vito - no-one has written a wrapper to do multi-process, rather than
multi-thread, with mosesserver. I would think the speed gain wou
Hi all,
what about mosesserver? Do you think the same speed gains would occur?
Best,
Vito
2015-10-06 22:39 GMT+02:00 Michael Denkowski:
> Hi Hieu and all,
>
> I just checked in a bug fix for the multi_moses.py script. I forgot to
> override the number of threads for each moses command, so if