Hi,
MT Monkey is neural machine translation, not Moses.
Moses does not run on a GPU; it uses only the CPU.
When you state that the speed is not "real time", what kind of speed are you
looking for?
The best way, as others in this thread have suggested, is to lower the beam
threshold and use the server.
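As a sketch of the kind of tuning being suggested, the standard Moses decoder exposes these as command-line options (the values below are illustrative placeholders, not recommendations; tune them on your own data):

  $ moses -f /path/to/moses.ini \
          -search-algorithm 1 \
          -cube-pruning-pop-limit 400 \
          -beam-threshold 0.1 \
          -threads 8

Lowering the beam threshold and the pop limit prunes more hypotheses early, which speeds up decoding at a small cost in quality.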
Hello,
Currently, I have created a fr-en translation model (the sizes of
phrase-table.minphr and reordering-table.minlexr are 13 GB and 6.6 GB
respectively) by following the Moses baseline system tutorial on a big
dataset. I have also used the cube pruning method as suggested by Thomas.
Now, I use mos
Hi Shubham
You could start Moses in server mode:
$ moses -f /path/to/moses.ini --server --server-port 12345 --server-log
/path/to/log
This will load the models, keep them in memory and the server will wait for
client requests and serve them until you terminate the process. Translating
is a bit d
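For anyone following along: the Moses server speaks XML-RPC, so a minimal client can be sketched in Python like this (the host and port match the hypothetical command above; the input sentence is a placeholder):

```python
import xmlrpc.client

# Point this at the host/port you started the server on (12345 in the
# example command above); "/RPC2" is the usual XML-RPC endpoint path.
server = xmlrpc.client.ServerProxy("http://localhost:12345/RPC2")

# The Moses server's translate method takes a dict with a "text" field.
request = {"text": "bonjour le monde"}  # placeholder source sentence

# With a running server, this would return a dict whose "text" field
# holds the translation. Commented out so the sketch does not require
# a live server:
# result = server.translate(request)
# print(result["text"])
```

The models stay loaded in the server process, so repeated requests avoid the per-invocation model loading cost entirely.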
Hey Thomas,
Thanks for your reply.
Using cube pruning, the speed is a little bit higher, but not by much. I
will try to play with these parameters.
I have a binary moses2 which supports it as well, but it is taking more
time than moses. Can you please send/share somewhere your binary moses2 file if
Hi,
If you want to speed up decoding, maybe you should consider changing the
search algorithm. I am also using compact phrase tables, and after some
tests I realised that cube pruning gives almost exactly the same quality but
is much faster. For example, you can add something like this to your confi
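Since the original example got cut off, here is the usual way to switch on cube pruning in moses.ini (the pop-limit value is illustrative; tune it for your setup):

  [search-algorithm]
  1

  [cube-pruning-pop-limit]
  400

A lower pop limit decodes faster; a higher one recovers a little quality at the cost of speed.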