Hi,

I had been using Moses v0.9 for a long time, and I recently decided to move to
the latest version (v2.1.1). At test time, loading the plain-text models into
Moses poses no difficulty, but when I use the binarized models, translation
takes much longer than it did with the binary models on the old version of
Moses. I have performed the following experiments.

model | decoder | time spent
------+---------+-----------
old   | old     | x
new   | new     | y > x
old   | new     | z > x
new   | old     | N/A


"Model" indicates whether the binary model was built with the old or the new
version of Moses; "decoder" is the Moses version I used to translate the test
sentences.

I should also note that I pre-loaded the binary models into the OS page cache
using the cat command, and no other memory-intensive process was running on
the machine. However, I noticed that translating the same sentence a second
time takes just as long as the first, which makes me doubt that the
translations are being cached at all. Is there a command-line argument I
should set for this purpose?
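For reference, the warm-up step I mention above looks like the sketch below. The directory name and file patterns are illustrative (they follow the usual names produced by Moses binarization, e.g. phrase-table.binphr.* from processPhraseTable); substitute your actual paths.

```shell
# Warm the OS page cache by reading each binarized model file once.
# MODEL_DIR and the file patterns are illustrative -- adjust to your setup.
MODEL_DIR=model
for f in "$MODEL_DIR"/phrase-table.binphr.* "$MODEL_DIR"/reordering-table.binlexr.*; do
  if [ -e "$f" ]; then
    cat "$f" > /dev/null   # read the file into the page cache, discard output
  fi
done
echo "cache warm-up done"
```

This only keeps the model files resident in memory; it does not make Moses reuse translations across identical input sentences.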

Best Regards,
M.M. Mahsuli
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
