Dear Hieu,

Thank you for your prompt and detailed reply!

>>> So your server has 20 cores (40 hyperthreads) and 16GB RAM? If that's
correct, then the RAM size would be a problem - you need as much RAM as the
total size of your models, plus more for working memory and the OS.

The amount of memory is 256 GB, not 16; a number of 16 GB modules are
installed.
To my knowledge the machine is not hyperthreaded but simply has 40 physical
cores, although I am now getting a bit doubtful about that.

>>> Do you run Moses command line, or the server? My timings are based on
the command line, the server is a little slower.

Both Moses and Moses2 are run in console mode (not as a server). The model
loading time is excluded from the measurements. I could not get the
asynchronous XML-RPC interface to work, so in my experiments the server
would have behaved as if Moses/Moses2 ran in single-threaded mode;
therefore I used the command-line version.

>>> Do you run Moses directly, or is another evaluation process running it?
Are you sure that evaluation process is working as it should?

Moses is run from the command line under the Linux "time" command, and so
are the other systems we used in the comparison. We look at the wall-clock
(real) time rather than the CPU time, and we perform a number of runs to
measure the average times and control the standard deviations.
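For what it's worth, the averaging amounts to something like the sketch
below; `sleep 0.2` is only a stand-in for the actual decoder invocation,
and the command in the comment is illustrative, not our exact setup:

```shell
#!/bin/sh
# Sketch: average wall-clock decoding time over repeated runs.
# "sleep 0.2" stands in for the real decoder command line.
N=3
total_ms=0
i=1
while [ "$i" -le "$N" ]; do
  start=$(date +%s%N)               # nanoseconds since the epoch
  sleep 0.2                         # stand-in for: moses -f moses.ini < in > out
  end=$(date +%s%N)
  total_ms=$(( total_ms + (end - start) / 1000000 ))
  i=$(( i + 1 ))
done
echo "mean runtime: $(( total_ms / N )) ms over $N runs"
```

Running several iterations like this lets us report a mean and check that
the standard deviation stays small.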

>>> Do you minimise the effect of disk read by pre-loading the models into
filesystem cache? This is usually done by running this before running the
decoder cat [binary model files] > /dev/null

No, we did not do pre-loading for any of the tools, but perhaps this is
not an issue, as we measure the average model loading times and subtract
them from the average run-time with decoding; so the model loading times
are excluded from the results. Our goal was to measure and compare the
decoding times and how they scale with the number of threads.
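In case it helps, the cache-warming you suggest amounts to roughly the
following; the model file names would be the actual binarised model paths
(the demo below uses a temporary file as a hypothetical stand-in):

```shell
#!/bin/sh
# Warm the OS page cache for binary model files before timing,
# so the first timed run does not pay the disk-read cost.
warm_cache() {
  for f in "$@"; do
    cat "$f" > /dev/null
  done
}

# Demo on a temporary file standing in for a binarised model.
model=$(mktemp)
head -c 1048576 /dev/zero > "$model"   # 1 MB stand-in "model" file
warm_cache "$model"
echo "warmed $model"
rm -f "$model"
```

Since we subtract the measured loading time anyway, the effect on our
numbers should be limited, but we can add this step when re-running.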

>>> it may take a while, but I can't replicate your results without it.
Alternatively, I can provide you with my models so you can try & replicate
my results.

The experiments are run on an internal server which is not visible from
outside. I shall explore the possibilities of sharing the models, but I am
doubtful it is possible, as the university network is very restricted.
Still, I am definitely open to re-running your experiments, if possible.

Kind regards,

Ivan

<http://www.tainichok.ru/>
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
