Hi,
If you want to speed up decoding, you might consider changing the search
algorithm. I'm also using compact phrase tables, and after some tests I
realised that cube pruning gives almost exactly the same quality but is
much faster. For example, you can add something like this to your config
file:

# Cube Pruning
[search-algorithm]
1
[cube-pruning-pop-limit]
1000
[stack]
50
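For quick experiments, the same settings can also be passed as command-line overrides instead of editing the config file. A minimal sketch, assuming a standard Moses installation where the flag names mirror the moses.ini section names (the file names `moses.ini`, `input.txt`, and `output.txt` are placeholders):

```shell
# Override search settings on the command line rather than in moses.ini.
# -search-algorithm 1 selects cube pruning; the pop limit and stack size
# trade decoding speed against search quality.
moses -f moses.ini \
      -search-algorithm 1 \
      -cube-pruning-pop-limit 1000 \
      -stack 50 \
      < input.txt > output.txt
```

Lowering the pop limit (e.g. to 400) speeds decoding further at some cost in quality, so it is worth tuning on a held-out set.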

If your model allows it, you may also try the moses2 binary, which is
faster than the original.

Regards,
Thomas

----------------------------------------------------------------------

Message: 1
Date: Thu, 15 Dec 2016 19:12:01 +0530
From: Shubham Khandelwal <[email protected]>
Subject: Re: [Moses-support] Regarding Decoding Time
To: Hieu Hoang <[email protected]>
Cc: moses-support <[email protected]>
Message-ID:
        <cahwentvyealyrafjdgdih51t5_ahsprv0kwlcabc2td27yo...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hello,

Currently, I am using phrase-table.minphr, reordering-table.minlexr, and a
language model (the total size of these three is 6 GB). I tried decoding on
two different machines (8 cores/16 GB RAM and 4 cores/40 GB RAM) with them.
Decoding around 500 words took 90 seconds and 100 seconds respectively on
those machines. I am already using the compact phrase and reordering table
representations for faster decoding. Is there any other way to reduce this
decoding time?

Also, does Moses have a distributed way of decoding across multiple
machines?

Looking forward to your response.

_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
