Good to know that the constrained decoding works. And yes, the
reachability of the training data is only theoretical: it holds only in
the absence of pruning such as cube pruning, beam search, etc.
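For reference, the pruning changes discussed in this thread might look like the following moses.ini fragment. This is a hypothetical sketch: `PhraseDictionaryWHATEVER` is the thread's own placeholder for the actual phrase-table feature line in your configuration, and the section names assume a standard Moses setup where command-line switches can also be given as bracketed config sections.

```ini
# Hypothetical moses.ini fragment (placeholder feature name):

[feature]
# table-limit=0 keeps all translation options per source span
PhraseDictionaryWHATEVER table-limit=0

# Disable score-based hypothesis pruning (the default is 1e-5,
# which is a tough limit for constrained decoding)
[beam-threshold]
0
```

The same beam setting can equivalently be passed on the decoder command line as `-beam-threshold 0`.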
On 15/11/2016 20:00, Shuoyang Ding wrote:
Hi Hieu,
I’d made changes 1, 2, and 4 before emailing you, and the coverage didn’t
change much. It turns out the bottleneck is beam-threshold: the default value
is 1e-5, which is a pretty tough limit for constrained decoding.
After setting that to 0 I played around a little bit with
Good point. The decoder is set up to translate quickly, so there are a few
pruning parameters that throw out low-scoring rules or hypotheses.
These are some of the pruning parameters you'll need to change (there
may be more):
1. [feature]
PhraseDictionaryWHATEVER table-limit=0
2.
Hi All,
I’m trying to do syntax-based constrained decoding on the same data from which
I extracted my rules, and I’m getting very low coverage (~12%). I’m using GHKM
rule extraction, which in theory should be able to reconstruct the target
translation even with only minimal rules.
Judging from