Hi,

the syntax models are still pretty much research-grade and may
disappoint when used out of the box.

In particular, tree-to-tree models tend to be much more restricted,
with fewer rules, and rules that are less widely applicable.

You may see better results by starting with tree-to-string or
string-to-tree models. You should also look at parse relaxation,
which is inspired by the work of Zollmann & Venugopal, and at the
tree binarization work at ISI.
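For reference, a string-to-tree (target-syntax) training run can be sketched roughly as below with Moses' train-model.perl. The paths, corpus names, and LM file here are placeholders for illustration, and the exact set of flags may differ slightly between Moses versions:

```shell
# Sketch of a string-to-tree (target-syntax) training run with Moses.
# Paths (~/moses, corpus/train, lm/en.lm) are placeholders.
# The target side (corpus/train.en) must already be parsed, e.g. with
# the Berkeley Parser, and converted to Moses' XML tree format.
~/moses/scripts/training/train-model.perl \
    --root-dir work-s2t \
    --corpus corpus/train --f de --e en \
    --alignment grow-diag-final-and \
    --hierarchical --glue-grammar \
    --target-syntax \
    --lm 0:5:$HOME/lm/en.lm
```

For a tree-to-string setup, parse the source side instead and swap --target-syntax for --source-syntax.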

-phi

On Fri, Apr 8, 2011 at 11:27 AM, Pratyush Banerjee
<[email protected]> wrote:
> Hi All,
>
> We have been trying to use Syntax Models for Moses for some time now.
> We have trained tree-to-tree models and HPB models and were trying to
> compare the results with standard PBSMT models.
>
> We use en-de as the language pair and about 1.2 million lines of training data.
> For tree-to-tree models, we have used the Berkeley Parser for parsing both
> languages.
>
> However, I found that tree-to-tree scores were much lower (about 4 BLEU
> points) compared to the PBSMT models. The HPB model (hierarchical Moses
> without syntax) is slightly better than the PBSMT models.
>
> Is this behaviour normal? I am quite new to syntax-based models, hence the
> question.
>
> Also do we need to parse the devsets during tuning of the tree-to-tree
> models ?
>
> Thanks and regards,
>
> Pratyush Banerjee
>
> _______________________________________________
> Moses-support mailing list
> [email protected]
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
>
