Dear Sara,

Thanks for your help, it was very helpful and valuable for us.

But about this part:

the standard reordering model in Moses is inconsistent, since it is
based on word alignments at training time, but on phrase alignments at
decoding time.

I checked the link too, and you are right. But:
Are you sure? Is it really the case?
Do you mean that the Moses code has not been updated since 2008?
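By the way, about the smoothing you mentioned: here is a toy sketch of what additive smoothing of reordering orientation counts looks like, just to make sure I understand the idea. This is a hypothetical illustration (the function name, orientation labels, and alpha value are my own), not the actual Moses code behind --reordering-smooth:

```python
# Toy illustration of additive (add-alpha) smoothing for lexicalized
# reordering orientations. Hypothetical example, not taken from Moses.

from collections import Counter

ORIENTATIONS = ["mono", "swap", "disc"]

def smoothed_orientation_probs(counts, alpha=0.5):
    """Turn raw orientation counts for one phrase pair into
    probabilities, adding alpha to every orientation so that
    unseen orientations get a small nonzero probability."""
    total = sum(counts.get(o, 0) for o in ORIENTATIONS)
    denom = total + alpha * len(ORIENTATIONS)
    return {o: (counts.get(o, 0) + alpha) / denom for o in ORIENTATIONS}

# Example: a phrase pair seen 8 times monotone, 2 times swapped,
# never discontinuous.
counts = Counter({"mono": 8, "swap": 2})
probs = smoothed_orientation_probs(counts)
print(probs)  # "disc" gets a small nonzero probability instead of zero
```

If I understand correctly, this only helps for orientations unseen at training time; it still does nothing for whole phrase pairs that never appear in the reordering table, which is the decoding-time gap you mentioned.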

On Thu, Sep 1, 2011 at 4:22 PM, Sara Stymne <[email protected]> wrote:

> Hi,
>
> I have the same experience, that the lexical reordering model does not
> always improve over only using the distance based model. I really think
> it depends to a large extent on the language pair and the corpus. Of
> course there is also the uncertainty of Mert, so it might be worthwhile
> to run several Mert runs if you haven't done that already.
>
> I was in a team at the Dublin MT Marathon who worked on extending the
> reordering models in Moses. Galley and Manning
> (http://www.aclweb.org/anthology/D/D08/D08-1089.pdf) pointed out that
> the standard reordering model in Moses is inconsistent, since it is
> based on word alignments at training time, but on phrase alignments at
> decoding time. Our group implemented both a consistent phrase-based
> model, and the hierarchical reordering model suggested by Galley and
> Manning.
>
> This code is in the Moses trunk, but the Moses webpage has not been
> updated. There is some documentation on the MT Marathon wiki, however:
> http://statmt.org/mtm4/?n=Main.HierarchicalReordering. I think it might
> be worth it to investigate these models as well.
>
> There is some smoothing going on when training the reordering models.
> There is a flag to train-model.perl: --reordering-smooth to set the
> behavior of the smoothing. There is no proper treatment of unseen
> phrases at decoding time though, which isn't that much of an issue if
> the same data is used for training the phrase table and the reordering
> model, since there won't be any unknown phrase pairs then, except for
> pass-through OOVs. But I think it would be a good idea to look
> into that anyway. It was one thing we discussed at MT Marathon, but
> never had time to do something about.
>
> /Sara
>
>
>
>
> 2011-08-31 14:00, Neda NoorMohammadi wrote:
> > Hello all,
> >
> > We are testing different reordering models in our system (the source
> > language structure is SOV and the target is SVO).
> > The results are surprising! The distance-based model improves the results
> > compared to the lexical reordering models.
> >
> > We want to know whether this is true for other language pairs, and also:
> > Isn't lexical reordering supposed to improve on the distance-based
> > reordering model? Then why do we get just the reverse?! How can we
> > explain why this has happened in our system?
> >
> > Also, I have checked the LexicalReorderingState.cpp class. There is a
> > commented-out part in the code which suggests it uses no smoothing
> > model for estimating unseen events of this model. Am I right?
> >
> > Thanks
> >
> _______________________________________________
> Moses-support mailing list
> [email protected]
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
