Modeling reordering is usually helpful, even during alignment.  This
is especially true for lexical translation models (where words are
generated by other words, rather than phrases being generated from
other phrases).  The reordering models don't have to be particularly
complicated to achieve quite good results (especially for language
pairs with similar structure, such as English and French).  For a
fairly basic introduction to modeling reordering (or not) in alignment
models, the Brown et al. (1993) paper (The Mathematics of Statistical
Machine Translation), which describes IBM Models 1 and 2, is a fine
place to start.  For further examples that focus just on alignment,
see the HMM alignment model papers (Vogel et al. 1996 and/or Och and
Ney 1999).
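
As a rough illustration of where reordering enters these models, here
is a toy sketch in Python.  The corpus, the fixed diagonal-favoring
distortion prior, and all numbers are invented for illustration; a
real system (e.g. GIZA++) would also learn the distortion parameters
via EM rather than fixing them.

```python
from collections import defaultdict

# Tiny invented parallel corpus, foreign -> English.
corpus = [
    ("das haus".split(), "the house".split()),
    ("das buch".split(), "the book".split()),
    ("ein buch".split(), "a book".split()),
]

# --- IBM Model 1: EM for lexical translation probabilities t(f|e).
# Alignment positions are uniform, so reordering is not modeled at all.
t = defaultdict(lambda: 1.0)  # uniform (unnormalized) initialization

for _ in range(10):
    count = defaultdict(float)
    total = defaultdict(float)
    for f_sent, e_sent in corpus:
        for f in f_sent:
            z = sum(t[(f, e)] for e in e_sent)  # normalizer over e
            for e in e_sent:
                c = t[(f, e)] / z  # expected count (E-step)
                count[(f, e)] += c
                total[e] += c
    for (f, e), c in count.items():  # M-step
        t[(f, e)] = c / total[e]

def viterbi_m1(f_sent, e_sent):
    # Model 1 Viterbi: each foreign word links to its best English word,
    # based on lexical probability alone.
    return [max(range(len(e_sent)), key=lambda j: t[(f, e_sent[j])])
            for f in f_sent]

# --- Model 2 adds a distortion term a(j | i, l, m), so the alignment
# also prefers certain *positions*.  Here a fixed diagonal-favoring
# prior stands in for the learned distortion table, just to show where
# reordering enters the model.
def distortion(j, i, l, m):
    return 1.0 / (1.0 + abs(j - i * l / m))

def viterbi_m2(f_sent, e_sent):
    l, m = len(e_sent), len(f_sent)
    return [max(range(l),
                key=lambda j: t[(f_sent[i], e_sent[j])]
                              * distortion(j, i, l, m))
            for i in range(m)]
```

On this toy data the lexical probabilities alone already pick out the
right links; the distortion term matters when the lexical scores are
ambiguous, which is where Model 2 (and the HMM model, which conditions
on the previous link) earns its keep.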

Chris

On Sat, Oct 31, 2009 at 3:35 PM, Mark Fishel <[email protected]> wrote:
> Dear list members,
>
> I have a general theoretical question: if a word alignment model is
> only used to generate the Viterbi alignment of the data for further
> usage (as is the case with Moses's phrase-based translation), is
> it necessary or at all useful to model reordering/distortion in the
> word alignment phase? Naturally, if a word alignment model is later
> used during decoding to generate new output, reordering is crucial; but
> what about in the case of phrase-based translation as used by Moses,
> where even lexicalized reordering is learned from the symmetrized
> alignment matrices? Does modeling the reordering make the learning
> more robust/stable? Are there any experiments or articles dealing with
> this question?
>
> Hope this isn't a troll question :)
>
> Mark Fishel
> Dept. of Computer Science
> University of Tartu
> _______________________________________________
> Moses-support mailing list
> [email protected]
> http://mailman.mit.edu/mailman/listinfo/moses-support
>