Thanks for the responses, but that is not my question. All three of the
papers mentioned evaluate the alignments either against a reference
alignment or in the context of word-based SMT. What I am asking is
whether there is any work comparing word alignment performance within
the phrase-based SMT pipeline. I just compared the default Moses setup
to my own implementation of IBM Model 1 (no reordering, no word
classes, etc.), substituting it for the first and second steps (corpus
preparation and GIZA++ word alignment parameter estimation) of
train-factored-phrase-model.perl: that resulted in only a small drop
in BLEU score. It's only one test on one (relatively small) corpus, so
we can't draw conclusions from that alone, but is there any similar
work?
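For reference, the kind of reordering-free IBM Model 1 training I mean can be sketched in a few lines. This is a toy illustration, not my actual implementation; the function and variable names are made up for the example, and it only estimates t(f|e) by EM and then reads off the Viterbi alignment:

```python
from collections import defaultdict

def train_ibm1(bitext, iterations=10):
    """EM training for IBM Model 1 translation probabilities t(f|e).
    bitext: list of (source_tokens, target_tokens) pairs."""
    # Uniform initialization over the source vocabulary
    f_vocab = {f for fs, es in bitext for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))
    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(f, e)
        total = defaultdict(float)   # expected counts c(e)
        for fs, es in bitext:
            es_null = [None] + es    # NULL lets source words stay unaligned
            for f in fs:
                z = sum(t[(f, e)] for e in es_null)  # normalization term
                for e in es_null:
                    c = t[(f, e)] / z
                    count[(f, e)] += c
                    total[e] += c
        # M-step: renormalize (pairs never seen together keep the default)
        for (f, e), c in count.items():
            t[(f, e)] = c / total[e]
    return t

def viterbi_align(t, fs, es):
    """For each source word, the target index with highest t(f|e);
    since Model 1 has no distortion term, each word is picked
    independently. Index -1 means aligned to NULL."""
    return [max(range(-1, len(es)),
                key=lambda j: t[(f, es[j] if j >= 0 else None)])
            for f in fs]
```

Because Model 1 scores every word pair independently, there is nothing corresponding to GIZA++'s distortion or fertility parameters here, which is exactly the difference my test was probing.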

Thanks in advance,
Mark

On Sun, Nov 1, 2009 at 12:31 AM, Adam Lopez <[email protected]> wrote:
> This is a great list, but I would add Och & Ney (CL 2003), which, in
> addition to synthesizing the papers below, contains substantial
> discussion and comprehensive experimental results on the benefits of
> modeling reordering.
> http://aclweb.org/anthology-new/J/J03/J03-1002.pdf
>
>
> On Sat, Oct 31, 2009 at 7:56 PM, Chris Dyer <[email protected]> wrote:
>> Modeling reordering is usually helpful, even during alignment.  This
>> is especially true for lexical translation models (where words are
>> generated by other words, rather than phrases being generated from
>> other phrases).  The reordering models don't have to be particularly
>> complicated to achieve quite good results (especially for language
>> pairs with similar structure, like English and French).  For a fairly
>> basic introduction to modeling reordering (or not) in alignment
>> models, the Peter Brown et al. (1993) paper ("The Mathematics of
>> Statistical Machine Translation"), which describes IBM Models 1 and 2,
>> is a fine place to start.  For further examples that focus just on
>> alignment, add the HMM alignment model papers (Vogel et al. 1996 and/or
>> Och and Ney 1999).
>>
>> Chris
>>
>> On Sat, Oct 31, 2009 at 3:35 PM, Mark Fishel <[email protected]> wrote:
>>> Dear list members,
>>>
>>> I have a general theoretical question: if a word alignment model is
>>> only used to generate the Viterbi alignment of the data for further
>>> use (as is the case with Moses's phrase-based translation), is
>>> it necessary or even useful to model reordering/distortion in the
>>> word alignment phase? Naturally, if a word alignment model is later
>>> used during decoding to generate new output, reordering is crucial;
>>> but what about in the case of phrase-based translation as used by
>>> Moses, where even lexicalized reordering is learned from the
>>> symmetrized alignment matrices? Does modelling the reordering make
>>> the learning more robust/stable? Are there any experiments or
>>> articles dealing with this question?
>>>
>>> Hope this isn't a troll question :)
>>>
>>> Mark Fishel
>>> Dept. of Computer Science
>>> University of Tartu
>>> _______________________________________________
>>> Moses-support mailing list
>>> [email protected]
>>> http://mailman.mit.edu/mailman/listinfo/moses-support
>>>
