Hello all,
Coming back to Andy's questions:
1. Can papers on EBMT succeed in getting published (especially in
non-expert, i.e. MT-specific, conferences) without making direct
comparisons to SMT?
Certainly one giant step in this direction would be made if people doing SMT of the phrase-based flavor (i.e. just about everybody in SMT these days) acknowledged the fact that, as Harry and Ed rightly point out, what they are doing is really just EBMT with a statistical twist. Unfortunately, in my experience, SMT people seem very reluctant to admit this. It is not entirely clear why, but I suspect that the EBMT label has a bit of a dubious reputation among the machine-learning crowd. And maybe one of the reasons for that is that very few EBMT papers report quantitative evaluations.
While subjective human MT evaluations are very costly, there are now a number of alternatives (BLEU, NIST, recall/precision) that are cheap, easy to use, and available to anyone with an internet connection. And in spite of all the controversy these methods stir up, they are better than nothing.
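To make the "easy to use" claim concrete, here is a minimal single-reference BLEU sketch in Python. It is a simplification of Papineni et al.'s metric (one reference, uniform n-gram weights, no smoothing), and the function names and toy sentences are mine, not taken from any official scoring script:

    import math
    from collections import Counter

    def ngrams(tokens, n):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def bleu(candidate, reference, max_n=4):
        # candidate and reference are token lists; returns a score in [0, 1].
        log_precisions = []
        for n in range(1, max_n + 1):
            cand_counts = Counter(ngrams(candidate, n))
            ref_counts = Counter(ngrams(reference, n))
            # Modified precision: clip each candidate n-gram count by its
            # count in the reference, so repeating a word buys nothing.
            matches = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
            total = sum(cand_counts.values())
            if matches == 0:  # any zero precision drives unsmoothed BLEU to 0
                return 0.0
            log_precisions.append(math.log(matches / total))
        # Brevity penalty: punish candidates shorter than the reference,
        # since very short outputs would otherwise get inflated precisions.
        bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
        return bp * math.exp(sum(log_precisions) / max_n)

    reference = "the cat sat on the mat".split()
    candidate = "the cat sat on a mat".split()
    print(bleu(candidate, reference))  # about 0.537

The geometric mean over n-gram orders and the brevity penalty are the two design choices that make BLEU hard to game with short, safe outputs; the real metric adds multiple references and corpus-level counting on top of this.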
2. Can anyone envisage a situation where an SMT paper would be asked to
compare its results against an EBMT model?
I think an SMT paper that doesn't report on empirical comparisons with competing systems or approaches is just as likely to get rejected as Andy's. For better or for worse, that's the name of the game these days. But you can't empirically compare against approaches for which there are no empirical evaluations. So SMT papers report comparisons with what's available: other SMT systems.
Cheers,
Michel
