Taylor,

 An automated approach to measuring MT quality is the holy grail. Most 
 approaches I know of combine automated measurements with (expensive) 
 human review.
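 To make "automated measurements" concrete: the crudest common metric is 
 a sentence-level BLEU score against a reference translation. Here is a 
 minimal Python sketch of the idea -- the whitespace tokenization and 
 lack of smoothing are simplifications on my part, so treat it as an 
 illustration rather than what any of the tools below actually ship.

 from __future__ import division
 import math
 from collections import Counter

 def sentence_bleu(hypothesis, reference, max_n=4):
     """Geometric mean of modified n-gram precisions times a brevity
     penalty. Unsmoothed, so a single missing n-gram order zeroes the
     whole score -- one reason BLEU alone is a blunt instrument."""
     hyp = hypothesis.lower().split()
     ref = reference.lower().split()

     log_precisions = []
     for n in range(1, max_n + 1):
         hyp_ngrams = Counter(tuple(hyp[i:i + n])
                              for i in range(len(hyp) - n + 1))
         ref_ngrams = Counter(tuple(ref[i:i + n])
                              for i in range(len(ref) - n + 1))
         # Clip each n-gram's count by its count in the reference.
         overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
         if overlap == 0:
             return 0.0
         log_precisions.append(math.log(overlap / sum(hyp_ngrams.values())))

     # Brevity penalty: only punish hypotheses shorter than the reference.
     brevity = min(1.0, math.exp(1 - len(ref) / len(hyp)))
     return brevity * math.exp(sum(log_precisions) / max_n)

 # max_n=2 here only so this short example doesn't zero out on 4-grams.
 print(sentence_bleu("the cat sat on the mat",
                     "the cat is on the mat", max_n=2))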

 Academic Open Source projects like Moses, Joshua, etc., let commercial 
 interests generate MT output aplenty. This has created a market where 
 commercial players must find their "value-add." PTTools helps LSPs 
 (and others) install and operate a Moses system. Safaba 
 (http://www.safaba.com) helps LSPs (and others) evaluate MT quality. 
 Others expose only the translation interfaces through Internet portals. 
 Some LSPs, with their access to human resources, focus on managing 
 post-editing of large MT projects.

 It's no surprise that MT quality has become an active research field 
 for both academic and commercial interests. Both PTTools and Safaba 
 contribute to the Open Source community with projects like DoMY and 
 Meteor (http://www.cs.cmu.edu/~alavie/METEOR/), respectively, while 
 retaining other capabilities for their customers. We look forward to 
 Language Intelligence sharing what it can once you've come up with 
 something.
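 Since Meteor came up: for anyone who wants the flavor of it without 
 downloading the tool itself, below is a toy, exact-match-only sketch of 
 the original Banerjee & Lavie (2005) scoring in Python. This is my own 
 simplification, not the real tool: actual Meteor also matches stems and 
 synonyms, and searches for the alignment with the fewest chunks, where 
 this sketch just aligns greedily.

 from __future__ import division

 def toy_meteor(hypothesis, reference):
     hyp = hypothesis.lower().split()
     ref = reference.lower().split()

     # Greedy alignment of exact unigram matches (real Meteor instead
     # searches for the alignment that minimizes the chunk count).
     matches, used = [], set()
     for i, word in enumerate(hyp):
         for j, ref_word in enumerate(ref):
             if j not in used and word == ref_word:
                 matches.append((i, j))
                 used.add(j)
                 break

     m = len(matches)
     if m == 0:
         return 0.0
     precision, recall = m / len(hyp), m / len(ref)

     # Harmonic mean weighted 9:1 toward recall, per the original paper.
     f_mean = 10 * precision * recall / (recall + 9 * precision)

     # Fragmentation penalty: a "chunk" is a run of matches that is
     # contiguous in both the hypothesis and the reference.
     chunks = 1
     for (i1, j1), (i2, j2) in zip(matches, matches[1:]):
         if i2 != i1 + 1 or j2 != j1 + 1:
             chunks += 1
     penalty = 0.5 * (chunks / m) ** 3

     return f_mean * (1 - penalty)

 print(toy_meteor("the cat sat on the mat", "the cat is on the mat"))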

 Tom



 On Thu, 15 Sep 2011 14:17:26 -0400, Taylor Rose 
 <[email protected]> wrote:
> Barry,
>
> Okay, I was getting the impression that it is a current issue based on
> the research papers I've been reading. Maybe I'll just have to come up
> with my own algorithms ;-)
>
> --
> Taylor Rose
> Machine Translation Intern
> Language Intelligence
>
>
> On Thu, 2011-09-15 at 18:52 +0100, Barry Haddow wrote:
>> Hi Taylor
>>
>> If I remember rightly, this paper made use of about 20-30k post-edited
>> sentences, which are unlikely to be released, so there is no way to
>> replicate this work.
>>
>> Confidence estimation is an active research area in MT, but I don't
>> think that there are any really good answers yet. Check out the last
>> couple of years' ACL and EMNLP, as well as WMT, to see what's going on
>> (http://www.aclweb.org/anthology-new/)
>>
>> cheers - Barry
>>
>> On Thursday 15 September 2011 18:26:22 Taylor Rose wrote:
>> > Hey all,
>> >
>> > I've been researching how to judge the quality of a machine
>> > translation. I found this article about judging the "goodness" of
>> > translations. This is *exactly* what I've been trying to do. Does
>> > anyone know if there are implementations of their algorithm
>> > available? It would take me a substantial amount of time to try to
>> > replicate their process, and even then I have neither the corpus
>> > assets nor the processing power they had.
>> >
>> > Also, does anyone know of other existing systems that can accurately
>> > compute the quality of a translation without the need for an immense
>> > server farm?
>> >
>> > Thanks,
>> >
>>
>

_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support
