Dear colleagues,

We have just made available a new version of PET, a stand-alone tool for post-editing and assessing machine or human translations. It is a free, open-source tool under the LGPL license. It was built in Java, so it runs on any platform:

http://pers-www.wlv.ac.uk/~in1676/pet

If you are interested in evaluating translations through post-editing, this is an easy and inexpensive solution: to set up an experiment, you only need to provide source and translation segments (from one or multiple MT systems; the tool does not depend on any particular MT system). While translators post-edit the translations, implicit quality indicators such as post-editing time, keystrokes, edit operations, and possibly others are stored for each segment. Explicit quality assessments can also be collected.

The tool also works for monolingual revision. It can read from monolingual and bilingual dictionaries and render HTML for special markup. It also allows constraints to be set on jobs on a per-segment basis (for example, the maximum time or length allowed for a given post-edited segment).

We plan to maintain and further develop the tool, so if you have any comments or suggestions on how to improve it, or ideas for interesting experiments, let us know!

Best,
Lucia Specia (University of Sheffield)
Wilker Aziz (University of Wolverhampton)

_______________________________________________
Mt-list mailing list
