Hi Andrew,
I am sorry for the late reply. Regarding your question 3):
In the mert directory there is a utility called "evaluator", which computes
various metrics for given candidate and reference files. To compute the WER
score:
./evaluator --sctype WER --reference ref.txt --candidate cand.txt
For detailed usage:
./evaluator --help
Please contact me if you have any questions about this utility.
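For reference, WER is the word-level Levenshtein (edit) distance between
candidate and reference, divided by the reference length. A minimal Python
sketch of that definition (independent of the evaluator tool, which may
differ in tokenization and normalization details):

```python
# Minimal WER sketch: word-level Levenshtein distance / reference length.
# Standard textbook definition; Moses' evaluator may normalize differently.
def wer(reference, candidate):
    ref = reference.split()
    hyp = candidate.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j].
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            sub = prev[j - 1] + (r != h)          # substitution (or match)
            curr[j] = min(sub, prev[j] + 1,       # deletion
                          curr[j - 1] + 1)        # insertion
        prev = curr
    return prev[len(hyp)] / len(ref)

print(wer("the cat sat", "the hat sat"))  # one substitution out of 3 words
```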
Best Regards,
Matous
2013/10/26 Andrew Shin <[email protected]>
> Dear support team,
>
> Thank you for your previous reply, which worked out for me.
> I have a few questions which I think should be simple, but I couldn't find
> relevant information on the website.
>
> 1) When you run Moses and type in a sentence, is there any way to get the
> translation along with its corresponding probability?
>
> 2) Also, when you type in a sentence, is there a way to get not just one
> translation but the N-best candidates (preferably with their corresponding
> probabilities, as in the first question)?
>
> 3) I've managed to get a BLEU score using Moses, but is there a way to
> also get the word error rate against a reference?
>
> 4) After the cleaning process, Moses shows the number of lines in the
> input and output text files, but I noticed that the number of lines in the
> output file decreased by about 5%, resulting in a non-matching number of
> lines between input and output.
> Looking at the translation results, it seems to have worked fine somehow,
> but it concerns me.
> Why does this happen, and does it affect the input-output line matching
> and the training process?
>
>
> I truly appreciate your help in advance.
>
> best,
> Andrew
>
>
_______________________________________________
Moses-support mailing list
[email protected]
http://mailman.mit.edu/mailman/listinfo/moses-support