Not to get too defensive haha, but the talk makes certain statements such as
"Neural machine translation is more fluent and adequate compared to RBMT"
(not verbatim), and later a comparison of post-editability, without really
commenting on the status of these tools, i.e. how much data the NMT systems
were trained on, and how much work has gone into Apertium's English–Catalan
pair (which we know is not a lot). So I was just surprised to see general
statements against RBMT. I'm not saying this as a member of Apertium, but
generally evaluations like this should be done carefully, with all the
factors considered before comparing multiple systems; without that, it's
easy to arrive at false conclusions.

My two cents :)
*तन्मय खन्ना *
*Tanmai Khanna*


On Thu, Oct 15, 2020 at 9:27 PM Mikel L. Forcada <m...@dlsi.ua.es> wrote:

> Dear Apertiumers:
>
> here's a 20-minute talk from Vicent Briva where he evaluates Apertium
> English–Catalan in comparison with the SoftCatalà neural engine and
> Google Translate.
>
> I think the evaluation is quite well made.
>
> https://www.youtube.com/watch?v=IiRVhAYpecw
>
> We do not fare too well but, hey, we know this language pair needs love.
>
> All the best,
>
>
> Mikel
>
>
>
> _______________________________________________
> Apertium-stuff mailing list
> Apertium-stuff@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/apertium-stuff
>