On Wed, 19 Dec 2012 at 10:09 +0100, Tino Didriksen wrote:
> On Wed, Dec 19, 2012 at 12:12 AM, <[email protected]> wrote:
>         > GramTrans
>         http://visl.sdu.dk/~eckhard/pdf/MTsummit07_final.pdf

> 
> GramTrans' translation chain goes through the stages of: Tokenization,
> morphology, syntax, semantics, dependency, and finally translation.
> Our translation engines are basically really strong analysis chains,
> which the translation program makes use of but is not part of.

This is similar to other closed-source systems such as MorphoLogic (?):
deep source-language analysis is done, and then the target-language
structure is generated directly from it.

> There is some target-language information injected into the analysis,
> but it's not vital.
> 
> The statistical part is almost non-existent - it is entirely optional.
> 
> If you want detailed information, ask Eckhard Bick
> <[email protected]> directly.
>  
>         I found that their system looked quite advanced. Could it be
>         considered
>         state of the art?
> 
> We certainly think so!
> The "problem" is that it takes a very long time to develop, but such
> is life for all rule-based systems.

Not for Apertium! One point of pride (for me at least) is that
developing a system with Apertium is actually a very quick process. Of
course, getting to the same coverage as GramTrans would take much
longer, but a basic system is a matter of months. The idea that SMT =
quick development, RBMT = slow development is an insidious myth.

>         Myself not being familiar with the code of Apertium at all, is
>         this so?
>         And could a module making use of concept relations be easily
>         included in the stack
>         of translation modules?
> 
> Someone more familiar with Apertium will have to answer that bit...

I explained how you could do this in a previous email. You would need to
write a module to go between the lexical-transfer output and the
structural transfer input.
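To make that concrete: Apertium modules are chained as Unix filters that
communicate via the Apertium text stream format, where each lexical unit
after lexical transfer looks like ^source<tags>/target<tags>/...$ and any
material between units must be passed through unchanged. A minimal sketch
of such an in-between module might look like the following (the regex, the
`pick_translation` heuristic, and the example lexical units are my own
illustrative assumptions, not part of any existing Apertium module):

```python
import re
import sys

# Hypothetical filter sitting between lexical transfer and structural
# transfer. Lexical units are delimited by ^...$ in the Apertium stream;
# everything outside them (blanks, superblanks) is left untouched.
LU = re.compile(r'\^([^$]*)\$')

def pick_translation(lu: str) -> str:
    """Toy disambiguator: keep the source side plus the first target only.
    A real module could instead rank targets using concept relations."""
    parts = lu.split('/')
    if len(parts) <= 2:
        return lu  # zero or one translation: nothing to choose
    return '/'.join(parts[:2])

def filter_stream(text: str) -> str:
    """Rewrite each lexical unit, passing surrounding text through."""
    return LU.sub(lambda m: '^' + pick_translation(m.group(1)) + '$', text)

if __name__ == '__main__':
    sys.stdout.write(filter_stream(sys.stdin.read()))
```

Dropped into the pipeline between lt-proc and apertium-transfer, a filter
like this could prune or reorder candidate translations before structural
transfer ever sees them.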

>         How much are we doing of what GramTrans is doing and are there
>         plans to go
>         further that way?
> 
> I have wanted to do a proof of concept of turning Apertium into a
> similar analyse -> translate procedure, but haven't had time. There is
> really no reason that separate translation pairs all have their own
> source language analysis, when a single combined one would
> considerably reduce analysis errors.
> 
> What we have learned from GramTrans is that source language analysis
> errors account for the vast majority of translation errors.

I would add to this that sometimes it isn't necessarily an analysis
'error' that leads to a translation error, but rather a lack of
deep-enough analysis.

Fran



_______________________________________________
Apertium-stuff mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/apertium-stuff