Hi all,

regarding creating LPs for CJK using pretrained embeddings, my plan is to
implement the approach from [1] using commons-math (read the models, perform
matrix multiplication / inversion, etc.) to get a first word-level
translation capability; then I'd like to dig into how to extend that to
phrases without having to train a neural net, because that would mean
adding an explicit dependency on a DL framework (which I'd like to avoid
for now).
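For context, the core of [1] is a linear map between the two embedding
spaces, learned by least squares over a seed dictionary of word pairs and
then applied with a nearest-neighbour lookup. A minimal sketch of that idea
(in Python/NumPy for brevity; the commons-math version could solve the same
least-squares problem via a SingularValueDecomposition solver; all the data
and names below are synthetic/illustrative, not from any real model):

```python
import numpy as np

# Toy aligned embeddings: n seed word pairs, source dim d1, target dim d2.
rng = np.random.default_rng(42)
n, d1, d2 = 200, 8, 6
X = rng.normal(size=(n, d1))            # source-language word vectors
W_true = rng.normal(size=(d1, d2))      # hidden linear map (toy data only)
Z = X @ W_true + 0.01 * rng.normal(size=(n, d2))  # target-language vectors

# Learn the translation matrix W by least squares so that X @ W ~= Z,
# i.e. minimize sum_i ||W^T x_i - z_i||^2 as in [1].
W, *_ = np.linalg.lstsq(X, Z, rcond=None)

def translate(x, target_vectors):
    """Map a source vector into the target space and return the index of
    the nearest target vector by cosine similarity."""
    z = x @ W
    sims = (target_vectors @ z) / (
        np.linalg.norm(target_vectors, axis=1) * np.linalg.norm(z) + 1e-12)
    return int(np.argmax(sims))

# A seed word should map back to (near) its own target-side vector.
print(translate(X[0], Z))
```

The same lookup would then run against the full target vocabulary rather
than just the seed pairs; extending this beyond single words is exactly the
open question above.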
Thoughts?

Regards,
Tommaso

[1] : https://arxiv.org/abs/1309.4168
