Thanks Hieu for pointing me to this section of your thesis. This is really
useful.
- Sriram
On Thu, Apr 18, 2013 at 2:30 PM, Hieu Hoang hieu.ho...@ed.ac.uk wrote:
'is good' -- is not good
On 18 April 2013 09:57, Hieu Hoang hieu.ho...@ed.ac.uk wrote:
If you are using multiple phrase-tables and generation tables, I don't
think there's much you can do about the speed of the decoding. Also, the
translation quality is good with this configuration
You can have a look at the analysis on page 40 here:
http://statmt.org/~s0565741/download/ddd.pdf
Thanks Philipp.
I had tried very tight t-table limits, even a limit of 1 for both
words and POS tags, and it still didn't work for this example sequence. This
was surprising.
I hope I can avoid shorter phrase-lengths because the task I have in mind
would require me to have default
Hi,
the translation option expansion of factored models may explode in the
setup that you use above (there are many possible lemma and POS mappings,
and their product is explored during your first two decoding steps).
You could remedy this by:
- use shorter phrase lengths
- use tighter translation table (t-table) limits
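To make the explosion above concrete, here is a minimal Python sketch. The candidate counts are hypothetical and `expansion_count` is an illustrative helper, not part of Moses; it only models how the cross product of the first two mapping steps grows, and how truncating each table's candidate list (the effect of a t-table limit) caps it:

```python
# Sketch: why two factored decoding steps (e.g. lemma and POS mappings)
# can explode. The decoder explores the cross product of the candidates
# each table returns for a source phrase.
from itertools import product

def expansion_count(lemma_options, pos_options, ttable_limit=None):
    """Count partial translation options after two mapping steps.

    If a t-table limit N is set, each candidate list is truncated to
    its top-N entries first, so the cross product is capped at N*N.
    """
    if ttable_limit is not None:
        lemma_options = lemma_options[:ttable_limit]
        pos_options = pos_options[:ttable_limit]
    return len(list(product(lemma_options, pos_options)))

# 50 lemma candidates x 50 POS candidates -> 2500 options per phrase
lemmas = [f"lemma{i}" for i in range(50)]
tags = [f"tag{i}" for i in range(50)]
print(expansion_count(lemmas, tags))                   # 2500
print(expansion_count(lemmas, tags, ttable_limit=5))   # 25
```

With more than two decoding steps the product multiplies again for each step, which is why shorter phrases and tighter limits are the usual remedies.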