Hi Jun,
Is it possible to use only the new MERT implementation without having to
update the whole system? Could I use the new MERT training scripts as
they come in the package?
Regards
jun li wrote:
Hi,
I suggest you download Moses from
Hi,
there may be only a small bug in the training code, so the easiest fix
would be to change the following:
[input-factors]
0
1
to
[input-factors]
0
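For reference, assuming the default "|" factor delimiter: with "0 1" the
decoder expects every input word to carry two factors, e.g.

  the|DET house|NN

while with "0" alone it expects plain single-factor text:

  the house

(The POS tags above are made-up placeholders, just to show the shape.)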
-phi
On Thu, Nov 13, 2008 at 10:49 AM, Miguel José Hernández Vidal
[EMAIL PROTECTED] wrote:
Dear Mailing,
I've trained my English to
Dear Mailing,
I've trained my English-to-Spanish system as Amit did
(http://www.mail-archive.com/moses-support@mit.edu/msg00599.html). I got
the input error too:
[ERROR] Malformed input at
Expected input to have words composed of 2 factor(s) (form FAC1|FAC2|...)
but instead received input
Hi,
I wonder what the format of the file produced by the
-output-search-graph option is?
The details in the documentation
(http://www.statmt.org/moses/?n=Moses.AdvancedFeatures#ntoc10) do not
match the format I obtain.
Here is a sample (actually first lines) of the output-search-graph file
The new MERT implementation is independent of the decoder.
You can use it without updating Moses.
Unfortunately, the new implementation has not been tested with multiple
factors yet, but hopefully it works fine.
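For what it's worth, the tuning wrapper is invoked the same way as the
old one; a minimal sketch, where the script name, location, and file
names are placeholders that may differ in your checkout:

  scripts/training/mert-moses.pl input.txt refs.txt \
      /path/to/moses /path/to/moses.ini --working-dir mert-work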
Nicola
On Nov 13, 2008, at 1:09 PM, Miguel José Hernández Vidal wrote:
Hi Jun,
Is
Hi,
I suggest you download Moses from
http://sourceforge.net/project/showfiles.php?group_id=171520
That's 2008-07-11 version.
I encountered the same problem: the decoder dies during the MERT
process when using the latest version checked out from svn.
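If it helps, the contrast is between the release tarball from the URL
above and a trunk checkout; treat the exact repository URL below as an
assumption and verify it on the project's SourceForge page:

  svn co https://mosesdecoder.svn.sourceforge.net/svnroot/mosesdecoder/trunk mosesdecoder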
On Thu, Nov 13, 2008 at 6:49 PM, Miguel José
Hi Germán,
I was actually talking about the -output-word-graph option; I apologize
for my mistake.
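For the archives, a sketch of how the two options are passed on the
decoder command line (the exact argument count for -output-word-graph
varies by version, so check moses -help before relying on this):

  moses -f moses.ini -output-search-graph sg.txt < input.txt
  moses -f moses.ini -output-word-graph wg.txt < input.txt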
Thank you for your fast answer.
Regards,
Loïc
Germán Sanchis Trilles wrote:
Hi Loïc,
bear in mind that there are two options:
-output-search-graph
-output-word-graph.
I'm not sure about
Hi,
Moses does in fact add a beginning-of-sentence token at the start
of the input to provide proper language model context. However,
the recommended Kneser-Ney smoothed language model is also
not fully appropriate for computing unigram probabilities for the first
word of the phrase, due to the way
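To make the context point concrete (this is the standard chain-rule
decomposition, nothing Moses-specific): with boundary tokens the first
word is scored with bigram context,

  P(the house) = P(the | <s>) * P(house | the) * P(</s> | house)

while without them it falls back to a bare unigram:

  P(the house) = P(the) * P(house | the)

(The example sentence is made up, just to show the difference.)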
I'm not 100% sure, but I think that IRSTLM does not add sentence
boundary tokens. Maybe that's an option?
jorg
On Thu, 13 Nov 2008 20:58:54 +0100
Felipe Sánchez Martínez [EMAIL PROTECTED] wrote:
Hi all,
I am using Moses to obtain translation candidates (in the form of
n-best lists)
Hi,
no, IRST is doing the same thing.
It is a standard thing to do,
and it is a good thing.
-phi
On Thu, Nov 13, 2008 at 10:06 PM, J.Tiedemann [EMAIL PROTECTED] wrote:
I'm not 100% sure, but I think that IRSTLM does not add sentence
boundary tokens. Maybe that's an option?
jorg
On Thu,
Felipe,
correct, IRSTLM does not add sentence boundaries;
it uses them only if you add them to the data.
SRILM adds sentence boundaries by default around each
text line, but you can disable this behaviour (check the
proper option in the manual pages of ngram-count and ngram).
I'm not sure about
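A sketch of the relevant switches, with file names as placeholders (the
SRILM flags do exist in ngram-count, but double-check the man page for
your version):

  ngram-count -text corpus.txt -order 3 -no-sos -no-eos -lm model.lm

disables the automatic <s> ... </s> insertion; for IRSTLM you add the
boundaries to the data yourself, e.g. with its add-start-end.sh script:

  add-start-end.sh < corpus.txt > corpus.marked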