Paper submissions:
Paper submission deadline: August 15th, 2020
Notification of acceptance: September 29th, 2020
Camera-ready deadline: October 10th, 2020
Online conference: November 19-20th, 2020
Barry Haddow
(On behalf of the organisers)
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336
, and some initial MT results.
https://arxiv.org/abs/2001.09907
Barry Haddow and Faheem Kirefu
IMPORTANT DATES
Paper submissions:
Paper submission deadline: May 17th, 2019
Notification of acceptance: June 7th, 2019
Camera-ready deadline: June 17th, 2019
Conference in Florence: August 1-2, 2019
Barry Haddow
(On behalf of the organisers)
are encouraged to register with the mailing list
for further announcements
(https://groups.google.com/forum/#!forum/wmt-tasks)
For all tasks, participants will also be invited to submit a short
paper describing their system.
Best wishes
Barry Haddow
(On behalf of the organisers)
___
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
ntences each and loop
> over the files. I usually have bad experience with trying to translate
> large batches of text with moses.
>
> Is it still trying to load the entire corpus into memory? It used to do that.
>
> On 12.12.2017 at 10:16, Barry Haddow wrote:
>> Hi Lilin
Hi Liling
The short answer is you need to prune/filter your phrase table
prior to creating the compact phrase table. I don't mean "filter model
given input", because that won't make much difference if you have a very
large input, I mean getting rid of rare translations which won't be
Hi All
I did produce a version of experiment.perl for Groundhog (remember
that?) but it's not much use for any other nmt system. The problem (well
actually the big advantage!) of nmt is that the pipeline is too simple
for a tool like experiment.perl. And the experiments that do need tool
Hi Jorg
Since the operation sequence model is based on minimal phrase pairs, its
training code should be able to do the extraction (although I'm not
familiar with this code)
cheers - Barry
On 08/11/17 19:12, Jorg Tiedemann wrote:
Hi,
Can I use moses extract or any other tool to extract
Hi Vincent
Looks fine to me:
> wc -l news-commentary-v12.de-en.*
> 270769 news-commentary-v12.de-en.de
> 270769 news-commentary-v12.de-en.en
> 541538 total
What are you running that shows you different line numbers?
cheers - Barry
On 12/09/17 10:06, Vincent Nguyen wrote:
> Hi,
> Is
15th, 2017
Notification of acceptance: June 30th, 2017
Camera-ready deadline: July 14th, 2017
Workshop in Copenhagen preceding EMNLP: September 7-8th, 2017
Barry Haddow
(On behalf of the organisers)
Hi Amir
You could also try this paper for a derivation of the complexity of PBMT
decoding
https://www.aclweb.org/anthology/E/E09/E09-1061v2.pdf
cheers - Barry
On 27/02/17 15:54, Philipp Koehn wrote:
> Hi,
>
> I am not sure if I follow your question - in the formula you cite,
> there are
participants are encouraged to register
with the mailing list for further announcements
(https://groups.google.com/forum/#!forum/wmt-tasks)
For all tasks, participants will also be invited to submit a short
paper describing their system.
Best wishes
Barry Haddow
(On behalf of the organisers)
In steps/0
On 05/12/16 22:36, Fred Blain wrote:
hi Lane,
If you omit the '-exec' in your call to experiment.perl, it will only
generate the required scripts without running anything. You will find the
scripts under the steps/ folder.
best,
___
Hi Nat
Imagine it's a translator using MT and somehow he/she has translated
the sentence before and just wants the exact translation. A TM would
solve the problem and Moses surely could emulate the TM but NMT tends
to go overly creative and produces something else.
Then just use a TM for
<https://drive.google.com/file/d/0BxvJK3H5ZKsnYzJiZmhjUWI0Qlk/view?usp=drive_web>
2016-11-02 14:28 GMT+02:00 Barry Haddow <bhad...@staffmail.ed.ac.uk>:
Adding
-first-step 5 -last-step 5
will just run step 5 (phrase extraction)
On 02/11/16 12:01, Hasan Sait ARSLAN wrote:
For instance, could you show me an example?
Thanks,
2016-11-02 13:57 GMT+02:00 Barry Haddow <bhad...@staffmail.ed.ac.uk>:
* 7 Reordering model
<http://www.statmt.org/moses/?n=FactoredTraining.BuildReorderingModel>
* 8 Generation model
<http://www.statmt.org/moses/?n=FactoredTraining.BuildGenerationModel>
* 9 Configuration file
<http://www.statmt.org/moses/?n=FactoredTraining.CreateConfigurationFile>
, Hasan Sait ARSLAN wrote:
Hi Barry,
Unfortunately I didn't keep the log file. Is it really a hopeless
situation?
2016-11-02 13:10 GMT+02:00 Barry Haddow <bhad...@staffmail.ed.ac.uk>:
Hi Hasan
You should have run train_model.perl so
trained my data for 5
days, and the folder "train" is 39 G, but there are no phrases
saved in the phrase table. It is annoying. What should I do now? I hope I
won't need to rerun everything from scratch
2016-11-02 12:26 GMT+02:00 Barry Haddow <bhad...@staffmail.ed.ac.uk>:
Hi Hasan
The error message should be written into filterphrases.err, inside your
working directory,
cheers - Barry
On 02/11/16 10:02, Hasan Sait ARSLAN wrote:
Hi Hieu,
I did in the way, you want. Plus, I am sure the path and file names
are correctly spelt.
But still, I get the same
the difference ? (with the wmt11 files)
>
>
>
> On 04/10/2016 at 21:46, Barry Haddow wrote:
>> Hi Vincent
>>
>> Are you comparing compressed with uncompressed files?
>>
>> cheers - Barry
>>
>> On 04/10/16 14:40, Vincent Nguyen wrote:
>>> Hi,
Hi Vincent
Are you comparing compressed with uncompressed files?
cheers - Barry
On 04/10/16 14:40, Vincent Nguyen wrote:
> Hi,
>
> on this link:
>
> http://www.statmt.org/wmt11/translation-task.html
>
> on the download section for monolingual data, there is :
>
> one big file :
es Generation Time: : [401.645] seconds
Sentence Decoding Time: : [401.650] seconds
Translation took 1907.342 seconds
Should I check anything else?
Regards
Arefeh
On Sat, Aug 20, 2016 at 4:53 PM, Barry Haddow <bhad...@staffmail.ed.ac.uk> wrote:
Hi Arefeh
The quickest way to see if Moses is using your feature is to put a
debug message in it to see if it gets called. You can also
increase the verbosity of Moses (try -v 2) to
normally but weights file remains empty. It
seems moses doesn't use my feature.
Regards
Arefeh
On Wed, Aug 17, 2016 at 12:57 PM, Barry Haddow <bhad...@staffmail.ed.ac.uk> wrote:
Hi Arefeh
That seems OK. Tuning (with kbmira or pro) will create a weights file
for the sparse features, which you can add with:
[weight-file]
/path/to/sparse/weights
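As a sketch, the sparse weights file is plain text with one feature-name/weight pair per line. The feature names below are hypothetical examples, not taken from this thread:

```
WT_house~haus 0.25
PL_4 -0.1
```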
What goes wrong when you run moses?
cheers - Barry
On 17/08/16 07:50, arefeh kazemi wrote:
Hi
This is just a kindly
Hi Bogdan
Why do you set the maximum phrase length to 20? Such long phrases are
unlikely to be useful, and could be the cause of the excessive resource
usage.
Other than that, the system you describe should not be using up 192G ram.
cheers - Barry
On 01/08/16 20:40, Bogdan Vasilescu wrote:
>
Hi Tomasz
The error message about missing the ini file is a consequence of the
tuning crash, so just ignore this.
To find out why Moses is failing, run it again in the console like this:
/home/moses/src/mosesdecoder/bin/moses -threads 16 -v 0 -config
Hi Joe
You could also look at the entropy of the distribution. I'll leave Matt
to post the one-liner for that one,
cheers - Barry
On 13/05/16 15:10, Matt Post wrote:
gzip -cd model/phrase-table.gz | cut -d\| -f1 | sort | uniq -c | sort -nr | head -n5
(according to one definition of
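The entropy suggestion can be sketched in the same one-liner spirit; `counts.txt` is a hypothetical file of "count item" lines, such as the output of a `sort | uniq -c` pipeline like the one above:

```shell
# Shannon entropy (in bits) of a distribution given as "count item" lines.
printf '1 a\n1 b\n' > counts.txt    # toy input: two equally likely items
awk '{n[NR]=$1; t+=$1} END {for (i in n) {p=n[i]/t; H-=p*log(p)/log(2)} printf "%.3f\n", H}' counts.txt
# prints 1.000 (two equiprobable outcomes carry exactly one bit)
```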
Hi Dorra
I think this is the classic paper
http://dl.acm.org/citation.cfm?id=778824
Although a quick google turned up this paper, which is more specific to
your question
http://www.mt-archive.info/MTS-2007-Wu.pdf
cheers - Barry
On 10/05/16 23:51, haoua...@iro.umontreal.ca wrote:
> Hi,
>
>
Hi Ales
Well, bitPos=18446744073708512633 looks bogus. Marcin?
cheers - Barry
On 13/04/16 17:23, Aleš Tamchyna wrote:
Hi all,
sorry for the delay. I'm attaching the debug backtrace.
Best,
Ales
On Wed, Apr 13, 2016 at 1:49 PM, Barry Haddow <bhad...@staffmail.ed.ac.uk>:
Hi
The backtrace would be more informative if you run with a debug build
(add variant=debug to bjam). Sometimes this makes bugs go away, or new
bugs appear, but if not then it will give more information. You can run
with core files enabled (ulimit -c unlimited) to save having to run
Moses
to the Moses directory. Which files are missing from the
*/tools/* directory?
Thanks.
Sergey
2016-03-19 17:01 GMT+02:00 Barry Haddow <bhad...@staffmail.ed.ac.uk>:
Hi Sergey
It's looking for mgiza, which you don't have. Either install mgiza into
your tools directory, or remove the mgiza arguments from your
train-model.perl command line.
cheers - Barry
On 19/03/16 13:56, Sergey A. wrote:
Hello Hieu Hoang.
Thank you for your suggestion, everything
Hi Lane
SRILM is no longer required, since Nadir made some EMS updates last
October. Try upgrading to a recent version,
cheers - Barry
On 16/02/16 15:04, Lane Schwartz wrote:
Hi,
This is mostly an FYI, but I thought I'd point it out. The OSM
documentation
...not translate complete sentences.
(I must admit that I didn't look into it in too much detail, but it
should be easy to confirm.)
Cheers,
Matthias
On Fri, 2016-01-29 at 20:28 +0000, Barry Haddow wrote:
Hi All
I think I see what happened now.
When you give the input "dies ist ein haus"
Hi
When I run command-line Moses, I get the output below - i.e. no best
translation. The server crashes for me since it does not check for the
null pointer, but the command-line version does.
I think there should be a translation for this example.
cheers - Barry
[gna]bhaddow: echo 'dies ist
it should not crash.
In the log pasted by Martin, he passed "das ist ein haus" to
command-line Moses, which works, and gives a translation.
I think ideally the sample models should handle unknown words, and give
a translation. Maybe adding a glue rule would be sufficient?
cheers - Barry
Hi Dingyuan
What platform are you running on? I could not reproduce your error on
Ubuntu 12.04, and valgrind is clean,
cheers - Barry
On 19/01/16 16:31, Barry Haddow wrote:
> Hi Dingyuan
>
> I ran for over 200 iterations and saw no problem. I tried with your LANG
> and LANGUAGE
Hi
We are looking for a new researcher to join the statmt group in Edinburgh
Link to the advert:
https://www.vacancies.ed.ac.uk/pls/corehrrecruit/erq_jobspec_version_4.jobspec?p_id=035233
About the group:
http://www.statmt.org/ued/
cheers - Barry
' which runs moses
> repeatedly until encoding error occurs.
>
> The file run7.best100.out and run7.out in the archive is the last run
> that produces the error.
>
> It seems that it is WordTranslationFeature that causes the problem.
>
> On 2016-01-19 00:03, Barry Haddow wrote:
>
> https://gist.github.com/gumblex/0d9d0848b435e4f9818f
>
> On 2016-01-18 20:42, Barry Haddow wrote:
>> Hi Dingyuan
>>
>> The extractor expects feature names to contain an underscore (not sure
>> exactly why) but some of yours don't, and Moses skips them, interpreting
>&
Hi Dingyuan
Is it possible to attach the features.dat file that is causing the
error? Almost certainly Moses is failing to parse the line because of
the Asian characters in the feature names,
cheers - Barry
On 16/01/16 15:58, Dingyuan Wang wrote:
> I ran
>
> ~/software/moses/bin/kbmira -J 75
"target-word-insertion top 50, source-word-deletion
> top 50, word-translation top 50 50, phrase-length"
>
> I suspect there is something unexpected in the extractor.
>
>
> On 2016-01-18 19:03, Barry Haddow wrote:
>> Hi Dingyuan
>>
>> In fact it is not the s
le of line 61 a few bytes are corrupted. Is that
a moses problem, or does my memory have a problem?
I also checked other files using iconv, they are all OK in UTF-8.
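The iconv check referred to here can be sketched as follows (file names and contents are illustrative):

```shell
# A file is valid UTF-8 iff iconv can decode it; iconv exits non-zero on bad bytes.
printf 'ein Haus \xc3\xa9\n' > good.txt    # valid UTF-8 byte sequence
printf 'corrupt \xff line\n' > bad.txt     # \xff is never valid in UTF-8
iconv -f UTF-8 -t UTF-8 good.txt > /dev/null && echo "good.txt: OK"
iconv -f UTF-8 -t UTF-8 bad.txt > /dev/null 2>&1 || echo "bad.txt: corrupted"
```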
On 2016-01-18 19:32, Barry Haddow wrote:
Hi Dingyuan
Yes, that's very possible. The error could be in extracting features.dat
from the
> Hi,
>
> I've attached that. The line number is 1694.
>
> On 2016-01-18 16:43, Barry Haddow wrote:
>> Hi Dingyuan
>>
>> Is it possible to attach the features.dat file that is causing the
>> error? Almost certainly Moses is failing to parse the line because of
>>
t list occurs only in the
> feature list (3 different samples), without affecting translation
> result. Therefore, the phrase table or training corpus may not be the
> problem.
>
> On 2016-01-18 23:04, Barry Haddow wrote:
>> Hi Dingyuan
>>
>> Are these encoding errors present in your
Hi Lane
Can you get a stack trace to see which line the message is coming from?
That error message is repeated in a few files.
From looking at the code, I'd guess that the OutputFactorOrder is not
being initialised correctly. Possibly due to the refactoring of the
config code. Does your
Paper submissions:
Paper submission deadline: May 8th, 2016
Notification of acceptance: June 5th, 2016
Camera-ready deadline: June 22nd, 2016
Workshop in Berlin following ACL: August 11-12th, 2016
For shared task timetable, see website.
Barry Haddow
(On behalf of the organisers)
Josef van Genabith, Deutsches Forschungszentrum für Künstliche
Intelligenz (DFKI), Germany
Barry Haddow, University of Edinburgh, UK
Jan Hajic, Charles University in Prague, Czech Republic
Kim Harris, text&form, Germany
Matthias Heyn, SDL, Belgium
Philipp Koehn, Johns Hopkins University, USA, and Univ
The aim is to find translated document pairs from a large collection of
documents in two languages.
Best wishes
Barry Haddow
(On behalf of the organisers)
0 0 0
Distortion0= 0 LM0= -57.5157 WordPenalty0= -2 PhrasePenalty0= 1
PhraseDictionaryMultiModel0= -1.09861 -1.4366 -1.53505 -1.59179 |||
-1.6079
Vito
2015-11-26 12:16 GMT+01:00 Barry Haddow <bhad...@inf.ed.ac.uk>:
Hi Vito
The tcma
Hi Vito
The 0.2 difference is after retuning? That's normal then.
But a difference of 5 bleu without retuning suggests a bug. Did you say
that this only happens with PhraseDictionaryMultiModel?
cheers - Barry
On 25/11/15 13:53, Vito Mandorino wrote:
Thank you. In our tests it seems that
Hi Nick
The best solution is to use the compact phrase table, and for this just add
ttable-binarizer = $moses-bin-dir/processPhraseTableMin
to the general section.
If you need to use the ondisk phrase table (sparse features, properties
etc.) then replace the above with
ttable-binarizer =
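As a sketch, the compact-table suggestion amounts to one line in the EMS configuration (section placement per standard EMS configs; the ondisk alternative's full argument string is cut off above and is not reconstructed here):

```
[GENERAL]
ttable-binarizer = $moses-bin-dir/processPhraseTableMin
```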
Hi
You're better off using the Johnson pruning method
http://www.statmt.org/moses/?n=Advanced.RuleTables#ntoc5 . The relent
code is no longer maintained,
cheers - Barry
On 24/11/15 05:42, Sanjanashree Palanivel wrote:
Dear All,
I just tried to prune the phrase table using
a parameters -l -n.
On Nov 24, 2015 2:44 PM, "Barry Haddow" <bhad...@staffmail.ed.ac.uk> wrote:
Hi
You're better off using the Johnson pruning method
http://www.statmt.org/moses/?n=Advanced.RuleTables#ntoc5 . The
relent code i
I tried it, and got a decrease in BLEU score from 16.39 to 14.35,
but the size of the PT was greatly reduced. When I tried some positive
values the BLEU score varied. The following is a sample table.
Inline image 1
On Tue, Nov 24, 2015 at 3:40 PM, Barry Haddow <bhad...@staffmail.ed.ac.uk>
Hi Tomasz
The mosesserver is just the decoder, so it doesn't do any of the pre- and
post-processing steps that you also need. In particular it does not do
tokenisation. You need to send it tokenised text, and then de-tokenise
the output,
cheers - Barry
On 12/11/15 13:40, Tomasz Gawryl wrote:
Hi Davood
The first command you give has a quote missing at the end - is this correct?
Another difference is that you have "-v 0", so moses will run silently.
What was the actual output when you ran this command? What you have
below looks correct to me.
cheers - Barry
On 28/10/15 21:57,
Hi Hieu
That's exactly why I took to pre-pruning the phrase table, as I
mentioned on Friday. I had something like 750,000 translations of the
most common word, and it took half-an-hour to get the first sentence
translated.
cheers - Barry
On 05/10/15 15:48, Hieu Hoang wrote:
what pt
And there's prunePhraseTable, which prunes according to weighted TM
score (as Moses does at runtime).
Some day there will be one pruner to rule them all ...
On 02/10/15 18:39, Philipp Koehn wrote:
Hi,
there is also scripts/training/threshold-filter.perl
which filters out phrase pairs based
Hi Nakul
The Emille project released parallel corpora for several South Asian
languages
http://catalog.elra.info/product_info.php?products_id=696
cheers - Barry
On 27/09/15 15:45, nakul sharma wrote:
> Dear All,
>
> Is there any online repository of parallel corpus for Indian Regional
>
Hi Jian
You could also try using dropout. Adding something like
--dropout 0.8 --input_dropout 0.9 --null_index 1
to nplm training can help - look at your vocabulary file to see what the
null index should be set to. This works with the Moses version of nplm,
cheers - Barry
On 21/09/15
Hi Tomek
Yes, that's quite a low score. Have a look at the translation output, do
the sentences have lots of English words in them, are they very long,
very short, or scrambled in some other way?
The commonest problem is that something went wrong in corpus
preparation, for example the
You could try this tutorial
http://www.statmt.org/mtma15/uploads/mtma15-domain-adaptation.pdf
On 14/08/15 20:20, Vincent Nguyen wrote:
I had read this section, which deals with translation model combination.
not much on language model or tuning.
For instance : if I want to make sure that a
clue what signal 9 means ?
On 04/08/2015 17:28, Barry Haddow wrote:
Hi Vincent
If you are comparing to the results of WMT11, then you can look at
the system descriptions to see what the authors did. In fact it's
worth looking at the WMT14 descriptions (WMT15 will be available next
month
Hi Vincent
If you are comparing to the results of WMT11, then you can look at the
system descriptions to see what the authors did. In fact it's worth
looking at the WMT14 descriptions (WMT15 will be available next month)
to see how state-of-the-art systems are built.
For fr-en or en-fr, the
Hi John
Is there a reason the example weight file has this feature name that I’m
missing?
My fault I'm afraid. I streamlined bilingual-lm in EMS, but didn't
realise that the example bypassed tuning. I've fixed it now according to
your suggestion,
cheers - Barry
On 02/08/15 15:46, John
I took out the dash - does it work now?
On 03/08/15 18:55, John Morgan wrote:
Thanks Barry,
I think there's still a problem with the feature name.
I think the subroutine get_order_of_scores_from_nbestlist in
mert-moses.pl does not expect a dash in the feature name.
John
On 8/3/15, Barry
Do I have to binarize first, or can I convert directly to Compact?
(i.e. can I skip the CreateOnDisk stuff)
If so, is there a predefined script, or should I do it manually?
thanks
On 28/07/2015 15:44, Barry Haddow wrote:
Hi Vincent
I think the quotes are getting stripped off further down
Try using the -b option in the tokenizer / detokenizer to disable buffering.
On 29/07/15 18:47, Vincent Nguyen wrote:
Hi,
As is, it was working fine except the tokenizer / detokenizer .perl code
is outdated.
It causes problem with the apostrophe in French.
so I changed the translate.cgi
Hi Vincent
Two bug reports:
in the LM Corpus definition for Europarl : the $pair-extension is
missing before .$output-extension
in the step 5 (maybe for others too) generation of the moses.tuned.ini.5
file there is a missing .gz at the end of phrase-table.5
in the PhraseDictionaryMemory
On 28/07/2015 14:47, Barry Haddow wrote:
Hi Vincent
It could be a bug. Could you edit
mosesdecoder/scripts/ems/experiment.meta and change the line:
template: $binarize-all IN OUT -Binarizer $ttable-binarizer
to
template: $binarize-all IN OUT -Binarizer $ttable-binarizer
Note that I
Hi Vincent
If you look at the error log, you will see:
Usage: /home/moses/mosesdecoder/bin/CreateOnDiskPt numSourceFactors
numTargetFactors numScores tableLimit sortScoreIndex inputPath outputPath
You are missing the first 5 arguments to CreateOnDiskPt, as given in
config.basic.
cheers - Barry
-binarizer somewhere
On 28/07/2015 13:49, Barry Haddow wrote:
Hi Vincent
If you look at the error log, you will see:
Usage: /home/moses/mosesdecoder/bin/CreateOnDiskPt numSourceFactors
numTargetFactors numScores tableLimit sortScoreIndex inputPath
outputPath
You are missing the first 5
Hi Fatma
I don't see any error in the file. What do you mean, the output was
wrong?
cheers - Barry
On 28/07/15 19:13, fatma elzahraa Eltaher wrote:
Dear All,
I tried to build a model but I got the attached error file. Does this mean
that there is a problem in the model? Because I test it by
Hi Martin
Thanks for the detailed information. It's a bit strange since
command-line Moses uses the same threadpool, and we always overload the
threadpool since the entire test set is read in and queued.
The server was refactored somewhat recently - which git revision are you
using?
In
. abyss: 16)
client: shoots 10 threads = about 11 seconds, server shows busy CPU
workload = OK
5.)
server: --threads: 16 (i.e. abyss: 32)
client: shoots 20 threads = about 11 seconds, server shows busy CPU
workload = OK
Helps. :-)
Best wishes,
Martin
On 24.07.2015 at 13:26, Barry Haddow wrote:
these
models into memory will require raising our already excessive RAM
requirements...
Thanks again for the help.
On Wednesday, July 22, 2015, Barry Haddow
<bhad...@staffmail.ed.ac.uk> wrote:
Hi Oren
I'm
if you use the command line
moses, rather than mosesserver?
Hieu Hoang
Researcher
New York University, Abu Dhabi
http://www.hoang.co.uk/hieu
On 21 July 2015 at 18:07, Barry Haddow
bhad...@staffmail.ed.ac.uk wrote:
On 21/07/15 14:51, Oren wrote:
I am
The slowness issue persists but in a different form. Most requests
return right away, even under heavy load, but some requests (about 5%)
take far longer - about 15-20 seconds.
Perhaps there are other relevant switches?
Thanks again.
On Monday, July 20, 2015, Barry Haddow bhad...@staffmail.ed.ac.uk
to be
something configurable beyond the -threads switch. Am I missing something?
The commit enables you to set the maximum number of connections to be
the same as the maximum number of threads.
On Tuesday, July 21, 2015, Barry Haddow <bhad...@staffmail.ed.ac.uk> wrote:
Hi Oren
The threading model is different. In v1, the server created a new thread
for every request, v3 uses a thread pool. Try increasing the number of
threads.
Also, make sure you use the compact phrase table and KenLM as they are
normally faster, and pre-pruning your phrase table can
(Sent on behalf of Jan Hajic)
We cordially invite you to take part in the first Deep Machine
Translation Workshop, which will take place in Prague, Czech Republic,
on 3rd-4th September 2015.
https://ufal.mff.cuni.cz/events/deep-machine-translation-workshop
This is the first workshop on Deep
Just remove steps/1/TUNING_tune.1.DONE (replacing 1 with your experiment
id) and then re-run.
It would be nice if EMS supported multiple tuning runs without
intervention, but afaik it doesn't.
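A sketch of the marker-file mechanics, using a throwaway directory in place of a real EMS working directory (experiment id 1 assumed, as above):

```shell
# EMS skips any step whose .DONE marker exists; deleting the marker
# makes the next run of experiment.perl redo that step.
mkdir -p steps/1
touch steps/1/TUNING_tune.1.DONE    # stand-in for a completed tuning step
rm -f steps/1/TUNING_tune.1.DONE    # force tuning to re-run
test ! -e steps/1/TUNING_tune.1.DONE && echo "tuning will be re-run"
```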
On 22/06/15 16:15, Lane Schwartz wrote:
Given a successful run of EMS, what do I need to do to
Hi Davood
From line 20113 onwards there's a whole bunch of error messages
indicating that the giza alignment didn't run properly, so then the
resulting phrase extraction didn't work. I can't actually see why giza
failed though - possibly the corpus was not preprocessed correctly. I'm
not
Do you think that my medium system is effective? (Core i5 2400, 4GB
RAM, Ubuntu 32bit 14.04). Of course I wanted to train on about 50k
sentences.
For a small data set of 50k sentences, this should work. You could try
on 10k sentences to be sure.
On 17/06/15 13:46, Davood Mohammadifar wrote:
Hi Davood
It isn't normal to get such large differences in phrase table size or
quality, on the same data set, although small variations are possible.
You should check carefully that you used exactly the same settings in
each run, and check if anything went wrong during training (errors in
Hi All
The deadline for this has been extended to June 2nd. We are looking for
a research associate to join the Edinburgh SMT group, initially for 12
months, but could be extended if current funding applications are
successful.
The advert mentions specific projects, but we can be quite
Hi Marius
It looks like you're missing the bz2 package. Try installing libbz2-dev
(on debian-based systems) or bzip2-devel (rpm-based systems).
You're also using your own boost installation, as opposed to the system
one. Usually it's easier to use the system one as the correct
dependencies
the source of error? Thanks so much.
Cheers,
Marius
*From:* Barry Haddow bhad...@staffmail.ed.ac.uk
*To:* Marius Oliver Gheorghita redwir...@yahoo.com;
moses-support@mit.edu moses-support@mit.edu
*Sent:* Thursday, 21 May 2015, 12
/KES_newDev_placeholders/data/dev/KES10.dev.preproc.tok.true.en'.
at /home/hermesta/mosesdecoder/scripts/training/mert-moses.pl line 1719.
When checking line 1719 of mert-moses.pl, I realized it is also
related to qsub.
Thank you so much!
Carla
On 06.05.2015 15:27, Barry Haddow wrote:
Hi Carla
:18 BST, Jeremy Gwinnup jer...@gwinnup.org wrote:
How hard would it be to append the LMBR scores to the list of features
instead of overwriting it? Maybe I can tackle this at MTMA15 next week.
I’m not too worried about the long runtime at least initially.
On May 6, 2015, at 5:01 PM, Barry Haddow
branch?
Thanks,
Tomas
*From:*Barry Haddow [mailto:bhad...@inf.ed.ac.uk]
*Sent:* Tuesday, May 5, 2015 9:27 PM
*To:* Hieu Hoang; moses-support
*Subject:* Re: [Moses-support] Fwd: Fwd: Server development
Hi Tomas
There were some issues in v2 with the way that caching was done in the
binarised
thanks for your prompt reply. If I am not wrong the name of the server
is hermesta-Z10PE-D8-WS (I have taken it from the machine
information, I attach a screenshot). If I should look somewhere else
please let me know.
Thanks,
Carla
On 06.05.2015 14:10, Barry Haddow wrote:
Hi Carla
Hi Carla
What's your server called?
There's a hard-coded list of Edinburgh machines in ems, so I'm wondering
if it collides with one of them,
cheers - Barry
On 06/05/15 12:56, carla.pa...@hermestrans.com wrote:
Hi everyone,
First of all, thanks for reading and hopefully giving me some