Hi Lane, Barry, Sandra,
 
That was Mark Fishel and me working on batching the LM requests.
 
The code was "somewhat finished" when I left Dublin; however, I recall
that there were still occasional memory leaks :(
 
I have started working on the batched LM code again for use in MT Server Land.
So there's a chance that it can be released in the near future...
 
cheers,
   Christian
 
 

Barry Haddow <[email protected]> wrote on 2 February 2011 at 15:22:

> Hi Lane
>
> That wasn't me, I'm afraid. There was a group working on a way of batching LM
> requests so they could be sent to a server, but I don't think the project was
> ever finished.
>
> cheers - Barry
>
> On Wednesday 02 February 2011 14:14, Lane Schwartz wrote:
> > I recall Barry working on the LM server at the Dublin MT Marathon. What is
> > the current status of that?
> >
> > On Wed, Feb 2, 2011 at 3:29 AM, Miles Osborne <[email protected]> wrote:
> > > To add to Barry's excellent answer, we are currently working on a
> > > client-server language model. This will mean that a cluster of
> > > machines can be used, with a shared resource. It should also work
> > > with multicore.
> > >
> > > But in the short term, you are probably better off with multicore.
> > >
> > > Miles
> > >
> > > On 2 February 2011 06:06, Noubours, Sandra <[email protected]> wrote:
> > > > Hello Barry, hello Tom,
> > > >
> > > > thank you for your answers. I think I have a better idea about
> > > > different approaches to MOSES efficiency issues now.
> > >
> > > > Best regards,
> > > > Sandra
> > > >
> > > > -----Original Message-----
> > > > From: Barry Haddow [mailto:[email protected]]
> > > > Sent: Monday, 31 January 2011 10:52
> > > > To: [email protected]
> > > > Cc: Noubours, Sandra; Tom Hoar
> > > > Subject: Re: [Moses-support] running moses on a cluster with sge
> > > >
> > > > Hi Sandra
> > > >
> > > > The short answer is that it really depends on how big your models
> > > > are. Running on a cluster helps speed up tuning because most of the
> > > > time in tuning is spent decoding, which can easily be parallelised by
> > > > splitting the input file into chunks. So each of the individual
> > > > machines should be capable of loading your models and running a
> > > > decoder.
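Barry's chunk-splitting scheme can be sketched in shell. This is an illustrative sketch only, not the actual parallel wrapper shipped with Moses; the decoder call appears only in a comment, and a stand-in command (`rev`) is used so the pipeline runs anywhere:

```shell
#!/bin/sh
# Chunk-parallel decoding sketch: split the input, process each chunk
# independently (background jobs here; on a cluster each chunk would be
# its own job), then reassemble the output in the original order.
set -e

printf 'line %d\n' 1 2 3 4 5 6 > input.txt

split -l 2 input.txt chunk.              # -> chunk.aa chunk.ab chunk.ac

for f in chunk.aa chunk.ab chunk.ac; do
  # On a real setup this would be the decoder, e.g.:
  #   moses -f moses.ini < "$f" > "$f.out" &
  # Stand-in so the sketch is runnable without Moses installed:
  rev < "$f" > "$f.out" &
done
wait                                     # wait for all chunks

cat chunk.aa.out chunk.ab.out chunk.ac.out > output.txt
```

Because the merge concatenates the per-chunk outputs in the order the chunks were cut, the final file lines up one-to-one with the input, which is what tuning needs.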
> > > >
> > > > The problem with using a cluster (as opposed to multicore) is that
> > > > each machine has to have its own ram, and if you want to load large
> > > > models then you need a lot of ram. Whereas with multicore, each
> > > > thread can access the same model. Sure, binarising saves a lot on
> > > > ram usage, but it slows you down and puts a lot of load on the
> > > > filesystem, which can cause problems on clusters.
> > > >
> > > > Our group's machines are a mixture of 8- and 16-core Xeon 2.67GHz,
> > > > with 36-72G ram, no sge. We also have access to the university
> > > > cluster, but since the most ram you can get there is 16G and sge
> > > > hold jobs don't work at the moment, we don't really use it for
> > > > moses any more.
> > > >
> > > > hope that helps - regards - Barry
> > > >
> > > > On Monday 31 January 2011 07:42, Noubours, Sandra wrote:
> > > >> Hello,
> > > >>
> > > >>
> > > >>
> > > >> thanks for the tips! When talking about using a Sun Grid Engine I was
> > > >> referring to tuning. Making use of a cluster is supposed to speed up
> > > >> the tuning process (see http://www.statmt.org/moses/?n=Moses.FAQ#ntoc10).
> > > >> In this context I wondered what hardware exactly is needed for such a
> > > >> cluster.
> > >
> > > >> Sandra
> > > >>
> > > >> From: Tom Hoar [mailto:[email protected]]
> > > >> Sent: Friday, 28 January 2011 09:01
> > > >> To: Noubours, Sandra
> > > >> Cc: [email protected]
> > > >> Subject: Re: [Moses-support] running moses on a cluster with sge
> > > >>
> > > >>
> > > >>
> > > >> Sandra,
> > > >>
> > > >> What kind of capacity do you need to support? I just finished
> > > >> translating 21,000 pages, over 1/2 million phrases, in 22 hours on an
> > > >> old Intel Core2Quad, 2.4 GHz with 4 GB RAM and a 4-disk RAID-0. Moses
> > > >> was configured with binarized phrase/reordering tables and a KenLM
> > > >> binarized language model. The advances in Moses supporting efficient
> > > >> binarized tables/models are great!
> > > >>
> > > >> We're planning tests for a 2-socket host with two Intel Xeon 5680
> > > >> 6-core 3.33 GHz CPUs, 48 GB RAM and 4 1-TB disks as RAID0. With 12
> > > >> cores (totaling 24 simultaneous threads according to Intel specs),
> > > >> we're expecting to boost capacity to well over 15 million phrases per
> > > >> day on one host.
> > > >>
> > > >> What's the advantage of running Moses on a grid or cluster?
> > > >>
> > > >> Tom
> > > >>
> > > >>
> > > >>
> > > >> On Fri, 28 Jan 2011 08:40:22 +0100, "Noubours, Sandra"
> > > >> <[email protected]> wrote:
> > > >>
> > > >>       Hello,
> > > >>
> > > >>
> > > >>
> > > >>       I would like to run Moses on a cluster. I am still
> > > >> inexperienced with Sun Grid Engine and with clusters in general.
> > > >> Could you give me any instructions or tips for setting up a Linux
> > > >> cluster with Sun Grid Engine for running Moses?
> > > >>
> > > >>       a)      What kind of cluster would you recommend, i.e. how many
> > > >> machines, how many CPUs, what memory, etc.?
> > > >>
> > > >>       b)      When tuning is performed with the multicore option, it
> > > >> does not use more than one CPU. Does the tuning step use more than one
> > > >> CPU when run on a cluster?
> > > >>
> > > >>       c)      Can Sun Grid Engine implement a cluster virtually on
> > > >> one computer, so that jobs are spread locally to different CPUs of one
> > > >> computer?
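On question c): SGE can be installed with the submit host and the only execution host being the same machine, which gives exactly that "virtual cluster": jobs are queued as usual and the scheduler spreads their slots over the local CPUs. A hedged command sketch follows; it is a configuration fragment, not runnable without a gridengine installation, and the queue name `all.q` and job script name are assumptions:

```shell
# Single-host SGE sketch: the local machine is the only execution host.
qconf -sql                      # list the configured queues
qsub -q all.q decode-chunk.sh   # submit one chunk per job; runs locally
qstat                           # watch jobs being scheduled onto local slots
```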
> > > >>
> > > >>
> > > >>
> > > >>       Thank you and best regards!
> > > >>
> > > >>
> > > >>
> > > >>       Sandra
> > > >
> > > > --
> > > > The University of Edinburgh is a charitable body, registered in
> > > > Scotland, with registration number SC005336.
> > > >
> > > >
> > > > _______________________________________________
> > > > Moses-support mailing list
> > > > [email protected]
> > > > http://mailman.mit.edu/mailman/listinfo/moses-support
> > >
> > >
>
>
>
--
Dipl.-Inf. Christian Federmann, Researcher, Language Technology Lab
Office +1.09 -- Phone +49 (0)681/302-5353,  Fax +49 (0)681/302-5338
DFKI GmbH,  Campus D3 2,  Stuhlsatzenhausweg 3,  66123 Saarbruecken
http://www.dfki.de/~cfedermann
 
-------------------------------------------------------------------
Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany
Geschaeftsfuehrung:
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
Dr. Walter Olthoff
Vorsitzender des Aufsichtsrats:
Prof. Dr. h.c. Hans A. Aukes
Amtsgericht Kaiserslautern, HRB 2313
-------------------------------------------------------------------

