Hi Shubham

You could start Moses in server mode:

$ moses -f /path/to/moses.ini --server --server-port 12345 --server-log /path/to/log

This will load the models, keep them in memory, and the server will wait for
client requests and serve them until you terminate the process. Translating
works a bit differently in this case: you have to send an XML-RPC request to
the server.

But first you'd have to make sure Moses is built with XML-RPC.
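For example, a minimal client sketch (assuming the server is running on localhost port 12345 as started above; "translate" and the "text" field are the method and parameter names the Moses XML-RPC server is documented to expose):

```python
# Minimal XML-RPC client sketch for a Moses server started as above.
# Assumes the server listens on localhost:12345; the "translate" method
# and the "text" field are what mosesserver expects and returns.
import xmlrpc.client

def translate(sentence, url="http://localhost:12345/RPC2"):
    proxy = xmlrpc.client.ServerProxy(url)
    # Moses takes a single struct parameter holding the source text
    # and returns a struct with the translation under "text".
    result = proxy.translate({"text": sentence})
    return result["text"]

# Example usage (with the server running):
#   print(translate("das ist ein haus"))
```

Since the models stay loaded between requests, each call only pays the decoding cost, not the startup cost.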

Regards and good luck
Mathias
—

Mathias Müller
AND-2-20
Institute of Computational Linguistics
University of Zurich
Switzerland
+41 44 635 75 81
[email protected]

On Fri, Dec 16, 2016 at 10:32 AM, Shubham Khandelwal <[email protected]>
wrote:

> Hey Thomas,
>
> Thanks for your reply.
> Using Cube Pruning, the speed is littile bit high, but not that much. I
> will try to play with these parameters.
>
> I have the moses2 binary, which supports it as well, but it is taking more
> time than moses. Could you please send or share your moses2 binary if
> possible?
>
> Also, I do not wish to run this command (~/mosesdecoder/bin/moses
> -f moses.ini -threads all) every time for every input. Is there any way in
> Moses to keep all the models loaded in memory forever, so that I can just
> pass an input and get output in real time without running this command
> again and again?
>
> Looking forward to your response.
>
> Thanks again.
>
> On Fri, Dec 16, 2016 at 1:20 PM, Tomasz Gawryl <[email protected]
> > wrote:
>
>> Hi,
>> If you want to speed up decoding, maybe you should consider changing the
>> search algorithm. I am also using compact phrase tables, and after some
>> tests I realized that cube pruning gives almost exactly the same quality
>> but is much faster. For example, you can add something like this to your
>> config file:
>>
>> # Cube Pruning
>> [search-algorithm]
>> 1
>> [cube-pruning-pop-limit]
>> 1000
>> [stack]
>> 50
>>
>> If your model allows, you may also try the moses2 binary, which is faster
>> than the original.
>>
>> Regards,
>> Thomas
>>
>> ----------------------------------------------------------------------
>>
>> Message: 1
>> Date: Thu, 15 Dec 2016 19:12:01 +0530
>> From: Shubham Khandelwal <[email protected]>
>> Subject: Re: [Moses-support] Regarding Decoding Time
>> To: Hieu Hoang <[email protected]>
>> Cc: moses-support <[email protected]>
>> Message-ID:
>>         <[email protected]
>> ail.com>
>> Content-Type: text/plain; charset="utf-8"
>>
>> Hello,
>>
>> Currently, I am using phrase-table.minphr, reordering-table.minlexr and a
>> language model (the total size of these three is 6 GB). I tried to decode
>> on two different machines (8 cores/16 GB RAM and 4 cores/40 GB RAM) using
>> them. Decoding around 500 words took 90 seconds and 100 seconds
>> respectively on those machines. I am already using the compact phrase and
>> reordering table representations for faster decoding. Is there any other
>> way to reduce this decoding time?
>>
>> Also, does Moses have a distributed way of decoding on multiple machines?
>>
>> Looking forward to your response.
>>
>> _______________________________________________
>> Moses-support mailing list
>> [email protected]
>> http://mailman.mit.edu/mailman/listinfo/moses-support
>>
>
>
>
> --
> Yours Sincerely,
>
> Shubham Khandelwal
> Masters in Informatics (M2-MoSIG),
> University Joseph Fourier-Grenoble INP,
> Grenoble, France
> Webpage: https://sites.google.com/site/skhandelwl21/
>
>
>
