Sorry, I realized you were referring to the API described here:
http://incubator.apache.org/opennlp/documentation/manual/opennlp.html#opennlp
I will check it out.
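For anyone following along, the "load once, share between instances" approach Jörn describes looks roughly like the sketch below. It uses the OpenNLP Java API's POSModel and POSTaggerME classes; the model filename is taken from the command line in this thread, and exact class names may vary slightly between OpenNLP versions.

```java
import java.io.FileInputStream;
import java.io.InputStream;

import opennlp.tools.postag.POSModel;
import opennlp.tools.postag.POSTaggerME;

public class SharedTaggerSketch {
    public static void main(String[] args) throws Exception {
        // Load the large model exactly once at startup; this is the
        // slow step (a few seconds for the ~70 MB English model).
        InputStream in = new FileInputStream("en-maxent-pos.bin");
        POSModel model = new POSModel(in);
        in.close();

        // Tagger instances are cheap to create and all share the same
        // loaded model, e.g. one tagger per worker thread.
        POSTaggerME tagger = new POSTaggerME(model);

        String[] tokens = {"The", "quick", "brown", "fox"};
        String[] tags = tagger.tag(tokens);
        for (int i = 0; i < tokens.length; i++) {
            System.out.println(tokens[i] + "/" + tags[i]);
        }
    }
}
```

Running this inside a long-lived JVM process (e.g. a small server) avoids paying the model-load cost on every file, which is what invoking the `opennlp POSTagger` command per file does.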

On Thu, Sep 15, 2011 at 2:48 PM, György Chityil <gyorgy.chit...@gmail.com>wrote:

> Thanks Jörn, this sounds interesting: "you only need to do it once at start
> up. And then the model can be shared between all POS Tagger instances." Is
> there some kind of documentation on how to run multiple POS Tagger
> instances? So far it has seemed to me (on Linux) that I have to start a new
> instance for every tagging run, i.e. execute the command "opennlp
> POSTagger en-maxent-pos.bin < myfile.txt > result.txt" each time.
>
> Or perhaps, as I just thought of, there is a way to load opennlp (and
> the tagger) with the nohup command on Linux so it stays active in the
> background waiting for requests.
>
>
> On Thu, Sep 15, 2011 at 2:42 PM, Jörn Kottmann <kottm...@gmail.com> wrote:
>
>> On 9/15/11 2:25 PM, György Chityil wrote:
>>
>>> Hello,
>>>
>>> On my computer it takes on average 2-3 secs to load the large POSTagger
>>> model (en, circa 70 MB).
>>>
>>> Here is a piece of the output:
>>> Loading POS Tagger model ... done (2.814s)
>>>
>>>
>>> Is there any way to speed this up?
>>>
>>
>> No, not really. We would need to optimize the code that loads the
>> model. You are invited to submit a patch which does that; there may be
>> a few easy ways to make it faster, but I am not sure.
>>
>> Which loading time would you like to have?
>>
>> In all the applications I have worked on, the loading time didn't matter
>> because you only need to do it once at startup. After that, the model can
>> be shared between all POS Tagger instances.
>>
>> Jörn
>>
>>
>
>
> --
> Gyuri
> 274 44 98
> 06 30 5888 744
>
>


-- 
Gyuri
274 44 98
06 30 5888 744
