Rupert Westenthaler created STANBOL-1424:
--------------------------------------------

             Summary: The commons.opennlp module can load the same model twice 
in parallel
                 Key: STANBOL-1424
                 URL: https://issues.apache.org/jira/browse/STANBOL-1424
             Project: Stanbol
          Issue Type: Bug
    Affects Versions: 0.12.0
            Reporter: Rupert Westenthaler
            Assignee: Rupert Westenthaler
            Priority: Minor
             Fix For: 1.0.0, 0.12.1


The commons.opennlp module allows loading models by their names via the 
DataFileProvider infrastructure. Loaded models are cached in memory. 

If two components request the same model within a short time, especially when 
the 2nd request for a model arrives before the first load has completed, the 
same model is loaded twice in parallel. As a result two instances of the model 
are created.

While the 2nd request will override the cached model of the first, the 
component that issued the first request might still hold a reference to its 
instance. In this case two instances of the model are held in memory.

To solve this the OpenNLP service needs to use a lock while loading models, so 
that concurrent requests for the same model name share a single load operation.
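
A minimal sketch of the intended locking behaviour (not the actual Stanbol API; 
the class and method names OpenNlpModelCache, getModel and loadModel are 
hypothetical, and the DataFileProvider lookup is only indicated by a comment):

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class OpenNlpModelCache {

    // cache keyed by model name; ConcurrentHashMap provides the per-key locking
    private final ConcurrentMap<String, Object> models = new ConcurrentHashMap<>();

    /**
     * Returns the cached model for the given name, loading it at most once.
     * computeIfAbsent blocks concurrent callers requesting the same name until
     * the single load completes, so both callers share one model instance.
     */
    public Object getModel(String name) {
        return models.computeIfAbsent(name, this::loadModel);
    }

    /** Hypothetical loader standing in for the DataFileProvider based lookup. */
    private Object loadModel(String name) {
        // ... resolve the model data via the DataFileProvider and parse it ...
        return new Object(); // placeholder for the parsed OpenNLP model
    }
}
{code}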



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
