[ 
https://issues.apache.org/jira/browse/LUCENE-842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Connolly closed LUCENE-842.
----------------------------------

       Resolution: Invalid
    Fix Version/s: 2.1

It was not a bug in Lucene; rather, the NetBeans 5.5 memory profiler was 
providing misleading information. I've attached two screenshots that show the 
heap and GC activity using the test case provided.

There may be an issue with NetBeans around the profiling of joined threads; 
regardless, it is not a bug in Lucene.

I do need to open and close searchers in each call, because the indexes can 
change in response to other events.
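That open-search-close-per-call pattern can be sketched in a self-contained way. The `Searcher` interface, `openSearcher`, and `closeQuietly` below are hypothetical stand-ins for the Lucene types and helpers, not the real API; the point is only the try/finally lifecycle that picks up a fresh searcher each call:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class SearchPerCall {
    // Hypothetical stand-in for IndexSearcher: opened, used, and closed per call.
    interface Searcher extends Closeable {
        String search(String query) throws IOException;
    }

    static final List<String> log = new ArrayList<>();

    // Opens a fresh searcher each call so a changed index is picked up.
    static Searcher openSearcher(String index) {
        log.add("open:" + index);
        return new Searcher() {
            public String search(String query) { return index + ":" + query; }
            public void close() { log.add("close:" + index); }
        };
    }

    // Null-safe close that logs rather than masking an exception from search.
    static void closeQuietly(Closeable c) {
        if (c == null) return;
        try { c.close(); } catch (IOException e) { log.add("warn:" + e.getMessage()); }
    }

    static String doSearch(String query) throws IOException {
        Searcher s = null;
        try {
            s = openSearcher("idx");
            return s.search(query);
        } finally {
            closeQuietly(s); // always release, even if search throws
        }
    }

    public static void main(String[] args) throws IOException {
        String result = doSearch("lucene");
        if (!result.equals("idx:lucene")) throw new AssertionError(result);
        if (!log.equals(List.of("open:idx", "close:idx"))) throw new AssertionError(log);
        System.out.println("ok");
    }
}
```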

Please close the issue. Thank you for answering so quickly. It is a great piece 
of software.

> ParallelMultiSearcher memory leak
> ---------------------------------
>
>                 Key: LUCENE-842
>                 URL: https://issues.apache.org/jira/browse/LUCENE-842
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: Search
>    Affects Versions: 2.1
>         Environment: Windows XP SP2 and Red Hat EL 4
>            Reporter: Thomas Connolly
>            Priority: Critical
>             Fix For: 2.1
>
>         Attachments: search_test_gc.PNG, search_test_heap.PNG, 
> TestParallelMultiSearcherMemLeak.java
>
>
> When using the org.apache.lucene.search.ParallelMultiSearcher to search on a 
> single searcher (reading a single index), continuous runs result in a memory 
> leak. 
> Substituting MultiSearcher does not result in a memory leak, and this is the 
> workaround currently used.
> An example of the code used is as follows. Note that the close routine was 
> added for the individual searchers and the MultiSearcher; otherwise there was 
> a leak in MultiSearcher.
>     private void doSearch(Search search)
>     {
>         IndexSearcher[] indexSearchers = null;
>         
>         MultiSearcher multiSearcher = null;
>         try
>         {
>             indexSearchers = getIndexSearcher();
>             
>             // aggregate the searches across multiple indexes
>             multiSearcher = new ParallelMultiSearcher(indexSearchers); // causes LEAK - BAD
>             //multiSearcher = new MultiSearcher(indexSearchers); // no leak - GOOD
>             final QueryParser parser = new QueryParser("content", new ExtendedStandardAnalyser());
>             final Query query = parser.parse(search.getQuery());
>             
>             final Hits hits = multiSearcher.search(query,
>                     getFilter(search.getFilters()), getSort(search.getSort()));
>             // process hits...
>         }
>         finally
>         {
>             close(indexSearchers);
>             close(multiSearcher);
>         }
>     }
>     /**
>      * Close the index searchers.
>      * 
>      * @param indexSearchers Index Searchers.
>      */
>     private static void close(IndexSearcher[] indexSearchers)
>     {
>         if (indexSearchers != null)
>         {
>             for (IndexSearcher indexSearcher : indexSearchers)
>             {
>                 try
>                 {
>                     indexSearcher.close();
>                 }
>                 catch (IOException ioex)
>                 {
>                     LOGGER.warn("Unable to close the index searcher!", ioex);
>                 }
>             }
>         }
>     }
>     
>     /**
>      * Close the multi-searcher.
>      * 
>      * @param aMultiSearcher Multi-searcher to close.
>      */
>     private static void close(MultiSearcher aMultiSearcher)
>     {
>         if (aMultiSearcher != null)
>         {
>             try
>             {
>                 aMultiSearcher.close();
>             }
>             catch (IOException ioex)
>             {
>                 LOGGER.warn("Unable to close the multi searcher!", ioex);
>             }
>         }
>     }

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

