RE: Memory Leaks in solr 4.8.1

2014-07-02 Thread Markus Jelsma
Hi, you can safely ignore this, it is shutting down anyway. Just don't reload 
the app a lot of times without actually restarting Tomcat. 
 
-Original message-
 From:Aman Tandon amantandon...@gmail.com
 Sent: Wednesday 2nd July 2014 7:22
 To: solr-user@lucene.apache.org
 Subject: Memory Leaks in solr 4.8.1
 
 Hi,
 
 When I am shutting down Solr I am getting the memory leak error in the logs.
 
 Jul 02, 2014 10:49:10 AM org.apache.catalina.loader.WebappClassLoader
  checkThreadLocalMapForLeaks
  SEVERE: The web application [/solr] created a ThreadLocal with key of type
  [org.apache.solr.schema.DateField.ThreadLocalDateFormat] (value
  [org.apache.solr.schema.DateField$ThreadLocalDateFormat@1d987b2]) and a
  value of type [org.apache.solr.schema.DateField.ISO8601CanonicalDateFormat]
  (value 
  [org.apache.solr.schema.DateField$ISO8601CanonicalDateFormat@6b2ed43a])
  but failed to remove it when the web application was stopped. Threads are
  going to be renewed over time to try and avoid a probable memory leak.
 
 
 Please check.
 With Regards
 Aman Tandon
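
For context, Tomcat logs this SEVERE message when a class loaded by the
webapp leaves a value behind in a pooled worker thread's ThreadLocal map at
shutdown. A minimal sketch of the cleanup Tomcat is asking for, using a
hypothetical listener (this is not Solr's actual code):

    import java.text.SimpleDateFormat;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    // Hypothetical illustration: a webapp that caches a per-thread date
    // formatter in a ThreadLocal should clear it at shutdown. Otherwise the
    // container's pooled threads keep referencing the value, and through it
    // the webapp's classloader, across reload cycles.
    public class DateFormatCleanupListener implements ServletContextListener {

        static final ThreadLocal<SimpleDateFormat> FORMAT =
                new ThreadLocal<SimpleDateFormat>() {
                    @Override
                    protected SimpleDateFormat initialValue() {
                        return new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
                    }
                };

        @Override
        public void contextInitialized(ServletContextEvent sce) {
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            // remove() only clears the calling thread's slot; Tomcat "renews"
            // the remaining pooled threads over time, exactly as the log says.
            FORMAT.remove();
        }
    }

This is why the warning is harmless at shutdown but matters if /solr is
reloaded many times inside one Tomcat process: each reload can strand
another classloader.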
 


RE: Memory Leaks in solr 4.8.1

2014-07-02 Thread Aman Tandon
We reload at an interval of 6-7 days and restart maybe every 15-18 days if
the response becomes too slow.
On Jul 2, 2014 7:09 PM, Markus Jelsma markus.jel...@openindex.io wrote:

 Hi, you can safely ignore this, it is shutting down anyway. Just don't
 reload the app a lot of times without actually restarting Tomcat.




RE: Memory Leaks in solr 4.8.1

2014-07-02 Thread Chris Hostetter

This is a long-standing issue in Solr that has some suggested fixes (see
the JIRA comments), but no one has been seriously affected by it enough for
anyone to invest time in trying to improve it...

https://issues.apache.org/jira/browse/SOLR-2357

In general, the fact that Solr is moving away from being a webapp and
towards being a stand-alone Java application makes it even less likely
that this will ever really affect anyone.




-Hoss
http://www.lucidworks.com/


Re: Memory Leaks in solr 4.8.1

2014-07-02 Thread Aman Tandon
Thanks Chris, being independent of the servlet container is good.

Eagerly waiting for Solr 5 :)

With Regards
Aman Tandon





Memory Leaks in solr 4.8.1

2014-07-01 Thread Aman Tandon
Hi,

When I am shutting down Solr I am getting the memory leak error in the logs.

Jul 02, 2014 10:49:10 AM org.apache.catalina.loader.WebappClassLoader
 checkThreadLocalMapForLeaks
 SEVERE: The web application [/solr] created a ThreadLocal with key of type
 [org.apache.solr.schema.DateField.ThreadLocalDateFormat] (value
 [org.apache.solr.schema.DateField$ThreadLocalDateFormat@1d987b2]) and a
 value of type [org.apache.solr.schema.DateField.ISO8601CanonicalDateFormat]
 (value [org.apache.solr.schema.DateField$ISO8601CanonicalDateFormat@6b2ed43a])
 but failed to remove it when the web application was stopped. Threads are
 going to be renewed over time to try and avoid a probable memory leak.


Please check.
With Regards
Aman Tandon


Re: leaks in solr

2014-03-26 Thread Shawn Heisey

On 3/25/2014 4:06 PM, harish.agarwal wrote:

I'm having a very similar issue to this currently on 4.6.0 (large
java.lang.ref.Finalizer usage, many open file handles to long gone files) --
were you able to make any progress diagnosing this issue?


A few questions:

Are you using any contrib or third-party jars with Solr?
Are you using non-default values for the Java classes seen in the config 
and schema?

What vendor and version is your JVM?

For the JVM, I'd recommend the latest Oracle Java 6 or Oracle Java 7u25.

Thanks,
Shawn



Re: leaks in solr

2014-03-26 Thread Harish Agarwal
Thanks for the help -- after fortuitously looking at a separate thread:

http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201403.mbox/%3CCAJYe0M4qNKzqT4gB-qib0T6%3DY1KYr7vKcNEYDHWH1MnMoCLtYw%40mail.gmail.com%3E

I upgraded to 7u25 and all is well!

On a separate note, you'd mentioned DocValues faceting in my prior thread
 -- is this data structure meant to replace UnInvertedFields?






Re: leaks in solr

2014-03-25 Thread harish.agarwal
I'm having a very similar issue to this currently on 4.6.0 (large
java.lang.ref.Finalizer usage, many open file handles to long gone files) --
were you able to make any progress diagnosing this issue?





Re: leaks in solr

2012-07-31 Thread Karthick Duraisamy Soundararaj
Just in case someone else is stumbling onto the same kind of issue, check
the Tomcat webapps directory and try redeploying after cleaning it out.

I had a version without subQueries.get(i).close(); deployed earlier and
then added a new version with subQueries.get(i).close(); but Tomcat
did not pick up the new version. Once I flushed the work directory and
restarted Tomcat, it seems to be happy!


Re: leaks in solr

2012-07-27 Thread roz dev
In my case, I see only one searcher and no field cache - still Old Gen is
almost full at 22 GB.

Does it have to do with the index or some other configuration?

-Saroj




Re: leaks in solr

2012-07-27 Thread Karthick Duraisamy Soundararaj
I have tons of these open.
searcherName : Searcher@24be0446 main
caching : true
numDocs : 1331167
maxDoc : 1338549
reader : SolrIndexReader{this=5585c0de,r=ReadOnlyDirectoryReader@5585c0de
,refCnt=1,segments=18}
readerDir : org.apache.lucene.store.NIOFSDirectory@
/usr/local/solr/highlander/data/..@2f2d9d89
indexVersion : 1336499508709
openedAt : Fri Jul 27 09:45:16 EDT 2012
registeredAt : Fri Jul 27 09:45:19 EDT 2012
warmupTime : 0

In my custom handler I have the following implementation (it's not the
full code, but it gives an overall idea):

  class CustomHandler extends SearchHandler {

      void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) {
          SolrCore core = req.getCore();
          Vector<SimpleOrderedMap<Object>> requestParams =
                  new Vector<SimpleOrderedMap<Object>>();
          /* parse the params in such a way that
             requestParams[i] -> parameters of the i-th request */
          ...

          try {
              Vector<LocalSolrQueryRequest> subQueries =
                      new Vector<LocalSolrQueryRequest>(solrcore, requestParams[i]);

              for (i = 0; i < subQueryCount; i++) {
                  ResponseBuilder rb = new ResponseBuilder();
                  rb.req = req;

                  // calls the search handler's handleRequestBody, whose
                  // signature I have modified
                  handleRequestBody(req, rsp, rb);
              }
          } finally {
              for (i = 0; i < subQueries.size(); i++)
                  subQueries.get(i).close();
          }
      }
  }

*Search Handler Changes*

  class SearchHandler {

      void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp,
                             ResponseBuilder rb, ArrayList<Component> comps) {
          // ResponseBuilder rb = new ResponseBuilder();
          ...
      }

      void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) {
          ResponseBuilder rb = new ResponseBuilder(req, rsp, new ResponseBuilder());
          handleRequestBody(req, rsp, rb, comps);
      }
  }


I don't see the old index searcher getting closed after warming up the new
one... Because I replicate every 5 minutes, it crashes in 2 hours.
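
A minimal sketch of the life-cycle under discussion, assuming the Solr
3.x-era LocalSolrQueryRequest API: a sub-request that touches the index
acquires a reference to the registered searcher on its first getSearcher()
call, and close() is what releases it. Creating and closing each
sub-request in its own try/finally keeps the reference count balanced even
when a sub-query throws (a sketch, not the poster's actual fix):

    import org.apache.solr.common.util.NamedList;
    import org.apache.solr.core.SolrCore;
    import org.apache.solr.request.LocalSolrQueryRequest;

    public final class SubQueryLifecycle {

        // One sub-request per parameter set, always closed, so no searcher
        // reference can outlive the request.
        public static void runSubQuery(SolrCore core, NamedList<Object> params) {
            LocalSolrQueryRequest subReq = new LocalSolrQueryRequest(core, params);
            try {
                // ... hand subReq to the search components here ...
            } finally {
                // Releases the searcher reference the request may hold;
                // skipping this is what pins old searchers (and their
                // deleted segment files) in memory after each replication.
                subReq.close();
            }
        }
    }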

 



Re: leaks in solr

2012-07-27 Thread Karthick Duraisamy Soundararaj
Hello all,
When I run a set of queries from my Eclipse setup, this
works fine, but when I run it on the test production server, the searchers
are leaked. Any hint would be appreciated. I have not used CoreContainer.

Considering that the SearchHandler is running fine, I am not able to think
of a reason why my extended version wouldn't work. Does anyone have any
idea?


Re: leaks in solr

2012-07-27 Thread Karthick Duraisamy Soundararaj
Just to clarify, the leak happens every time a new searcher is opened.


Re: leaks in solr

2012-07-27 Thread Lance Norskog
A finally clause can throw exceptions. Can this throw an exception?
 subQueries.get(i).close();

 If so, each close() call should be in a try-catch block.
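
A small sketch of the suggestion (the names mirror the thread's code; this
is not Solr's own implementation): if one close() throws inside a plain
finally loop, the loop aborts and every later sub-request leaks its
searcher reference, so each close() gets its own try/catch:

    import java.util.List;
    import org.apache.solr.request.SolrQueryRequest;

    final class SubQueryCloser {

        // Close every sub-request even if an earlier close() fails; a single
        // exception must not stop the remaining references from being freed.
        static void closeAll(List<SolrQueryRequest> subQueries) {
            for (SolrQueryRequest subQuery : subQueries) {
                try {
                    subQuery.close();
                } catch (Exception e) {
                    // Log and continue; the other requests still need closing.
                    e.printStackTrace();
                }
            }
        }
    }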


Re: leaks in solr

2012-07-27 Thread Karthick Duraisamy Soundararaj
First, no, because I do the following:
for (i = 0; i < subQueries.size(); i++) {
    subQueries.get(i).close();
}

Second, I don't see any exception until the first searcher leak happens.


Re: leaks in solr

2012-07-27 Thread Karthick Duraisamy Soundararaj
SimpleOrderedMap<Object> commonRequestParams;           // holds the common request params
Vector<SimpleOrderedMap<Object>> subQueryRequestParams; // holds the request params of the sub-queries

I use the above to create multiple LocalSolrQueryRequests. To add a little
more information, I create a new ResponseBuilder for each request.

I also hold a reference to the query component as a private member in my
CustomHandler. Considering that the component is initialized only once
during startup, I assume this isn't a cause for concern.


Re: leaks in solr

2012-07-27 Thread Karthick Duraisamy Soundararaj
subQueries.get(i).close() is nothing but pulling the reference from the
vector and closing it. So yes, it wouldn't throw an exception.

Vector<LocalSolrQueryRequest> subQueries

Please let me know if you need any more information.


Re: leaks in solr

2012-07-26 Thread Karthick Duraisamy Soundararaj
Did you find any more clues? I have this problem on my machines as well.




Re: leaks in solr

2012-07-26 Thread roz dev
Hi Guys

I am also seeing this problem.

I am using SOLR 4 from Trunk and seeing this issue repeat every day.

Any inputs about how to resolve this would be great

-Saroj





Re: leaks in solr

2012-07-26 Thread Mark Miller

On Jul 26, 2012, at 3:18 PM, roz dev rozde...@gmail.com wrote:

 Hi Guys
 
 I am also seeing this problem.
 
 I am using SOLR 4 from Trunk and seeing this issue repeat every day.
 
 Any inputs about how to resolve this would be great
 
 -Saroj


Trunk from what date?

- Mark


Re: leaks in solr

2012-07-26 Thread roz dev
it was from 4/11/12

-Saroj


Re: leaks in solr

2012-07-26 Thread Mark Miller
I'd take a look at this issue: https://issues.apache.org/jira/browse/SOLR-3392

Fixed late April.

- Mark Miller
lucidimagination.com

Re: leaks in solr

2012-07-26 Thread roz dev
Thanks Mark.

We are never calling commit or optimize with openSearcher=false.

As per logs, this is what is happening

openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false}

--
But we are going to use 4.0 Alpha and see if that helps.

-Saroj


Re: leaks in solr

2012-07-26 Thread Karthick Duraisamy Soundararaj
Mark,
We use Solr 3.6.0 on FreeBSD 9. Over a period of time, it
accumulates lots of space!




Re: leaks in solr

2012-07-26 Thread Lance Norskog
What does the Statistics page in the Solr admin say? There might be
several searchers open: org.apache.solr.search.SolrIndexSearcher

Each searcher holds open different generations of the index. If
obsolete index files are held open, it may be old searchers. How big
are the caches? How long does it take to autowarm them?
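
The reference counting behind this, sketched under the assumption of the
Solr 3.x SolrIndexSearcher API: SolrCore.getSearcher() hands out a
RefCounted wrapper, and every borrower must call decref(), otherwise the
searcher, and the index generation it holds open, is never released:

    import org.apache.solr.core.SolrCore;
    import org.apache.solr.search.SolrIndexSearcher;
    import org.apache.solr.util.RefCounted;

    public final class SearcherBorrowExample {

        // Hedged sketch of correct searcher borrowing: the try/finally
        // guarantees the decref(), so once a newer searcher is registered
        // the old one can close and its obsolete files can be deleted.
        public static int countDocs(SolrCore core) {
            RefCounted<SolrIndexSearcher> ref = core.getSearcher();
            try {
                return ref.get().maxDoc();
            } finally {
                ref.decref(); // without this, the searcher leaks as described
            }
        }
    }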





-- 
Lance Norskog
goks...@gmail.com


leaks in solr

2012-06-29 Thread Bernd Fehling
Hi list,

while monitoring my Solr 3.6.1 installation I noticed an increase in memory usage
in the OldGen JVM heap on my slave. I decided to force a Full GC from jvisualvm and
send an optimize to the already optimized slave index. Normally this helps, as
I have observed this issue in the past. But not this time: the Full GC
didn't free any memory. So I decided to take a heap dump and see what
MemoryAnalyzer shows. The heap dump is about 23 GB in size.

1.)
Report Top consumers - Biggest Objects:
Total: 12.3 GB
org.apache.lucene.search.FieldCacheImpl : 8.1 GB
class java.lang.ref.Finalizer   : 2.1 GB
org.apache.solr.util.ConcurrentLRUCache : 1.5 GB
org.apache.lucene.index.ReadOnlySegmentReader : 622.5 MB
...

As you can see, Finalizer has already reached 2.1 GB!!!

* java.util.concurrent.ConcurrentHashMap$Segment[16] @ 0x37b056fd0
  * segments java.util.concurrent.ConcurrentHashMap @ 0x39b02d268
* map org.apache.solr.util.ConcurrentLRUCache @ 0x398f33c30
  * referent java.lang.ref.Finalizer @ 0x37affa810
* next java.lang.ref.Finalizer @ 0x37affa838
...

Seems to be org.apache.solr.util.ConcurrentLRUCache
The attributes are:

Type    | Name                | Value
--------+---------------------+------------------------------------------------------------
boolean | isDestroyed         | true
ref     | cleanupThread       | null
ref     | evictionListener    | null
long    | oldestEntry         | 0
int     | acceptableWaterMark | 9500
ref     | stats               | org.apache.solr.util.ConcurrentLRUCache$Stats @ 0x37b074dc8
boolean | islive              | true
boolean | newThreadForCleanup | false
boolean | isCleaning          | false
ref     | markAndSweepLock    | java.util.concurrent.locks.ReentrantLock @ 0x39bf63978
int     | lowerWaterMark      | 9000
int     | upperWaterMark      | 1
ref     | map                 | java.util.concurrent.ConcurrentHashMap @ 0x39b02d268




2.)
While searching for open files and their references I noticed that there are
references to index files which have already been deleted from disk.
E.g. the recent index files are data/index/_2iqw.frq and data/index/_2iqx.frq,
but I also see references to data/index/_2hid.frq, which is quite old and was
deleted way back during earlier replications.
I have to analyze this a bit deeper.


So far my report; I'll keep analyzing this huge heap dump.
If you need any other info or even the heap dump, let me know.


Regards
Bernd
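
For readers who want to reproduce this kind of analysis: a heap dump like the one above can be taken with jmap, or programmatically on a HotSpot JVM via the com.sun diagnostic MXBean. A minimal sketch, assuming the code runs inside the Solr JVM and the output path is writable:

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class DumpHeap {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // live=true walks only reachable objects (a GC runs first),
            // which is what you want when hunting retained memory.
            diag.dumpHeap("/tmp/solr-heap.hprof", true);
        }
    }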



Re: JVM Heap utilization & Memory leaks with Solr

2009-08-20 Thread Rahul R
All these 3700 fields are single-valued non-boolean fields. Thanks

Regards
Rahul

On Wed, Aug 19, 2009 at 8:33 PM, Fuad Efendi f...@efendi.ca wrote:


 Hi Rahul,

 JRockit could be used at least in a test environment to monitor the JVM (and
 troubleshoot SOLR; it is licensed for free for developers!); they even have an
 Eclipse plugin now, and it is licensed by Oracle (BEA)... But, of course, in
 large companies the test environment is in the hands of testers :)


 But... 3700 fields will create (over time) 3700 arrays, each of size
 5,000,000!!! Even if most of the fields are empty for most of the documents...
 This applies to non-tokenized single-valued non-boolean fields only (Lucene
 internals, FieldCache)... and it won't be GC-collected after user log-off...
 prefer a dedicated box for SOLR.

 -Fuad


 -Original Message-
 From: Rahul R [mailto:rahul.s...@gmail.com]
 Sent: August-19-09 6:19 AM
 To: solr-user@lucene.apache.org
 Subject: Re: JVM Heap utilization & Memory leaks with Solr

 Fuad,
 We have around 5 million documents and around 3700 fields. All documents
 will not have values for all the fields. JRockit is not approved for use
 within my organization. But thanks for the info anyway.

 Regards
 Rahul

 On Tue, Aug 18, 2009 at 9:41 AM, Funtick f...@efendi.ca wrote:

 
  BTW, you should really prefer JRockit which really rocks!!!
 
  Mission Control has the necessary tooling; and JRockit produces a _nice_
  exception stacktrace (explaining almost everything) even in case of OOM,
  which the Sun JVM still fails to produce.
 
 
  SolrServlet still catches Throwable:
 
 } catch (Throwable e) {
   SolrException.log(log,e);
   sendErr(500, SolrException.toStr(e), request, response);
 } finally {
 
 
 
 
 
  Rahul R wrote:
  
   Otis,
   Thank you for your response. I know there are a few variables here but
  the
   difference in memory utilization with and without shards somehow leads
 me
   to
   believe that the leak could be within Solr.
  
   I tried using a profiling tool - Yourkit. The trial version was free
 for
   15
   days. But I couldn't find anything of significance.
  
   Regards
   Rahul
  
  
   On Tue, Aug 4, 2009 at 7:35 PM, Otis Gospodnetic
   otis_gospodne...@yahoo.com
   wrote:
  
   Hi Rahul,
  
   A) There are no known (to me) memory leaks.
   I think there are too many variables for a person to tell you what
   exactly
   is happening, plus you are dealing with the JVM here. :)
  
   Try jmap -histo:live PID-HERE | less and see what's using your memory.
  
   Otis
   --
   Sematext is hiring -- http://sematext.com/about/jobs.html?mls
   Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR
  
  
  
   - Original Message 
From: Rahul R rahul.s...@gmail.com
To: solr-user@lucene.apache.org
Sent: Tuesday, August 4, 2009 1:09:06 AM
 Subject: JVM Heap utilization & Memory leaks with Solr
   
I am trying to track memory utilization with my Application that
 uses
   Solr.
Details of the setup :
-3rd party Software : Solaris 10, Weblogic 10, jdk_150_14, Solr
 1.3.0
- Hardware : 12 CPU, 24 GB RAM
   
For testing during PSR I am using a smaller subset of the actual
 data
   that I
want to work with. Details of this smaller sub-set :
- 5 million records, 4.5 GB index size
   
Observations during PSR:
A) I have allocated 3.2 GB for the JVM(s) that I used. After all
 users
logout and doing a force GC, only 60 % of the heap is reclaimed. As
   part
   of
the logout process I am invalidating the HttpSession and doing a
   close()
   on
CoreContainer. From my application's side, I don't believe I am
  holding
   on
to any resource. I wanted to know if there are known issues
  surrounding
memory leaks with Solr ?
B) To further test this, I tried deploying with shards. 3.2 GB was
   allocated
to each JVM. All JVMs had 96 % free heap space after start up. I got
   varying
results with this.
 Case 1 : Used 6 weblogic domains. My application was deployed on 1
   domain.
I split the 5 million index into 5 parts of 1 million each and used
   them
   as
shards. After multiple users used the system and doing a force GC,
   around
   94
- 96 % of heap was reclaimed in all the JVMs.
Case 2: Used 2 weblogic domains. My application was deployed on 1
   domain.
   On
the other, I deployed the entire 5 million part index as one shard.
   After
 multiple users used the system and doing a force GC, around 76 % of
  the
   heap
was reclaimed in the shard JVM. And 96 % was reclaimed in the JVM
  where
   my
application was running. This result further convinces me that my
application can be absolved of holding on to memory resources.
   
I am not sure how to interpret these results ? For searching, I am
   using
Without Shards : EmbeddedSolrServer
With Shards :CommonsHttpSolrServer
In terms of Solr objects this is what differs in my code between
  normal
search and shards

Re: JVM Heap utilization & Memory leaks with Solr

2009-08-19 Thread Rahul R
Fuad,
We have around 5 million documents and around 3700 fields. All documents
will not have values for all the fields. JRockit is not approved for use
within my organization. But thanks for the info anyway.

Regards
Rahul

On Tue, Aug 18, 2009 at 9:41 AM, Funtick f...@efendi.ca wrote:


 BTW, you should really prefer JRockit which really rocks!!!

 Mission Control has the necessary tooling; and JRockit produces a _nice_
 exception stacktrace (explaining almost everything) even in case of OOM,
 which the Sun JVM still fails to produce.


 SolrServlet still catches Throwable:

} catch (Throwable e) {
  SolrException.log(log,e);
  sendErr(500, SolrException.toStr(e), request, response);
} finally {





 Rahul R wrote:
 
  Otis,
  Thank you for your response. I know there are a few variables here but
 the
  difference in memory utilization with and without shards somehow leads me
  to
  believe that the leak could be within Solr.
 
  I tried using a profiling tool - Yourkit. The trial version was free for
  15
  days. But I couldn't find anything of significance.
 
  Regards
  Rahul
 
 
  On Tue, Aug 4, 2009 at 7:35 PM, Otis Gospodnetic
  otis_gospodne...@yahoo.com
  wrote:
 
  Hi Rahul,
 
  A) There are no known (to me) memory leaks.
  I think there are too many variables for a person to tell you what
  exactly
  is happening, plus you are dealing with the JVM here. :)
 
  Try jmap -histo:live PID-HERE | less and see what's using your memory.
 
  Otis
  --
  Sematext is hiring -- http://sematext.com/about/jobs.html?mls
  Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR
 
 
 
  - Original Message 
   From: Rahul R rahul.s...@gmail.com
   To: solr-user@lucene.apache.org
   Sent: Tuesday, August 4, 2009 1:09:06 AM
   Subject: JVM Heap utilization & Memory leaks with Solr
  
   I am trying to track memory utilization with my Application that uses
  Solr.
   Details of the setup :
   -3rd party Software : Solaris 10, Weblogic 10, jdk_150_14, Solr 1.3.0
   - Hardware : 12 CPU, 24 GB RAM
  
   For testing during PSR I am using a smaller subset of the actual data
  that I
   want to work with. Details of this smaller sub-set :
   - 5 million records, 4.5 GB index size
  
   Observations during PSR:
   A) I have allocated 3.2 GB for the JVM(s) that I used. After all users
   logout and doing a force GC, only 60 % of the heap is reclaimed. As
  part
  of
   the logout process I am invalidating the HttpSession and doing a
  close()
  on
   CoreContainer. From my application's side, I don't believe I am
 holding
  on
   to any resource. I wanted to know if there are known issues
 surrounding
   memory leaks with Solr ?
   B) To further test this, I tried deploying with shards. 3.2 GB was
  allocated
   to each JVM. All JVMs had 96 % free heap space after start up. I got
  varying
   results with this.
   Case 1 : Used 6 weblogic domains. My application was deployed on 1
  domain.
   I split the 5 million index into 5 parts of 1 million each and used
  them
  as
   shards. After multiple users used the system and doing a force GC,
  around
  94
   - 96 % of heap was reclaimed in all the JVMs.
   Case 2: Used 2 weblogic domains. My application was deployed on 1
  domain.
  On
   the other, I deployed the entire 5 million part index as one shard.
  After
   multiple users used the system and doing a force GC, around 76 % of
 the
  heap
   was reclaimed in the shard JVM. And 96 % was reclaimed in the JVM
 where
  my
   application was running. This result further convinces me that my
   application can be absolved of holding on to memory resources.
  
   I am not sure how to interpret these results ? For searching, I am
  using
   Without Shards : EmbeddedSolrServer
   With Shards :CommonsHttpSolrServer
   In terms of Solr objects this is what differs in my code between
 normal
   search and shards search (distributed search)
  
   After looking at Case 1, I thought that the CommonsHttpSolrServer was
  more
   memory efficient but Case 2 proved me wrong. Or could there still be
  memory
   leaks in my application ? Any thoughts, suggestions would be welcome.
  
   Regards
   Rahul
 
 
 
 

 --
 View this message in context:
 http://www.nabble.com/JVM-Heap-utilization---Memory-leaks-with-Solr-tp24802380p25018165.html
  Sent from the Solr - User mailing list archive at Nabble.com.




RE: JVM Heap utilization & Memory leaks with Solr

2009-08-19 Thread Fuad Efendi

Hi Rahul,

JRockit could be used at least in a test environment to monitor the JVM (and
troubleshoot SOLR; it is licensed for free for developers!); they even have an
Eclipse plugin now, and it is licensed by Oracle (BEA)... But, of course, in
large companies the test environment is in the hands of testers :)


But... 3700 fields will create (over time) 3700 arrays, each of size
5,000,000!!! Even if most of the fields are empty for most of the documents...
This applies to non-tokenized single-valued non-boolean fields only (Lucene
internals, FieldCache)... and it won't be GC-collected after user log-off...
prefer a dedicated box for SOLR.

-Fuad
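
To put a number on the paragraph above: a back-of-envelope worst-case FieldCache estimate in Java. The per-entry size and the assumption that every field ever gets sorted on are illustrative, not measured:

    public class FieldCacheEstimate {
        public static void main(String[] args) {
            long numDocs = 5000000L;   // ~5 million documents, as in Rahul's index
            int fields = 3700;         // single-valued non-boolean fields
            long bytesPerEntry = 8L;   // assumed 8 bytes per document per field
            long total = numDocs * fields * bytesPerEntry;
            // Worst case, if every field were ever sorted or faceted on:
            System.out.printf("worst-case FieldCache: ~%.1f GB%n", total / 1e9);
            // In practice only the fields actually used for sorting/faceting
            // get cached, so the real number is much smaller -- but steady
            // growth over time looks exactly like a leak.
        }
    }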


-Original Message-
From: Rahul R [mailto:rahul.s...@gmail.com] 
Sent: August-19-09 6:19 AM
To: solr-user@lucene.apache.org
Subject: Re: JVM Heap utilization & Memory leaks with Solr

Fuad,
We have around 5 million documents and around 3700 fields. All documents
will not have values for all the fields. JRockit is not approved for use
within my organization. But thanks for the info anyway.

Regards
Rahul

On Tue, Aug 18, 2009 at 9:41 AM, Funtick f...@efendi.ca wrote:


 BTW, you should really prefer JRockit which really rocks!!!

 Mission Control has the necessary tooling; and JRockit produces a _nice_
 exception stacktrace (explaining almost everything) even in case of OOM,
 which the Sun JVM still fails to produce.


 SolrServlet still catches Throwable:

} catch (Throwable e) {
  SolrException.log(log,e);
  sendErr(500, SolrException.toStr(e), request, response);
} finally {





 Rahul R wrote:
 
  Otis,
  Thank you for your response. I know there are a few variables here but
 the
  difference in memory utilization with and without shards somehow leads
me
  to
  believe that the leak could be within Solr.
 
  I tried using a profiling tool - Yourkit. The trial version was free for
  15
  days. But I couldn't find anything of significance.
 
  Regards
  Rahul
 
 
  On Tue, Aug 4, 2009 at 7:35 PM, Otis Gospodnetic
  otis_gospodne...@yahoo.com
  wrote:
 
  Hi Rahul,
 
  A) There are no known (to me) memory leaks.
  I think there are too many variables for a person to tell you what
  exactly
  is happening, plus you are dealing with the JVM here. :)
 
  Try jmap -histo:live PID-HERE | less and see what's using your memory.
 
  Otis
  --
  Sematext is hiring -- http://sematext.com/about/jobs.html?mls
  Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR
 
 
 
  - Original Message 
   From: Rahul R rahul.s...@gmail.com
   To: solr-user@lucene.apache.org
   Sent: Tuesday, August 4, 2009 1:09:06 AM
   Subject: JVM Heap utilization & Memory leaks with Solr
  
   I am trying to track memory utilization with my Application that uses
  Solr.
   Details of the setup :
   -3rd party Software : Solaris 10, Weblogic 10, jdk_150_14, Solr 1.3.0
   - Hardware : 12 CPU, 24 GB RAM
  
   For testing during PSR I am using a smaller subset of the actual data
  that I
   want to work with. Details of this smaller sub-set :
   - 5 million records, 4.5 GB index size
  
   Observations during PSR:
   A) I have allocated 3.2 GB for the JVM(s) that I used. After all
users
   logout and doing a force GC, only 60 % of the heap is reclaimed. As
  part
  of
   the logout process I am invalidating the HttpSession and doing a
  close()
  on
   CoreContainer. From my application's side, I don't believe I am
 holding
  on
   to any resource. I wanted to know if there are known issues
 surrounding
   memory leaks with Solr ?
   B) To further test this, I tried deploying with shards. 3.2 GB was
  allocated
   to each JVM. All JVMs had 96 % free heap space after start up. I got
  varying
   results with this.
   Case 1 : Used 6 weblogic domains. My application was deployed on 1
  domain.
   I split the 5 million index into 5 parts of 1 million each and used
  them
  as
   shards. After multiple users used the system and doing a force GC,
  around
  94
   - 96 % of heap was reclaimed in all the JVMs.
   Case 2: Used 2 weblogic domains. My application was deployed on 1
  domain.
  On
   the other, I deployed the entire 5 million part index as one shard.
  After
   multiple users used the system and doing a force GC, around 76 % of
 the
  heap
   was reclaimed in the shard JVM. And 96 % was reclaimed in the JVM
 where
  my
   application was running. This result further convinces me that my
   application can be absolved of holding on to memory resources.
  
   I am not sure how to interpret these results ? For searching, I am
  using
   Without Shards : EmbeddedSolrServer
   With Shards :CommonsHttpSolrServer
   In terms of Solr objects this is what differs in my code between
 normal
   search and shards search (distributed search)
  
   After looking at Case 1, I thought that the CommonsHttpSolrServer was
  more
   memory efficient but Case 2 proved me wrong. Or could there still be
  memory
   leaks in my application ? Any thoughts, suggestions would be welcome.
  
   Regards
   Rahul

Re: JVM Heap utilization & Memory leaks with Solr

2009-08-17 Thread Funtick

Can you tell me please how many non-tokenized single-valued fields your
schema uses, and how many documents?
Thanks,
Fuad


Rahul R wrote:
 
 My primary issue is not an Out of Memory error at run time. It is memory leaks:
 heap space not being released even after doing a force GC. So after some time,
 as progressively more heap gets utilized, I start running out of memory.
 The verdict, however, seems unanimous that there are no known memory leak
 issues within Solr. I am still looking at my application to analyse the
 problem. Thank you.
 
 On Thu, Aug 13, 2009 at 10:58 PM, Fuad Efendi f...@efendi.ca wrote:
 
 Most OutOfMemoryExceptions (if not 100%) happening with SOLR are because of
 http://lucene.apache.org/java/2_4_0/api/org/apache/lucene/search/FieldCache.html
 - it is used internally in Lucene to cache field values by document ID.

 My very long-term observation: SOLR can run without any problems for a few
 days/months, and an unpredictable OOM happens just because someone tried a
 sorted search, which will populate an array with the IDs of ALL documents in
 the index.

 The only solution: calculate exactly the amount of RAM needed for FieldCache...
 For instance, for 100,000,000 documents a single instance of FieldCache may
 require 8*100,000,000 bytes (8 bytes per document ID?), which is almost 1Gb
 (at least!)


 I didn't notice any memory leaks after I started to use 16Gb RAM for the SOLR
 instance (almost a year without any restart!)




 -Original Message-
 From: Rahul R [mailto:rahul.s...@gmail.com]
 Sent: August-13-09 1:25 AM
 To: solr-user@lucene.apache.org
 Subject: Re: JVM Heap utilization & Memory leaks with Solr

 *You should try to generate heap dumps and analyze the heap using a tool
 like the Eclipse Memory Analyzer. Maybe it helps spotting a group of
 objects holding a large amount of memory*

 The tool that I used also allows capturing heap snapshots. Eclipse had a
 lot of prerequisites; you need to apply some three or five patches before
 you can start using it. My observations with this tool were that some
 HashMaps were taking up a lot of space, although I could not pin it down
 to the exact HashMap. These would be either WebLogic's or Solr's. I will
 anyway give Eclipse's a try and see how it goes. Thanks for your input.

 Rahul

 On Wed, Aug 12, 2009 at 2:15 PM, Gunnar Wagenknecht
 gun...@wagenknecht.orgwrote:

  Rahul R schrieb:
   I tried using a profiling tool - Yourkit. The trial version was free
 for
  15
   days. But I couldn't find anything of significance.
 
  You should try to generate heap dumps and analyze the heap using a tool
  like the Eclipse Memory Analyzer. Maybe it helps spotting a group of
  objects holding a large amount of memory.
 
  -Gunnar
 
  --
  Gunnar Wagenknecht
  gun...@wagenknecht.org
  http://wagenknecht.org/
 
 



 
 

-- 
View this message in context: 
http://www.nabble.com/JVM-Heap-utilization---Memory-leaks-with-Solr-tp24802380p25017767.html
Sent from the Solr - User mailing list archive at Nabble.com.



Re: JVM Heap utilization & Memory leaks with Solr

2009-08-17 Thread Funtick

BTW, you should really prefer JRockit which really rocks!!!

Mission Control has the necessary tooling; and JRockit produces a _nice_
exception stacktrace (explaining almost everything) even in case of OOM,
which the Sun JVM still fails to produce.


SolrServlet still catches Throwable:

} catch (Throwable e) {
  SolrException.log(log,e);
  sendErr(500, SolrException.toStr(e), request, response);
} finally {





Rahul R wrote:
 
 Otis,
 Thank you for your response. I know there are a few variables here but the
 difference in memory utilization with and without shards somehow leads me
 to
 believe that the leak could be within Solr.
 
 I tried using a profiling tool - Yourkit. The trial version was free for
 15
 days. But I couldn't find anything of significance.
 
 Regards
 Rahul
 
 
 On Tue, Aug 4, 2009 at 7:35 PM, Otis Gospodnetic
 otis_gospodne...@yahoo.com
 wrote:
 
 Hi Rahul,

 A) There are no known (to me) memory leaks.
 I think there are too many variables for a person to tell you what
 exactly
 is happening, plus you are dealing with the JVM here. :)

 Try jmap -histo:live PID-HERE | less and see what's using your memory.

 Otis
 --
 Sematext is hiring -- http://sematext.com/about/jobs.html?mls
 Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR



 - Original Message 
  From: Rahul R rahul.s...@gmail.com
  To: solr-user@lucene.apache.org
  Sent: Tuesday, August 4, 2009 1:09:06 AM
  Subject: JVM Heap utilization & Memory leaks with Solr
 
  I am trying to track memory utilization with my Application that uses
 Solr.
  Details of the setup :
  -3rd party Software : Solaris 10, Weblogic 10, jdk_150_14, Solr 1.3.0
  - Hardware : 12 CPU, 24 GB RAM
 
  For testing during PSR I am using a smaller subset of the actual data
 that I
  want to work with. Details of this smaller sub-set :
  - 5 million records, 4.5 GB index size
 
  Observations during PSR:
  A) I have allocated 3.2 GB for the JVM(s) that I used. After all users
  logout and doing a force GC, only 60 % of the heap is reclaimed. As
 part
 of
  the logout process I am invalidating the HttpSession and doing a
 close()
 on
  CoreContainer. From my application's side, I don't believe I am holding
 on
  to any resource. I wanted to know if there are known issues surrounding
  memory leaks with Solr ?
  B) To further test this, I tried deploying with shards. 3.2 GB was
 allocated
  to each JVM. All JVMs had 96 % free heap space after start up. I got
 varying
  results with this.
  Case 1 : Used 6 weblogic domains. My application was deployed on 1
 domain.
  I split the 5 million index into 5 parts of 1 million each and used
 them
 as
  shards. After multiple users used the system and doing a force GC,
 around
 94
  - 96 % of heap was reclaimed in all the JVMs.
  Case 2: Used 2 weblogic domains. My application was deployed on 1
 domain.
 On
  the other, I deployed the entire 5 million part index as one shard.
 After
  multiple users used the system and doing a force GC, around 76 % of the
 heap
  was reclaimed in the shard JVM. And 96 % was reclaimed in the JVM where
 my
  application was running. This result further convinces me that my
  application can be absolved of holding on to memory resources.
 
  I am not sure how to interpret these results ? For searching, I am
 using
  Without Shards : EmbeddedSolrServer
  With Shards :CommonsHttpSolrServer
  In terms of Solr objects this is what differs in my code between normal
  search and shards search (distributed search)
 
  After looking at Case 1, I thought that the CommonsHttpSolrServer was
 more
  memory efficient but Case 2 proved me wrong. Or could there still be
 memory
  leaks in my application ? Any thoughts, suggestions would be welcome.
 
  Regards
  Rahul


 
 

-- 
View this message in context: 
http://www.nabble.com/JVM-Heap-utilization---Memory-leaks-with-Solr-tp24802380p25018165.html
Sent from the Solr - User mailing list archive at Nabble.com.



Re: JVM Heap utilization & Memory leaks with Solr

2009-08-16 Thread Rahul R
My primary issue is not an Out of Memory error at run time. It is memory leaks:
heap space not being released even after doing a force GC. So after some time,
as progressively more heap gets utilized, I start running out of memory.
The verdict, however, seems unanimous that there are no known memory leak
issues within Solr. I am still looking at my application to analyse the
problem. Thank you.

On Thu, Aug 13, 2009 at 10:58 PM, Fuad Efendi f...@efendi.ca wrote:

 Most OutOfMemoryExceptions (if not 100%) happening with SOLR are because of
 http://lucene.apache.org/java/2_4_0/api/org/apache/lucene/search/FieldCache.html
 - it is used internally in Lucene to cache field values by document ID.

 My very long-term observation: SOLR can run without any problems for a few
 days/months, and an unpredictable OOM happens just because someone tried a
 sorted search, which will populate an array with the IDs of ALL documents in
 the index.

 The only solution: calculate exactly the amount of RAM needed for FieldCache...
 For instance, for 100,000,000 documents a single instance of FieldCache may
 require 8*100,000,000 bytes (8 bytes per document ID?), which is almost 1Gb
 (at least!)


 I didn't notice any memory leaks after I started to use 16Gb RAM for the SOLR
 instance (almost a year without any restart!)




 -Original Message-
 From: Rahul R [mailto:rahul.s...@gmail.com]
 Sent: August-13-09 1:25 AM
 To: solr-user@lucene.apache.org
  Subject: Re: JVM Heap utilization & Memory leaks with Solr

 *You should try to generate heap dumps and analyze the heap using a tool
 like the Eclipse Memory Analyzer. Maybe it helps spotting a group of
 objects holding a large amount of memory*

 The tool that I used also allows capturing heap snapshots. Eclipse had a
 lot of prerequisites; you need to apply some three or five patches before
 you can start using it. My observations with this tool were that some
 HashMaps were taking up a lot of space, although I could not pin it down to
 the exact HashMap. These would be either WebLogic's or Solr's. I will
 anyway give Eclipse's a try and see how it goes. Thanks for your input.

 Rahul

 On Wed, Aug 12, 2009 at 2:15 PM, Gunnar Wagenknecht
 gun...@wagenknecht.orgwrote:

  Rahul R schrieb:
   I tried using a profiling tool - Yourkit. The trial version was free
 for
  15
   days. But I couldn't find anything of significance.
 
  You should try to generate heap dumps and analyze the heap using a tool
  like the Eclipse Memory Analyzer. Maybe it helps spotting a group of
  objects holding a large amount of memory.
 
  -Gunnar
 
  --
  Gunnar Wagenknecht
  gun...@wagenknecht.org
  http://wagenknecht.org/
 
 





RE: JVM Heap utilization & Memory leaks with Solr

2009-08-13 Thread Fuad Efendi
Most OutOfMemoryExceptions (if not 100%) happening with SOLR are because of
http://lucene.apache.org/java/2_4_0/api/org/apache/lucene/search/FieldCache.html
- it is used internally in Lucene to cache field values by document ID.

My very long-term observation: SOLR can run without any problems for a few
days/months, and an unpredictable OOM happens just because someone tried a
sorted search, which will populate an array with the IDs of ALL documents in
the index.

The only solution: calculate exactly the amount of RAM needed for FieldCache...
For instance, for 100,000,000 documents a single instance of FieldCache may
require 8*100,000,000 bytes (8 bytes per document ID?), which is almost 1Gb
(at least!)


I didn't notice any memory leaks after I started to use 16Gb RAM for the SOLR
instance (almost a year without any restart!)




-Original Message-
From: Rahul R [mailto:rahul.s...@gmail.com] 
Sent: August-13-09 1:25 AM
To: solr-user@lucene.apache.org
Subject: Re: JVM Heap utilization & Memory leaks with Solr

*You should try to generate heap dumps and analyze the heap using a tool
like the Eclipse Memory Analyzer. Maybe it helps spotting a group of
objects holding a large amount of memory*

The tool that I used also allows capturing heap snapshots. Eclipse had a
lot of prerequisites; you need to apply some three or five patches before
you can start using it. My observations with this tool were that some
HashMaps were taking up a lot of space, although I could not pin it down to
the exact HashMap. These would be either WebLogic's or Solr's. I will
anyway give Eclipse's a try and see how it goes. Thanks for your input.

Rahul

On Wed, Aug 12, 2009 at 2:15 PM, Gunnar Wagenknecht
gun...@wagenknecht.orgwrote:

 Rahul R schrieb:
  I tried using a profiling tool - Yourkit. The trial version was free for
 15
  days. But I couldn't find anything of significance.

 You should try to generate heap dumps and analyze the heap using a tool
 like the Eclipse Memory Analyzer. Maybe it helps spotting a group of
 objects holding a large amount of memory.

 -Gunnar

 --
 Gunnar Wagenknecht
 gun...@wagenknecht.org
 http://wagenknecht.org/






Re: JVM Heap utilization & Memory leaks with Solr

2009-08-12 Thread Gunnar Wagenknecht
Rahul R schrieb:
 I tried using a profiling tool - Yourkit. The trial version was free for 15
 days. But I couldn't find anything of significance.

You should try to generate heap dumps and analyze the heap using a tool
like the Eclipse Memory Analyzer. Maybe it helps spotting a group of
objects holding a large amount of memory.

-Gunnar

-- 
Gunnar Wagenknecht
gun...@wagenknecht.org
http://wagenknecht.org/



Re: JVM Heap utilization & Memory leaks with Solr

2009-08-12 Thread Rahul R
*You should try to generate heap dumps and analyze the heap using a tool
like the Eclipse Memory Analyzer. Maybe it helps spotting a group of
objects holding a large amount of memory*

The tool that I used also allows capturing heap snapshots. Eclipse had a
lot of prerequisites; you need to apply some three or five patches before
you can start using it. My observations with this tool were that some
HashMaps were taking up a lot of space, although I could not pin it down to
the exact HashMap. These would be either WebLogic's or Solr's. I will
anyway give Eclipse's a try and see how it goes. Thanks for your input.

Rahul

On Wed, Aug 12, 2009 at 2:15 PM, Gunnar Wagenknecht
gun...@wagenknecht.orgwrote:

 Rahul R schrieb:
  I tried using a profiling tool - Yourkit. The trial version was free for
 15
  days. But I couldn't find anything of significance.

 You should try to generate heap dumps and analyze the heap using a tool
 like the Eclipse Memory Analyzer. Maybe it helps spotting a group of
 objects holding a large amount of memory.

 -Gunnar

 --
 Gunnar Wagenknecht
 gun...@wagenknecht.org
 http://wagenknecht.org/




Re: JVM Heap utilization & Memory leaks with Solr

2009-08-04 Thread Otis Gospodnetic
Hi Rahul,

A) There are no known (to me) memory leaks.
I think there are too many variables for a person to tell you what exactly is 
happening, plus you are dealing with the JVM here. :)

Try jmap -histo:live PID-HERE | less and see what's using your memory.

Otis
--
Sematext is hiring -- http://sematext.com/about/jobs.html?mls
Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR
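
If you can run code inside (or attach to) the same JVM, a rough in-process complement to jmap's histogram is to print per-pool heap usage after requesting a GC. A minimal sketch using only standard java.lang.management APIs, nothing Solr-specific:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    public class PoolUsage {
        public static void main(String[] args) {
            System.gc(); // best-effort hint; the JVM may ignore it
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                // A pool like "PS Old Gen" staying near its max even after GC
                // is where leaked (still-reachable) objects end up.
                System.out.printf("%-25s %,12d bytes used%n",
                        pool.getName(), pool.getUsage().getUsed());
            }
        }
    }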



- Original Message 
 From: Rahul R rahul.s...@gmail.com
 To: solr-user@lucene.apache.org
 Sent: Tuesday, August 4, 2009 1:09:06 AM
 Subject: JVM Heap utilization & Memory leaks with Solr
 
 I am trying to track memory utilization with my Application that uses Solr.
 Details of the setup :
 -3rd party Software : Solaris 10, Weblogic 10, jdk_150_14, Solr 1.3.0
 - Hardware : 12 CPU, 24 GB RAM
 
 For testing during PSR I am using a smaller subset of the actual data that I
 want to work with. Details of this smaller sub-set :
 - 5 million records, 4.5 GB index size
 
 Observations during PSR:
 A) I have allocated 3.2 GB for the JVM(s) that I used. After all users
 logout and doing a force GC, only 60 % of the heap is reclaimed. As part of
 the logout process I am invalidating the HttpSession and doing a close() on
 CoreContainer. From my application's side, I don't believe I am holding on
 to any resource. I wanted to know if there are known issues surrounding
 memory leaks with Solr ?
 B) To further test this, I tried deploying with shards. 3.2 GB was allocated
 to each JVM. All JVMs had 96 % free heap space after start up. I got varying
 results with this.
 Case 1 : Used 6 weblogic domains. My application was deployed on 1 domain.
 I split the 5 million index into 5 parts of 1 million each and used them as
 shards. After multiple users used the system and doing a force GC, around 94
 - 96 % of heap was reclaimed in all the JVMs.
 Case 2: Used 2 weblogic domains. My application was deployed on 1 domain. On
 the other, I deployed the entire 5 million part index as one shard. After
 multiple users used the system and doing a force GC, around 76 % of the heap
 was reclaimed in the shard JVM. And 96 % was reclaimed in the JVM where my
 application was running. This result further convinces me that my
 application can be absolved of holding on to memory resources.
 
 I am not sure how to interpret these results ? For searching, I am using
 Without Shards : EmbeddedSolrServer
 With Shards :CommonsHttpSolrServer
 In terms of Solr objects this is what differs in my code between normal
 search and shards search (distributed search)
 
 After looking at Case 1, I thought that the CommonsHttpSolrServer was more
 memory efficient but Case 2 proved me wrong. Or could there still be memory
 leaks in my application ? Any thoughts, suggestions would be welcome.
 
 Regards
 Rahul



Re: JVM Heap utilization & Memory leaks with Solr

2009-08-04 Thread Rahul R
Otis,
Thank you for your response. I know there are a few variables here but the
difference in memory utilization with and without shards somehow leads me to
believe that the leak could be within Solr.

I tried using a profiling tool - Yourkit. The trial version was free for 15
days. But I couldn't find anything of significance.

Regards
Rahul


On Tue, Aug 4, 2009 at 7:35 PM, Otis Gospodnetic otis_gospodne...@yahoo.com
 wrote:

 Hi Rahul,

 A) There are no known (to me) memory leaks.
 I think there are too many variables for a person to tell you what exactly
 is happening, plus you are dealing with the JVM here. :)

 Try jmap -histo:live PID-HERE | less and see what's using your memory.

 Otis
 --
 Sematext is hiring -- http://sematext.com/about/jobs.html?mls
 Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR



 - Original Message 
  From: Rahul R rahul.s...@gmail.com
  To: solr-user@lucene.apache.org
  Sent: Tuesday, August 4, 2009 1:09:06 AM
  Subject: JVM Heap utilization & Memory leaks with Solr
 
  I am trying to track memory utilization with my Application that uses
 Solr.
  Details of the setup :
  -3rd party Software : Solaris 10, Weblogic 10, jdk_150_14, Solr 1.3.0
  - Hardware : 12 CPU, 24 GB RAM
 
  For testing during PSR I am using a smaller subset of the actual data
 that I
  want to work with. Details of this smaller sub-set :
  - 5 million records, 4.5 GB index size
 
  Observations during PSR:
  A) I have allocated 3.2 GB for the JVM(s) that I used. After all users
  logout and doing a force GC, only 60 % of the heap is reclaimed. As part
 of
  the logout process I am invalidating the HttpSession and doing a close()
 on
  CoreContainer. From my application's side, I don't believe I am holding
 on
  to any resource. I wanted to know if there are known issues surrounding
  memory leaks with Solr ?
  B) To further test this, I tried deploying with shards. 3.2 GB was
 allocated
  to each JVM. All JVMs had 96 % free heap space after start up. I got
 varying
  results with this.
  Case 1 : Used 6 weblogic domains. My application was deployed on 1
 domain.
  I split the 5 million index into 5 parts of 1 million each and used them
 as
  shards. After multiple users used the system and doing a force GC, around
 94
  - 96 % of heap was reclaimed in all the JVMs.
  Case 2: Used 2 weblogic domains. My application was deployed on 1 domain.
 On
  the other, I deployed the entire 5 million part index as one shard. After
  multiple users used the system and doing a force GC, around 76 % of the
 heap
  was reclaimed in the shard JVM. And 96 % was reclaimed in the JVM where
 my
  application was running. This result further convinces me that my
  application can be absolved of holding on to memory resources.
 
  I am not sure how to interpret these results ? For searching, I am using
  Without Shards : EmbeddedSolrServer
  With Shards :CommonsHttpSolrServer
  In terms of Solr objects this is what differs in my code between normal
  search and shards search (distributed search)
 
  After looking at Case 1, I thought that the CommonsHttpSolrServer was
 more
  memory efficient but Case 2 proved me wrong. Or could there still be
 memory
  leaks in my application ? Any thoughts, suggestions would be welcome.
 
  Regards
  Rahul




JVM Heap utilization & Memory leaks with Solr

2009-08-03 Thread Rahul R
I am trying to track memory utilization with my Application that uses Solr.
Details of the setup :
 -3rd party Software : Solaris 10, Weblogic 10, jdk_150_14, Solr 1.3.0
- Hardware : 12 CPU, 24 GB RAM

For testing during PSR I am using a smaller subset of the actual data that I
want to work with. Details of this smaller sub-set :
- 5 million records, 4.5 GB index size

Observations during PSR:
A) I have allocated 3.2 GB for the JVM(s) that I used. After all users
logout and doing a force GC, only 60 % of the heap is reclaimed. As part of
the logout process I am invalidating the HttpSession and doing a close() on
CoreContainer. From my application's side, I don't believe I am holding on
to any resource. I wanted to know if there are known issues surrounding
memory leaks with Solr ?
B) To further test this, I tried deploying with shards. 3.2 GB was allocated
to each JVM. All JVMs had 96 % free heap space after start up. I got varying
results with this.
Case 1 : Used 6 weblogic domains. My application was deployed on 1 domain.
I split the 5 million index into 5 parts of 1 million each and used them as
shards. After multiple users used the system and doing a force GC, around 94
- 96 % of heap was reclaimed in all the JVMs.
Case 2: Used 2 weblogic domains. My application was deployed on 1 domain. On
the other, I deployed the entire 5 million part index as one shard. After
multiple users used the system and doing a force GC, around 76 % of the heap
was reclaimed in the shard JVM. And 96 % was reclaimed in the JVM where my
application was running. This result further convinces me that my
application can be absolved of holding on to memory resources.

I am not sure how to interpret these results ? For searching, I am using
Without Shards : EmbeddedSolrServer
With Shards :CommonsHttpSolrServer
In terms of Solr objects this is what differs in my code between normal
search and shards search (distributed search)

After looking at Case 1, I thought that the CommonsHttpSolrServer was more
memory efficient but Case 2 proved me wrong. Or could there still be memory
leaks in my application ? Any thoughts, suggestions would be welcome.

Regards
Rahul
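
For the cleanup step described in observation A), a minimal sketch of the 1.3-era embedded setup and teardown. The solr home and core name are hypothetical, and the close() Rahul mentions presumably maps to CoreContainer.shutdown() in this API; adapt to your version:

    import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
    import org.apache.solr.core.CoreContainer;

    public class EmbeddedLifecycle {
        public static void main(String[] args) throws Exception {
            System.setProperty("solr.solr.home", "/path/to/solr/home"); // hypothetical
            CoreContainer.Initializer initializer = new CoreContainer.Initializer();
            CoreContainer container = initializer.initialize();
            EmbeddedSolrServer server = new EmbeddedSolrServer(container, "core0");
            try {
                // ... run queries for the user's session via server ...
            } finally {
                // Releases searchers, caches and open index files held by the
                // cores; skipping this is a classic way to pin old heap and
                // old index generations.
                container.shutdown();
            }
        }
    }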