Re: Suddenly OOM

2012-05-13 Thread Otis Gospodnetic
Hi Jasper,

Hm, not sure what it could be without a closer inspection.  If you facet or 
sort, those two operations can use lots of memory.  Check your Solr caches and 
make sure they don't have crazy high values.  Consider upgrading to Solr 3.6; 
it uses less memory than previous versions of Solr.  Consider dumping the JVM 
heap and inspecting it.  Make sure norms are not turned on for fields that 
don't need them.
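For reference, a minimal sketch of where those knobs live, with illustrative values (not tuned recommendations for this index): cache sizes are set in solrconfig.xml, and norms are disabled per field in schema.xml. The field name below is hypothetical.

```xml
<!-- solrconfig.xml: oversized caches are a common source of heap pressure -->
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
<documentCache class="solr.LRUCache" size="512" initialSize="512"/>

<!-- schema.xml: omitNorms saves one byte per document per field that carries norms -->
<field name="category" type="string" indexed="true" stored="true" omitNorms="true"/>
```

For the heap dump, `jmap -dump:format=b,file=heap.hprof <pid>` against the running JVM, opened in a tool such as Eclipse MAT, will show which structures dominate the heap.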

Otis

Performance Monitoring for Solr / ElasticSearch / HBase - 
http://sematext.com/spm 




 From: Jasper Floor jasper.fl...@m4n.nl
To: solr-user@lucene.apache.org 
Sent: Friday, May 11, 2012 6:06 AM
Subject: Re: Suddenly OOM
 
Our ramBuffer is the default. The Xmx is 75% of the available memory
on the machine, which is 4GB. We've tried increasing it to 85% and even
gave the machine 10GB of memory, so we more than doubled the memory.
The amount of data didn't double, but where the memory used to be
enough, it now never seems to be enough.

Regards,
Jasper


Re: Suddenly OOM

2012-05-11 Thread Jasper Floor
Our ramBuffer is the default. The Xmx is 75% of the available memory
on the machine, which is 4GB. We've tried increasing it to 85% and even
gave the machine 10GB of memory, so we more than doubled the memory.
The amount of data didn't double, but where the memory used to be
enough, it now never seems to be enough.

Regards,
Jasper

On Thu, May 10, 2012 at 6:03 PM, Otis Gospodnetic
otis_gospodne...@yahoo.com wrote:
 Jasper,

 The simple answer is to increase -Xmx :)
 What is your ramBufferSizeMB (solrconfig.xml) set to?  Default is 32 (MB).

 That autocommit you mentioned is a DB commit, not a Solr one, right?  If so,
 why is a commit needed when you *read* data from the DB?

 Otis
 



Suddenly OOM

2012-05-10 Thread Jasper Floor
Hi all,

we've been running Solr 1.4 for about a year with no real problems. As
of Monday it became impossible to do a full import on our master
because of an OOM. What I find strange is that even after we more than
doubled the available memory, there would still always be an OOM.  We
seem to have reached a magic number of documents beyond which Solr
requires infinite memory (or at least more than 2.5x what it previously
needed, which is the same as infinite unless we invest in more
resources).

We have solved the immediate problem by changing autocommit=false,
holdability=CLOSE_CURSORS_AT_COMMIT, batchSize=1. I don't think
holdability does very much here, as I believe it is the default
behavior. batchSize certainly has a direct effect on performance (about
a 3x time difference between the two values we tried). The autocommit
is a problem for us, however: it leaves transactions active in the DB,
which may block other processes.

We have about 5.1 million documents in the index which is about 2.2 gigabytes.

A full index is a rare operation for us, but when we need it we also
need it to work (thank you, Captain Obvious).

With the settings above a full index takes 15 minutes. We anticipate
handling at least 10x the amount of data in the future. I actually hope
to have Solr 4 by then, but I can't sell a product that isn't finalized
yet here.


Thanks for any insight you can give.

Regards,
Jasper


Re: Suddenly OOM

2012-05-10 Thread Godfrey Obinchu
You need to tune garbage collection on your JVM to handle the OOM.

Sent from my iPhone
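A possible starting point, assuming a Sun/Oracle JVM of that era; the heap size and dump path are placeholders, and GC tuning alone won't help if the heap is simply too small for the data set:

```shell
# Illustrative JVM options for a Solr 1.4-era servlet container (placeholder values)
JAVA_OPTS="-Xms3g -Xmx3g \
  -XX:+UseConcMarkSweepGC \
  -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/solr \
  -verbose:gc -XX:+PrintGCDetails"
```

-XX:+HeapDumpOnOutOfMemoryError at least captures a heap dump for post-mortem analysis the next time the OOM happens.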



Re: Suddenly OOM

2012-05-10 Thread Otis Gospodnetic
Jasper,

The simple answer is to increase -Xmx :)
What is your ramBufferSizeMB (solrconfig.xml) set to?  Default is 32 (MB).

That autocommit you mentioned is a DB commit, not a Solr one, right?  If so, 
why is a commit needed when you *read* data from the DB?

Otis 
