Hi - it's certainly not a rule of thumb, but RES usually grows higher
than Xmx, so keep an eye on it.
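RES above Xmx is expected for Solr: resident memory counts the heap plus PermGen, thread stacks, NIO direct buffers, and whatever mmapped index pages happen to be resident at the moment. A generic way to see the kernel's view of any process (a sketch only; the PID here is a placeholder, point it at the Solr process):

```shell
# Generic sketch: the kernel's accounting of a process's resident/swapped memory.
# $$ (this shell) is a placeholder -- substitute the Solr process id.
PID=$$
grep -E 'VmHWM|VmRSS|VmSwap' /proc/$PID/status   # peak RSS, current RSS, swapped-out
```

VmHWM (the high-water mark) is handy here because it shows the worst RES the process ever hit, not just the current value.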
-Original message-
From:vsilgalis vsilga...@gmail.com
Sent: Wednesday 28th August 2013 2:53
To: solr-user@lucene.apache.org
Subject: Re: SOLR 4.2.1 - High Resident Memory Usage
So we actually had 3 of the 6 machines automatically restart the SOLR service as
memory pressure got too high: 2 died by SIGABRT and one was taken out by the
OOM killer. I dropped a pmap on one of the solr services before it died.
Basically I need to figure out what the other direct memory references are
outside the heap.
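A rough way to slice a pmap dump like that to spot the largest non-heap mappings (assumes procps pmap; SOLR_PID is a placeholder for the actual process id, which you might find with something like pgrep):

```shell
# Placeholder pid -- set this to the Solr process id before running.
SOLR_PID=${SOLR_PID:-$$}
pmap -x "$SOLR_PID" | tail -n 1                 # grand total: virtual size, RSS, dirty (kB)
pmap -x "$SOLR_PID" | sort -n -k3 | tail -n 5   # the five largest mappings by RSS
```

Anything large that isn't the heap mapping or an index file is a candidate for the "other direct memory" being hunted here.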
We have a 2 shard SOLRCloud implementation with 6 servers in production. We
have allocated 24GB to each server and are using JVM max memory settings of
-Xmx14336m on each of the servers. We are using the same embedded jetty that
SOLR comes with. The JVM side of things looks like what I'd expect
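For reference, the settings above translate to a start line roughly like the following. This is a sketch only: everything besides -Xmx is an assumption and not from this thread, but capping -XX:MaxDirectMemorySize is one way to put a hard bound on the direct-buffer portion of RES.

```shell
# Illustrative only -- the direct memory cap and -Xms value are assumptions.
java -Xms14336m -Xmx14336m \
     -XX:MaxDirectMemorySize=2g \
     -jar start.jar
```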
On 8/27/2013 11:56 AM, vsilgalis wrote:
We have a 2 shard SOLRCloud implementation with 6 servers in production. [...]
thanks for the quick reply.
I made these changes to rule out what I could around how Linux is handling this stuff.
Yes I'm using the default swappiness setting of 60, but at this point it
looks like the machine is swapping now because of low memory.
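For the archive, the standard way to confirm active swapping with the procps tools is to watch the si/so columns over a few samples:

```shell
vmstat 5 3   # three samples, 5 s apart; nonzero si/so columns mean active swapping
free -m      # how much memory is page cache vs. actually used by applications
```

A one-off nonzero si/so right after startup can be old pages being faulted back in; sustained nonzero values under load are the real signal.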
Here is the vmstat and free -m results:
On 8/27/2013 3:32 PM, vsilgalis wrote:
thanks for the quick reply. [...]
Ok, this whole topic usually gives me heartburn. So I'll just point out
an interesting blog on this from Mike McCandless:
http://blog.mikemccandless.com/2011/04/just-say-no-to-swapping.html
At least tuning swappiness to 0 will tell you whether it's real or phantom.
Of course I'd be trying it on a
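For anyone trying that suggestion, a sketch of the knob itself (changing it needs root; the sysctl.conf line is the usual way to make it survive a reboot):

```shell
cat /proc/sys/vm/swappiness      # current value (default 60)
# to try swappiness 0 (root required, takes effect immediately):
#   sysctl -w vm.swappiness=0
# persist across reboots by adding "vm.swappiness = 0" to /etc/sysctl.conf
```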
dash:
http://lucene.472066.n3.nabble.com/file/n4086902/solr_dash1.png
JVM section:
http://lucene.472066.n3.nabble.com/file/n4086902/solr_dash2.png
ps output:
http://lucene.472066.n3.nabble.com/file/n4086902/solr_ps_out.png
Erick that may be one of the ways I approach this, I just want to
On 8/27/2013 4:17 PM, Erick Erickson wrote:
Ok, this whole topic usually gives me heartburn. [...]
On 8/27/2013 4:48 PM, vsilgalis wrote:
dash: http://lucene.472066.n3.nabble.com/file/n4086902/solr_dash1.png [...]
Hi
-Original message-
From:Shawn Heisey s...@elyograg.org
Sent: Wednesday 28th August 2013 0:50
To: solr-user@lucene.apache.org
Subject: Re: SOLR 4.2.1 - High Resident Memory Usage
On 8/27/2013 4:17 PM, Erick Erickson wrote:
Ok, this whole topic usually gives me heartburn. [...]
huge pages: http://lucene.472066.n3.nabble.com/file/n4086923/huge.png
That doesn't seem to be a problem.
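For anyone checking the same thing from a shell rather than a dashboard, a generic sketch (the /sys path may not exist on every kernel, hence the fallback):

```shell
grep -i huge /proc/meminfo        # HugePages_* counters and Hugepagesize
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || true   # e.g. [always] madvise never
```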
Markus, are you saying that I should plan on resident memory being at least
double my heap size? I haven't run into issues around this before but then
again I don't know everything.
Is this a rule of