Sent: Thursday, March 27, 2014 2:59 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr 4.3.1 memory swapping

On 3/26/2014 10:26 PM, Darrell Burgan wrote:
Okay well it didn't take long for the swapping to start happening on one of
our nodes. Here is a screen shot of the Solr console:
https://s3-us-west-2.amazonaws.com/panswers-darrell/solr.png
And here is a shot of top, with processes sorted by [...]
It could be related to NUMA. Check out this article about it, which has some
fixes that worked for me:
http://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/
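For reference, the main workaround that article describes is starting the JVM with its memory interleaved across all NUMA nodes rather than allocated node-locally. A minimal sketch of what that looks like for a Solr start command — the heap sizes and jar path here are illustrative, not taken from this thread:

```shell
# Sketch only: interleave JVM allocations across all NUMA nodes so a
# single node's zone can't fill up and push the kernel into swapping.
# Requires the numactl package; heap sizes and jar path are made-up
# examples, not values from this thread.
numactl --interleave=all java -Xms8g -Xmx8g -jar start.jar
```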
Sent: Wednesday, March 26, 2014 8:14 PM
To: solr-user@lucene.apache.org
Subject: RE: Solr 4.3.1 memory swapping

Thanks - we're currently running Solr inside of RHEL virtual machines
inside of VMware. Running numactl --hardware inside the VM shows the
following:

available: 1 nodes (0)
node 0 size: 16139 MB
node 0 free: 364 MB
node distances:
node   0
  0:  10

So there is only one node being [...] to work with. Could it be that the
swapping is due to the memory-mapped file in some way?

-Original Message-
From: Lan [mailto:dung@gmail.com]
Sent: Wednesday, March 26, 2014 12:45 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 4.3.1 memory swapping

It could be related to NUMA [...]
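On the memory-mapped-file question: Solr 4.x on 64-bit Linux maps the index into virtual memory by default (Lucene's MMapDirectory), and under memory pressure the kernel may prefer swapping out idle application pages to shrinking the page cache. One setting worth checking on the VM — this is a commonly suggested mitigation for mmap-heavy workloads, not something confirmed as the cause in this thread:

```shell
# How aggressively does the kernel swap application pages instead of
# dropping page cache? RHEL defaults this to 60; mmap-heavy workloads
# like Lucene/Solr are often run with a much lower value.
cat /proc/sys/vm/swappiness
# Typical mitigation (add vm.swappiness=1 to /etc/sysctl.conf to make
# it survive reboots):
sudo sysctl -w vm.swappiness=1
```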
Hello all, we have a SolrCloud implementation in production, with two servers
running Solr 4.3.1 in a SolrCloud configuration. Our search index is about
70-80GB in size. The trouble is that after several days of uptime, we will
suddenly have periods where the operating system Solr is running [...]
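A rough arithmetic check on the numbers in this thread may explain the symptom independently of NUMA: the VM has about 16 GB of RAM while the index is 70-80 GB, so whatever the JVM heap doesn't take is the only page cache available for the memory-mapped index. A sketch assuming an 8 GB heap — the actual heap size is not given anywhere in the thread:

```shell
# Illustrative memory budget: VM RAM comes from the numactl output
# above; the 8 GB heap is an assumption, not a figure from the thread.
vm_ram_mb=16139
heap_mb=8192
echo "left for OS page cache: $((vm_ram_mb - heap_mb)) MB"
# With roughly 8 GB of cache against a 70-80 GB index, constant page
# eviction and re-faulting is expected, and swapping can begin as soon
# as other processes compete for the same memory.
```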