Is there anything that says something about that bug?
2013/8/28 Dan Davis <dansm...@gmail.com>
This could be an operating-system problem rather than a Solr problem.
CentOS 6.4 (Linux kernel 2.6.32) may have some issues with page flushing,
and I would read up on that.
The VM parameters can be tuned in /etc/sysctl.conf
On 9/9/2013 10:35 AM, P Williams wrote:
Is it odd that my index is ~16GB but top shows 30GB in virtual memory?
Would the extra be for the field and filter caches I've increased in size?
This should probably be a new thread, but it might have some
applicability here, so I'm replying.
Hi,
I've been seeing the same thing on CentOS: high physical memory use with
low JVM memory use. I came to the conclusion that this was expected
behaviour. Using top, I noticed that my solr user's java process has
virtual memory allocated of about twice the size of the index, actual is
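The pattern described above (virtual size far larger than resident or heap) is what memory-mapped index files produce: Lucene's MMapDirectory maps whole segment files into the process address space, and the kernel pages them in on demand. A minimal Python sketch of the mechanism (not Solr code; the sparse temp file stands in for an index segment):

```python
import mmap
import os
import tempfile

# Stand-in for a large index file; sparse, so it costs almost no disk or RAM yet.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 64 * 1024 * 1024)  # 64 MiB

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # The entire file now counts toward the process's *virtual* size,
    # but physical pages are only faulted in when bytes are read.
    size_mb = len(mm) // (1024 * 1024)
    first_byte = mm[0]  # touching one byte pages in ~4 KiB, not 64 MiB
    mm.close()

os.close(fd)
os.remove(path)
```

Tools like top count the whole mapping under VIRT, while RES grows only as pages are actually touched, which is why virtual size can dwarf both the heap and the resident set.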
This could be an operating-system problem rather than a Solr problem.
CentOS 6.4 (Linux kernel 2.6.32) may have some issues with page flushing,
and I would read up on that.
The VM parameters can be tuned in /etc/sysctl.conf
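For reference, the dirty-page knobs being alluded to live under the `vm.` prefix in /etc/sysctl.conf. The values below are purely illustrative, not a recommendation from this thread:

```
# /etc/sysctl.conf -- illustrative values only.
# Start writing dirty pages back sooner and in smaller batches,
# to avoid large, bursty flushes stalling the box.
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
# Apply without rebooting: sysctl -p
```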
On Sun, Aug 25, 2013 at 4:23 PM, Furkan KAMACI
Hi Erick;
I wanted to get a quick answer; that's why I asked my question that way.
Error is as follows:
INFO - 2013-08-21 22:01:30.978;
org.apache.solr.update.processor.LogUpdateProcessor; [collection1]
webapp=/solr path=/update params={wt=javabin&version=2}
{add=[com.deviantart.reachmeh
I ran a test on my SolrCloud. I tried to send 100 million documents into my
node, which has no replica, via Hadoop. When the document count sent to that
node is around 30 million, the RAM usage of my machine becomes 99% (Solr heap
usage is not 99%; it uses just 3GB-4GB of RAM). After a while my node
This is sounding like an XY problem. What are you measuring
when you say RAM usage is 99%? Is this virtual memory? See:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
What errors are you seeing when you say "my node stops receiving
documents"?
How are you sending 10M