Aug 06, 2012 10:05:55 AM org.apache.solr.common.SolrException log
SEVERE: null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
	at org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:456)
	at
Stack trace looks normal - it's just a multi-term query instantiating
a bitset. The memory is being taken up somewhere else.
How many documents are in your index?
Can you get a heap dump or use some other memory profiler to see
what's taking up the space?
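One way to answer that question (a sketch, not from the thread; the class name and output path below are only examples) is to trigger a dump with `jmap -dump:live,format=b,file=heap.hprof <pid>` and open it in a memory analyzer. The same dump can also be produced from inside the JVM via the HotSpot diagnostic MXBean:

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDump {
    // Write a heap dump via the HotSpot diagnostic MXBean.
    // live=true dumps only reachable objects, like jmap's -dump:live.
    static void dump(String path, boolean live) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, live);
    }

    public static void main(String[] args) throws Exception {
        File out = File.createTempFile("solr-heap", ".hprof");
        out.delete(); // dumpHeap refuses to overwrite an existing file
        dump(out.getAbsolutePath(), true);
        System.out.println("dumped to " + out);
    }
}
```

The resulting .hprof file can then be loaded into any heap analyzer to see which objects dominate the heap.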
If I stop querying for more than ten minutes,
There are 400 million documents in a shard, and each document is less than 1 KB.
The data file _**.fdt is 149 GB.
Does recovery need a large amount of memory while downloading, or after the download finishes?
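For context on the earlier bitset remark, a back-of-the-envelope sketch (an assumption of one bit per document, as a Lucene-style FixedBitSet would allocate; the class name is hypothetical): a single bitset over 400 million documents costs only a few tens of megabytes, so one multi-term query alone is unlikely to exhaust the heap.

```java
// Rough cost of a one-bit-per-document bitset, as allocated for a
// multi-term query (a sketch, not Solr's exact accounting).
public class BitsetCost {
    // Bytes needed for one bit per document, rounded up to whole bytes.
    static long bitsetBytes(long numDocs) {
        return (numDocs + 7) / 8;
    }

    public static void main(String[] args) {
        long docs = 400_000_000L; // shard size reported in this thread
        System.out.println(bitsetBytes(docs) + " bytes per bitset");
    }
}
```

At 400 million documents that works out to 50,000,000 bytes, roughly 48 MB per bitset, which supports the point that the bulk of the memory must be going somewhere else.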
I found some log entries before the OOM, shown below:
Aug 06, 2012 9:43:04 AM org.apache.solr.core.SolrCore execute
INFO: [blog]
Perhaps this describes your problem:
https://issues.apache.org/jira/browse/SOLR-3685
-Original message-
From: Jam Luo cooljam2...@gmail.com
Sent: Tue 07-Aug-2012 11:52
To: solr-user@lucene.apache.org
Subject: Recovery problem in solrcloud
On Aug 7, 2012, at 5:49 AM, Jam Luo cooljam2...@gmail.com wrote:
Hi
I have big index data files, more than 200 GB, and there are two Solr
instances in a shard. The leader starts up and is OK, but the peer always hits
OOM when it starts up.
Can you share the OOM msg and stacktrace please?
The peer
Still no idea on the OOM - please send the stacktrace if you can.
As for doing a replication recovery when it should not be necessary, Yonik just
committed a fix for that a bit ago.
On Aug 7, 2012, at 9:41 AM, Mark Miller markrmil...@gmail.com wrote:
On Aug 7, 2012, at 5:49 AM, Jam Luo