On 4/22/2019 3:19 AM, vishal patel wrote:
-- 228634803 maxDoc of one shard [we have 26 collections in production,
with 2 shards and 2 replicas]
228 million is quite a lot of documents.
Can you gather and share the screenshot described on the following wiki
page?
There seem to be two Solr instances
On 4/18/2019 1:00 AM, vishal patel wrote:
Thanks for your reply.
You are right. I checked the GC log with GCViewer and noticed that the pause
time was 111.4546597 secs.
2019-04-08T13:52:09.939+0100: 796800.430: [GC (Allocation Failure) 796800.431:
[ParNew
Desired survivor size 2415919104 bytes
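
As a quick sanity check alongside GCViewer, a short program can flag every
stop-the-world pause above a threshold. The sketch below is my own
illustration, not something from this thread: it assumes a JDK 8 GC log
produced with -XX:+PrintGCDetails, where each collection record ends with a
"[Times: user=... sys=... real=N.NN secs]" line, and the 10-second threshold
and the class name GcPauseScan are arbitrary choices.

// Hypothetical helper: print GC log lines whose wall-clock ("real")
// time exceeds a threshold, instead of loading the log into GCViewer.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcPauseScan {
    // Matches the wall-clock time JDK 8 appends to each collection record.
    private static final Pattern REAL = Pattern.compile("real=([0-9.]+) secs");

    public static void main(String[] args) throws IOException {
        double threshold = 10.0; // seconds; pick whatever "too long" means for you
        Files.lines(Paths.get(args[0]))
             .filter(line -> {
                 Matcher m = REAL.matcher(line);
                 return m.find() && Double.parseDouble(m.group(1)) > threshold;
             })
             .forEach(System.out::println);
    }
}

Run against the GC log (for example "java GcPauseScan solr_gc.log"; the file
name is just an example) and a 111-second pause like the one above should
surface immediately.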
To: solr-user@lucene.apache.org
Subject: Re: Replica becomes leader when shard was taking a time to update
document - Solr 6.1.0
Specifically a _leader_ being put into the down or recovering state is almost
always because ZooKeeper cannot ping it and get a response back before it times
out. This also points to large GC pauses on the Solr node. Using something like
GCViewer on the GC logs at the time of the problem will help.
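
For reference, the setting that governs this is zkClientTimeout in solr.xml,
which is also the answer to the timeout question below. The excerpt shows the
stock form of the entry with its default 30-second value; raising it only buys
headroom to survive a pause, it does not fix the GC problem itself.

<solr>
  <solrcloud>
    <!-- ZooKeeper session timeout in ms; default shown. A GC pause longer
         than this expires the node's ephemeral znodes, so a 111-second
         pause will always make the leader look dead to ZooKeeper. -->
    <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
  </solrcloud>
</solr>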
On 4/17/2019 6:25 AM, vishal patel wrote:
Why did shard1 take 1.8 minutes for an update? And if the update took that
long, why did replica1 try to become the leader? Do we need to change any
timeout?
There's no information here that can tell us why the update took so
long. My best guess is a long garbage collection pause.
We have 2 shards and 2 replicas on the production server. Somehow replica1
became the leader while a commit was running on shard1.
Log ::
***shard1***
2019-04-08 12:52:09.930 INFO
(searcherExecutor-30-thread-1-processing-n:shard1:8983_solr x:productData
s:shard1 c:productData r:core_node1) [c