On 10/5/2018 9:15 AM, Ganesh Sethuraman wrote:
I am not sure the Solr logs and GC logs were clear in my previous mail.
Re-posting them here for your reference.
Here is the full Solr log file (note that it is at the INFO log level):
https://raw.githubusercontent.com/ganeshmailbox/har/master/SolrLogFile
Reading the ZK transaction log could be an issue, as ZK seems to be
sensitive to this (
https://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#The+Log+Directory
):
> incorrect placement of the transaction log
> The most performance critical part of ZooKeeper is the transaction log.
> ZooKeeper
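Per that ZooKeeper admin page, the usual mitigation is to give the transaction log its own device, separate from the snapshot directory, using dataLogDir in zoo.cfg. A minimal sketch (the paths below are illustrative assumptions, not our actual layout):

```properties
# zoo.cfg -- keep the transaction log on a dedicated disk,
# separate from the snapshot directory (paths are examples only)
dataDir=/var/lib/zookeeper/data
dataLogDir=/mnt/zk-txnlog/datalog
```

With the transaction log on a quiet device, its fsyncs are not competing with snapshot writes or other I/O on the same disk.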
On 10/5/2018 5:15 AM, Ganesh Sethuraman wrote:
1. Do the GC and Solr logs help explain why the Solr replica server
continues to be in the "recovering" state? Our assumption is that on Sept
17 at 16:00 hrs we had done a ZK transaction log read, and that might have
caused the issue. Is that correct?
2. Can this state cause slowness in Solr queries for
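On question 1, the per-replica states can be pulled from the Collections API (action=CLUSTERSTATUS) and filtered for anything not "active". A small sketch; the curl URL, collection names, and core names below are made-up examples, and the trimmed JSON only mimics the shape of a Solr 7.x CLUSTERSTATUS response:

```python
import json

def non_active_replicas(cluster_status):
    """Yield (collection, shard, replica, state) for every replica
    whose state is not "active" (e.g. "recovering" or "down")."""
    for coll_name, coll in cluster_status["cluster"]["collections"].items():
        for shard_name, shard in coll["shards"].items():
            for replica_name, replica in shard["replicas"].items():
                if replica["state"] != "active":
                    yield coll_name, shard_name, replica_name, replica["state"]

# In practice the JSON would come from something like:
#   curl 'http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS'
# The trimmed-down sample below only illustrates the response shape.
sample = json.loads("""
{"cluster": {"collections": {
  "coll1": {"shards": {"shard1": {"replicas": {
    "core_node1": {"state": "active"},
    "core_node2": {"state": "recovering"}}}}},
  "coll2": {"shards": {"shard1": {"replicas": {
    "core_node3": {"state": "down"}}}}}
}}}
""")

for coll, shard, replica, state in non_active_replicas(sample):
    print(f"{coll}/{shard}/{replica} is {state}")
```

Running this against the real cluster's CLUSTERSTATUS output would give a quick inventory of which replicas are stuck in "recovering" or "down" without clicking through the admin UI.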
On Tue, Oct 2, 2018 at 11:46 PM Shawn Heisey wrote:
> On 10/2/2018 8:55 PM, Ganesh Sethuraman wrote:
> > We are using a 2-node SolrCloud 7.2.1 cluster with an external 3-node ZK
> > ensemble in AWS. There are about 60 collections at any point in time. We
> > have a per-JVM max heap of 8 GB.
>
> Let's focus for right now on a single Solr machine, rather than the whole
> cluster.
Hi
We are using a 2-node SolrCloud 7.2.1 cluster with an external 3-node ZK
ensemble in AWS. There are about 60 collections at any point in time. We
have a per-JVM max heap of 8 GB.
The problem is: in a few collections we are seeing some replicas in the
"recovering" state and a few in the "down" state. Since we have 2