On 3/22/2016 9:28 PM, Raveendra Yerraguntla wrote:
> I am using Solr 5.4 in SolrCloud mode on an 8-node cluster. I created
> the index with a replication factor of 1, then switched to a
> replication factor > 1 for redundancy and tried to do incremental
> indexing. When the incremental indexing happens, I get a stack trace
> with the root cause pointing to write.lock not being available.
> Further analysis found that there is only one write.lock across all
> shards (leaders and replicas).

Unless you use the HDFS Directory implementation in Solr, the *only*
time replicationFactor has *any* effect is when you first create your
collection.  After that, it has *zero* effect -- unless you are using
HDFS and have configured it in a particular way.

To achieve redundancy with the normal Directory implementation (usually
NRTCachingDirectoryFactory), you will need to either create the
collection with a replicationFactor higher than 1, or use the
ADDREPLICA action of the Collections API to create more replicas of
your shards.

> But with a replication factor of 1, I can see write.lock across all
> nodes.
>
> Is this the expected behavior (one write.lock) in SolrCloud with a
> replication factor > 1? If so, how can the indexing be done (even
> though it is slow) with distributed and redundant shards?

There are three major reasons for a problem with write.lock.
1) Solr is crashing and leaving the write.lock file behind.
2) You are trying to share an index directory between more than one core
or Solr instance.
3) You are trying to run with your index data on a network filesystem
like NFS.
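For reference, the locking implementation is chosen by the lockType
setting in the indexConfig section of solrconfig.xml; "native" is the
usual default for local filesystems:

```xml
<!-- In solrconfig.xml.  "native" uses OS-level file locking and is the
     normal choice; changing it does not fix a shared-directory or NFS
     setup, it only changes how the conflict shows up. -->
<indexConfig>
  <lockType>${solr.lock.type:native}</lockType>
</indexConfig>
```

If you are hitting case 3 (NFS), the real fix is to move the index onto
local storage rather than to weaken the lock type.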

Thanks,
Shawn


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org