On 6/11/2018 5:47 AM, THADC wrote:
> Shawn, thanks. You say "at least two replicas per shard are required for high
> availability". So that would be a total of three nodes for that shard,
> correct?
>
> Thanks, Tim Clotworthy

The smallest possible fault-tolerant Solr install is a total of three
servers. Two of them will run Solr and ZooKeeper, and the third will run
ZooKeeper only.
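For background on why three servers is the floor: ZooKeeper needs a strict majority of its ensemble up in order to serve requests, so a 3-node ensemble tolerates one failure while a 2-node ensemble tolerates none. A minimal sketch of that arithmetic (illustrative only, not from the thread):

```python
def zk_quorum(ensemble_size: int) -> int:
    """Smallest number of ZooKeeper nodes that forms a strict majority."""
    return ensemble_size // 2 + 1

def failures_tolerated(ensemble_size: int) -> int:
    """How many nodes can die while the ensemble still has a quorum."""
    return ensemble_size - zk_quorum(ensemble_size)

for n in (1, 2, 3, 5):
    print(n, zk_quorum(n), failures_tolerated(n))
```

Note that a 2-node ensemble tolerates zero failures, which is why an even-sized ensemble buys you nothing over the next-smaller odd size.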
On 6/8/2018 12:13 PM, THADC wrote:
> I am having trouble getting a clear understanding of the relationship
> between my 3-node zookeeper cluster and how those 3 nodes relate to solr
> replicas (if at all). Since the replicas exist for failover purposes
> (correct?) as opposed to for load balancing
There are at least two ways of going about this:
1> when you create your collection, create it with the special "EMPTY"
node set, then use ADDREPLICA to place each replica where you want it,
applying your knowledge of where the VMs are hosted.
2> use the replica placement rules, see:
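Option 1> above maps onto the Collections API roughly as follows. This is a sketch with made-up host, collection, and config names; it only builds the request URLs rather than issuing them:

```python
from urllib.parse import urlencode

SOLR = "http://solr1:8983/solr"  # hypothetical Solr node

def create_empty(name: str, num_shards: int, config: str) -> str:
    # CREATE with the special EMPTY node set: no replicas are placed
    # yet, so each one can be positioned explicitly afterwards.
    params = {"action": "CREATE", "name": name, "numShards": num_shards,
              "createNodeSet": "EMPTY", "collection.configName": config}
    return f"{SOLR}/admin/collections?{urlencode(params)}"

def add_replica(collection: str, shard: str, node: str) -> str:
    # ADDREPLICA pinned to a specific node (host:port_context form).
    params = {"action": "ADDREPLICA", "collection": collection,
              "shard": shard, "node": node}
    return f"{SOLR}/admin/collections?{urlencode(params)}"

print(create_empty("test", 2, "myconf"))
print(add_replica("test", "shard1", "host2:8983_solr"))
```

One ADDREPLICA call per replica lets you spread a shard across distinct VMs or physical hosts by hand.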
Thanks Erick, that was helpful. But what if you want to proactively replicate
across multiple servers, either at the VM or even the physical-server level?
It seems that we have control over the ZooKeeper locations and the Solr server
locations, since we explicitly define these when we configure the
Not at all. ZooKeeper is just the record-keeper for the _states_ of
the replicas, i.e. whether they are active, recovering, down, and the
like, as well as the config sets (schema, solrconfig.xml, etc.).
There is no relationship between these two counts. Well, if you have a
zillion collections with a
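To make "record-keeper" concrete: per collection, ZooKeeper holds a state.json document describing each replica's state, and Solr consults it to find live replicas. A toy sketch of walking such a document; the structure below is abbreviated and illustrative (field names follow Solr's conventions but this is not a verbatim dump):

```python
# Abbreviated, illustrative shape of a collection's state.json as kept
# in ZooKeeper -- not a verbatim dump from a real cluster.
state = {
    "shard1": {
        "replicas": {
            "core_node1": {"node_name": "host1:8983_solr",
                           "state": "active", "leader": "true"},
            "core_node2": {"node_name": "host2:8983_solr",
                           "state": "recovering"},
        }
    }
}

def active_replicas(shards: dict) -> list:
    """Names of replicas currently reported as active."""
    return [name
            for shard in shards.values()
            for name, replica in shard["replicas"].items()
            if replica["state"] == "active"]

print(active_replicas(state))
```

The point being: ZooKeeper stores this bookkeeping regardless of how many replicas exist, which is why the ensemble size and the replica count are independent.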
Hello,
I am having trouble getting a clear understanding of the relationship
between my 3-node zookeeper cluster and how those 3 nodes relate to solr
replicas (if at all). Since the replicas exist for failover purposes
(correct?) as opposed to for load balancing (which is what the sharding