First, I noticed what some have called "shard thrashing": during startup, 
shards are re-allocated as nodes come online.

I have implemented the following by adding new settings or modifying 
existing settings in elasticsearch.yml:

1. Disabled allocation altogether

cluster.routing.allocation.disable_allocation: true
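
(If this cluster is on Elasticsearch 1.x, I believe the newer equivalent 
is:

cluster.routing.allocation.enable: none

though the older disable_allocation form above should still be honored.)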

2. Avoided split-brain in the current 5-node cluster

discovery.zen.minimum_master_nodes: 3
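
(This follows the usual quorum formula, (master-eligible nodes / 2) + 1; 
for 5 nodes that is (5 / 2) + 1 = 3 with integer division.)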

3. Increased the discovery timeout

discovery.zen.ping.timeout: 100s
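
(Up from the 3s default, if I recall the docs correctly, to give 
slow-starting nodes more time to be discovered.)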


Specific Objective:
When the cluster restarts, force it to re-use the shard allocation that 
existed before shutdown.

Attempt:
- Increased discovery.zen.minimum_master_nodes to 5 in the 5-node 
cluster, with the idea that each node would refuse to become operational 
until all 5 nodes in the cluster were recognized.
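
That is, in elasticsearch.yml:

discovery.zen.minimum_master_nodes: 5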

Result:
Unfortunately, despite setting this equal to the total number of nodes in 
the cluster, I observed shard re-allocation once 4 of the 5 nodes were 
up, without waiting for the fifth node to come online. And this was with 
allocation disabled.

I would like an opinion on whether what I'm trying to accomplish is even 
possible:
- As much as possible, force a restarted cluster to use the existing 
shards as already allocated (see the config sketch after this list).
- Start all nodes at once rather than rolling node starts, which 
contribute to shard re-allocation.
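
For what it's worth, the gateway recovery settings look like they are 
aimed at exactly this delay-recovery-until-everyone-is-up behavior. Here 
is a sketch of what I think the elasticsearch.yml entries would look like 
for my 5-node cluster, though I have not verified that this prevents the 
re-allocation:

gateway.recover_after_nodes: 3
gateway.expected_nodes: 5
gateway.recover_after_time: 5m

As I read the docs, recovery would start as soon as all 5 expected nodes 
have joined, or after 5 minutes once at least 3 nodes are present.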

TIA,
Tony

 
