I tried this in a lab environment and ended up with a split-brain cluster. 
You've been warned. ;)
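If you do try it, check discovery.zen.minimum_master_nodes first - it
should be set to a quorum of your master-eligible nodes on every node
(2 for a 3-node cluster), otherwise a restarted node can elect itself
master. Something like:

    # elasticsearch.yml (2 = quorum of a 3-node cluster; I'm not sure
    # where the Graylog omnibus package exposes this setting)
    discovery.zen.minimum_master_nodes: 2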

On Wednesday, September 30, 2015 at 10:36:37 AM UTC-6, Jesse Skrivseth 
wrote:
>
> This may be off-topic for this forum, but I wanted to focus on the 
> omnibus-provided configuration in Graylog. We have an instance with one 
> large node - 6TB of storage - and we're now breaking this out into 3 x 
> 2TB smaller nodes. I've joined two of the 2TB nodes to the cluster, and 
> ES has distributed shard replicas evenly to them. My problem is that ES 
> doesn't seem to automatically rebalance existing primary shards, so the 
> 6TB node is still holding almost all of the original data. Over time new 
> indices will be evenly distributed, but I want to evict 66% of the data 
> from the 6TB node now. I've researched a few ways to do this, such as 
> using the cluster reroute API to manually specify the "from" and "to" 
> nodes for a range of indices, and that may indeed be a good way to 
> proceed. 
>
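> For reference, the reroute call I've been looking at is roughly the 
> following (the index and node names are just placeholders for ours): 
>
>     curl -XPOST 'http://localhost:9200/_cluster/reroute' -d '{
>       "commands": [ {
>         "move": {
>           "index": "graylog2_0",
>           "shard": 0,
>           "from_node": "es-node-6tb",
>           "to_node": "es-node-2tb-1"
>         }
>       } ]
>     }'
>
> which would have to be repeated for every shard of every index I want 
> to move. 
>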
> But I wonder what would happen if I simply stopped ES on the 6TB node, 
> let the cluster go yellow, deleted the contents of 
> /var/opt/graylog/data/elasticsearch on only the 6TB node, and restarted 
> ES. I assume ES would start with no local data (or other state) and 
> automatically begin copying replicas back. The end result would be an 
> evenly balanced cluster, at the cost of some otherwise unnecessary 
> copying. 
>
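> (If I did go the wipe-and-rejoin route, I'd wait for the cluster to be 
> green first, since any shard whose only copy lives on the 6TB node 
> would be lost. The other option I've seen suggested is allocation 
> filtering - tell ES to drain the node and let it move the shards 
> itself, again with a placeholder node name: 
>
>     curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
>       "transient": {
>         "cluster.routing.allocation.exclude._name": "es-node-6tb"
>       }
>     }'
>
> but I'm not sure how that interacts with the omnibus setup.) 
>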
> Does this seem even remotely sane?
>
