Thanks for sharing!

On Thursday, 1 October 2015 19:20:06 UTC+2, Jesse Skrivseth wrote:
>
> I tried this in a lab environment and ended up with a split-brain cluster. 
> You've been warned. ;)
>
> On Wednesday, September 30, 2015 at 10:36:37 AM UTC-6, Jesse Skrivseth 
> wrote:
>>
>> This may be off-topic for this forum, but I wanted to focus on the 
>> omnibus-provided configuration in Graylog. We have an instance with one 
>> large node - 6TB of storage - and we're now breaking it out into 3 x 2TB 
>> smaller nodes. I've joined two of the 2TB nodes to the cluster, and ES 
>> has distributed shard replicas evenly across them. My problem is that ES 
>> doesn't seem to rebalance existing primary shards automatically, so the 
>> 6TB node is still holding almost all of the same data. Over time new 
>> indices will be evenly distributed, but I want to evict 66% of the data 
>> from the 6TB node now. I've researched a few ways to do this, such as 
>> using the cluster reroute API to manually specify the "from" and "to" 
>> nodes for a range of indices, and that may indeed be a good way to proceed. 
>>
>> But I wonder what would happen if I simply stop ES on the 6TB node, let 
>> the cluster go yellow, delete the /var/opt/graylog/data/elasticsearch 
>> contents from only the 6TB node, and restart ES. I assume ES would start 
>> with no local data (or other state information) and automatically begin 
>> copying replicas back. The end result would be an evenly balanced cluster 
>> at the cost of some otherwise unnecessary copying. 
>>
>> Does this seem even remotely sane?
>>
>
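For anyone finding this thread later: the manual shard-move approach mentioned in the quoted message corresponds to Elasticsearch's `_cluster/reroute` API. Below is a minimal sketch of the request body; the index and node names (`graylog2_42`, `node-6tb`, `node-2tb-1`) are hypothetical placeholders, and each `move` command relocates exactly one shard.

```python
import json

# Build a reroute request that moves a single shard off the big node.
# Index and node names are hypothetical placeholders - substitute the
# values shown by the _cat/shards and _cat/nodes endpoints.
reroute = {
    "commands": [
        {
            "move": {
                "index": "graylog2_42",   # index holding the shard to move
                "shard": 0,               # shard number within that index
                "from_node": "node-6tb",  # the oversized source node
                "to_node": "node-2tb-1",  # one of the new smaller nodes
            }
        }
    ]
}

# This body would be POSTed to http://<any-node>:9200/_cluster/reroute,
# e.g. with curl -XPOST ... -d "$(python this_script.py)".
print(json.dumps(reroute))
```

In practice you would generate one `move` command per shard (or per index) you want to evict, which is what the poster means by a "range of indices".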

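A gentler alternative to stopping ES and wiping the data directory (not discussed in the thread, so treat this only as a hedged sketch) is cluster-level allocation filtering: setting `cluster.routing.allocation.exclude._name` tells Elasticsearch to migrate every shard off the named node while the cluster stays green, avoiding the yellow-state window entirely. The node name `node-6tb` below is a hypothetical placeholder.

```python
import json

# Transient cluster setting that drains all shards off the named node.
# "node-6tb" is a hypothetical node name - use the name reported by
# the _cat/nodes endpoint. Transient settings do not survive a full
# cluster restart; use "persistent" instead if that matters.
drain = {
    "transient": {
        "cluster.routing.allocation.exclude._name": "node-6tb"
    }
}

# This body would be PUT to http://<any-node>:9200/_cluster/settings,
# e.g. with curl -XPUT ... -d '<body>'.
print(json.dumps(drain))
```

Once the node holds no shards, it can be decommissioned or its storage reclaimed; clearing the setting (value `""` or `null`) re-enables allocation to it.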
-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.
