Hello,
On one of my Elasticsearch clusters, I have nodes with different hardware capacities:
1 node: 8 GB RAM and 200 GB disk
1 node: 4 GB RAM and 20 GB disk
2 nodes: 64 GB RAM and 4 TB disk

I find that Elasticsearch tries to balance the same amount of data onto each 
node. The two smaller nodes are nearly full (disk and CPU) while the two 
bigger ones don't do much work, so the smaller nodes often crash with OOM or 
other errors.


Is there a parameter, like in Hadoop, to distribute the data by percentage of 
disk instead of by MB, and likewise take available memory into account?
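For reference, this is roughly the kind of percentage-based setting I am hoping exists. A sketch using the disk watermark settings I have seen mentioned for disk-based shard allocation (I am not sure the names or defaults are exact for my version, so treat this as illustrative):

```shell
# Sketch: enable disk-threshold-aware allocation and set watermarks as
# percentages of each node's disk, so smaller disks stop receiving shards
# earlier than bigger ones.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "cluster.routing.allocation.disk.threshold_enabled" : true,
    "cluster.routing.allocation.disk.watermark.low"  : "85%",
    "cluster.routing.allocation.disk.watermark.high" : "90%"
  }
}'
```

If something like this works, it would cover the disk side, but I don't see an equivalent for memory.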

Regards
