You can balance shard allocation, to a degree, based on disk space, but not on heap or system RAM.
There might be other options, like playing with shard allocation.
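For example, one option is shard allocation filtering: tag each node with a custom attribute in elasticsearch.yml and then require heavy indices to live on the big nodes. A rough sketch (the attribute name "box_type" and the index name "myindex" are my own examples, not built-in names):

```
# elasticsearch.yml on the two 64 GB nodes:
#   node.box_type: big
# elasticsearch.yml on the two small nodes:
#   node.box_type: small

# Then pin an index to the big nodes:
curl -XPUT 'localhost:9200/myindex/_settings' -d '{
  "index.routing.allocation.require.box_type": "big"
}'
```

That doesn't balance by RAM automatically, but it at least keeps the heavy data off the nodes that can't handle it.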

See
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-allocation.html
for some ideas.
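In particular, the disk-based allocation thresholds from that page should stop Elasticsearch filling up the small disks. Something along these lines (the watermark values shown are, as far as I know, the 1.x defaults; tune them for your cluster):

```
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "persistent": {
    "cluster.routing.allocation.disk.threshold_enabled": true,
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%"
  }
}'
```

With that enabled, no new shards are allocated to a node above the low watermark, and shards are moved off a node that crosses the high watermark.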

On 11 November 2014 19:43, lagarutte via elasticsearch <
[email protected]> wrote:

> Hello,
> On one of my ELS clusters, I have nodes with different hardware capacities:
> 1 node: 8 GB RAM and a 200 GB disk
> 1 node: 4 GB RAM and a 20 GB disk
> 2 nodes: 64 GB RAM and 4 TB disks
>
> I find that ELS tries to balance the same amount of data onto each node.
> The two smaller nodes are nearly full (disk and CPU) while the two bigger
> ones don't do much work, so the small ones often crash with OOM or other
> errors.
>
>
> Is there any parameter, as in Hadoop, to distribute the data by percentage
> instead of by MB, so that memory is taken into account?
>
> regards
>
> --
> You received this message because you are subscribed to the Google Groups
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/elasticsearch/4014e188-458c-429b-b6c2-7af941a8302e%40googlegroups.com
> .
> For more options, visit https://groups.google.com/d/optout.
>

