I have not tried this, but my initial thoughts would be:

- Set ES_HEAP_SIZE to 30 GB (keeping it just under the ~32 GB compressed-oops 
cutoff), give Hadoop an appropriate amount, and leave the rest for the OS 
filesystem cache.
- Point the filesystem paths where ES and Hadoop store data at separate 
physical disk(s). You don't want them contending for I/O bandwidth.
- You don't have to use RAID for ES; you can use multiple data paths if you 
have multiple disks.
- At this size, many people choose to run multiple instances of ES on a single 
physical machine. Give each instance a 30 GB heap and point each at different 
disks.
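The points above can be sketched roughly as follows, assuming an ES 1.x-era install where the JVM heap is set via the ES_HEAP_SIZE environment variable and the config lives in elasticsearch.yml; the mount points are made-up examples, not anything from this thread:

```shell
# Sketch only -- disk mount points below are hypothetical examples.

# 30 GB heap for the ES JVM, below the ~32 GB compressed-oops cutoff;
# the remainder of RAM goes to Hadoop and the OS filesystem cache.
export ES_HEAP_SIZE=30g

# Multiple data paths instead of RAID: one entry per physical disk,
# kept on different spindles from the ones HDFS uses.
cat <<'EOF' >> elasticsearch.yml
path.data: /mnt/es-disk1,/mnt/es-disk2,/mnt/es-disk3
EOF
```

For multiple instances per box, each instance would get its own copy of this config, its own 30 GB heap, and its own disjoint set of disks in path.data.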

A

On Sep 4, 2014, at 11:32 AM, Ronny Vaningh <[email protected]> wrote:

> Hi
> 
> 
> I have some beefy boxes with 512 Gb ram and I would like to co-locate 
> yarn/hadoop with elasticsearch
> 
> Does anyone have experience in doing the same ?
> How did you split the resources (memory/disk) across both functions ?
> 
> HDFS likes JBOD, while ES likes RAID10.
> 
> 
> Thanks
> 
> 
> Regards
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "elasticsearch" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/elasticsearch/CACA5U5mW8zn0BLWryeePdR%2B0yiBGhd_9pQWVy-Uufj3H-oeQhA%40mail.gmail.com.
> For more options, visit https://groups.google.com/d/optout.
