Hi Community,

I'm running Spark in standalone mode, and in my current cluster each slave has 8GB of RAM. I wanted to add a more powerful machine with 100GB of RAM as a slave to the cluster, but ran into some difficulty. If I don't set spark.executor.memory, every slave allocates only the default 512MB of RAM to the job. However, I can't set spark.executor.memory to more than 8GB, otherwise my existing slaves will not be used. It seems Spark was designed mainly for homogeneous clusters. Can anyone suggest a way around this?
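For reference, this is roughly how the setting is being applied, via SparkConf; the master URL and app name below are placeholders, and the 8g value is what keeps all the existing slaves usable:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative sketch only: spark.executor.memory set through SparkConf.
// Anything above "8g" excludes the 8GB slaves from scheduling.
val conf = new SparkConf()
  .setMaster("spark://master:7077")    // placeholder master URL
  .setAppName("MemoryTest")            // placeholder app name
  .set("spark.executor.memory", "8g")  // capped at the smallest slave's RAM
val sc = new SparkContext(conf)
```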

Thanks,

Yadid
