Unfortunately, using the executor memory setting to keep multiple executors
from the same framework off a single worker would inherently mean requesting
just over half of each node's available worker memory. So if each node had
32GB of worker memory, the application would need to set 17GB per executor to
absolutely ensure it never gets 2 executors on the same host, since a second
17GB executor (34GB total) can't fit in 32GB (but that's way too much memory
to request per node in our case).
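For reference, here's roughly what that workaround looks like (a sketch only:
spark.executor.memory is the real setting, but the 32GB worker size and 17GB
figure are just the example above):

    # spark-defaults.conf -- hypothetical values for a 32GB worker
    # Requesting just over half the worker's memory guarantees at most one
    # executor per worker, because a second 17GB executor (34GB total)
    # can't fit in the 32GB the worker offers.
    spark.executor.memory   17g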

I'll look into adding this type of config option myself when I get some
time, since I think it would still be valuable to prevent a memory-intensive
application from taking down a bunch of Spark workers just because its
executors are going bananas :P
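To be concrete about what I have in mind, something like the following (the
option name here is entirely hypothetical; no such setting exists in Spark
today):

    # spark-defaults.conf -- proposed option, NOT a real Spark setting
    # Cap the number of executors a single application may run on any one
    # worker, so a runaway app can't exhaust every worker's memory at once.
    spark.deploy.maxExecutorsPerWorker   1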


