Not that I am aware of. Spark will try to spread the tasks evenly
across executors; it's not aware of the workers at all. So if the
executor-to-worker allocation is uneven, I am not sure what can be
done. Maybe others can offer some ideas.
On Tue, Dec 9, 2014 at 6:20 AM, Gerard Maas wrote:
Gerard,
Are you familiar with spark.deploy.spreadOut
http://spark.apache.org/docs/latest/spark-standalone.html in Standalone
mode? It sounds like you want the same thing in Mesos mode.
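For reference, spark.deploy.spreadOut is a master-side setting in standalone mode (see the standalone docs linked above); a minimal sketch of enabling it, assuming the usual spark-env.sh setup:

```shell
# spark-env.sh on the standalone master.
# spark.deploy.spreadOut only applies to standalone mode:
#   true  (default): spread each application's executors across as many workers as possible
#   false: pack executors onto as few workers as possible
SPARK_MASTER_OPTS="-Dspark.deploy.spreadOut=true"
```

As noted, there is no direct equivalent of this knob in the Mesos scheduler.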
On Thu, Dec 11, 2014 at 6:48 AM, Tathagata Das tathagata.das1...@gmail.com
wrote:
Not that I am aware of.
Hi,
We have a number of Spark Streaming / Kafka jobs that would benefit from an
even spread of consumers over physical hosts in order to maximize network usage.
As far as I can see, the Spark Mesos scheduler accepts resource offers
until all required Mem + CPU allocation has been satisfied.
This