Hey Tom,

Are you using the fine-grained or coarse-grained scheduler? For the 
coarse-grained scheduler, there is a spark.cores.max config setting that will 
limit the total # of cores it grabs. This was there in earlier versions too.
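For example, a job could be capped at 16 cores at submit time (the master URL and application jar below are placeholders; substitute your own):

```shell
# Cap this job at 16 total cores across the cluster (coarse-grained mode).
# mesos://zk://host:2181/mesos and my-app.jar are hypothetical placeholders.
spark-submit \
  --master mesos://zk://host:2181/mesos \
  --conf spark.cores.max=16 \
  my-app.jar
```

The same setting can also go in spark-defaults.conf or be set on the SparkConf in code.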

Matei

> On May 19, 2015, at 12:39 PM, Thomas Dudziak <tom...@gmail.com> wrote:
> 
> I read the other day that there will be a fair number of improvements in 1.4 
> for Mesos. Could I ask for one more (if it isn't already in there): a 
> configurable limit for the number of tasks for jobs run on Mesos? This would 
> be a very simple yet effective way to prevent a job from dominating the cluster.
> 
> cheers,
> Tom
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org