Xuefu Zhang created SPARK-20662:
-----------------------------------

             Summary: Block jobs that have greater than a configured number of tasks
                 Key: SPARK-20662
                 URL: https://issues.apache.org/jira/browse/SPARK-20662
             Project: Spark
          Issue Type: Improvement
          Components: Spark Core
    Affects Versions: 2.0.0, 1.6.0
            Reporter: Xuefu Zhang


In a shared cluster, it's desirable for an admin to be able to block overly 
large Spark jobs. While there may not be a single metric that defines the size 
of a job, the number of tasks is usually a good indicator. Thus, it would be 
useful for the Spark scheduler to block a job whose number of tasks reaches a 
configured limit. By default the limit would be infinite, preserving the 
existing behavior.
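To make the proposal concrete, here is a rough, illustrative sketch (not an 
actual patch) of what the check could look like at job submission time. The 
helper object, method name, and placement are assumptions; in practice the 
check would likely sit in the DAGScheduler's job submission path, where the 
task count of the submitted job is known.

{code:scala}
import org.apache.spark.{SparkConf, SparkException}

// Sketch only: enforce the proposed spark.job.max.tasks limit when a job is
// submitted. `numTasks` stands for whatever task count the scheduler computes
// for the job being submitted.
object MaxTasksCheck {
  def enforceTaskLimit(conf: SparkConf, numTasks: Int): Unit = {
    // Proposed config; -1 (the default) means "no limit", keeping today's behavior.
    val maxTasks = conf.getInt("spark.job.max.tasks", -1)
    if (maxTasks >= 0 && numTasks > maxTasks) {
      throw new SparkException(
        s"Job aborted: it would run $numTasks tasks, exceeding the configured " +
        s"limit spark.job.max.tasks=$maxTasks")
    }
  }
}
{code}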

For comparison, MapReduce has the mapreduce.job.max.map and 
mapreduce.job.max.reduce configurations, which block an MR job at job 
submission time.

The proposed configuration is spark.job.max.tasks, with a default value of -1 
(no limit).
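As a usage sketch (assuming the config is adopted as proposed), an admin or 
user could set the limit through SparkConf or spark-submit:

{code:scala}
import org.apache.spark.SparkConf

// Hypothetical usage of the proposed setting; spark.job.max.tasks does not
// exist in Spark today.
val conf = new SparkConf()
  .setAppName("example")
  .set("spark.job.max.tasks", "50000")  // block any job with more than 50000 tasks
// Equivalent at submission time: spark-submit --conf spark.job.max.tasks=50000 ...
{code}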



