Mario Briggs commented on SPARK-17917:

Would appreciate it if the Spark devs could comment on whether they see this as a 
bad idea for some reason. 

I basically see adding 2 events to SparkListener, e.g.
  onTaskStarved() and onTaskUnstarved() - the latter fires only if 
onTaskStarved() fired in the first place for that taskSet.
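
Roughly something like the sketch below (all names and fields are purely 
illustrative, none of this exists in Spark today): TaskSchedulerImpl would post a 
"starved" event from the same place the warning is currently logged, and an 
"unstarved" event once the taskSet finally gets an offer, and an application 
listener could pick both up via onOtherEvent:

  import org.apache.spark.scheduler.{SparkListener, SparkListenerEvent}

  // Hypothetical new events, styled after existing SparkListenerEvent case classes.
  case class SparkListenerTaskSetStarved(stageId: Int, stageAttemptId: Int, time: Long)
    extends SparkListenerEvent

  case class SparkListenerTaskSetUnstarved(stageId: Int, stageAttemptId: Int, time: Long)
    extends SparkListenerEvent

  // An application-side listener could then react programmatically instead of
  // scraping the 'Initial job has not accepted any resources..' warning from logs.
  class QuotaAwareListener extends SparkListener {
    override def onOtherEvent(event: SparkListenerEvent): Unit = event match {
      case e: SparkListenerTaskSetStarved =>
        // e.g. alert the tenant that their quota is exhausted or no executors registered
        println(s"TaskSet for stage ${e.stageId} is starved of resources")
      case e: SparkListenerTaskSetUnstarved =>
        // fires only if a starved event was posted earlier for the same taskSet
        println(s"TaskSet for stage ${e.stageId} has acquired resources")
      case _ => // ignore other events
    }
  }

The listener would be registered with sparkContext.addSparkListener(...) as usual; 
the only scheduler-side change is posting the two events on the listener bus.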

> Convert 'Initial job has not accepted any resources..' logWarning to a 
> SparkListener event
> ------------------------------------------------------------------------------------------
>                 Key: SPARK-17917
>                 URL: https://issues.apache.org/jira/browse/SPARK-17917
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Mario Briggs
> When supporting Spark on a multi-tenant shared large cluster with quotas per 
> tenant, a submitted taskSet often might not get executors because quotas have 
> been exhausted or resources are unavailable. In these situations, firing a 
> SparkListener event instead of just logging the issue (as done currently at 
> https://github.com/apache/spark/blob/9216901d52c9c763bfb908013587dcf5e781f15b/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala#L192)
> would give applications/listeners an opportunity to handle this more 
> appropriately as needed.
