Mario Briggs commented on SPARK-17917:

I don't have a strong feeling on this partly because I'm not sure what the 
action then is – kill the job?
Here is an example: let's say I am using a notebook and kicked off some Spark 
actions that don't get executors because the user/org/group quotas of executors 
have been exhausted. These events can be used by the notebook implementor to 
surface the issue to the user via a UI update on that cell itself, maybe even 
additionally query the user/org/group quotas, show which apps are using them 
up, etc., and allow the user to take whatever action is required (kill the 
other jobs, just wait on this job, etc.). Therefore I am not looking to define 
on the event, in any way, what the set of actions can be, since that would be 
very implementation specific.

Maybe, I suppose it will be a little tricky to define what the event is here
Were you referring to the actual arguments of the event method? I can give a 
shot at defining them and then look for feedback.
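As a first attempt, a sketch of what the event's arguments might look like. To be clear, the names below (TaskSetResourceStarvedEvent, ResourceStarvationListener, onTaskSetStarved) are illustrative placeholders, not existing Spark APIs; in Spark itself this would presumably be a case class extending SparkListenerEvent, delivered through the SparkListener machinery. This self-contained Java snippet just mimics the shape so the payload can be discussed:

```java
// Hypothetical sketch only: these names are NOT existing Spark APIs.
// Possible payload for a "task set not getting resources" event, fired
// where TaskSchedulerImpl currently logs the warning.
final class TaskSetResourceStarvedEvent {
    final int stageId;
    final int stageAttemptId;
    final long waitingMillis;   // how long the task set has been pending
    final String reason;        // e.g. "no executors registered"

    TaskSetResourceStarvedEvent(int stageId, int stageAttemptId,
                                long waitingMillis, String reason) {
        this.stageId = stageId;
        this.stageAttemptId = stageAttemptId;
        this.waitingMillis = waitingMillis;
        this.reason = reason;
    }
}

// A listener interface a notebook front end might implement to surface
// the starvation in the UI instead of only seeing a log line.
interface ResourceStarvationListener {
    void onTaskSetStarved(TaskSetResourceStarvedEvent event);
}

public class StarvationDemo {
    public static void main(String[] args) {
        ResourceStarvationListener ui = e ->
            System.out.println("Stage " + e.stageId + " waiting "
                + e.waitingMillis + " ms: " + e.reason);
        // The scheduler would fire this instead of (or alongside) the warning.
        ui.onTaskSetStarved(new TaskSetResourceStarvedEvent(
            3, 0, 15000L, "executor quota exhausted"));
        // prints: Stage 3 waiting 15000 ms: executor quota exhausted
    }
}
```

The key point is that the event only carries facts (which stage, how long it has waited, why), leaving any response to the listener implementation.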

> Convert 'Initial job has not accepted any resources..' logWarning to a 
> SparkListener event
> ------------------------------------------------------------------------------------------
>                 Key: SPARK-17917
>                 URL: https://issues.apache.org/jira/browse/SPARK-17917
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Mario Briggs
>            Priority: Minor
> When supporting Spark on a multi-tenant shared large cluster with quotas per 
> tenant, often a submitted taskSet might not get executors because quotas have 
> been exhausted (or) resources unavailable. In these situations, firing a 
> SparkListener event instead of just logging the issue (as done currently at 
> https://github.com/apache/spark/blob/9216901d52c9c763bfb908013587dcf5e781f15b/core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala#L192),
>  would give applications/listeners an opportunity to handle this more 
> appropriately as needed.

This message was sent by Atlassian JIRA
