tgravescs commented on pull request #29977:
URL: https://github.com/apache/spark/pull/29977#issuecomment-707140239


   That is exactly what I wanted to look at some more. 
   There are a few corner cases the Executor handles, like failing a task on a 
memory leak. The other issue is that the fetchFailed error is private[spark] in 
the TaskContext, so if the user or Spark wraps that exception you might not be 
able to tell it was a fetch failure. There is also the task commit denied 
exception, and then failures due to the task just stopping early.
   Most of these are corner cases, but it really comes down to whether we care 
if users of this API can infer what the real Spark status was in these few 
cases. They would mostly just have to reproduce some of the logic the Executor 
does, like checking for the commit denied exception (see the sketch below). 
This is a developer API, and I think it would be OK as long as we document it 
appropriately.
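   
   To make that concrete, here is a rough sketch of the kind of classification 
a plugin would have to duplicate if it only sees the raw exception from the 
task. It assumes the relevant exception classes (FetchFailedException, 
CommitDeniedException) are not accessible to user code, so it falls back to 
matching on class names and walking getCause in case the exception was 
wrapped; the helper itself is hypothetical, not the Executor's actual logic:
   
```scala
// Hypothetical helper: best-effort inference of the "real" Spark task status
// from an exception seen by a plugin. Class names are assumptions here, since
// the classes themselves are private[spark] and can't be matched on directly.
def inferStatus(t: Throwable): String = {
  val name = t.getClass.getName
  if (name == "org.apache.spark.shuffle.FetchFailedException") "FetchFailed"
  else if (name == "org.apache.spark.executor.CommitDeniedException") "TaskCommitDenied"
  else if (t.getCause != null) inferStatus(t.getCause) // unwrap wrapped exceptions
  else "ExceptionFailure"
}
```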
   
   I think we should definitely keep something that allows the user to tell if 
a task passed or failed.
   
   @mridulm, thoughts on leaving this here vs. moving the task end 
notification into the Executor, which would still be on the same thread? That 
feels like it would be more reliable, with the plugin knowing the same task 
status that the scheduler will see.
   

