Github user tgravescs commented on the pull request:

    https://github.com/apache/spark/pull/5343#issuecomment-90188806
  
    I'm a bit on the fence about this one as well.  Relying on the client 
going away to kill the job when it's running on the cluster seems unreliable 
to me.  For instance, let's say the client loses its network connection to YARN 
temporarily.  The user thinks the job is killed when it really isn't.  I'm sure in 
the common case it would work just fine, though.  But it seems like it would be 
better to ask for the status of the app and kill it via YARN.  
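    As a rough illustration of the alternative being suggested (checking the 
app's state and killing it through YARN rather than relying on the client's 
disconnect), a minimal sketch using Hadoop's `YarnClient` API might look like 
the following; `appId` is assumed to be the `ApplicationId` of the running app, 
and error handling is omitted:

    ```scala
    import org.apache.hadoop.yarn.client.api.YarnClient
    import org.apache.hadoop.yarn.conf.YarnConfiguration
    import org.apache.hadoop.yarn.api.records.{ApplicationId, YarnApplicationState}

    // Sketch only: query the app's status from the RM, then kill via YARN
    val yarnClient = YarnClient.createYarnClient()
    yarnClient.init(new YarnConfiguration())
    yarnClient.start()

    val report = yarnClient.getApplicationReport(appId)
    val state = report.getYarnApplicationState
    if (state != YarnApplicationState.FINISHED &&
        state != YarnApplicationState.FAILED &&
        state != YarnApplicationState.KILLED) {
      // Ask the ResourceManager to kill the app directly, instead of
      // depending on the client process going away
      yarnClient.killApplication(appId)
    }
    yarnClient.stop()
    ```

    This routes the kill through the ResourceManager, so it works even if the 
client's connection had been flaky.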
    On the other hand, this would probably help things like `oozie job -kill` 
work. 
    
    Also, can you perhaps rename the title on this, as this is really adding an 
option for the client to kill the AM when the client is killed.

