Github user SaintBacchus commented on a diff in the pull request:

    https://github.com/apache/spark/pull/5663#discussion_r29018738
  
    --- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
    @@ -657,6 +657,9 @@ private[spark] class Client(
              case e: ApplicationNotFoundException =>
                logError(s"Application $appId not found.")
                return (YarnApplicationState.KILLED, FinalApplicationStatus.KILLED)
    +          case ie: IOException =>
    +            logError(s"Application $appId occurred an unexpected IOException.")
    +            return (YarnApplicationState.KILLED, FinalApplicationStatus.KILLED)
    --- End diff --
    
    @vanzin The process hangs due to the endless `renewLease` retry logic in `DFSClient`, so I think catching an `IOException` is the right fix for this problem. Other exceptions may bring the 'monitor' thread down, but they cannot hang the process.
    Also, it seems that if the exception handler lives inside the `run` method, execution can't reach the `sc.stop()` logic; in that case the driver process would have to wait for the network to become reachable again before it could exit.
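
    To make the effect concrete, here is a minimal, self-contained sketch of the idea (the `State` ADT and the `poll` callback are illustrative stand-ins, not Spark's actual `Client`/`yarnClient.getApplicationReport` API): the monitor loop converts an `IOException` into a terminal KILLED result so the caller can proceed to `sc.stop()` instead of blocking on endless client retries.

    ```scala
    import java.io.IOException

    object MonitorSketch {
      sealed trait State
      case object Running  extends State
      case object Finished extends State
      case object Killed   extends State

      // `poll` stands in for one report-fetch from the RM; it may throw.
      def monitor(poll: () => State): State = {
        while (true) {
          try {
            poll() match {
              case Running => ()          // still running: keep polling
              case done    => return done // terminal state reached
            }
          } catch {
            case _: IOException =>
              // Network unreachable: report KILLED so the caller can run
              // sc.stop() instead of hanging on the client's retry loop.
              return Killed
          }
        }
        Killed // unreachable; satisfies the return type
      }

      def main(args: Array[String]): Unit = {
        val flaky: () => State = () => throw new IOException("connection refused")
        println(monitor(flaky)) // prints Killed
      }
    }
    ```

    With the handler inside the polling loop (rather than outside `run`), the caller still gets a normal return value and can shut down cleanly.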

