Hi all!

I am a bit confused about why the Spark AM and the Client both try to
delete the staging directory.

https://github.com/apache/spark/blob/branch-2.1/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala#L1110
https://github.com/apache/spark/blob/branch-2.1/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala#L233

As you can see, when a job runs on YARN in cluster deploy mode, both the
AM and the Client will try to delete the staging directory after the job
succeeds, so one of them will inevitably fail because the other has
already deleted it.

Shouldn't we add a check to the Client?
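
For example, something along these lines (just a rough sketch using the
Hadoop FileSystem API; the helper name is made up for illustration):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path

    // Only attempt the delete when the staging directory still exists,
    // so the Client does not fail after the AM has already cleaned it up.
    def cleanupStagingDirIfPresent(stagingDir: String, hadoopConf: Configuration): Unit = {
      val stagingPath = new Path(stagingDir)
      val fs = stagingPath.getFileSystem(hadoopConf)
      if (fs.exists(stagingPath)) {
        fs.delete(stagingPath, true) // recursive delete
      }
    }

(There is still a small race between exists() and delete(), so catching
FileNotFoundException around the delete might be the safer variant.)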


Thanks,
Rostyslav
