Are you actually seeing a problem or just questioning the code?

I have never seen a failure caused by that part of the code.
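
For what it's worth, the likely reason this never surfaces as a failure is
that Hadoop's FileSystem.delete returns false rather than throwing when the
path is already gone, so whichever side loses the race just performs a
no-op. A minimal standalone sketch of that behavior (the local filesystem
and the directory name are stand-ins for the real .sparkStaging path, not
the actual Spark code):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    object DoubleDeleteDemo {
      def main(args: Array[String]): Unit = {
        val fs = FileSystem.getLocal(new Configuration())
        // Hypothetical directory standing in for .sparkStaging/<appId>.
        val stagingDir = new Path("/tmp/demo-staging-dir")
        fs.mkdirs(stagingDir)

        // First cleanup (say, the AM's) removes the directory: true.
        println(fs.delete(stagingDir, true))
        // Second cleanup (say, the Client's) finds nothing there: delete()
        // returns false instead of throwing, so nothing actually fails.
        println(fs.delete(stagingDir, true))
      }
    }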

On Fri, Jan 13, 2017 at 3:24 AM, Rostyslav Sotnychenko
<r.sotnyche...@gmail.com> wrote:
> Hi all!
>
> I am a bit confused about why the Spark AM and the Client both try to
> delete the staging directory.
>
> https://github.com/apache/spark/blob/branch-2.1/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala#L1110
> https://github.com/apache/spark/blob/branch-2.1/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala#L233
>
> As you can see, when a job runs on YARN in cluster deploy mode, both the
> AM and the Client will try to delete the staging directory once the job
> succeeds, so one of them will inevitably fail to do so because the other
> has already deleted the directory.
>
> Shouldn't we add some check to the Client? (One possible shape of such a
> check is sketched after this message.)
>
>
> Thanks,
> Rostyslav
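
Regarding the check suggested above: one possible shape, purely as an
illustration (the object, method, and parameter names here are
hypothetical, not the actual members of
org.apache.spark.deploy.yarn.Client), would be to skip the Client-side
delete when the directory is already gone:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path

    object StagingCleanup {
      // Illustrative guarded cleanup; names are stand-ins for whatever
      // the real Client keeps for the staging path and Hadoop config.
      def cleanupIfPresent(stagingDir: Path, hadoopConf: Configuration): Unit = {
        val fs = stagingDir.getFileSystem(hadoopConf)
        if (!fs.exists(stagingDir)) {
          return // the AM already cleaned up; nothing to do
        }
        // The AM can still win the race between exists() and delete();
        // delete() then returns false, which is harmless.
        fs.delete(stagingDir, true)
      }
    }

That said, since delete() already tolerates a missing path, a check like
this mostly avoids a misleading log message rather than an actual error.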



-- 
Marcelo
