Github user cloud-fan commented on the issue:

    https://github.com/apache/spark/pull/21286
  
    Thanks @steveloughran for the thorough explanation!
    
    Spark does have a unique job id, but it's only unique within a 
SparkContext; two different Spark applications may write to the same 
directory concurrently. I think a timestamp plus a UUID should be good 
enough as a job id. Spark doesn't retry whole jobs, so we can always set 
the job attempt id to 0.
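    
    A minimal sketch of what such an id could look like (the `newJobId` 
helper is just an illustration, not an existing Spark API):
    
    ```scala
    import java.util.UUID
    
    object JobIdSketch {
      // Hypothetical helper illustrating the proposed scheme: current
      // timestamp plus a random UUID, so ids from two Spark applications
      // writing to the same directory cannot collide.
      def newJobId(): String =
        s"${System.currentTimeMillis()}-${UUID.randomUUID()}"
    
      // Spark doesn't retry whole jobs, so the job attempt id is always 0.
      val jobAttemptId: Int = 0
    }
    ```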

