Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/21286
@jinxing64 yes, with the caveat that some bits of Hadoop, when they parse a job attempt ID, expect it to be an integer. A random number used as the upper digits of the counter could work; it'd still give meaningful job IDs like "45630001" for the first job and "45630002" for the second, in a process which came up with "4563" as its prefix. Yes, eventually it'll wrap, but that's integers for you.
BTW, the `newFileAbsPath` code creates the staging dir `".spark-staging-" + jobId`. Again, a job ID unique across all processes is enough.
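
For illustration, a hedged sketch of how such a staging directory could be derived (the helper name and path layout here are assumptions, not the actual Spark code):

```scala
import org.apache.hadoop.fs.Path

// Assumed layout: the staging dir lives under the job's output path
// and embeds the process-unique job ID, as described above.
def stagingDir(outputPath: Path, jobId: Int): Path =
  new Path(outputPath, ".spark-staging-" + jobId)
```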