Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/21286
That would work. Like you say, there's no need to worry about job attempt IDs, only uniqueness. If you put the timestamp first, you could still sort the listing by time, which might be useful for diagnostics.
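To make the idea concrete, here's a minimal sketch of a timestamp-first unique ID; the helper name `newAttemptId` and the exact format are my own assumptions, not anything from the PR. Zero-padding the epoch millis keeps lexicographic order the same as chronological order, and the UUID suffix supplies the uniqueness.

```java
import java.time.Instant;
import java.util.UUID;

public class AttemptIds {
    // Hypothetical helper: timestamp first so a directory listing sorts by
    // time; the random UUID suffix guarantees uniqueness without relying on
    // YARN attempt counters.
    static String newAttemptId() {
        long nowMs = Instant.now().toEpochMilli();
        // Zero-pad to 13 digits so lexicographic order matches numeric order.
        return String.format("%013d-%s", nowMs, UUID.randomUUID());
    }

    public static void main(String[] args) {
        System.out.println(newAttemptId());
        System.out.println(newAttemptId());
    }
}
```

Any scheme along these lines would do; the only properties that matter are uniqueness and, optionally, sortability.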
Some org.apache.hadoop code does attempt to parse the YARN app attempt strings into numeric job & task IDs, in exactly the way it shouldn't. If that were a problem in the committer codepaths it should already have surfaced, but it's worth remembering & maybe replicating in the new IDs.
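For illustration only, this is the fragile pattern being described: code that assumes the trailing underscore-separated segment of an attempt string is a numeric counter. The class and method names here are hypothetical, not actual Hadoop code.

```java
public class FragileParse {
    // Hypothetical example of the anti-pattern: assuming the last segment of
    // an attempt string is always a parseable integer.
    static int lastSegmentAsInt(String attemptId) {
        String[] parts = attemptId.split("_");
        // Throws NumberFormatException if the new IDs put a UUID or other
        // non-numeric token in the final position.
        return Integer.parseInt(parts[parts.length - 1]);
    }

    public static void main(String[] args) {
        // Works for a classic MapReduce-style attempt string...
        System.out.println(lastSegmentAsInt("attempt_200707121733_0003_m_000005_0"));
        // ...but would throw on an ID whose last segment isn't numeric.
    }
}
```

If the new IDs keep any numeric trailing segment, code like this keeps working; if not, it's a place such assumptions could break.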