Github user squito commented on the issue:
https://github.com/apache/spark/pull/19848
I dunno what the requirements are -- I was hoping you would know the hadoop
committer semantics better than me! I suppose a UUID is really the only way to
get something globally unique, as you could even have multiple independent
Spark contexts. I have seen a committer create a temp directory based on the
ID, so two jobs could collide, with both writing to the same dir (see the
sketch below).
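
To make the collision concrete, here's a minimal Scala sketch (all names here are hypothetical, for illustration only -- not actual Spark or Hadoop committer APIs). A committer that derives its temp dir purely from the numeric job ID collides when two independent contexts both run jobId 0; mixing in a UUID keeps the paths globally unique:

```scala
import java.util.UUID

object TempDirSketch {
  // A committer that builds its temp dir only from the numeric job ID:
  // two independent SparkContexts can both start at jobId = 0, so both
  // end up writing under the same path.
  def collidingTempDir(jobId: Int): String =
    s"/tmp/staging/job_$jobId"

  // Including a per-job UUID makes the path globally unique, even across
  // independent Spark contexts running concurrently.
  def uniqueTempDir(jobId: Int, jobUuid: UUID): String =
    s"/tmp/staging/${jobUuid}_job_$jobId"

  def main(args: Array[String]): Unit = {
    // Two "independent contexts" both running their first job:
    println(collidingTempDir(0))                 // /tmp/staging/job_0
    println(collidingTempDir(0))                 // same dir -> collision

    println(uniqueTempDir(0, UUID.randomUUID())) // distinct dirs
    println(uniqueTempDir(0, UUID.randomUUID()))
  }
}
```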
Anyway, I'm willing to set this aside as a rare case; the fix here is
still a huge improvement.