GitHub user vanzin commented on the issue:
https://github.com/apache/spark/pull/21616
I should have checked first, but this doesn't merge cleanly to 2.1, and it doesn't
look like 2.1 is affected anyway. There seems to be just one code path in 2.1
that hits this issue, and it already uses a similar approach:
```scala
val writer = new SparkHadoopWriter(hadoopConf)
writer.preSetup()

val writeToFile = (context: TaskContext, iter: Iterator[(K, V)]) => {
  // Hadoop wants a 32-bit task attempt ID, so if ours is bigger than Int.MaxValue, roll it
  // around by taking a mod. We expect that no task will be attempted 2 billion times.
  val taskAttemptId = (context.taskAttemptId % Int.MaxValue).toInt
```
That's in PairRDDFunctions.scala. There might be other paths affected, but
at this point I'll leave it alone.
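For reference, here's a minimal standalone sketch of that same truncation, outside of Spark; the object and method names are hypothetical, just for illustration:
```scala
// Hypothetical sketch: Spark task attempt IDs are Longs, but Hadoop's
// TaskAttemptID only takes an Int, so anything past Int.MaxValue wraps
// around via modulo.
object AttemptIdTruncation {
  def to32BitAttemptId(taskAttemptId: Long): Int =
    (taskAttemptId % Int.MaxValue).toInt

  def main(args: Array[String]): Unit = {
    println(to32BitAttemptId(42L))                       // 42
    println(to32BitAttemptId(Int.MaxValue.toLong + 5L))  // 5
  }
}
```
Since the result of the mod is always strictly less than Int.MaxValue, the `.toInt` conversion can't overflow.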