Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/21606#discussion_r197285278
--- Diff: core/src/main/scala/org/apache/spark/internal/io/SparkHadoopWriter.scala ---
@@ -104,12 +104,12 @@ object SparkHadoopWriter extends Logging {
       jobTrackerId: String,
       commitJobId: Int,
       sparkPartitionId: Int,
-      sparkAttemptNumber: Int,
+      sparkTaskId: Long,
       committer: FileCommitProtocol,
       iterator: Iterator[(K, V)]): TaskCommitMessage = {
     // Set up a task.
     val taskContext = config.createTaskAttemptContext(
-      jobTrackerId, commitJobId, sparkPartitionId, sparkAttemptNumber)
+      jobTrackerId, commitJobId, sparkPartitionId, sparkTaskId.toInt)
--- End diff ---
I don't follow; the task ids increment across jobs, so if you have a very
long-running application that continues to start new jobs, the id could
eventually grow past what an Int can hold and the .toInt conversion would overflow.
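
To make the concern concrete, here is a minimal standalone Scala sketch (not from the PR; the task id value is made up) showing how converting a Long task id to Int wraps around once the counter passes Int.MaxValue:

object TaskIdOverflowSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical task id from a long-running app that has launched more than 2^31 - 1 tasks.
    val sparkTaskId: Long = Int.MaxValue.toLong + 1L
    // .toInt keeps only the low 32 bits, so the value wraps to a negative number.
    println(sparkTaskId.toInt) // prints -2147483648
  }
}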
---