Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21606#discussion_r197278111
--- Diff: core/src/main/scala/org/apache/spark/internal/io/SparkHadoopWriter.scala ---
@@ -104,12 +104,12 @@ object SparkHadoopWriter extends Logging {
       jobTrackerId: String,
       commitJobId: Int,
       sparkPartitionId: Int,
-      sparkAttemptNumber: Int,
+      sparkTaskId: Long,
       committer: FileCommitProtocol,
       iterator: Iterator[(K, V)]): TaskCommitMessage = {
     // Set up a task.
     val taskContext = config.createTaskAttemptContext(
-      jobTrackerId, commitJobId, sparkPartitionId, sparkAttemptNumber)
+      jobTrackerId, commitJobId, sparkPartitionId, sparkTaskId.toInt)
--- End diff ---
Streaming still generates separate jobs / stages for each batch, right?
In that case this should be fine; this would only be a problem if a single
stage has enough tasks to cover all the integer space (4 billion tasks). That
shouldn't even be possible, since I doubt you'd be able to have more than
`Integer.MAX_VALUE` tasks (and even that is unlikely to ever happen).
I could use `abs` here (and in the sql code) to avoid a negative value
(potentially avoiding weird file names).
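For illustration, a minimal, self-contained Scala sketch (not part of this patch; the concrete task id value is made up) of how truncating a large task id with `.toInt` can come out negative, and what wrapping it in `math.abs` would change:

```scala
object TaskIdTruncation {
  def main(args: Array[String]): Unit = {
    // Hypothetical global task id that has grown past Int.MaxValue
    // (only plausible in a very long-running application).
    val sparkTaskId: Long = Int.MaxValue.toLong + 42L

    // Plain truncation keeps only the low 32 bits, which can yield a
    // negative number and hence odd-looking attempt ids / file names.
    val truncated: Int = sparkTaskId.toInt
    println(s"sparkTaskId.toInt           = $truncated")   // -2147483607

    // Wrapping in math.abs avoids the negative value in this case.
    val nonNegative: Int = math.abs(sparkTaskId.toInt)
    println(s"math.abs(sparkTaskId.toInt) = $nonNegative") // 2147483607
  }
}
```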
---