Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21606#discussion_r197265481
  
    --- Diff: 
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopWriter.scala ---
    @@ -104,12 +104,12 @@ object SparkHadoopWriter extends Logging {
           jobTrackerId: String,
           commitJobId: Int,
           sparkPartitionId: Int,
    -      sparkAttemptNumber: Int,
    +      sparkTaskId: Long,
           committer: FileCommitProtocol,
           iterator: Iterator[(K, V)]): TaskCommitMessage = {
         // Set up a task.
         val taskContext = config.createTaskAttemptContext(
    -      jobTrackerId, commitJobId, sparkPartitionId, sparkAttemptNumber)
    +      jobTrackerId, commitJobId, sparkPartitionId, sparkTaskId.toInt)
    --- End diff ---
    
    The task id is unique across the entire Spark application, which means we may 
have a very large task id in a long-running micro-batch streaming application, 
and casting it with `.toInt` would overflow.
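
    For illustration only (this snippet is mine, not code from the PR): 
    `Long.toInt` in Scala keeps just the low 32 bits, so once the task id 
    passes `Int.MaxValue` the converted value wraps around and no longer 
    identifies the task.

        // hypothetical example of the overflow, not code from the PR
        val taskId: Long = Int.MaxValue.toLong + 1
        val truncated: Int = taskId.toInt   // -2147483648: wrapped, not the real id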
    
    If we do need an int here, I'd suggest we combine `stageAttemptNumber` and 
`taskAttemptNumber` into a single int, which is much less risky (Spark won't 
have a lot of stage/task attempts).
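
    A minimal sketch of what I mean, assuming neither attempt counter ever 
    needs more than 16 bits (the helper name is mine, not existing Spark code):

        // hypothetical helper: pack the two small attempt counters into one int
        def packAttemptNumber(stageAttemptNumber: Int, taskAttemptNumber: Int): Int = {
          require(stageAttemptNumber >= 0 && stageAttemptNumber < (1 << 16))
          require(taskAttemptNumber >= 0 && taskAttemptNumber < (1 << 16))
          (stageAttemptNumber << 16) | taskAttemptNumber
        }

        packAttemptNumber(1, 3)  // 65539 = 0x00010003: stage attempt 1, task attempt 3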


---
