jiangxb1987 opened a new pull request #29474: URL: https://github.com/apache/spark/pull/29474
### What changes were proposed in this pull request?

The `count` in `PartitionWriterStream` should be a long value, instead of an int. The issue was introduced by apache/spark@abef84a . When the overflow happens, the shuffle index file records a wrong offset for a reduceId, which leads to a `FetchFailedException: Stream is corrupted` error. Besides the fix, I also added some debug logs, so that similar issues are easier to debug in the future.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

A Spark user reported this issue when migrating their workload to 3.0. One of their jobs failed deterministically on Spark 3.0 without the patch, and the job succeeded after the fix was applied.
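To illustrate the failure mode, below is a minimal, hypothetical sketch modeled on `PartitionWriterStream` (class and method names here are illustrative, not the actual Spark source): a byte-counting output stream whose per-partition counter is an `int`. Once more than `Integer.MAX_VALUE` (~2 GiB) bytes are written to a single partition, the counter wraps negative, and the value recorded as an offset in the shuffle index file is wrong.

```java
import java.io.IOException;
import java.io.OutputStream;

// Minimal sketch (not the actual Spark class): a stream that counts the
// bytes written for one shuffle partition, mirroring the reported bug.
class CountingPartitionStream extends OutputStream {
  // BUG: with an int counter, writing more than Integer.MAX_VALUE bytes
  // (~2 GiB) to one partition silently wraps the count to a negative value.
  private int count = 0;
  // FIX (as in this PR): widen the counter to long, e.g.
  // private long count = 0;

  private final OutputStream delegate;

  CountingPartitionStream(OutputStream delegate) {
    this.delegate = delegate;
  }

  @Override
  public void write(int b) throws IOException {
    delegate.write(b);
    count++; // wraps past 2,147,483,647 while count is an int
  }

  // The count feeds the cumulative offsets written to the shuffle index
  // file; a wrapped negative value corrupts the offset for that reduceId,
  // and readers of the stream then fail with "Stream is corrupted".
  long getCount() {
    return count;
  }
}
```

Widening to `long` fixes the wraparound because a single partition cannot plausibly exceed `Long.MAX_VALUE` bytes; the index file already stores offsets as 8-byte longs, so only the in-memory counter was too narrow.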
