Xingbo Jiang created SPARK-32658:
------------------------------------

Summary: Partition length number overflow in `PartitionWriterStream`
Key: SPARK-32658
URL: https://issues.apache.org/jira/browse/SPARK-32658
Project: Spark
Issue Type: Bug
Components: Spark Core
Affects Versions: 3.0.0
Reporter: Xingbo Jiang
A Spark user reported a `FetchFailedException: Stream is corrupted` error after upgrading their workload to 3.0. The issue occurs when the shuffle output data from a single task is very large (~5GB). It was introduced by https://github.com/apache/spark/commit/abef84a868e9e15f346eea315bbab0ec8ac8e389 : `PartitionWriterStream` declares the partition length as an int value, while it should be a long value.
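The overflow can be illustrated with a minimal sketch (this is not the actual Spark code; the chunk size and accumulation loop are illustrative assumptions). Counting written bytes in an `int` wraps around once a single partition exceeds `Integer.MAX_VALUE` (~2.1GB), so a ~5GB partition reports a bogus length, which later corrupts stream offsets on the fetch side:

```java
// Illustrative demo: an int byte counter wraps for partitions > 2GB,
// while a long counter stays correct. Chunked accumulation mimics how a
// counting OutputStream wrapper would tally bytes on each write() call.
public class PartitionLengthOverflowDemo {
    public static void main(String[] args) {
        long totalBytes = 5L * 1024 * 1024 * 1024; // ~5GB, as in the report
        long chunk = 64L * 1024 * 1024;            // hypothetical 64MB writes

        int intCount = 0;   // buggy: int accumulator, silently wraps mod 2^32
        long longCount = 0; // fix: long accumulator

        for (long done = 0; done < totalBytes; done += chunk) {
            intCount += (int) chunk;
            longCount += chunk;
        }

        // 5368709120 mod 2^32 = 1073741824, so the int counter
        // reports ~1GB for a ~5GB partition.
        System.out.println("int count:  " + intCount);  // 1073741824 (wrong)
        System.out.println("long count: " + longCount); // 5368709120 (correct)
    }
}
```

The downstream symptom matches the report: the shuffle index file records the wrapped (too small) partition length, so readers fetch a truncated or misaligned byte range and fail decompression with `Stream is corrupted`.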