cloud-fan commented on code in PR #44709:
URL: https://github.com/apache/spark/pull/44709#discussion_r1450541011
##########
core/src/main/java/org/apache/spark/shuffle/sort/UnsafeShuffleWriter.java:
##########
@@ -327,12 +327,6 @@ private long[] mergeSpillsUsingStandardWriter(SpillInfo[] spills) throws IOExcep
logger.debug("Using slow merge");
mergeSpillsWithFileStream(spills, mapWriter, compressionCodec);
}
-    // When closing an UnsafeShuffleExternalSorter that has already spilled once but also has
-    // in-memory records, we write out the in-memory records to a file but do not count that
-    // final write as bytes spilled (instead, it's accounted as shuffle write). The merge needs
-    // to be counted as shuffle write, but this will lead to double-counting of the final
-    // SpillInfo's bytes.
-    writeMetrics.decBytesWritten(spills[spills.length - 1].file.length());
Review Comment:
This hack is not needed anymore.
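For context, a toy model of the double-counting the removed `decBytesWritten` call compensated for. This is a hedged sketch with made-up numbers, not Spark's real `ShuffleWriteMetrics` API: the final in-memory flush is counted as shuffle write once when written and again when the merge re-writes it, so the old hack subtracted the last spill file's length.

```java
// Toy illustration (hypothetical values, not Spark's actual metrics API).
public class SpillMetricsDemo {
    public static void main(String[] args) {
        long bytesWritten = 0;

        // Two spill files; the last one (100 bytes) was produced at close()
        // time from in-memory records and was already accounted as shuffle
        // write rather than as bytes spilled.
        long[] spillFileLengths = {200, 100};
        bytesWritten += 100; // the final in-memory flush, counted as shuffle write

        // The merge writes out every spill file's bytes and counts the merged
        // output as shuffle write, so the last file's bytes are counted twice.
        for (long len : spillFileLengths) {
            bytesWritten += len;
        }
        // Now bytesWritten = 100 + 200 + 100 = 400, but only 300 real bytes
        // of shuffle output exist. The removed hack subtracted the last
        // SpillInfo's file length to correct for this:
        bytesWritten -= spillFileLengths[spillFileLengths.length - 1];

        System.out.println(bytesWritten); // 300
    }
}
```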
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]