Github user carsonwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/10634#discussion_r50659285
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeExternalSorter.java
---
@@ -202,6 +201,7 @@ public long spill(long size, MemoryConsumer trigger) throws IOException {
    // pages will currently be counted as memory spilled even though that space isn't actually
    // written to disk. This also counts the space needed to store the sorter's pointer array.
    taskContext.taskMetrics().incMemoryBytesSpilled(spillSize);
+   taskContext.taskMetrics().incDiskBytesSpilled(writeMetrics.shuffleBytesWritten());
--- End diff --
The `spillSize` here is 0 because the data are stored in a map instead of this sorter, so `incMemoryBytesSpilled(spillSize)` actually increments the metric by 0. We need to update `MemoryBytesSpilled` after freeing the memory in the map.
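The accounting issue can be illustrated with a minimal sketch. The classes below are hypothetical stand-ins (not Spark's actual `TaskMetrics` or map implementation): the point is that the sorter's `spillSize` is 0 when the records live in the map, so the metric should instead be incremented with the bytes actually freed from the map.

```java
// Hypothetical stand-in for a task-metrics accumulator.
final class TaskMetricsSketch {
    private long memoryBytesSpilled = 0;
    void incMemoryBytesSpilled(long bytes) { memoryBytesSpilled += bytes; }
    long memoryBytesSpilled() { return memoryBytesSpilled; }
}

// Hypothetical stand-in for the in-memory map holding the records.
final class InMemoryMapSketch {
    private long usedBytes;
    InMemoryMapSketch(long usedBytes) { this.usedBytes = usedBytes; }
    // Releases the map's memory and returns how many bytes were freed.
    long freeMemory() {
        long freed = usedBytes;
        usedBytes = 0;
        return freed;
    }
}

public class SpillAccounting {
    public static void main(String[] args) {
        TaskMetricsSketch metrics = new TaskMetricsSketch();
        InMemoryMapSketch map = new InMemoryMapSketch(4096);

        // Current behavior: the data live in the map, so the sorter
        // reports spillSize == 0 and this call is effectively a no-op.
        long sorterSpillSize = 0;
        metrics.incMemoryBytesSpilled(sorterSpillSize);

        // Suggested accounting: increment the metric with the memory
        // actually freed from the map.
        long freed = map.freeMemory();
        metrics.incMemoryBytesSpilled(freed);

        System.out.println(metrics.memoryBytesSpilled()); // prints 4096
    }
}
```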