Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5622#discussion_r28834809
--- Diff: core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala ---
@@ -732,19 +730,24 @@ private[spark] class ExternalSorter[K, V, C](
       // this simple we spill out the current in-memory collection so that everything is in files.
       spillToPartitionFiles(if (aggregator.isDefined) map else buffer)
       partitionWriters.foreach(_.commitAndClose())
-      val out = new FileOutputStream(outputFile, true)
       val writeStartTime = System.nanoTime
       util.Utils.tryWithSafeFinally {
         for (i <- 0 until numPartitions) {
-          val in = new FileInputStream(partitionWriters(i).fileSegment().file)
-          util.Utils.tryWithSafeFinally {
-            lengths(i) = org.apache.spark.util.Utils.copyStream(in, out, false, transferToEnabled)
-          } {
-            in.close()
+          val file = partitionWriters(i).fileSegment().file
+          if (!file.exists()) {
--- End diff ---
I guess I'm implicitly assuming that the only reason this file would not exist is that no values were written to it. If we can think of cases where this assumption might be violated, such as someone deleting the file out from underneath us, then maybe we should use some other data structure to track whether we should expect a file or not (see the sketch below).
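A minimal sketch of that alternative, assuming a hypothetical `partitionHasData` flag array that gets set when the first record is written to a partition (none of these names come from the actual patch):

    import java.io.{File, FileNotFoundException}

    // Sketch only: distinguish "never written" from "written but missing",
    // instead of inferring emptiness from File.exists().
    def verifyPartitionFile(
        partitionId: Int,
        partitionHasData: Array[Boolean],
        file: File): Boolean = {
      if (partitionHasData(partitionId) && !file.exists()) {
        // We expected a file but it is gone (e.g. deleted out from underneath
        // us); fail loudly instead of silently recording an empty partition.
        throw new FileNotFoundException(s"missing partition file: $file")
      }
      // true => copy this file into the merged output; false => length stays 0.
      partitionHasData(partitionId)
    }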
We could try just `touch`ing the file to ensure that it always exists, but
I found that this had a measurable performance impact compared to not
generating the file in the first place.
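For reference, the `touch` approach would amount to something like the following (sketch; `file` is the per-partition segment file from the diff above):

    // Ensure every partition file exists even if no values were written to it.
    // java.io.File.createNewFile() creates the file atomically and is a no-op
    // (returning false) if it already exists.
    if (!file.exists()) {
      file.createNewFile()
    }

Skipping file creation entirely avoids that per-partition filesystem call, which is presumably where the measured difference comes from.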