[
https://issues.apache.org/jira/browse/SPARK-5581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15186488#comment-15186488
]
Sital Kedia commented on SPARK-5581:
------------------------------------
[~joshrosen] - The issue is not only that we open/close the file output stream
for each partition, but also that we flush the data to disk for each partition.
When the partition size is small and there are many partitions, the cost of
disk I/O is very high. Do you have any idea how we can avoid that?
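For illustration, here is a minimal standalone sketch (plain Scala over
java.io, not Spark's actual DiskBlockObjectWriter internals) of the general
idea: open one buffered stream for the whole map output, write every
partition's records through it while tracking per-partition byte counts for
the index, and flush/close only once at the end. The data and file names here
are hypothetical.

```scala
import java.io.{BufferedOutputStream, File, FileOutputStream}

// Hypothetical partitioned map output: partition id -> records.
val partitions: Seq[(Int, Seq[String])] = Seq(
  (0, Seq("a", "b")),
  (1, Seq.empty),
  (2, Seq("c"))
)

val outputFile = File.createTempFile("shuffle", ".data")
outputFile.deleteOnExit()

val lengths = new Array[Long](partitions.length)

// Open the stream once for the whole map output instead of once per
// partition; the single flush/close happens at the very end.
val out = new BufferedOutputStream(new FileOutputStream(outputFile))
try {
  for ((id, elements) <- partitions) {
    var bytesWritten = 0L
    for (elem <- elements) {
      val bytes = elem.getBytes("UTF-8")
      out.write(bytes)
      bytesWritten += bytes.length
    }
    // Record the segment length; each partition's offset is the running
    // sum of earlier lengths, so no per-partition reopen is needed.
    lengths(id) = bytesWritten
  }
} finally {
  out.close() // one flush + close for all partitions
}

println(lengths.mkString(","))
```

This trades the per-partition commitAndClose()/fileSegment() bookkeeping for
offset arithmetic over a single stream, which is where the I/O savings would
come from.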
> When writing sorted map output file, avoid open / close between each partition
> ------------------------------------------------------------------------------
>
> Key: SPARK-5581
> URL: https://issues.apache.org/jira/browse/SPARK-5581
> Project: Spark
> Issue Type: Improvement
> Components: Shuffle
> Affects Versions: 1.3.0
> Reporter: Sandy Ryza
>
> {code}
> // Bypassing merge-sort; get an iterator by partition and just write
> // everything directly.
> for ((id, elements) <- this.partitionedIterator) {
>   if (elements.hasNext) {
>     val writer = blockManager.getDiskWriter(
>       blockId, outputFile, ser, fileBufferSize,
>       context.taskMetrics.shuffleWriteMetrics.get)
>     for (elem <- elements) {
>       writer.write(elem)
>     }
>     writer.commitAndClose()
>     val segment = writer.fileSegment()
>     lengths(id) = segment.length
>   }
> }
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)