infoverload commented on a change in pull request #476:
URL: https://github.com/apache/flink-web/pull/476#discussion_r735703993



##########
File path: _posts/2021-10-15-sort-shuffle-part2.md
##########
@@ -0,0 +1,154 @@
+---
+layout: post
+title: "Sort-Based Blocking Shuffle Implementation in Flink - Part Two"
+date: 2021-10-15 00:00:00
+authors:
+- Yingjie Cao:
+  name: "Yingjie Cao (Kevin)"
+- Daisy Tsang:
+  name: "Daisy Tsang"
+excerpt: Flink has implemented the sort-based blocking shuffle (FLIP-148) for 
batch data processing. In this blog post, we will take a close look at the 
design & implementation details and see what we can gain from it.
+---
+
+Part two of this blog post describes the [design considerations](#design-considerations) and [implementation details](#implementation-details) of Flink's sort-based blocking shuffle in depth and lists several [potential improvements](#future-improvements) that can be implemented in the future.
+
+{% toc %}
+
+# Abstract
+
+Like the sort-merge shuffle implemented by other distributed data processing frameworks, the whole sort-based shuffle process in Flink consists of several important stages: collecting data in memory, sorting the collected data in memory, spilling the sorted data to files, and reading the shuffle data back from these spilled files. However, Flink's implementation has some core differences, including the multiple-data-region file structure, the removal of file merging, and IO scheduling. The following sections describe some core design considerations and implementations of the sort-based blocking shuffle in Flink.
+
+# Design Considerations
+
+There are several core objectives we want to achieve for the new sort-based blocking shuffle to be implemented in Flink:
+
+## Produce Fewer (Small) Files
+
+As discussed above, the hash-based blocking shuffle produces too many small files for large-scale batch jobs. Producing fewer files helps to improve both stability and performance. The sort-merge approach has been widely adopted to solve this problem: by first writing to an in-memory buffer and then sorting the data and spilling it into a file once the buffer is full, the number of output files is reduced to (total data size) / (in-memory buffer size). By then merging the produced files together, the number of files can be reduced further, and larger data blocks provide better sequential reads.
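As a back-of-the-envelope illustration (this is not Flink code, just the arithmetic from the paragraph above), the number of spilled files under the classic sort-spill scheme is the total output size divided by the sort buffer size, rounded up:

```java
public class SpillFileCount {

    // Illustrative arithmetic only: each time the in-memory sort buffer
    // fills, one sorted file is spilled, so the file count is roughly
    // (total data size) / (in-memory buffer size), rounded up.
    static long numSpilledFiles(long totalDataBytes, long bufferBytes) {
        return (totalDataBytes + bufferBytes - 1) / bufferBytes; // ceiling division
    }

    public static void main(String[] args) {
        // e.g. 10 GiB of output with a 512 MiB sort buffer -> 20 spilled files
        System.out.println(numSpilledFiles(10L << 30, 512L << 20));
    }
}
```

This is why a larger sort buffer directly reduces the number of spilled files before any merging takes place.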
+
+Flink’s sort-based blocking shuffle adopts a similar logic. A core difference is that data spilling always appends to the same file, so only one file is spilled for each output, which means even fewer files are produced.
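To make the append-to-one-file idea concrete, here is a hypothetical sketch (the class and method names are invented for illustration and do not come from Flink's codebase): every spill is appended to the end of a single data file, and the start offset of each spilled region is recorded so readers can locate it later.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch: all spills of one output go into a single file. */
class SingleFileSpiller {
    private final FileChannel channel;
    private final List<Long> regionOffsets = new ArrayList<>();

    SingleFileSpiller(Path dataFile) throws IOException {
        this.channel = FileChannel.open(
            dataFile, StandardOpenOption.CREATE, StandardOpenOption.WRITE);
    }

    /** Appends one sorted in-memory region; no new file is ever created. */
    void spill(ByteBuffer sortedRegion) throws IOException {
        regionOffsets.add(channel.position()); // remember where this region starts
        while (sortedRegion.hasRemaining()) {
            channel.write(sortedRegion);
        }
    }

    /** Start offset of each spilled region, for locating data when reading. */
    List<Long> regionOffsets() {
        return regionOffsets;
    }
}
```

The design choice to keep appending to one file is what removes the need for a separate merge phase: the file count per output stays at one regardless of how many times the sort buffer is spilled.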
+
+## Open Fewer Files Concurrently

Review comment:
       ```suggestion
   ## Open fewer files concurrently
   ```



