wsry commented on a change in pull request #18505:
URL: https://github.com/apache/flink/pull/18505#discussion_r802577256



##########
File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SortMergeResultPartition.java
##########
@@ -365,16 +392,17 @@ private void updateStatistics(Buffer buffer, boolean isBroadcast) {
     private void writeLargeRecord(
             ByteBuffer record, int targetSubpartition, DataType dataType, boolean isBroadcast)
             throws IOException {
+        checkState(numBuffersForWrite > 0, "No buffers available for writing.");

Review comment:
      For the hash-based implementation, a large record is appended to the sort buffer; when the data buffer is full, the partial data of the record is spilled as a data region, and the remaining data of the large record is appended to the sort buffer again. That is to say, a large record can span multiple data regions.
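
      A minimal sketch of that spanning behavior, assuming a hypothetical in-memory data buffer with a fixed 1024-byte capacity (the class name, `append`, and `spillRegion` are illustrative only, not Flink's actual sort/data buffer API):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch (not Flink code): a large record spilled across multiple data regions. */
public class LargeRecordSpanningSketch {

    private static final int BUFFER_CAPACITY = 1024; // hypothetical data buffer size

    private final ByteBuffer dataBuffer = ByteBuffer.allocate(BUFFER_CAPACITY);
    private final List<byte[]> spilledRegions = new ArrayList<>();

    /** Appends a record, spilling the data buffer as a new region whenever it fills up. */
    public void append(ByteBuffer record) {
        while (record.hasRemaining()) {
            // Copy as much of the record as fits into the current data buffer.
            int toCopy = Math.min(record.remaining(), dataBuffer.remaining());
            byte[] chunk = new byte[toCopy];
            record.get(chunk);
            dataBuffer.put(chunk);

            // When the data buffer is full, spill its contents as one data region
            // and reuse the buffer for the rest of the (large) record.
            if (!dataBuffer.hasRemaining()) {
                spillRegion();
            }
        }
    }

    private void spillRegion() {
        dataBuffer.flip();
        byte[] region = new byte[dataBuffer.remaining()];
        dataBuffer.get(region);
        spilledRegions.add(region); // stands in for writing a finished region to disk
        dataBuffer.clear();
    }

    public static void main(String[] args) {
        LargeRecordSpanningSketch sketch = new LargeRecordSpanningSketch();
        // A 3000-byte record is larger than one buffer, so it spans multiple regions.
        sketch.append(ByteBuffer.wrap(new byte[3000]));
        System.out.println("Full regions spilled so far: " + sketch.spilledRegions.size()); // 2
    }
}
```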



