danny0405 commented on code in PR #7978:
URL: https://github.com/apache/hudi/pull/7978#discussion_r1118445446


##########
hudi-common/src/main/java/org/apache/hudi/io/storage/HoodieBaseParquetWriter.java:
##########
@@ -62,17 +63,28 @@ public HoodieBaseParquetWriter(Path file,
     // stream and the actual file size reported by HDFS
     this.maxFileSize = parquetConfig.getMaxFileSize()
         + Math.round(parquetConfig.getMaxFileSize() * parquetConfig.getCompressionRatio());
+    this.recordCountForNextSizeCheck = DEFAULT_MINIMUM_RECORD_COUNT_FOR_CHECK;
   }
 
   public boolean canWrite() {
-    // TODO we can actually do evaluation more accurately:
-    //      if we cache last data size check, since we account for how many records
-    //      were written we can accurately project avg record size, and therefore
-    //      estimate how many more records we can write before cut off
-    if (lastCachedDataSize == -1 || getWrittenRecordCount() % WRITTEN_RECORDS_THRESHOLD_FOR_FILE_SIZE_CHECK == 0) {
-      lastCachedDataSize = getDataSize();
+    long writtenCount = getWrittenRecordCount();
+    if (writtenCount >= recordCountForNextSizeCheck) {
+      long dataSize = getDataSize();
+      // In some very extreme cases, like all records are same value, then it's possible
+      // the dataSize is much lower than the writtenRecordCount(high compression ratio),
+      // causing avgRecordSize to 0, we'll force the avgRecordSize to 1 for such cases.
+      long avgRecordSize = Math.max(dataSize / writtenCount, 1);
+      // Follow the parquet block size check logic here, return false

Review Comment:
   Is `1` reasonable, given we already have a default of `1024` bytes in `HoodieCompactionConfig.COPY_ON_WRITE_RECORD_SIZE_ESTIMATE`?
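
   For readers outside this thread: the hunk above replaces a fixed-interval size check with a projection based on the observed average record size. A minimal, self-contained sketch of that idea follows; the class name and the MIN/MAX constants are hypothetical illustrations, not Hudi's actual code.

```java
/**
 * A minimal sketch of the adaptive size-check idea from the hunk above.
 * Class and constant names here are hypothetical, not Hudi's.
 */
public class SimpleSizeChecker {

  // Bounds on how many records may pass between two size checks
  // (assumed values, loosely mirroring Parquet's record-count knobs).
  private static final long MIN_RECORDS_BETWEEN_CHECKS = 100;
  private static final long MAX_RECORDS_BETWEEN_CHECKS = 10_000;

  private final long maxFileSize;
  private long recordCountForNextSizeCheck = MIN_RECORDS_BETWEEN_CHECKS;

  public SimpleSizeChecker(long maxFileSize) {
    this.maxFileSize = maxFileSize;
  }

  /**
   * Returns false once the file has reached maxFileSize.
   * writtenCount is the number of records written so far,
   * dataSize the current serialized size in bytes.
   */
  public boolean canWrite(long writtenCount, long dataSize) {
    if (writtenCount >= recordCountForNextSizeCheck) {
      if (dataSize >= maxFileSize) {
        return false;
      }
      // Guard against a zero average under extreme compression,
      // which is the case the review comment is about.
      long avgRecordSize = Math.max(dataSize / Math.max(writtenCount, 1), 1);
      // Project how many more records fit, then schedule the next check
      // halfway there, clamped to the bounds above.
      long remaining = (maxFileSize - dataSize) / avgRecordSize;
      recordCountForNextSizeCheck = writtenCount
          + Math.min(Math.max(remaining / 2, MIN_RECORDS_BETWEEN_CHECKS), MAX_RECORDS_BETWEEN_CHECKS);
    }
    return true;
  }
}
```

   The trade-off the review comment raises applies to the `Math.max(..., 1)` floor: a 1-byte fallback makes the projection very optimistic, so the next check may be scheduled far later than a larger floor (such as the 1024-byte estimate) would allow.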



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
