boneanxs commented on code in PR #7978:
URL: https://github.com/apache/hudi/pull/7978#discussion_r1109424306
##########
hudi-common/src/main/java/org/apache/hudi/io/storage/HoodieBaseParquetWriter.java:
##########
@@ -65,14 +63,18 @@ public HoodieBaseParquetWriter(Path file,
}
   public boolean canWrite() {
 -    // TODO we can actually do evaluation more accurately:
 -    //      if we cache last data size check, since we account for how many records
 -    //      were written we can accurately project avg record size, and therefore
 -    //      estimate how many more records we can write before cut off
 -    if (lastCachedDataSize == -1 || getWrittenRecordCount() % WRITTEN_RECORDS_THRESHOLD_FOR_FILE_SIZE_CHECK == 0) {
 -      lastCachedDataSize = getDataSize();
 +    if (getWrittenRecordCount() >= recordNumForNextCheck) {
 +      long dataSize = getDataSize();
 +      long avgRecordSize = dataSize / getWrittenRecordCount();
 +      // Follow the parquet block size check logic here, return false
 +      // if it is within ~2 records of the limit
 +      if (dataSize > (maxFileSize - avgRecordSize * 2)) {
 +        return false;
Review Comment:
Follow this: https://github.com/apache/parquet-mr/blob/261f7d2679407c833545b56f4c85a4ae8b5c9ed4/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/InternalParquetRecordWriter.java#L154
We had better recalculate the avgRecordSize on every batch check, since the estimate becomes more and more accurate as more records are written.
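
For illustration, a minimal sketch of that idea: recompute the average record size on every check and schedule the next check the way parquet-mr does. `recordNumForNextCheck` and `maxFileSize` come from the diff above; `MINIMUM_RECORD_COUNT_FOR_CHECK` / `MAXIMUM_RECORD_COUNT_FOR_CHECK` are hypothetical bounds mirroring parquet-mr's constants, not names from this PR.

```java
public boolean canWrite() {
  if (getWrittenRecordCount() >= recordNumForNextCheck) {
    long dataSize = getDataSize();
    // Recompute the average on every check so the estimate keeps improving
    // as more records are written (guard against a zero-size first check).
    long avgRecordSize = Math.max(1, dataSize / getWrittenRecordCount());
    // Stop if we are within ~2 records of the limit; it is better to be
    // slightly under the target file size than over it.
    if (dataSize > (maxFileSize - avgRecordSize * 2)) {
      return false;
    }
    // Check again roughly halfway between now and the projected record count
    // at which the file would reach maxFileSize, bounded by the hypothetical
    // MINIMUM_/MAXIMUM_RECORD_COUNT_FOR_CHECK constants as in parquet-mr.
    recordNumForNextCheck = Math.min(
        Math.max(MINIMUM_RECORD_COUNT_FOR_CHECK,
            (getWrittenRecordCount() + maxFileSize / avgRecordSize) / 2),
        getWrittenRecordCount() + MAXIMUM_RECORD_COUNT_FOR_CHECK);
  }
  return true;
}
```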