boneanxs commented on code in PR #7978:
URL: https://github.com/apache/hudi/pull/7978#discussion_r1112470451
##########
hudi-common/src/main/java/org/apache/hudi/io/storage/HoodieBaseParquetWriter.java:
##########
@@ -65,14 +63,18 @@ public HoodieBaseParquetWriter(Path file,
}
public boolean canWrite() {
-    // TODO we can actually do evaluation more accurately:
-    //      if we cache last data size check, since we account for how many records
-    //      were written we can accurately project avg record size, and therefore
-    //      estimate how many more records we can write before cut off
-    if (lastCachedDataSize == -1 || getWrittenRecordCount() % WRITTEN_RECORDS_THRESHOLD_FOR_FILE_SIZE_CHECK == 0) {
-      lastCachedDataSize = getDataSize();
+    if (getWrittenRecordCount() >= recordNumForNextCheck) {
+      long dataSize = getDataSize();
+      long avgRecordSize = dataSize / getWrittenRecordCount();
+      // Follow the parquet block size check logic here, return false
+      // if it is within ~2 records of the limit
+      if (dataSize > (maxFileSize - avgRecordSize * 2)) {
+        return false;
Review Comment:
Yes, from my understanding, if the user sets a max file size, we'd better not
exceed that value.
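The guard discussed above can be sketched in isolation. This is a hypothetical stand-alone version, not the PR's actual class: `HeadroomCheck` and its parameters stand in for the writer's state (`getDataSize()`, the running average record size, and the configured `maxFileSize`).

```java
public class HeadroomCheck {
    // Sketch of the canWrite() guard: reject further writes once the file is
    // within ~2 average records of the configured maximum, so the cap set by
    // the user is never exceeded.
    static boolean canWrite(long dataSize, long avgRecordSize, long maxFileSize) {
        return dataSize <= maxFileSize - avgRecordSize * 2;
    }

    public static void main(String[] args) {
        // 1 MB cap, ~100-byte records: 999_900 bytes leaves less than
        // 2 records of headroom, so writing must stop.
        System.out.println(canWrite(999_900, 100, 1_000_000)); // false
        System.out.println(canWrite(900_000, 100, 1_000_000)); // true
    }
}
```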
##########
hudi-common/src/main/java/org/apache/hudi/io/storage/HoodieBaseParquetWriter.java:
##########
@@ -65,14 +63,18 @@ public HoodieBaseParquetWriter(Path file,
}
public boolean canWrite() {
-    // TODO we can actually do evaluation more accurately:
-    //      if we cache last data size check, since we account for how many records
-    //      were written we can accurately project avg record size, and therefore
-    //      estimate how many more records we can write before cut off
-    if (lastCachedDataSize == -1 || getWrittenRecordCount() % WRITTEN_RECORDS_THRESHOLD_FOR_FILE_SIZE_CHECK == 0) {
-      lastCachedDataSize = getDataSize();
+    if (getWrittenRecordCount() >= recordNumForNextCheck) {
+      long dataSize = getDataSize();
+      long avgRecordSize = dataSize / getWrittenRecordCount();
+      // Follow the parquet block size check logic here, return false
+      // if it is within ~2 records of the limit
+      if (dataSize > (maxFileSize - avgRecordSize * 2)) {
+        return false;
+      }
+      // Do the next check at the halfway point
+      recordNumForNextCheck = (recordNumForNextCheck + maxFileSize / avgRecordSize) / 2;
Review Comment:
At first I thought this issue only happens when `avgRecordSize` varies greatly
(large variance), and in that case `props.getMaxRowCountForPageSizeCheck()`
doesn't help much, because it requires users to understand their data well.
But I think you're right; let me add the two configs
`getMinRowCountForPageSizeCheck` and `getMaxRowCountForPageSizeCheck`, which at
least gives users the ability to adjust it.
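The adaptive interval in the hunk above ("do the next check at the halfway point"), combined with the min/max row-count bounds the commenter proposes adding, could be sketched as below. This is an assumption-laden sketch, not the PR's code: `NextCheckSketch`, `minGap`, and `maxGap` are hypothetical, with the bounds merely mirroring the intent of parquet's `getMinRowCountForPageSizeCheck`/`getMaxRowCountForPageSizeCheck` settings.

```java
public class NextCheckSketch {
    // Given the current check point and the observed average record size,
    // project the record count at which the file would hit maxFileSize,
    // schedule the next check halfway there, then clamp the resulting gap
    // from the current written count into [minGap, maxGap].
    static long nextCheck(long written, long currentCheck, long maxFileSize,
                          long avgRecordSize, long minGap, long maxGap) {
        long projected = maxFileSize / avgRecordSize;          // records at the cap
        long halfway = (currentCheck + projected) / 2;         // "check it halfway"
        return Math.min(Math.max(halfway, written + minGap), written + maxGap);
    }

    public static void main(String[] args) {
        // 1000 records written, ~100 bytes each, 1 MB cap:
        // projection is 10_000 records, halfway from 1000 is 5500.
        System.out.println(nextCheck(1000, 1000, 1_000_000, 100, 100, 10_000)); // 5500
        // A tight maxGap caps how long the writer goes without a size check.
        System.out.println(nextCheck(1000, 1000, 1_000_000, 100, 100, 2_000));  // 3000
    }
}
```

The clamp addresses the reviewer's variance concern: even if `avgRecordSize` drifts between checks, the writer never skips more than `maxGap` records before re-measuring.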
##########
hudi-common/src/main/java/org/apache/hudi/io/storage/HoodieBaseParquetWriter.java:
##########
@@ -36,11 +36,9 @@
*/
public abstract class HoodieBaseParquetWriter<R> extends ParquetWriter<R> {
-  private static final int WRITTEN_RECORDS_THRESHOLD_FOR_FILE_SIZE_CHECK = 1000;
-
   private final AtomicLong writtenRecordCount = new AtomicLong(0);
   private final long maxFileSize;
-  private long lastCachedDataSize = -1;
+  private long recordNumForNextCheck = 100;
Review Comment:
sure
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]