honeyaya commented on code in PR #7255:
URL: https://github.com/apache/hudi/pull/7255#discussion_r1027630040
##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/table/action/commit/UpsertPartitioner.java:
##########
@@ -372,7 +372,7 @@ protected static long averageBytesPerRecord(HoodieTimeline commitTimeline, Hoodi
     long avgSize = hoodieWriteConfig.getCopyOnWriteRecordSizeEstimate();
     long fileSizeThreshold = (long) (hoodieWriteConfig.getRecordSizeEstimationThreshold() * hoodieWriteConfig.getParquetSmallFileLimit());
     try {
-      if (!commitTimeline.empty()) {
+      if (hoodieWriteConfig.getRecordSizeEstimationThreshold() > 0 && !commitTimeline.empty()) {
         // Go over the reverse ordered commits to get a more recent estimate of average record size.
         Iterator<HoodieInstant> instants = commitTimeline.getReverseOrderedInstants().iterator();
Review Comment:
   Hi, we found that the totalBytesWritten/totalRecordsWritten estimate from the last commit can be small, and when the next commit then writes a very large number of records, the data files become very large; in our case they grew to 600~700M.
   Moreover, our record size is fixed at about 100M, so we don't want the above algorithm to compute avgSize for us; we would rather use the default estimated record size configured by the user.
   Although the default value of the estimation threshold is 1.0, I want to extend the semantics of this property rather than add another boolean property to control this: when it is set to a non-positive value, the user-configured default size is used instead of totalBytesWritten/totalRecordsWritten (see the sketch below).