guanlisheng commented on code in PR #9013:
URL: https://github.com/apache/hudi/pull/9013#discussion_r1241468919
##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/table/action/commit/UpsertPartitioner.java:
##########
@@ -170,7 +171,7 @@ private void assignInserts(WorkloadProfile profile, HoodieEngineContext context)
      * may result in OOM by making spark underestimate the actual input record sizes.
      */
     long averageRecordSize = averageBytesPerRecord(table.getMetaClient().getActiveTimeline()
-        .getTimelineOfActions(CollectionUtils.createSet(COMMIT_ACTION)).filterCompletedInstants(), config);
+        .getTimelineOfActions(CollectionUtils.createSet(COMMIT_ACTION, DELTA_COMMIT_ACTION)).filterCompletedInstants(), config);
Review Comment:
`averageBytesPerRecord` itself is not changed or impacted: it simply walks the completed commits in reverse order and uses the first eligible one to calculate the average record size. That also explains why #6864 did not modify it either.
Here are the existing UTs for the function. I do see the value of a UT, but it is not easy to add one for this change given the current code structure:
https://github.com/apache/hudi/blob/master/hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/table/action/commit/TestUpsertPartitioner.java#L168-L190
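To make the reverse-scan behaviour above concrete, here is a minimal, self-contained Java sketch of that logic. It is not the actual Hudi implementation: `CommitStats`, the threshold, and the fallback estimate are hypothetical stand-ins for `HoodieCommitMetadata` and the write-config values; only the shape of the loop (newest commit first, first eligible one wins) mirrors what `averageBytesPerRecord` does.

```java
import java.util.List;

public class AverageRecordSizeSketch {

  // Hypothetical stand-in for the per-commit write statistics
  // (in Hudi these come from the commit metadata).
  static final class CommitStats {
    final long totalBytesWritten;
    final long totalRecordsWritten;

    CommitStats(long totalBytesWritten, long totalRecordsWritten) {
      this.totalBytesWritten = totalBytesWritten;
      this.totalRecordsWritten = totalRecordsWritten;
    }
  }

  /**
   * Walk completed commits from newest to oldest and derive the average record
   * size from the first commit that wrote enough data to be a reliable sample.
   * Falls back to a configured default estimate if no commit is eligible.
   */
  static long averageBytesPerRecord(List<CommitStats> commitsNewestFirst,
                                    long minBytesThreshold,
                                    long defaultEstimate) {
    for (CommitStats commit : commitsNewestFirst) {
      if (commit.totalBytesWritten > minBytesThreshold && commit.totalRecordsWritten > 0) {
        return (long) Math.ceil((double) commit.totalBytesWritten / commit.totalRecordsWritten);
      }
    }
    return defaultEstimate;
  }

  public static void main(String[] args) {
    List<CommitStats> commits = List.of(
        new CommitStats(512, 4),                   // newest commit: too small, skipped
        new CommitStats(100_000_000, 1_000_000));  // older commit: eligible, ~100 bytes/record
    System.out.println(averageBytesPerRecord(commits, 1_000_000, 1024)); // prints 100
  }
}
```

With the diff above, delta commits are now part of the timeline passed into this scan, so on MOR tables the first eligible sample can also come from a delta commit.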