jonvex commented on code in PR #10137:
URL: https://github.com/apache/hudi/pull/10137#discussion_r1408161069
##########
hudi-common/src/main/java/org/apache/hudi/common/model/HoodieRecordMerger.java:
##########
@@ -122,6 +125,23 @@ default boolean shouldFlush(HoodieRecord record, Schema schema, TypedProperties
return true;
}
+ default String[] getMandatoryFieldsForMerging(HoodieTableConfig cfg) {
+ ArrayList<String> requiredFields = new ArrayList<>();
+ if (cfg.populateMetaFields()) {
+ requiredFields.add(HoodieRecord.RECORD_KEY_METADATA_FIELD);
+ } else {
+ cfg.getRecordKeyFieldStream().forEach(requiredFields::add);
+ }
+ String preCombine = cfg.getPreCombineField();
+
+ //maybe throw exception otherwise
Review Comment:
I decided not to throw an exception because, if we are going to enforce that
precombine is required, I think it should be enforced on the write side, not the
read side. I also don't think the caller should have to worry about potential
nulls in the returned array. Do you think it would be better to let it NPE? I
don't have a strong opinion on what the correct behavior is here.
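To make the trade-off concrete, here is a minimal, self-contained sketch of the null-guard approach being discussed (not the actual Hudi code): the precombine field is simply skipped when it is null or empty, so the returned array never contains nulls. `mandatoryFields` and its parameters are hypothetical stand-ins for the `HoodieTableConfig`-based logic in the diff.

```java
import java.util.ArrayList;
import java.util.List;

public class MergeFieldsSketch {

  // Hypothetical stand-in for getMandatoryFieldsForMerging: recordKeyFields
  // plays the role of the configured record key fields, and preCombine mimics
  // HoodieTableConfig#getPreCombineField, which can be null/empty when unset.
  static String[] mandatoryFields(List<String> recordKeyFields, String preCombine) {
    ArrayList<String> requiredFields = new ArrayList<>(recordKeyFields);
    // Guard instead of throwing: silently skip a missing precombine field,
    // so callers never have to deal with nulls in the returned array.
    if (preCombine != null && !preCombine.isEmpty()) {
      requiredFields.add(preCombine);
    }
    return requiredFields.toArray(new String[0]);
  }

  public static void main(String[] args) {
    // Precombine unset: only the record key survives, no null entry.
    System.out.println(mandatoryFields(List.of("uuid"), null).length);   // 1
    // Precombine set: both fields are returned.
    System.out.println(mandatoryFields(List.of("uuid"), "ts").length);   // 2
  }
}
```

The alternative (letting it NPE or throwing eagerly) would surface misconfiguration sooner but, as noted above, arguably belongs on the write side rather than in a read-side accessor.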
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]