jonvex commented on code in PR #12317:
URL: https://github.com/apache/hudi/pull/12317#discussion_r1856881049
##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieSparkSqlWriter.scala:
##########
@@ -755,7 +755,9 @@ class HoodieSparkSqlWriterInternal {
.setPayloadClassName(payloadClass)
.setRecordMergeMode(RecordMergeMode.getValue(hoodieConfig.getString(HoodieWriteConfig.RECORD_MERGE_MODE)))
.setRecordMergeStrategyId(recordMergerStrategy)
-    .setPreCombineField(hoodieConfig.getStringOrDefault(PRECOMBINE_FIELD, null))
+    // we can't fetch preCombine field from hoodieConfig object, since it falls back to "ts" as default value,
Review Comment:
I am very much in favor of doing that. But users who rely on `ts` as the
precombine field without setting it explicitly will see a behavior change. In
the table version 6 to 8 upgrade flow, we could check the table schema, and if
the precombine field is not set but the schema includes a field `ts`, we could
add it to hoodie.properties. @yihua didn't think we should make that change,
though.
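The upgrade-flow idea above could be sketched roughly as follows. This is a hypothetical illustration, not Hudi's actual upgrade API: the object name, the `hoodie.table.precombine.field` key lookup against a plain `Map`, and the `schemaFields` parameter are all assumptions made for the sketch.

```scala
// Hypothetical sketch of the proposed 6 -> 8 upgrade check: if no precombine
// field is configured but the table schema has a "ts" field, pin "ts" into
// the table properties so the legacy implicit default keeps working.
object PreCombineUpgradeCheck {
  def inferPreCombineField(tableProps: Map[String, String],
                           schemaFields: Set[String]): Option[String] = {
    tableProps.get("hoodie.table.precombine.field") match {
      case some @ Some(_) => some            // explicitly set: keep user's value
      case None if schemaFields.contains("ts") =>
        Some("ts")                           // legacy implicit default applies
      case None => None                      // no precombine field at all
    }
  }
}
```

Pinning `ts` only when the schema actually contains such a field would preserve the old fallback behavior for exactly the tables that depended on it, which is the behavior-change concern raised above.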
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]