pengzhiwei2018 commented on a change in pull request #2497:
URL: https://github.com/apache/hudi/pull/2497#discussion_r566917206



##########
File path: hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/hudi/MergeOnReadIncrementalRelation.scala
##########
@@ -78,7 +78,16 @@ class MergeOnReadIncrementalRelation(val sqlContext: SQLContext,
   private val tableStructSchema = AvroConversionUtils.convertAvroSchemaToStructType(tableAvroSchema)
   private val maxCompactionMemoryInBytes = getMaxCompactionMemoryInBytes(jobConf)
   private val fileIndex = buildFileIndex()
-
+  private val preCombineField = {
+    val fieldFromTableConfig = metaClient.getTableConfig.getPreCombineField
+    if (fieldFromTableConfig != null) {
+      fieldFromTableConfig
+    } else if (optParams.contains(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY)) {

Review comment:
       Yes, the PRECOMBINE_FIELD_OPT_KEY is used here for compatibility with old 
tables that have not stored the preCombineField in hoodie.properties. Since 
reusing the write option on the read path is a bit odd, I have added a 
`READ_PRECOMBINE_FIELD` option to `DataSourceReadOptions`.
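
       The resolution order described above could be sketched roughly as follows. 
This is only an illustration of the intent, not the actual patch; the key strings 
and the `resolve` helper are hypothetical stand-ins, and the real lookup order in 
the PR is table config first, then the option map:

```scala
// Hedged sketch, not the actual Hudi code: illustrates the precombine-field
// resolution order discussed in this comment. Key names below are placeholders.
object PreCombineFieldResolution {
  // Hypothetical stand-ins for the config keys mentioned above.
  val READ_PRECOMBINE_FIELD = "hoodie.datasource.read.precombine.field"
  val PRECOMBINE_FIELD_OPT_KEY = "hoodie.datasource.write.precombine.field"

  // fieldFromTableConfig mirrors metaClient.getTableConfig.getPreCombineField,
  // which is null for old tables that never persisted it to hoodie.properties.
  def resolve(fieldFromTableConfig: String,
              optParams: Map[String, String]): Option[String] = {
    Option(fieldFromTableConfig)                       // 1. hoodie.properties wins
      .orElse(optParams.get(READ_PRECOMBINE_FIELD))    // 2. explicit read option
      .orElse(optParams.get(PRECOMBINE_FIELD_OPT_KEY)) // 3. legacy write option
  }
}
```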




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
