zhuanshenbsj1 commented on code in PR #11031:
URL: https://github.com/apache/hudi/pull/11031#discussion_r1580701002
##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/table/HoodieTableSource.java:
##########
@@ -518,7 +499,7 @@ private MergeOnReadInputFormat mergeOnReadInputFormat(
tableAvroSchema.toString(),
AvroSchemaConverter.convertToSchema(requiredRowType).toString(),
inputSplits,
- conf.getString(FlinkOptions.RECORD_KEY_FIELD).split(","));
+ OptionsResolver.getRecordKeyField(conf));
Review Comment:
1. The columns recorded by the upstream writer (in the existing table
metadata) do not match the columns configured by the downstream streaming
read; for example, a configured column does not exist in the table, so the
column cannot be found.
-> Validation fails, throwing an exception
2. The record key configuration does not exist.
-> Validation fails, throwing an exception
3. Case mismatch: columns created via Calcite upstream are all lowercase,
so a downstream column containing uppercase letters, such as "eventTime",
will not be found.
-> Uniformly convert to lowercase
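The three cases above could be sketched roughly as follows. This is a hypothetical illustration, not the actual `OptionsResolver.getRecordKeyField` implementation; the method and class names here are invented for the example, and the assumed behavior is: reject a missing record key option, lowercase-normalize the configured fields, and throw if a field is absent from the table's (lowercase) columns.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

public class RecordKeyValidation {

  /**
   * Hypothetical sketch of the validation discussed above.
   *
   * @param recordKeyOption comma-separated record key fields from the config
   * @param tableColumns    column names recorded in the existing table metadata
   * @return the resolved, lowercased record key fields
   */
  public static String[] resolveRecordKeyFields(String recordKeyOption, List<String> tableColumns) {
    // Case 2: the record key configuration does not exist -> throw
    if (recordKeyOption == null || recordKeyOption.trim().isEmpty()) {
      throw new IllegalArgumentException("Record key configuration does not exist");
    }
    // Upstream columns created via Calcite are assumed lowercase; normalize for comparison
    List<String> lowerColumns = tableColumns.stream()
        .map(c -> c.toLowerCase(Locale.ROOT))
        .collect(Collectors.toList());
    return Arrays.stream(recordKeyOption.split(","))
        // Case 3: lowercase the configured fields, e.g. "eventTime" -> "eventtime"
        .map(f -> f.trim().toLowerCase(Locale.ROOT))
        .peek(f -> {
          // Case 1: a configured column is missing from the table metadata -> throw
          if (!lowerColumns.contains(f)) {
            throw new IllegalArgumentException("Record key field not found in table schema: " + f);
          }
        })
        .toArray(String[]::new);
  }
}
```

With this scheme, a downstream config of `uuid,eventTime` against upstream columns `uuid, eventtime, ts` resolves to `{"uuid", "eventtime"}` instead of failing on the case mismatch.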
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]