danny0405 commented on code in PR #11031:
URL: https://github.com/apache/hudi/pull/11031#discussion_r1580753077
##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/table/HoodieTableSource.java:
##########
@@ -518,7 +499,7 @@ private MergeOnReadInputFormat mergeOnReadInputFormat(
tableAvroSchema.toString(),
AvroSchemaConverter.convertToSchema(requiredRowType).toString(),
inputSplits,
- conf.getString(FlinkOptions.RECORD_KEY_FIELD).split(","));
+ OptionsResolver.getRecordKeyField(conf));
Review Comment:
> Case problem. The columns created based on Calcite in the upstream are all
lowercase. If there are uppercase columns in the downstream, such as
"eventTime", the columns will not be found.
-> Uniformly convert to lowercase
This is not expected to be handled by Hudi, I think. At least at the
catalog layer, we should keep case-sensitivity agnostic to specific engines.
> The table created by the upstream write (recorded in the existing
metadata) do not match the columns configured by the do
In `HoodieTableFactory#createDynamicTableSource`, adding a sanity check
between the catalog table's resolved schema and the existing Hudi table
schema should be enough, I guess. Similar for the primary key definition.
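As a rough illustration of the kind of sanity check meant here, the sketch
below compares field names case-insensitively, so the check itself stays
case-agnostic across engines. Note this is a simplified, hypothetical helper
operating on plain field-name lists, not the actual Hudi/Flink schema types
(`ResolvedSchema` etc.), and the class and method names are made up for the
example:

```java
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

// Hypothetical sketch of a schema sanity check; not actual Hudi code.
public class SchemaSanityCheck {

  /** Returns catalog fields that are missing from the existing table schema,
   *  comparing names case-insensitively. */
  public static List<String> missingFields(List<String> catalogFields, List<String> tableFields) {
    List<String> lowerTable = tableFields.stream()
        .map(f -> f.toLowerCase(Locale.ROOT))
        .collect(Collectors.toList());
    return catalogFields.stream()
        .filter(f -> !lowerTable.contains(f.toLowerCase(Locale.ROOT)))
        .collect(Collectors.toList());
  }

  /** Fails fast if the resolved catalog schema references fields
   *  the existing table schema does not have. */
  public static void validate(List<String> catalogFields, List<String> tableFields) {
    List<String> missing = missingFields(catalogFields, tableFields);
    if (!missing.isEmpty()) {
      throw new IllegalArgumentException(
          "Catalog schema fields not found in existing table schema: " + missing);
    }
  }
}
```

With a check like this, a downstream schema declaring "eventTime" would still
match an upstream "eventtime" column, while a genuinely unknown field fails
fast at table-source creation instead of surfacing later as a missing column.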
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]