cshuo commented on code in PR #12598:
URL: https://github.com/apache/hudi/pull/12598#discussion_r1906336062
##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/sink/clustering/ClusteringOperator.java:
##########
@@ -344,6 +349,27 @@ private Iterator<RowData> readRecordsForGroupBaseFiles(List<ClusteringOperation>
return new ConcatenatingIterator<>(iteratorsForPartition);
}
+  /**
+   * Since Flink and Spark differ in how they handle the nullability of the primary key field,
+   * and some of the files may have been written by Spark, we reconcile the read schema against
+   * the write schema to promote nullability, so that schema validation does not fail.
+   *
+   * @param clusteringOperation the clustering operation
+   * @return schema with nullability constraints reconciled
+   */
+  private Schema reconcileSchemaWithNullability(ClusteringOperation clusteringOperation) {
+    String instantTs = StringUtils.isNullOrEmpty(clusteringOperation.getDataFilePath())
+        ? FSUtils.getCommitTime(clusteringOperation.getDeltaFilePaths().get(0))
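For illustration, a minimal sketch of the nullability promotion the javadoc above describes, written against plain Avro APIs; the class name, helper name, and body are assumptions for the sketch, not code from this PR:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.avro.Schema;

/** Illustrative-only helper; not the PR's actual implementation. */
final class NullabilityReconcileSketch {

  /**
   * For every field that is nullable in the write schema but not in the read schema,
   * promote the read field to a ["null", type] union so that validation against
   * Spark-written files does not fail.
   */
  static Schema promoteNullability(Schema readSchema, Schema writeSchema) {
    List<Schema.Field> reconciledFields = new ArrayList<>();
    for (Schema.Field readField : readSchema.getFields()) {
      Schema.Field writeField = writeSchema.getField(readField.name());
      Schema fieldSchema = readField.schema();
      if (writeField != null && isNullable(writeField.schema()) && !isNullable(fieldSchema)) {
        // Promote to a nullable union (null first, following Avro convention).
        fieldSchema = Schema.createUnion(Arrays.asList(Schema.create(Schema.Type.NULL), fieldSchema));
      }
      // Field defaults are intentionally not carried over in this sketch.
      reconciledFields.add(new Schema.Field(readField.name(), fieldSchema, readField.doc(), null));
    }
    return Schema.createRecord(readSchema.getName(), readSchema.getDoc(),
        readSchema.getNamespace(), readSchema.isError(), reconciledFields);
  }

  private static boolean isNullable(Schema schema) {
    return schema.getType() == Schema.Type.UNION
        && schema.getTypes().stream().anyMatch(s -> s.getType() == Schema.Type.NULL);
  }
}
```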
Review Comment:
As for where to get the file schema: files with the same commit time actually share the same write schema, and `getTableAvroSchema` in `TableSchemaResolver` is more efficient since it caches the result. What do you think?
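A rough sketch of the suggestion above, assuming a `HoodieTableMetaClient` is available to the operator; the method name and error handling are illustrative, only `TableSchemaResolver#getTableAvroSchema` comes from the comment:

```java
import org.apache.avro.Schema;
import org.apache.hudi.common.table.HoodieTableMetaClient;
import org.apache.hudi.common.table.TableSchemaResolver;
import org.apache.hudi.exception.HoodieException;

// Illustrative only: resolve the write schema once per table via TableSchemaResolver
// (which caches internally) instead of reading the schema out of each file in the
// clustering group. The metaClient parameter is assumed to be available on the operator.
private Schema getTableWriteSchema(HoodieTableMetaClient metaClient) {
  try {
    TableSchemaResolver schemaResolver = new TableSchemaResolver(metaClient);
    return schemaResolver.getTableAvroSchema();
  } catch (Exception e) {
    throw new HoodieException("Failed to resolve table schema for clustering", e);
  }
}
```

Since files with the same commit time share the write schema, resolving it once per table and letting the resolver's cache absorb repeated lookups avoids re-reading the schema from every file in the clustering group.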