danny0405 commented on code in PR #12598:
URL: https://github.com/apache/hudi/pull/12598#discussion_r1906260009


##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/sink/clustering/ClusteringOperator.java:
##########
@@ -344,6 +349,27 @@ private Iterator<RowData> readRecordsForGroupBaseFiles(List<ClusteringOperation>
     return new ConcatenatingIterator<>(iteratorsForPartition);
   }
 
+  /**
+   * Since there exist discrepancies between Flink and Spark in how they handle the nullability of the primary key field,
+   * and some files may have been written by Spark, we reconcile the read schema against the write schema to promote
+   * the nullability, so that schema validation does not fail.
+   *
+   * @param clusteringOperation the clustering operation
+   * @return schema with nullability constraints reconciled
+   */
+  private Schema reconcileSchemaWithNullability(ClusteringOperation clusteringOperation) {
+    String instantTs = StringUtils.isNullOrEmpty(clusteringOperation.getDataFilePath())
+        ? FSUtils.getCommitTime(clusteringOperation.getDeltaFilePaths().get(0))

Review Comment:
   Currently, Flink clustering only works on append-only tables, so all the data files should be in parquet format. We can fetch the record key fields and the file schema from the parquet footer, then reconcile the record key fields if a nullability discrepancy exists.
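   
   A minimal sketch of this suggestion, assuming plain Avro and parquet-avro APIs; the helper class, its method names, and the `recordKeyFields` parameter are hypothetical illustrations, not code from this PR:
   
   ```java
   import java.io.IOException;
   import java.util.ArrayList;
   import java.util.Arrays;
   import java.util.List;
   import java.util.Set;
   
   import org.apache.avro.Schema;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.Path;
   import org.apache.parquet.avro.AvroSchemaConverter;
   import org.apache.parquet.hadoop.ParquetFileReader;
   import org.apache.parquet.hadoop.util.HadoopInputFile;
   import org.apache.parquet.schema.MessageType;
   
   // Hypothetical helper (not in the PR): reconciles record key nullability
   // against the schema stored in a parquet file footer.
   public class RecordKeyNullabilityReconciler {
   
     // Reads the file schema from the parquet footer and converts it to Avro.
     static Schema readFileSchema(Configuration conf, String dataFilePath) throws IOException {
       try (ParquetFileReader reader =
           ParquetFileReader.open(HadoopInputFile.fromPath(new Path(dataFilePath), conf))) {
         MessageType parquetSchema = reader.getFooter().getFileMetaData().getSchema();
         return new AvroSchemaConverter(conf).convert(parquetSchema);
       }
     }
   
     // Returns a copy of readSchema in which every record key field is promoted
     // to a ["null", type] union, so validation against files written by Spark
     // (which marks key fields nullable) does not fail. Assumes key fields carry
     // no non-null defaults, which holds for typical record keys.
     static Schema promoteRecordKeyNullability(Schema readSchema, Set<String> recordKeyFields) {
       List<Schema.Field> fields = new ArrayList<>();
       for (Schema.Field field : readSchema.getFields()) {
         Schema fieldSchema = field.schema();
         if (recordKeyFields.contains(field.name()) && !isNullable(fieldSchema)) {
           fieldSchema = Schema.createUnion(
               Arrays.asList(Schema.create(Schema.Type.NULL), fieldSchema));
         }
         fields.add(new Schema.Field(field.name(), fieldSchema, field.doc(), field.defaultVal()));
       }
       return Schema.createRecord(readSchema.getName(), readSchema.getDoc(),
           readSchema.getNamespace(), readSchema.isError(), fields);
     }
   
     private static boolean isNullable(Schema schema) {
       return schema.getType() == Schema.Type.UNION
           && schema.getTypes().stream().anyMatch(s -> s.getType() == Schema.Type.NULL);
     }
   }
   ```
   
   This would let the operator reconcile only the key fields against what the base file footer actually declares, rather than resolving the whole write schema from the commit time of the file.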


