SteNicholas commented on code in PR #8933:
URL: https://github.com/apache/hudi/pull/8933#discussion_r1226086493


##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/util/StreamerUtil.java:
##########
@@ -478,4 +481,36 @@ public static boolean isWriteCommit(HoodieTableType tableType, HoodieInstant ins
   public static String getAuxiliaryPath(Configuration conf) {
     return conf.getString(FlinkOptions.PATH) + Path.SEPARATOR + AUXILIARYFOLDER_NAME;
   }
+
+  /**
+   * Check the {@link FlinkOptions#PRECOMBINE_FIELD} of configuration.
+   *
+   * @param conf   The flink configuration.
+   * @param schema The table schema.
+   */
+  public static void checkPreCombineField(Configuration conf, ResolvedSchema schema) {
+    checkPreCombineField(conf, schema.getColumnNames());
+  }
+
+  /**
+   * Check the {@link FlinkOptions#PRECOMBINE_FIELD} of configuration.
+   *
+   * @param conf        The flink configuration.
+   * @param columnNames The column names.
+   */
+  public static void checkPreCombineField(Configuration conf, List<String> columnNames) {
+    String preCombineField = conf.get(FlinkOptions.PRECOMBINE_FIELD);
+    if (!columnNames.contains(preCombineField)) {

Review Comment:
   @danny0405, it should still throw even after the Spark pkless support in #8107, because #8107 covers auto generation of record keys, not the precombine field.
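
   In other words, even when record keys are auto-generated, a configured precombine field that is missing from the schema should still fail fast. A minimal standalone sketch of that validation (the class name, exception type, and message wording here are assumptions for illustration, not the PR's actual code; in the PR the field would come from `conf.get(FlinkOptions.PRECOMBINE_FIELD)`):

   ```java
   import java.util.Arrays;
   import java.util.List;

   public class PreCombineCheckSketch {

     /**
      * Throws when the configured precombine field is not one of the table's
      * columns. Auto key generation (#8107) supplies record keys only, so this
      * check must still fire regardless of how keys are generated.
      */
     static void checkPreCombineField(String preCombineField, List<String> columnNames) {
       if (!columnNames.contains(preCombineField)) {
         throw new IllegalArgumentException(
             "Precombine field '" + preCombineField
                 + "' not found in table columns " + columnNames);
       }
     }

     public static void main(String[] args) {
       List<String> columns = Arrays.asList("id", "name", "ts");

       // A precombine field that exists in the schema passes silently.
       checkPreCombineField("ts", columns);

       // A missing precombine field throws, even though record keys
       // could still be auto-generated.
       try {
         checkPreCombineField("event_time", columns);
       } catch (IllegalArgumentException e) {
         System.out.println("threw as expected: " + e.getMessage());
       }
     }
   }
   ```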


