danny0405 commented on code in PR #11031:
URL: https://github.com/apache/hudi/pull/11031#discussion_r1604603189


##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/util/SanityChecks.java:
##########
@@ -41,23 +42,22 @@
 /**
  * Utilities for HoodieTableFactory sanity check.
  */
-public class SanityChecksUtil {
+public class SanityChecks {
 
-  private static final Logger LOG = LoggerFactory.getLogger(SanityChecksUtil.class);
+  private static final Logger LOG = LoggerFactory.getLogger(SanityChecks.class);
 
   /**
    * The sanity check.
-   * If the metaClient is not null, it means that this is a table source sanity check and the source table has
-   * already been initialized.
    *
-   * @param conf   The table options
-   * @param schema The table schema
-   * @param metaClient  The table meta client
+   * @param conf  The table options
+   * @param schema  The table schema
+   * @param checkMetaData  Whether to check metadata
    */
-  public static void sanitCheck(Configuration conf, ResolvedSchema schema, HoodieTableMetaClient metaClient) {
+  public static void sanitCheck(Configuration conf, ResolvedSchema schema, Boolean checkMetaData) {
     checkTableType(conf);
     List<String> schemaFields = schema.getColumnNames();
-    if (metaClient != null) {
+    if (checkMetaData) {
+      HoodieTableMetaClient metaClient = StreamerUtil.metaClientForReader(conf, HadoopConfigurations.getHadoopConf(conf));
       List<String> latestTablefields = StreamerUtil.getLatestTableFields(metaClient);
       if (latestTablefields != null) {

Review Comment:
   This patch changes the sink-side logic. The original sink had this check:
   
   ```java
       if (!OptionsResolver.isAppendMode(conf)) {
         checkRecordKey(conf, schema);
       }
   ```
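   To make the reviewer's concern concrete, here is a minimal, self-contained sketch of the guard the original sink applied: the record-key check runs only when the write is not in append mode. All names here (`SanityChecksSketch`, the string-map config, the default key `uuid`) are illustrative stand-ins, not the actual Hudi/Flink APIs.

   ```java
   import java.util.List;
   import java.util.Map;

   public class SanityChecksSketch {
     // Stand-in for OptionsResolver.isAppendMode(conf): treat plain inserts as append mode
     static boolean isAppendMode(Map<String, String> conf) {
       return "insert".equals(conf.getOrDefault("write.operation", "upsert"));
     }

     // Stand-in for checkRecordKey: the configured record key must exist in the schema
     static void checkRecordKey(Map<String, String> conf, List<String> schemaFields) {
       String recordKey = conf.getOrDefault("hoodie.datasource.write.recordkey.field", "uuid");
       if (!schemaFields.contains(recordKey)) {
         throw new IllegalStateException("Record key '" + recordKey + "' not found in schema");
       }
     }

     // The guard the review comment is about: only non-append writes need a record key
     public static void sinkSanityCheck(Map<String, String> conf, List<String> schemaFields) {
       if (!isAppendMode(conf)) {
         checkRecordKey(conf, schemaFields);
       }
     }
   }
   ```

   The point is that the append-mode branch must survive the refactor: if `sanitCheck` now runs the same path for source and sink, append-mode sinks would start failing a record-key check they were previously exempt from.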



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to