Toroidals opened a new issue, #10849:
URL: https://github.com/apache/hudi/issues/10849

   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
   
   - Join the mailing list to engage in conversations and get faster support at 
[email protected].
   
   - If you have triaged this as a bug, then file an 
[issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   1. First, initialize the table by writing data to Hudi with Flink in BULK_INSERT mode.
   2. Then write CDC incremental data with Flink in UPSERT mode, which fails with:
   
   Caused by: java.lang.RuntimeException: Duplicate fileId 00000002-6791-4a32-8edb-e33c558e0df3 from bucket 2 of partition  found during the BucketStreamWriteFunction index bootstrap.
   
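   For context: with the Hudi bucket index, the bucket number is encoded in the first eight characters of each fileId, which is how the error above ties fileId `00000002-...` to bucket 2. A minimal sketch of that mapping (Hudi implements it in `BucketIdentifier`; this simplified version is for illustration only):
   
   ```java
   // Illustration only: simplified from Hudi's BucketIdentifier.
   public class BucketIdSketch {
   
       // A bucket-index fileId starts with the zero-padded bucket number.
       static int bucketIdFromFileId(String fileId) {
           return Integer.parseInt(fileId.substring(0, 8));
       }
   
       public static void main(String[] args) {
           // prints 2 -- the bucket named in the error message
           System.out.println(bucketIdFromFileId("00000002-6791-4a32-8edb-e33c558e0df3"));
       }
   }
   ```
   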
   **To Reproduce**
   1. Initialize the table by writing data to Hudi with Flink in BULK_INSERT mode:
   ```java
   import java.util.ArrayList;
   import java.util.HashMap;
   import java.util.Locale;
   import java.util.Map;
   
   // fastjson is assumed from the JSON/TypeReference usage in the snippet
   import com.alibaba.fastjson.JSON;
   import com.alibaba.fastjson.TypeReference;
   
   import org.apache.hudi.common.model.HoodieTableType;
   import org.apache.hudi.common.model.WriteOperationType;
   import org.apache.hudi.configuration.FlinkOptions;
   import org.apache.hudi.index.HoodieIndex;
   import org.apache.hudi.util.HoodiePipeline;
   
   public class CustomHudiStreamSink {
   
       public static HoodiePipeline.Builder getHoodieBuilder(HashMap<String, String> infoMap, HashMap<String, String> connectInfo) {
   
           HoodiePipeline.Builder builder = HoodiePipeline.builder(infoMap.get("hudi_table_name"));
   
           // add fields
           String hudiFieldMap = infoMap.get("hudi_field_map").toLowerCase(Locale.ROOT);
           ArrayList<ArrayList<String>> fieldList = JSON.parseObject(hudiFieldMap, new TypeReference<ArrayList<ArrayList<String>>>() {
           });
           for (ArrayList<String> columnList : fieldList) {
               builder.column("`" + columnList.get(0) + "` " + columnList.get(1));
           }
   
           // add primary keys
           String[] hudiPrimaryKeys = infoMap.get("hudi_primary_key").split(",");
           builder.pk(hudiPrimaryKeys);
   
           Map<String, String> options = new HashMap<>();
           options.put(FlinkOptions.PATH.key(), infoMap.get("hudi_hdfs_path"));
           options.put(FlinkOptions.TABLE_TYPE.key(), HoodieTableType.MERGE_ON_READ.name());
           options.put(FlinkOptions.DATABASE_NAME.key(), infoMap.get("hudi_database_name"));
           options.put(FlinkOptions.TABLE_NAME.key(), infoMap.get("hudi_table_name"));
   
           options.put(FlinkOptions.PRECOMBINE_FIELD.key(), "ts_ms");
   
           // index
           options.put(FlinkOptions.INDEX_BOOTSTRAP_ENABLED.key(), "true");
           options.put(FlinkOptions.INDEX_TYPE.key(), HoodieIndex.IndexType.BUCKET.name());
   
           // write tasks
           options.put(FlinkOptions.WRITE_TASKS.key(), "4");
   
           // bucket assigner
           options.put(FlinkOptions.BUCKET_ASSIGN_TASKS.key(), "4");
           options.put(FlinkOptions.BUCKET_INDEX_NUM_BUCKETS.key(), "128"); // quoted: the original bare int 128 does not compile against Map<String, String>
           options.put(FlinkOptions.BUCKET_INDEX_ENGINE_TYPE.key(), "SIMPLE");
   
           // compaction
           options.put(FlinkOptions.COMPACTION_TASKS.key(), "4");
           options.put(FlinkOptions.COMPACTION_TRIGGER_STRATEGY.key(), "num_or_time");
           options.put(FlinkOptions.COMPACTION_DELTA_COMMITS.key(), "5");
           options.put(FlinkOptions.COMPACTION_DELTA_SECONDS.key(), "300");
   
           // hive sync
           options.put(FlinkOptions.HIVE_SYNC_ENABLED.key(), infoMap.get("hudi_hive_sync_enabled"));
           options.put(FlinkOptions.HIVE_SYNC_MODE.key(), infoMap.get("hudi_hive_sync_mode"));
           options.put(FlinkOptions.HIVE_SYNC_DB.key(), infoMap.get("hudi_hive_sync_db"));
           options.put(FlinkOptions.HIVE_SYNC_TABLE.key(), infoMap.get("hudi_hive_sync_table"));
           options.put(FlinkOptions.HIVE_SYNC_CONF_DIR.key(), "/etc/hive/conf");
           options.put(FlinkOptions.HIVE_SYNC_METASTORE_URIS.key(), connectInfo.get("hive_metastore_url"));
           options.put(FlinkOptions.HIVE_SYNC_JDBC_URL.key(), connectInfo.get("conn_url"));
           options.put(FlinkOptions.HIVE_SYNC_SUPPORT_TIMESTAMP.key(), "true");
           options.put(FlinkOptions.HIVE_SYNC_SKIP_RO_SUFFIX.key(), "true");
   
           // partitioning
           options.put(FlinkOptions.PARTITION_PATH_FIELD.key(), "part");
           options.put(FlinkOptions.HIVE_SYNC_PARTITION_FIELDS.key(), "part");
   
           // write mode
           options.put(FlinkOptions.OPERATION.key(), WriteOperationType.BULK_INSERT.value());
   
           builder.options(options);
           return builder;
       }
   }
   ```
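   
   For completeness, a minimal sketch of how a builder like this might be attached to a Flink job. The `cdcStream` DataStream<RowData> and the two config maps are assumed to exist in the caller; they are not part of the original report:
   
   ```java
   import java.util.HashMap;
   
   import org.apache.flink.streaming.api.datastream.DataStream;
   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
   import org.apache.flink.table.data.RowData;
   import org.apache.hudi.util.HoodiePipeline;
   
   public class HudiSinkJob {
   
       public static void run(DataStream<RowData> cdcStream,
                              HashMap<String, String> infoMap,
                              HashMap<String, String> connectInfo) throws Exception {
           StreamExecutionEnvironment env = cdcStream.getExecutionEnvironment();
           env.enableCheckpointing(60_000L); // Hudi commits on Flink checkpoints
   
           HoodiePipeline.Builder builder = CustomHudiStreamSink.getHoodieBuilder(infoMap, connectInfo);
           builder.sink(cdcStream, false); // bounded = false for a streaming CDC source
   
           env.execute("hudi-write");
       }
   }
   ```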
   
   2. Then write the CDC incremental data with Flink in UPSERT mode; this step throws the error. The builder is identical to step 1 except for the write operation:
   ```java
   // the only line that differs from the BULK_INSERT builder in step 1
   options.put(FlinkOptions.OPERATION.key(), WriteOperationType.UPSERT.value());
   ```
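   
   Since the two builders differ only in this option, one possible refactor (hypothetical, not part of the report) is to share a single options map and override just the operation. This also guarantees that every other setting, in particular `FlinkOptions.BUCKET_INDEX_NUM_BUCKETS`, stays identical between the BULK_INSERT and UPSERT runs, which the bucket index requires:
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   import org.apache.hudi.common.model.WriteOperationType;
   import org.apache.hudi.configuration.FlinkOptions;
   
   public class WriteOptions {
   
       // Hypothetical helper: copy the shared options and set only the operation.
       public static Map<String, String> withOperation(Map<String, String> base,
                                                       WriteOperationType op) {
           Map<String, String> options = new HashMap<>(base); // copy; don't mutate the shared map
           options.put(FlinkOptions.OPERATION.key(), op.value());
           return options;
       }
   }
   ```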
   
   
   **Expected behavior**
   The CDC incremental data should be written normally.
   
   **Environment Description**
   
   * Hudi version : 0.14.0
   
   * Hive version : 3.1.3
   
   * Flink version : 1.15.2
   
   * Storage (HDFS/S3/GCS..) :
   
   * Running on Docker? (yes/no) :
   
   
   
   **Stacktrace**
   
   ```
   Caused by: java.lang.RuntimeException: Duplicate fileId 00000002-6791-4a32-8edb-e33c558e0df3 from bucket 2 of partition  found during the BucketStreamWriteFunction index bootstrap.
   ```
   
   

