Toroidals opened a new issue, #13114:
URL: https://github.com/apache/hudi/issues/13114

   **Describe the problem you faced**
   
   **After the Flink job terminated unexpectedly, an attempt was made to restore it from the latest checkpoint, but the following error occurred:**
   ```
   Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: /apps/hive/warehouse/hudi.db/hudi_qwe_rty_cmf_fin_po_headers_cdc/.hoodie/metadata/.hoodie/timeline/history/20250408130311013_20250408132310766_0.parquet for client 10.188.29.178 already exists
   ```
   **Stacktrace** 
   https://gist.github.com/Toroidals/8bc2618b0f726fb51d290b1591b37bc7
   
   **When I deleted this file and tried to restart the job from the checkpoint, a new error occurred:**
   ```
   Caused by: java.io.FileNotFoundException: File does not exist: hdfs://asaaprdhadoop/apps/hive/warehouse/hudi.db/hudi_qwe_rty_cmf_fin_po_headers_cdc/.hoodie/metadata/.hoodie/timeline/history/20250408130311013_20250408132310766_0.parquet
   ```
   **Stacktrace** 
   https://gist.github.com/Toroidals/4e9560ce56daf0894338e70ef1b4e18c
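
   For reference, here is a minimal sketch (standard Hadoop `FileSystem` API only, class name is illustrative) of how the state of the archived timeline file can be checked between restore attempts; the path is the one from the stacktraces above:

   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileStatus;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;

   public class CheckArchivedInstant {
       public static void main(String[] args) throws Exception {
           // Archived timeline file named in the exceptions above
           Path file = new Path("hdfs://asaaprdhadoop/apps/hive/warehouse/hudi.db/"
                   + "hudi_qwe_rty_cmf_fin_po_headers_cdc/.hoodie/metadata/.hoodie/"
                   + "timeline/history/20250408130311013_20250408132310766_0.parquet");
           try (FileSystem fs = FileSystem.get(file.toUri(), new Configuration())) {
               if (fs.exists(file)) {
                   FileStatus st = fs.getFileStatus(file);
                   System.out.printf("exists: len=%d mtime=%d%n",
                           st.getLen(), st.getModificationTime());
               } else {
                   System.out.println("does not exist");
               }
           }
       }
   }
   ```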
   
   **To Reproduce**
   **flink env**
   ```java
   import java.util.HashMap;
   import java.util.concurrent.TimeUnit;

   import org.apache.flink.api.common.restartstrategy.RestartStrategies;
   import org.apache.flink.api.common.time.Time;
   import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
   import org.apache.flink.contrib.streaming.state.PredefinedOptions;
   import org.apache.flink.streaming.api.CheckpointingMode;
   import org.apache.flink.streaming.api.environment.CheckpointConfig;
   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

   public static StreamExecutionEnvironment getFlinkStreamEnv(HashMap<String, String> confInfo) throws Exception {
       StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
       env.setParallelism(Integer.parseInt(confInfo.get("flink_parallelism")));

       // restart strategy
       env.setRestartStrategy(RestartStrategies.fixedDelayRestart(
               Integer.parseInt(confInfo.get("flink_restart_attempts")),
               Time.seconds(Long.parseLong(confInfo.get("flink_delay_between_attempts")))));

       // checkpointing
       env.enableCheckpointing(TimeUnit.SECONDS.toMillis(150), CheckpointingMode.EXACTLY_ONCE);
       env.getCheckpointConfig().setMinPauseBetweenCheckpoints(TimeUnit.SECONDS.toMillis(5));
       env.getCheckpointConfig().setCheckpointTimeout(TimeUnit.SECONDS.toMillis(600));
       env.getCheckpointConfig().setMaxConcurrentCheckpoints(3);
       env.getCheckpointConfig().setCheckpointStorage(confInfo.get("checkpoint_path"));
       env.getCheckpointConfig().setExternalizedCheckpointCleanup(
               CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
       env.getCheckpointConfig().setTolerableCheckpointFailureNumber(10);

       // RocksDB state backend with incremental checkpoints
       EmbeddedRocksDBStateBackend rocksDBStateBackend = new EmbeddedRocksDBStateBackend(true);
       rocksDBStateBackend.setPredefinedOptions(PredefinedOptions.SPINNING_DISK_OPTIMIZED_HIGH_MEM);
       env.setStateBackend(rocksDBStateBackend);

       return env;
   }
   ```
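
   For completeness, this is roughly how the helper is invoked (the values below are illustrative placeholders, not the production settings):

   ```java
   HashMap<String, String> confInfo = new HashMap<>();
   confInfo.put("flink_parallelism", "4");
   confInfo.put("flink_restart_attempts", "3");
   confInfo.put("flink_delay_between_attempts", "10");
   confInfo.put("checkpoint_path", "hdfs:///flink/checkpoints/hudi-cdc-job");

   StreamExecutionEnvironment env = getFlinkStreamEnv(confInfo);
   ```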
   
   
   **flink hudi sink**
   ```java
   import java.util.ArrayList;
   import java.util.HashMap;
   import java.util.Locale;
   import java.util.Map;

   import com.alibaba.fastjson.JSON;
   import com.alibaba.fastjson.TypeReference;

   import org.apache.hudi.client.transaction.lock.InProcessLockProvider;
   import org.apache.hudi.common.model.EventTimeAvroPayload;
   import org.apache.hudi.common.model.HoodieTableType;
   import org.apache.hudi.common.model.WriteOperationType;
   import org.apache.hudi.config.HoodieLockConfig;
   import org.apache.hudi.config.HoodieWriteConfig;
   import org.apache.hudi.configuration.FlinkOptions;
   import org.apache.hudi.index.HoodieIndex;
   import org.apache.hudi.util.HoodiePipeline;

   // "log" is the enclosing class's slf4j-style logger
   public static HoodiePipeline.Builder getHoodieBuilder(HashMap<String, String> infoMap, HashMap<String, String> connectInfo) {
       HoodiePipeline.Builder builder = HoodiePipeline.builder(infoMap.get("hudi_table_name"));

       Map<String, String> options = new HashMap<>();
       options.put(FlinkOptions.DATABASE_NAME.key(), infoMap.get("hudi_database_name"));
       options.put(FlinkOptions.TABLE_NAME.key(), infoMap.get("hudi_table_name"));
       options.put(FlinkOptions.PATH.key(), infoMap.get("hudi_hdfs_path"));

       options.put("catalog.path", "hdfs:///apps/hudi/catalog/");
       log.info("DATABASE_NAME: {}", infoMap.get("hudi_database_name"));
       log.info("TABLE_NAME: {}", infoMap.get("hudi_table_name"));
       log.info("PATH: {}", infoMap.get("hudi_hdfs_path"));

       // schema: JSON array of [name, type] pairs
       String hudiFieldMap = infoMap.get("hudi_field_map").toLowerCase(Locale.ROOT);
       ArrayList<ArrayList<String>> fieldList = JSON.parseObject(hudiFieldMap,
               new TypeReference<ArrayList<ArrayList<String>>>() {
               });
       log.info("fieldList: {}", fieldList);
       for (ArrayList<String> columnList : fieldList) {
           builder.column("`" + columnList.get(0) + "` " + columnList.get(1));
       }

       String[] hudiPrimaryKeys = infoMap.get("hudi_primary_key").split(",");
       builder.pk(hudiPrimaryKeys);
       options.put(FlinkOptions.PRE_COMBINE.key(), "true");
       options.put(FlinkOptions.PRECOMBINE_FIELD.key(), "_flink_cdc_ts_ms");
       options.put(FlinkOptions.PAYLOAD_CLASS_NAME.key(), EventTimeAvroPayload.class.getName());

       options.put(FlinkOptions.TABLE_TYPE.key(), HoodieTableType.MERGE_ON_READ.name());

       // index
       options.put(FlinkOptions.INDEX_KEY_FIELD.key(), infoMap.get("hudi_primary_key"));
       options.put(FlinkOptions.INDEX_TYPE.key(), HoodieIndex.IndexType.BUCKET.name());

       // bucket assigner
       options.put(FlinkOptions.BUCKET_INDEX_NUM_BUCKETS.key(), "300");
       options.put(FlinkOptions.BUCKET_INDEX_ENGINE_TYPE.key(), "SIMPLE");

       // compaction
       options.put(FlinkOptions.COMPACTION_TRIGGER_STRATEGY.key(), "num_or_time");
       options.put(FlinkOptions.COMPACTION_DELTA_COMMITS.key(), "30");
       options.put(FlinkOptions.COMPACTION_DELTA_SECONDS.key(), "150");
       options.put(FlinkOptions.COMPACTION_MAX_MEMORY.key(), "100");
       options.put(FlinkOptions.COMPACTION_TIMEOUT_SECONDS.key(), "3600");

       options.put(HoodieWriteConfig.ALLOW_EMPTY_COMMIT.key(), "true");
       options.put(FlinkOptions.CLEAN_RETAIN_COMMITS.key(), "180");

       options.put(FlinkOptions.IGNORE_FAILED.key(), "true");
       options.put(HoodieLockConfig.LOCK_PROVIDER_CLASS_NAME.key(), InProcessLockProvider.class.getName());

       // hive sync
       options.put(FlinkOptions.HIVE_SYNC_ENABLED.key(), infoMap.get("hudi_hive_sync_enabled"));
       options.put(FlinkOptions.HIVE_SYNC_MODE.key(), infoMap.get("hudi_hive_sync_mode"));
       options.put(FlinkOptions.HIVE_SYNC_DB.key(), infoMap.get("hudi_hive_sync_db"));
       options.put(FlinkOptions.HIVE_SYNC_TABLE.key(), infoMap.get("hudi_hive_sync_table"));
       options.put(FlinkOptions.HIVE_SYNC_CONF_DIR.key(), "/etc/hive/conf");
       options.put(FlinkOptions.HIVE_SYNC_METASTORE_URIS.key(), connectInfo.get("hive_metastore_url"));
       options.put(FlinkOptions.HIVE_SYNC_JDBC_URL.key(), connectInfo.get("conn_url"));
       options.put(FlinkOptions.HIVE_SYNC_SUPPORT_TIMESTAMP.key(), "true");
       options.put(FlinkOptions.HIVE_SYNC_SKIP_RO_SUFFIX.key(), "true");

       // partition
       builder.partition(infoMap.get("hudi_hive_sync_partition_fields"));
       options.put(FlinkOptions.PARTITION_PATH_FIELD.key(), infoMap.get("hudi_hive_sync_partition_fields"));
       options.put(FlinkOptions.HIVE_SYNC_PARTITION_FIELDS.key(), infoMap.get("hudi_hive_sync_partition_fields"));

       options.put(FlinkOptions.WRITE_RATE_LIMIT.key(), "20000");
       options.put(FlinkOptions.WRITE_TASKS.key(), "4");
       options.put(FlinkOptions.OPERATION.key(), WriteOperationType.UPSERT.value());

       builder.options(options);
       return builder;
   }
   ```
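
   The builder is attached to the job roughly like this (a sketch only; the CDC source stream is produced elsewhere and passed in, and the method name here is illustrative):

   ```java
   import org.apache.flink.streaming.api.datastream.DataStream;
   import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
   import org.apache.flink.table.data.RowData;
   import org.apache.hudi.util.HoodiePipeline;

   public static void attachHudiSink(StreamExecutionEnvironment env,
                                     DataStream<RowData> cdcStream,
                                     HoodiePipeline.Builder builder) throws Exception {
       // bounded = false: continuous streaming write into the Hudi table
       builder.sink(cdcStream, false);
       env.execute("hudi-cdc-sink");
   }
   ```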
   
   Note:
   Could setting `env.getCheckpointConfig().setMaxConcurrentCheckpoints(3)` be the cause of exceptions in Hudi writes? (See the sketch after the configuration list below.)
   
   Flink Hudi sink configuration:
   
   table.type = MOR
   
   hoodie.index.type = BUCKET
   
   hoodie.index.bucket.engine = SIMPLE
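
   As I understand it, the Hudi Flink writer commits one instant per completed checkpoint, so overlapping checkpoints could in principle race on the timeline; this is an assumption on my part, not a confirmed diagnosis. If that turns out to be the trigger, a conservative configuration to test would be:

   ```java
   // Assumption to test, not a confirmed fix: serialize checkpoints so
   // at most one Hudi instant is in flight at a time.
   env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
   // A larger pause keeps consecutive instants well separated.
   env.getCheckpointConfig().setMinPauseBetweenCheckpoints(TimeUnit.SECONDS.toMillis(30));
   ```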
   
   
   
   
   **Environment Description**
   
   * Hudi version : 1.0.0
   
   * Flink version : 1.15.2
   
   * Hive version : 3.1.3
   
   * Hadoop version : 3.3.4
   
   * Storage (HDFS/S3/GCS..) : HDFS
   
   * Running on Docker? (yes/no) : no
   
   
   
   

