fanfanAlice opened a new issue, #10700: URL: https://github.com/apache/hudi/issues/10700
**Describe the problem you faced**

Two Flink jobs write Kafka data into two Hudi tables (table1 and table2) with Hive sync enabled. After the job writing to table1 failed, a Flink job that writes data from table2 into table1 fails with a `FileNotFoundException` for a parquet file under table1's base path.

**To Reproduce**

Steps to reproduce the behavior:

1. Task 1: Flink reads Kafka data and writes it into a Hudi table (table1), with Hive sync enabled.
2. Task 2: Flink reads Kafka data and writes it into a Hudi table (table2), with Hive sync enabled.
3. A few days later, the Flink job writing to table1 failed.
4. I then used Flink to write data from table2 into table1.
5. The Flink task failed with the exception shown under **Stacktrace** below.

**Expected behavior**

Writing data from table2 into table1 should succeed even after the original table1 write job has failed.

**Environment Description**

* Hudi version : 0.13.0
* Spark version : 3.2.0
* Hive version : 2.1.1
* Hadoop version : cdh-6.3.2
* Storage (HDFS/S3/GCS..) : HDFS
* Running on Docker? (yes/no) : no

**Stacktrace**

```
Caused by: java.io.FileNotFoundException: File does not exist: hdfs://admin-stage/user/tempuser/hudi/hudipath/tbale_name/343f7bec-e29d-4b1e-a429-463c8efb09fb-0_91-23390-2236222_20200921075346.parquet
```

**Additional context**

I found a similar issue: https://github.com/apache/hudi/issues/2098, but I don't understand the specific solution to this problem. Task1 and task2 ran until task1 failed; I don't know which configuration to set so that table2's data can be written into table1 without errors after task1 fails.
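For context, below is a minimal sketch of the kind of Flink SQL sink definition described in steps 1-2, with the cleaner retention raised along the lines discussed in #2098 so that older parquet file versions survive longer. The schema, base path, metastore URI, database/table names, and the retention value of `60` are placeholders and assumptions for illustration, not values taken from the actual jobs.

```sql
-- Hypothetical Hudi sink table (table1) defined via Flink SQL.
-- All identifiers, paths, and values below are placeholders.
CREATE TABLE table1_hudi (
  id      STRING,
  payload STRING,
  ts      TIMESTAMP(3),
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs:///path/to/hudi/table1',          -- placeholder base path
  'table.type' = 'MERGE_ON_READ',
  -- Retain more commits so the cleaner keeps older file slices around,
  -- reducing the chance that a lagging or restarted writer/reader
  -- references a parquet file that has already been deleted.
  'clean.retain_commits' = '60',                    -- assumed value; default is lower
  -- Hive sync, as described in steps 1-2.
  'hive_sync.enable' = 'true',
  'hive_sync.mode' = 'hms',
  'hive_sync.metastore.uris' = 'thrift://hive-metastore:9083',  -- placeholder
  'hive_sync.db' = 'default',
  'hive_sync.table' = 'table1'
);
```

If the `File does not exist` error is caused by the cleaner removing file slices that the second job still references (the scenario discussed in #2098), raising `clean.retain_commits` on the writer is one commonly suggested mitigation; whether that is the cause here would need to be confirmed from the table's timeline and cleaner activity.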
