njalan opened a new issue, #9967:
URL: https://github.com/apache/hudi/issues/9967

   I have occasionally seen duplicate records when overwriting a table through the Spark 
API (not SQL).
   The duplicates are the current incoming records together with records that were in the table before 
the overwrite.
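   For reference, a minimal sketch of the write path being described, assuming the standard Hudi Spark datasource options; the table name, record key field, precombine field, and base path are placeholders, not taken from the issue:

   ```python
   # Hypothetical Hudi 0.9 / Spark 3.0.1 datasource write; names are assumptions.
   hudi_options = {
       "hoodie.table.name": "my_table",                       # placeholder table name
       "hoodie.datasource.write.recordkey.field": "id",       # placeholder record key
       "hoodie.datasource.write.precombine.field": "ts",      # placeholder precombine field
       # Setting the operation explicitly, since the mapping of
       # SaveMode.Overwrite to a Hudi operation is version-dependent:
       "hoodie.datasource.write.operation": "insert_overwrite_table",
   }

   # The write itself (requires a running SparkSession and a DataFrame `df`):
   # df.write.format("hudi").options(**hudi_options).mode("overwrite").save(base_path)
   ```

   After such a write, only the incoming batch should remain in the table; the report above is that pre-overwrite records survive alongside it.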
   
   **Expected behavior**
   
   After the overwrite, the table should contain only the incoming records; no records from before the overwrite should remain.
   
   **Environment Description**
   
   * Hudi version : 0.9
   
   * Spark version : 3.0.1
   
   * Hive version : 3.1
   
   * Hadoop version : 3.2.2
   
   * Storage (HDFS/S3/GCS..) :
   
   * Running on Docker? (yes/no) : no
   
-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
