koochiswathiTR opened a new issue, #8178:
URL: https://github.com/apache/hudi/issues/8178

   We see duplicate data in our Hudi dataset.
   
   
   **Describe the problem you faced**
   
   
   We run a Spark Streaming application that reads from a Kinesis stream, processes the data, and stores it in Hudi.
   We started seeing duplicates in our Hudi dataset.
   Below are our Hudi configs:
   
       DataSourceWriteOptions.TABLE_TYPE.key() -> DataSourceWriteOptions.MOR_TABLE_TYPE_OPT_VAL,
       DataSourceWriteOptions.RECORDKEY_FIELD.key() -> "guid",
       DataSourceWriteOptions.PARTITIONPATH_FIELD.key() -> "collectionName",
       DataSourceWriteOptions.PRECOMBINE_FIELD.key() -> "operationTime",
       HoodieCompactionConfig.INLINE_COMPACT_TRIGGER_STRATEGY.key() -> CompactionTriggerStrategy.TIME_ELAPSED.name,
       HoodieCompactionConfig.INLINE_COMPACT_TIME_DELTA_SECONDS.key() -> String.valueOf(60 * 60),
       HoodieCompactionConfig.CLEANER_POLICY.key() -> HoodieCleaningPolicy.KEEP_LATEST_COMMITS.name(),
       HoodieCompactionConfig.CLEANER_COMMITS_RETAINED.key() -> "624",
       HoodieCompactionConfig.MIN_COMMITS_TO_KEEP.key() -> "625",
       HoodieCompactionConfig.MAX_COMMITS_TO_KEEP.key() -> "648",
       HoodieCompactionConfig.ASYNC_CLEAN.key() -> "false",
       HoodieCompactionConfig.INLINE_COMPACT.key() -> "true",
       HoodieMetricsConfig.TURN_METRICS_ON.key() -> "true",
       HoodieMetricsConfig.METRICS_REPORTER_TYPE_VALUE.key() -> MetricsReporterType.DATADOG.name(),
       HoodieMetricsDatadogConfig.API_SITE_VALUE.key() -> "US",
       HoodieMetricsDatadogConfig.METRIC_PREFIX_VALUE.key() -> "tacticalnovusingest.hudi",
       HoodieMetadataConfig.ENABLE.key() -> "false",
       HoodieWriteConfig.ROLLBACK_USING_MARKERS_ENABLE.key() -> "false",
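
   These entries make up the hudiOptions map that every write below picks up via .options(hudiOptions); a minimal sketch of how it is assembled (assuming a plain Scala Map, entries abbreviated):

       // Sketch only: the same entries as listed above, abbreviated here.
       val hudiOptions: Map[String, String] = Map(
         DataSourceWriteOptions.TABLE_TYPE.key() -> DataSourceWriteOptions.MOR_TABLE_TYPE_OPT_VAL,
         DataSourceWriteOptions.RECORDKEY_FIELD.key() -> "guid",
         DataSourceWriteOptions.PARTITIONPATH_FIELD.key() -> "collectionName",
         DataSourceWriteOptions.PRECOMBINE_FIELD.key() -> "operationTime"
         // ... remaining compaction, cleaner, metrics and metadata entries exactly as above ...
       )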
   
   We only use upsert in our code; we never use insert:
   
       dataframe.write.format("org.apache.hudi")
         .option("hoodie.insert.shuffle.parallelism", hudiParallelism)
         .option("hoodie.upsert.shuffle.parallelism", hudiParallelism)
         .option(HoodieWriteConfig.TABLE_NAME, hudiTableName)
         .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
         .option(HoodieMetricsDatadogConfig.METRIC_TAG_VALUES.key(), s"env:$environment")
         .options(hudiOptions).mode(SaveMode.Append)
         .save(s3Location)
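
   For context, this write runs once per micro-batch of the streaming job. A minimal sketch of that loop, assuming Structured Streaming with foreachBatch (kinesisStream, process, writeHudi and checkpointPath are placeholders for our actual source, transformation, the write block above, and the checkpoint location):

       // Sketch only: the upsert above is issued from inside the streaming
       // micro-batch loop; all names below are placeholders.
       import org.apache.spark.sql.DataFrame

       kinesisStream.writeStream
         .foreachBatch { (batchDf: DataFrame, batchId: Long) =>
           writeHudi(process(batchDf))   // the upsert shown above
         }
         .option("checkpointLocation", checkpointPath)
         .start()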
   
   Please help us with this.
   Below are the two situations where we see duplicates (a query sketch that surfaces both cases follows the list):
   1. Duplicates with the same Hudi commit time.
   2. Duplicates with different commit times.
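
   A minimal sketch of a query that surfaces both cases (guid is our record key, the _hoodie_* columns are the standard Hudi meta columns, and s3Location is the same base path we write to):

       // Sketch only: list record keys that appear more than once in the
       // current snapshot, together with the commits that wrote them.
       import org.apache.spark.sql.functions.col

       val snapshot = spark.read.format("org.apache.hudi").load(s3Location)

       val duplicatedKeys = snapshot
         .groupBy("guid")
         .count()
         .filter(col("count") > 1)
         .select("guid")

       snapshot.join(duplicatedKeys, Seq("guid"))
         .select("guid", "_hoodie_commit_time", "_hoodie_commit_seqno", "_hoodie_partition_path")
         .orderBy("guid", "_hoodie_commit_time")
         .show(false)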
   
   I have attached the JSON files for reference.
   We tried to delete the duplicate data using the Hudi commit sequence number and our primary key, but it deletes both records.
   Delete attempt with the Hudi DELETE operation:

       dataframe.write.format("org.apache.hudi")
         .option("hoodie.insert.shuffle.parallelism", hudiParallelism)
         .option("hoodie.upsert.shuffle.parallelism", hudiParallelism)
         .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.DELETE_OPERATION_OPT_VAL)
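
   A minimal sketch of how that delete was driven (the filter values are placeholders, not the real keys; the write mirrors the upsert block above):

       // Sketch only: pick the duplicate row by record key + commit seqno,
       // then write it back with the DELETE operation.
       import org.apache.spark.sql.functions.col

       val toDelete = spark.read.format("org.apache.hudi").load(s3Location)
         .filter(col("guid") === "duplicate-guid" &&                          // placeholder
                 col("_hoodie_commit_seqno") === "commit-seqno-of-one-copy")  // placeholder

       toDelete.write.format("org.apache.hudi")
         .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.DELETE_OPERATION_OPT_VAL)
         .options(hudiOptions)
         .mode(SaveMode.Append)
         .save(s3Location)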
   
   We also tried to deduplicate with the Hudi CLI command:

       repair deduplicate --duplicatedPartitionPath s3://**/ --repairedOutputPath s3://**/ --sparkMemory 2G --sparkMaster yarn

   but we are getting a java.io.FileNotFoundException:
   Please help.
   
   
   
   
   **Expected behavior**
   
   Upserts keyed on guid should not produce duplicate records for the same key; an upsert should update the existing record rather than write another copy.
   
   **Environment Description**
   
   * Hudi version : 0.11.1

   * Spark version : 3.2

   * Hive version : NA

   * Hadoop version : NA

   * Storage (HDFS/S3/GCS..) : S3

   * Running on Docker? (yes/no) : no
   
   
   
   

