[
https://issues.apache.org/jira/browse/HUDI-6047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Danny Chen updated HUDI-6047:
-----------------------------
Fix Version/s: 0.14.0
> Clustering operation on consistent hashing resulting in duplicate data
> ----------------------------------------------------------------------
>
> Key: HUDI-6047
> URL: https://issues.apache.org/jira/browse/HUDI-6047
> Project: Apache Hudi
> Issue Type: Bug
> Reporter: Rohan
> Priority: Major
> Labels: pull-request-available
> Fix For: 0.14.0
>
>
> Hudi chooses the committed consistent hashing bucket metadata file on the
> basis of the *replace commits logged on the hudi active timeline*. But *once
> hudi archives the timeline*, it falls back to the *default consistent hashing
> bucket metadata* file, that is *00000000000000.hashing_meta*, which results
> in writing duplicate records to the table.
> The above behaviour results in duplicate data in the hudi table and in
> *failures of subsequent clustering operations, because the file groups on
> storage become inconsistent with the file groups recorded in the metadata
> files*.
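>
> To make the failure mode concrete, below is a minimal, self-contained Java
> model of ring-based bucket routing. This is purely illustrative and not
> Hudi code: the ring contents, hash values, file group names, and the route
> helper are all assumptions.
> {code:java}
> import java.util.SortedMap;
> import java.util.TreeMap;
>
> public class StaleRingDuplicates {
>
>     // Route a key hash to the first ring node with value >= hash,
>     // wrapping around to the first node when none qualifies.
>     static String route(SortedMap<Integer, String> ring, int keyHash) {
>         SortedMap<Integer, String> tail = ring.tailMap(keyHash);
>         return tail.isEmpty() ? ring.get(ring.firstKey())
>                               : tail.get(tail.firstKey());
>     }
>
>     public static void main(String[] args) {
>         // Layout recorded in the default 00000000000000.hashing_meta.
>         SortedMap<Integer, String> staleRing = new TreeMap<>();
>         staleRing.put(100, "fileGroup-A");
>         staleRing.put(200, "fileGroup-B");
>
>         // Layout produced by clustering: bucket B split into B1 and B2.
>         SortedMap<Integer, String> currentRing = new TreeMap<>();
>         currentRing.put(100, "fileGroup-A");
>         currentRing.put(150, "fileGroup-B1");
>         currentRing.put(200, "fileGroup-B2");
>
>         int keyHash = 130; // hash of some record key
>
>         System.out.println(route(currentRing, keyHash)); // fileGroup-B1
>         System.out.println(route(staleRing, keyHash));   // fileGroup-B
>     }
> }
> {code}
> Under the stale layout the key is routed to a file group that no longer
> owns its hash range, so the existing copy of the record is not found there,
> the upsert degrades into an insert, and the key ends up in two file groups.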
>
> Check the loadMetadata function of the consistent hashing index
> implementation:
> [https://github.com/apache/hudi/blob/4da64686cfbcb6471b1967091401565f58c835c7/hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/index/bucket/HoodieSparkConsistentBucketIndex.java#L190]
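>
> For reference, here is a minimal sketch of the selection-and-fallback
> behaviour as described in this report. It does not use the real Hudi API:
> the class, method, and parameter names below are illustrative assumptions.
> {code:java}
> import java.util.NavigableSet;
> import java.util.Optional;
> import java.util.TreeSet;
>
> public class ConsistentHashingMetadataSketch {
>
>     // Instant time of the initial, default bucket metadata file.
>     static final String DEFAULT_INSTANT = "00000000000000";
>     static final String SUFFIX = ".hashing_meta";
>
>     // Walk the metadata files from newest to oldest and keep the first
>     // one whose instant is confirmed by a replace commit that is still
>     // on the ACTIVE timeline.
>     static String pickMetadataFile(NavigableSet<String> activeReplaceInstants,
>                                    NavigableSet<String> metaFileInstants) {
>         Optional<String> committed = metaFileInstants.descendingSet().stream()
>                 .filter(activeReplaceInstants::contains)
>                 .findFirst();
>         // Failure mode reported here: once the confirming replace commit
>         // has been archived, nothing passes the check above and the
>         // selection silently reverts to the default metadata file.
>         return committed.orElse(DEFAULT_INSTANT) + SUFFIX;
>     }
>
>     public static void main(String[] args) {
>         NavigableSet<String> metaFiles = new TreeSet<>();
>         metaFiles.add(DEFAULT_INSTANT);      // initial bucket layout
>         metaFiles.add("20230401101010000");  // layout written by clustering
>
>         NavigableSet<String> active = new TreeSet<>();
>         active.add("20230401101010000");
>         // Replace commit still active: the clustered layout is chosen.
>         System.out.println(pickMetadataFile(active, metaFiles));
>         // -> 20230401101010000.hashing_meta
>
>         // After archival the replace commit is gone from the active
>         // timeline and the stale default layout is chosen instead.
>         active.clear();
>         System.out.println(pickMetadataFile(active, metaFiles));
>         // -> 00000000000000.hashing_meta
>     }
> }
> {code}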
>
> Let me know if anything else is needed.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)