[https://issues.apache.org/jira/browse/HUDI-4992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17616029#comment-17616029]

Alexey Kudinkin commented on HUDI-4992:
---------------------------------------

The original issue was introduced by this change:
[https://github.com/apache/hudi/pull/5470]

> Spark Row-writing Bulk Insert produces incorrect Bloom Filter metadata
> ----------------------------------------------------------------------
>
>                 Key: HUDI-4992
>                 URL: https://issues.apache.org/jira/browse/HUDI-4992
>             Project: Apache Hudi
>          Issue Type: Bug
>    Affects Versions: 0.12.0
>            Reporter: Alexey Kudinkin
>            Assignee: Alexey Kudinkin
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 0.12.1
>
>
> While troubleshooting a duplicates issue with Abhishek Modi from Notion, we 
> found that the min/max record-key stats are currently being persisted 
> incorrectly into the Parquet metadata, leading to duplicate records being 
> produced in their pipeline after the initial bulk insert.
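For context, the min/max record-key stats in question work roughly like this: while a file is being written, the smallest and largest record keys seen so far are tracked and later persisted into the Parquet footer metadata, so that key-based index lookups can skip files whose key range cannot contain a given key. A minimal sketch of such tracking (hypothetical class and method names, not Hudi's actual implementation):

```java
// Hypothetical sketch of min/max record-key stat tracking for a single
// data file. Hudi's real writer path differs; this only illustrates the
// invariant the persisted stats must satisfy.
public class MinMaxKeyTracker {
    private String minKey = null;
    private String maxKey = null;

    // Must be invoked with the record key of every row written to the file,
    // otherwise the persisted [minKey, maxKey] range is narrower than the
    // file's actual contents.
    public void track(String recordKey) {
        if (minKey == null || recordKey.compareTo(minKey) < 0) {
            minKey = recordKey;
        }
        if (maxKey == null || recordKey.compareTo(maxKey) > 0) {
            maxKey = recordKey;
        }
    }

    public String getMinKey() { return minKey; }
    public String getMaxKey() { return maxKey; }
}
```

If the persisted range is narrower than the keys actually written (as in this bug), a later upsert can wrongly conclude that a key is absent from the file and write the record again, producing the duplicates observed.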



--
This message was sent by Atlassian Jira
(v8.20.10#820010)