[ 
https://issues.apache.org/jira/browse/HUDI-1231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raymond Xu updated HUDI-1231:
-----------------------------
    Fix Version/s: 0.11.0

> Duplicate record while querying from hive synced table
> ------------------------------------------------------
>
>                 Key: HUDI-1231
>                 URL: https://issues.apache.org/jira/browse/HUDI-1231
>             Project: Apache Hudi
>          Issue Type: Bug
>            Reporter: Ashok Kumar
>            Assignee: Balaji Varadarajan
>            Priority: Major
>             Fix For: 0.11.0
>
>
> I am writing in upsert mode with the precombine flag enabled. Still, when I 
> query, I see the same record appear 3 times in the same parquet file.
>  
> spark.sql("select _hoodie_commit_time, _hoodie_commit_seqno, _hoodie_record_key, _hoodie_partition_path, _hoodie_file_name from hudi5_mor_ro where id1=1086187 and timestamp=1598461500 and _hoodie_record_key='timestamp:1598461500,id1:1086187,id2:1872725,flowId:23'").show(10, false)
>  
> +-------------------+---------------------------+------------------------------------------------------+----------------------+--------------------------------------------------------------------------------+
> |_hoodie_commit_time|_hoodie_commit_seqno       |_hoodie_record_key                                    |_hoodie_partition_path|_hoodie_file_name                                                               |
> +-------------------+---------------------------+------------------------------------------------------+----------------------+--------------------------------------------------------------------------------+
> |20200826171813     |20200826171813_13856_855766|timestamp:1598461500,id1:1086187,id2:1872725,flowId:23|1086187/2020082617    |5ecb020f-29be-4eed-b130-8c02ae819603-0_13856-104-296775_20200826171813.parquet  |
> |20200826171813     |20200826171813_13856_855766|timestamp:1598461500,id1:1086187,id2:1872725,flowId:23|1086187/2020082617    |5ecb020f-29be-4eed-b130-8c02ae819603-0_13856-104-296775_20200826171813.parquet  |
> |20200826171813     |20200826171813_13856_855766|timestamp:1598461500,id1:1086187,id2:1872725,flowId:23|1086187/2020082617    |5ecb020f-29be-4eed-b130-8c02ae819603-0_13856-104-296775_20200826171813.parquet  |
> +-------------------+---------------------------+------------------------------------------------------+----------------------+--------------------------------------------------------------------------------+
>  
> I am getting this issue with both kinds of tables, i.e. COW and MOR.
> I have tried version 0.6.3, and I had also tried 0.5.3, where this bug 
> occurred as well.
> This issue does not occur with a small data set.
>  
> The strange thing is that when I query only the parquet file, it returns only 
> one record (i.e. the correct result):
> df.filter(col("_hoodie_record_key") === "timestamp:1598461500,id1:1086187,id2:1872725,flowId:23").count
>  res13: Long = 1
>  
> Note:
> When I query the filesystem directly, the result is fine.
> I see this issue only when I query from the Hive-synced table.
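For context, Hudi's upsert path with a precombine field is expected to keep at most one record per record key, retaining the version with the largest precombine value. A minimal sketch of that expected semantics, using hypothetical field names and data (not Hudi's actual implementation):

```python
# Sketch of precombine semantics: for each record key, only the record
# with the largest precombine value (here "ts") should survive an upsert.
# Field names and records below are illustrative, not from the reported table.

def precombine_dedup(records, key_field, precombine_field):
    """Keep one record per key: the one with the max precombine value."""
    latest = {}
    for rec in records:
        key = rec[key_field]
        if key not in latest or rec[precombine_field] > latest[key][precombine_field]:
            latest[key] = rec
    return list(latest.values())

records = [
    {"_hoodie_record_key": "id1:1086187", "ts": 1598461500, "v": "a"},
    {"_hoodie_record_key": "id1:1086187", "ts": 1598461500, "v": "b"},
    {"_hoodie_record_key": "id1:1086187", "ts": 1598461600, "v": "c"},
]
deduped = precombine_dedup(records, "_hoodie_record_key", "ts")
print(len(deduped))  # one record per key, the one with the latest ts
```

Seeing the same key three times in one parquet file, as in the report, means this per-key collapse did not happen on the query path.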



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
