[ 
https://issues.apache.org/jira/browse/HUDI-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17913082#comment-17913082
 ] 

Davis Zhang edited comment on HUDI-8819 at 1/14/25 10:06 PM:
-------------------------------------------------------------

As long as both the 1.0 and 0.15 writers disable MDT (the metadata table) and no issue occurs, then we should be good.

For other use cases, please follow the migration protocol.

I need to rerun the repro with MDT disabled to see if the issue persists before 
closing it.


was (Author: JIRAUSER305408):
As long as both the 1.0 and 0.15 writers disable MDT (the metadata table) and no issue occurs, then we should be good.

For other use cases, please follow the migration protocol.

> Hudi 1.0's backward writer's UPDATE/DELETE would corrupt older versioned Hudi 
> table
> -----------------------------------------------------------------------------------
>
>                 Key: HUDI-8819
>                 URL: https://issues.apache.org/jira/browse/HUDI-8819
>             Project: Apache Hudi
>          Issue Type: Sub-task
>    Affects Versions: 1.0.0
>            Reporter: Shawn Chang
>            Assignee: Davis Zhang
>            Priority: Blocker
>             Fix For: 1.0.1
>
>          Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Reproduction:
>  # Create a table with Hudi 0.14 + Spark 3.5.0 with some rows
>  # Use Hudi 1.0.0 + Spark 3.5.3 as the writer, setting 
> .option("hoodie.write.table.version", 6) to enable the backward writer
>  # After updating some rows, read with Hudi 1.0.0 + Spark 3.5.3: 
> spark.read.format("hudi").load(tablePath)
>  # The read results from Hudi 1.0.0 + Spark 3.5.3 contain only the updated 
> rows
>  # The same happens with DELETE: if we delete some rows with Hudi 1.0.0 + 
> Spark 3.5.3, the Spark reader can only see the delete blocks, which contain 
> zero rows
>  # An older versioned Hudi reader (Athena) can still see the correct results 
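The repro steps above can be sketched as a Spark job. This is a hedged sketch, not the reporter's exact code: it assumes a Spark session with the Hudi 1.0.0 bundle on the classpath, and the table path, table name, and the input DataFrame `df` are illustrative placeholders.

```scala
// Sketch of the reproduction above (assumptions: spark session exists,
// Hudi 1.0.0 bundle is on the classpath, df holds the rows to upsert).
import org.apache.spark.sql.SaveMode

val tablePath = "s3://bucket/path/to/table"  // hypothetical path

// Step 2: update rows with Hudi 1.0.0 while forcing table version 6,
// i.e. the backward writer that targets the older (0.x) table format.
df.write.format("hudi").
  option("hoodie.table.name", "test_table").
  option("hoodie.datasource.write.operation", "upsert").
  option("hoodie.write.table.version", 6).
  mode(SaveMode.Append).
  save(tablePath)

// Step 3: read back with the same Hudi 1.0.0 + Spark 3.5.3.
// Per the report, only the updated rows are visible here, while an
// older versioned reader (e.g. Athena) still sees the full table.
val result = spark.read.format("hudi").load(tablePath)
result.show()
```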



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
