[
https://issues.apache.org/jira/browse/HUDI-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17914485#comment-17914485
]
Davis Zhang edited comment on HUDI-8819 at 1/19/25 7:53 PM:
------------------------------------------------------------
With MDT disabled we still saw the same issue. Looking into the culprit now.
Even if both sessions use 0.15, the issue persists.
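For reference, a minimal sketch of how MDT can be disabled on the writer side (the config key is hoodie.metadata.enable; the DataFrame, table path, and remaining write options are assumed):

{code:scala}
// Disable the metadata table (MDT) for this writer session
df.write.format("hudi").
  option("hoodie.metadata.enable", "false").
  mode("append").
  save(tablePath)
{code}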
was (Author: JIRAUSER305408):
with MDT disabled we still saw the same issue. Looking into the culprit now
> Hudi 1.0's backward writer's UPDATE/DELETE would corrupt older versioned Hudi table
> -----------------------------------------------------------------------------------
>
> Key: HUDI-8819
> URL: https://issues.apache.org/jira/browse/HUDI-8819
> Project: Apache Hudi
> Issue Type: Sub-task
> Affects Versions: 1.0.0
> Reporter: Shawn Chang
> Assignee: Davis Zhang
> Priority: Blocker
> Fix For: 1.0.1
>
> Time Spent: 2.5h
> Remaining Estimate: 0h
>
> Reproduction:
> # Create a table with Hudi 0.14 + Spark 3.5.0 containing some rows
> # Use Hudi 1.0.0 + Spark 3.5.3 as the writer, and set
>   .option("hoodie.write.table.version", 6) to enable the backward writer
>
> # After updating some rows, read with Hudi 1.0.0 + Spark 3.5.3:
>   spark.read.format("hudi").load(tablePath)
>
> # The read results from Hudi 1.0.0 + Spark 3.5.3 contain only the updated
>   rows
> # The same happens with DELETE: if we delete some rows with Hudi 1.0.0 +
>   Spark 3.5.3, the Spark reader sees only the delete blocks, which contain
>   zero rows
> # Older versioned Hudi readers (e.g. Athena) can still see the correct results
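> The reproduction steps above can be sketched roughly as follows (the table
> path, record key, and precombine fields are hypothetical; each block runs in
> the session noted in its comment):
>
> {code:scala}
> // Session A: Hudi 0.14 + Spark 3.5.0 -- create the table with some rows
> df.write.format("hudi").
>   option("hoodie.table.name", "t1").
>   option("hoodie.datasource.write.recordkey.field", "id").
>   option("hoodie.datasource.write.precombine.field", "ts").
>   mode("overwrite").
>   save(tablePath)
>
> // Session B: Hudi 1.0.0 + Spark 3.5.3 -- update via the backward writer
> updates.write.format("hudi").
>   option("hoodie.write.table.version", 6).
>   mode("append").
>   save(tablePath)
>
> // Session B read: returns only the updated rows instead of the full table
> spark.read.format("hudi").load(tablePath).show()
> {code}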
--
This message was sent by Atlassian Jira
(v8.20.10#820010)