[
https://issues.apache.org/jira/browse/HUDI-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Danny Chen closed HUDI-7515.
----------------------------
Resolution: Fixed
Fixed via master branch: 7a44b1ebc41ce66621e958df22195524373434c1
> Fix partition metadata write failure
> ------------------------------------
>
> Key: HUDI-7515
> URL: https://issues.apache.org/jira/browse/HUDI-7515
> Project: Apache Hudi
> Issue Type: Bug
> Reporter: Wechar
> Assignee: Danny Chen
> Priority: Major
> Labels: pull-request-available
> Fix For: 0.15.0, 1.0.0
>
> Attachments: screenshot-1.png
>
>
> Avoid failing to write partition metadata. When spark.speculation is enabled
> and the metadata write operation becomes slow for some reason, a speculative
> task is started that writes the same metadata file concurrently.
> In HDFS, when two tasks (e.g., the original task and its speculative copy)
> write to the same file, both can throw an exception like:
> {code:bash}
> File does not exist:
> /path/to/table/a=3519/b=3520/c=3521/.hoodie_partition_metadata_112 (inode
> 48415575374) Holder DFSClient_NONMAPREDUCE_-2108606624_29 does not have any
> open files.
> {code}
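> The tolerance described above can be sketched as follows. This is a
> hypothetical illustration using java.nio on a local filesystem, not the
> actual Hudi/HDFS implementation; the class and method names are invented
> for the example. The write uses CREATE_NEW so that losing the race to a
> speculative task (which writes identical content) is treated as success
> rather than as a failure:
> {code:java}
> import java.io.IOException;
> import java.nio.file.*;
>
> public class PartitionMetadataWriter {
>     // Hypothetical helper (not Hudi's API): create the partition metadata
>     // file, tolerating a concurrent writer that produced the same content.
>     public static boolean writeMetadata(Path partitionDir, String content) throws IOException {
>         Files.createDirectories(partitionDir);
>         Path target = partitionDir.resolve(".hoodie_partition_metadata");
>         try {
>             // CREATE_NEW fails atomically if the file already exists.
>             Files.writeString(target, content, StandardOpenOption.CREATE_NEW);
>         } catch (FileAlreadyExistsException e) {
>             // A speculative task won the race; its content is identical,
>             // so this counts as success, not an error.
>         }
>         return Files.exists(target);
>     }
> }
> {code}
> Both the original and the speculative writer then return success, and the
> partition metadata ends up on disk exactly once.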
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)