[
https://issues.apache.org/jira/browse/HUDI-3915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Alexey Kudinkin updated HUDI-3915:
----------------------------------
Status: In Progress (was: Open)
> Error upserting bucketType UPDATE for partition :0
> --------------------------------------------------
>
> Key: HUDI-3915
> URL: https://issues.apache.org/jira/browse/HUDI-3915
> Project: Apache Hudi
> Issue Type: Bug
> Components: deltastreamer
> Reporter: Neetu Gupta
> Assignee: Alexey Kudinkin
> Priority: Critical
> Fix For: 0.12.1
>
>
> I changed the Hudi partition column configuration from 'year,month' to 'year'.
> Then I ran the process in overwrite mode; it completed successfully and the
> Hudi table was created.
> However, when the process was next triggered in 'append' mode, I started
> getting the error below:
> '
> Task 0 in stage 32.0 failed 4 times; aborting job java.lang.Exception: Job
> aborted due to stage failure: Task 0 in stage 32.0 failed 4 times, most
> recent failure: Lost task 0.3 in stage 32.0 (TID 1207,
> ip-10-73-110-184.ec2.internal, executor 6):
> org.apache.hudi.exception.HoodieUpsertException: Error upserting bucketType
> UPDATE for partition :0 at
> org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.handleUpsertPartition(BaseSparkCommitActionExecutor.java:305)
> '
> I then reverted the partition columns back to 'year,month' but still got the
> same error. However, when I write the data to a different folder in 'append'
> mode, the script runs fine and I can see the Hudi table.
> In short, the process fails whenever I try to append data to the same path.
> Can you please look into this? This is critical for us because our jobs are
> stuck.
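> For reference, this is a sketch of the writer configuration involved (the
> option names are the standard Hudi datasource options; the record key column
> is a placeholder, since my exact schema is not shown here):
>
>     hoodie.datasource.write.operation=upsert
>     hoodie.datasource.write.partitionpath.field=year    # was: year,month
>     hoodie.datasource.write.recordkey.field=<record key column>
>     # First run used Spark SaveMode.Overwrite; subsequent runs use
>     # SaveMode.Append against the same base path, which is where it fails.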
--
This message was sent by Atlassian Jira
(v8.20.10#820010)