Set 'write-only' = 'true' on the table, and make sure your code handles
nullable columns correctly when using the partial-update merge engine.
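For example, a minimal sketch of the table definition (assuming Flink SQL;
the table name and columns here are placeholders, only the option names
matter):

  CREATE TABLE my_partial_update_table (
    b_id BIGINT,
    s_id BIGINT,
    col1 STRING,
    col2 STRING,
    PRIMARY KEY (b_id, s_id) NOT ENFORCED
  ) WITH (
    'bucket' = '40',
    'bucket-key' = 'b_id,s_id',
    'merge-engine' = 'partial-update',
    -- writers only write; compaction is left to a dedicated job
    'write-only' = 'true'
  );

With write-only enabled, compaction has to be triggered by a separate
dedicated compaction job, as described in the dedicated-compaction page
linked below.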

Or, the simpler recommended approach is to use the deduplicate merge engine
and just write separate code for the nullability handling.
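A rough sketch of that alternative (again with placeholder names). With
deduplicate, the latest row for a primary key completely replaces the
previous one, so the keep-the-first-non-null logic has to be applied in
your own job before writing:

  CREATE TABLE my_dedup_table (
    b_id BIGINT,
    s_id BIGINT,
    col1 STRING,
    col2 STRING,
    PRIMARY KEY (b_id, s_id) NOT ENFORCED
  ) WITH (
    'bucket' = '40',
    'bucket-key' = 'b_id,s_id',
    -- latest record per key wins; null handling happens upstream in your code
    'merge-engine' = 'deduplicate',
    'write-only' = 'true'
  );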

Thanks,
Neeraj


On Fri, Oct 10, 2025 at 12:59 PM Jingsong Li <[email protected]> wrote:

> Can you take a look at
>
> https://paimon.apache.org/docs/master/maintenance/dedicated-compaction/#dedicated-compaction-job
> as the exception message suggests?
>
> You should enable write-only.
>
> Best,
> Jingsong
>
> On Fri, Oct 10, 2025 at 11:52 AM yuzhang qin <[email protected]>
> wrote:
> >
> > Hi, I have some questions about compaction and the partial-update function.
> > In my business scenario, I need 8 tasks writing to the same table, and I
> > also need a few columns to keep the first non-null value while the others
> > are overwritten.
> > Now I have set the table up as:
> > WITH (
> >   'bucket' = '40',
> >   'path' = 's3://yj-datalake-test/paimon/qinyuzhang.db/Mt5HistoryTradeRecordModel',
> >   'write-mode' = 'WRITE_ONLY',
> >   'merge-engine' = 'partial-update',
> >   'bucket-key' = 'b_id,s_id'
> > )
> > and run 8 tasks to write, but it prints logs like:
> > 2025-10-09 05:33:13,865 WARN org.apache.flink.runtime.taskmanager.Task
> [] - Writer : Mt5HistoryTradeRecordModel -> Global Committer :
> Mt5HistoryTradeRecordModel -> end: Writer (1/1)#677
> (70647f047568adcfd35f0ed8f64fd97d_f6dc7f4d2283f4605b127b9364e21148_0_677)
> switched from RUNNING to FAILED with failure cause:
> java.lang.RuntimeException: File deletion conflicts detected! Give up
> committing. Don't panic! Conflicts during commits are normal and this
> failure is intended to resolve the conflicts. Conflicts are mainly caused
> by the following scenarios: 1. Multiple jobs are writing into the same
> partition at the same time, or you use STATEMENT SET to execute multiple
> INSERT statements into the same Paimon table. You'll probably see different
> base commit user and current commit user below. You can use
> https://paimon.apache.org/docs/master/maintenance/dedicated-compaction#dedicated-compaction-job
> to support multiple writing. 2. You're recovering from an old savepoint, or
> you're creating multiple jobs from a savepoint. The job will fail
> continuously in this scenario to protect metadata from corruption. You can
> either recover from the latest savepoint, or you can revert the table to
> the snapshot corresponding to the old savepoint. Base commit user is:
> d0d36272-9541-497b-8710-0ad75295d42b; Current commit user is:
> d0d36272-9541-497b-8710-0ad75295d42b
> > So the table does not save all the data.
> > How should I set up the table to eliminate this problem and retain all
> > the data?
> > Thanks a lot for your help.
> > yuzhang
>
