Thanks Zhanghao, we can do this in Flink too.

Best,
Fang Yong

Zhanghao Chen <zhanghao.c...@outlook.com> wrote:

> Hi Yong,
>
> Thanks for raising it! It is a common problem shared by all sinks using
> the global committer pattern. Would it be better to initiate a discussion
> in the Flink community as well?
>
> Best,
> Zhanghao Chen
> ________________________________
> From: Jingsong Li <jingsongl...@gmail.com>
> Sent: Thursday, January 23, 2025 20:38
> To: dev@paimon.apache.org <dev@paimon.apache.org>
> Subject: Re: [DISCUSS] PIP-30: Improvement For Paimon Committer In Flink
>
> Thanks Yong!
>
> Looks fantastic! We can discuss this in detail after the Spring Festival.
>
> Best,
> Jingsong
>
> On Thu, Jan 23, 2025 at 5:44 PM Yong Fang <zjur...@gmail.com> wrote:
> >
> > Hi devs,
> >
> > I would like to start a discussion about PIP-30: Improvement For Paimon
> > Committer In Flink [1].
> >
> > Currently, Flink writes data to Paimon based on two-phase commit, which
> > generates a global committer node and connects all tasks into one
> > failover region. If any task fails, this leads to a global failover of
> > the Flink job.
> >
> > To solve this issue, we would like to introduce a Paimon Writer
> > Coordinator to perform the table commit operation, enabling Flink Paimon
> > jobs to support region failover and improving stability.
> >
> > Looking forward to hearing from you, thanks!
> >
> > [1]
> > https://cwiki.apache.org/confluence/display/PAIMON/PIP-30%3A+Improvement+For+Paimon+Committer+In+Flink
> >
> >
> > Best,
> > Fang Yong
>
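To illustrate the failover-region issue the proposal describes, here is a minimal Python sketch (an illustrative model only, not actual Flink or Paimon code; all names are hypothetical). It treats failover regions as connected components of the task graph: a global committer connected to every writer merges all tasks into a single region, while moving commit out of the task graph (the coordinator idea) leaves each writer in its own region.

```python
# Hypothetical model of Flink's region failover: connected edges merge
# tasks into one failover region (connected components of the job graph).
# Names and structure are illustrative, not actual Flink/Paimon APIs.

def failover_regions(num_tasks, edges):
    """Count failover regions as connected components over task edges."""
    parent = list(range(num_tasks))

    def find(x):
        # Find the component root with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in edges:
        union(a, b)
    return len({find(t) for t in range(num_tasks)})

# 4 parallel writer tasks (0..3) plus a global committer task (4).
# With the committer connected to every writer, all tasks collapse into
# a single region, so any writer failure restarts the whole job.
with_committer = failover_regions(5, [(w, 4) for w in range(4)])

# If commit moves to a coordinator outside the task graph, the writers
# stay disconnected and each one forms its own failover region.
without_committer = failover_regions(4, [])

print(with_committer, without_committer)  # 1 4
```

This is only a sketch of why removing the global committer node from the task graph enables per-region (here, per-writer) recovery instead of a full-job restart.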