Hi Da,

Currently, FLIP-506, released in Flink 2.1, only supports sink reuse when the
INSERT statements write identical columns.
More work is needed to support sink reuse across multiple streams with
different columns.
I think this will be released in the next Flink version.
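As a rough sketch of the currently supported shape: if every branch writes the same full column list (padding the fields it does not own with NULLs) and the INSERTs run in one statement set, the planner can reuse a single sink node. Table, stream, and type names below are taken from your example and are illustrative only:

```sql
-- Sketch only: assumes paimon_table uses the partial-update merge
-- engine; casts are illustrative and should match your real schema.
EXECUTE STATEMENT SET
BEGIN
  -- Both INSERTs target the same sink with identical column lists,
  -- so the planner can merge them into one sink node.
  INSERT INTO paimon_table
  SELECT orderid, channelName, orderTime,
         CAST(NULL AS DECIMAL(10, 2)), CAST(NULL AS TIMESTAMP(3))
  FROM stream1;

  INSERT INTO paimon_table
  SELECT orderid, CAST(NULL AS STRING), CAST(NULL AS TIMESTAMP(3)),
         payAmount, payTime
  FROM stream2;
END;
```

Writing only each stream's own fields (no NULL placeholders) is exactly the part that still needs the follow-up work mentioned above.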

Regards,
Xiangyu Feng

Jingsong Li <jingsongl...@gmail.com> wrote on Fri, Aug 1, 2025, 12:27:

> Hi,
>
> You can take a look at
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-506%3A+Support+Reuse+Multiple+Table+Sinks+in+Planner
>
> It was released in Flink 2.1.
>
> Best,
> Jingsong
>
> On Thu, Jul 31, 2025 at 3:17 PM Ge Da <wczfore...@gmail.com> wrote:
> >
> > Hi Paimon community,
> >
> > I'm working on a multi-stream join scenario using Paimon and have a
> question about Sink Node Reuse optimization.
> >
> > Current setup:
> > - Using partial-update merge engine
> > - Multiple Kafka streams (order_base_cdc, order_pay_cdc)
> > - Currently using UNION ALL with NULL placeholders
> >
> > I've heard about a "Sink Node Reuse" optimization that can avoid
> maintaining NULL placeholders when the table schema changes. Could someone
> explain:
> >
> > 1. How does Sink Node Reuse work in Paimon?
> > 2. What are the specific configuration parameters needed?
> > 3. Can each stream write only its own fields without NULL placeholders?
> > 4. Any examples or documentation for this feature?
> >
> > Current DML structure:
> > ```sql
> > INSERT INTO paimon_table
> > SELECT orderid, channelName, orderTime, NULL, NULL FROM stream1
> > UNION ALL
> > SELECT orderid, NULL, NULL, payAmount, payTime FROM stream2
> > ```
> >
> > Is there a way to write like this instead?
> > ```sql
> > -- Stream 1
> > INSERT INTO paimon_table SELECT orderid, channelName, orderTime FROM
> stream1;
> > -- Stream 2
> > INSERT INTO paimon_table SELECT orderid, payAmount, payTime FROM stream2;
> > ```
> >
> > Thanks for any help!
> >
>
