Hi Timo and Dawid,

Thanks for taking a look and for the feedback! Dawid, to address your valid
point, I've added a new section called "4.3.6 Invalid op_mapping
Combinations". Apart from that, I've also added a "passthrough" option to
both functions. It allows users to carry top-level fields (e.g., ts_ms)
through the PTF when before/after are used, instead of dropping them. It
seems to be a common use case and a useful addition. Documentation and use
cases have been added to the FLIP.
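
To give a rough idea, usage could look something like the sketch below. The
parameter names and call syntax here are only illustrative; the exact
signature is spelled out in the FLIP.

    -- Illustrative sketch only, not the final signature from FLIP-564.
    -- cdc_records is an append-only table with Debezium-style columns, e.g.
    --   op STRING, ts_ms BIGINT, before ROW<...>, after ROW<...>
    -- With passthrough, the top-level ts_ms is carried into the resulting
    -- dynamic table instead of being dropped.
    SELECT *
    FROM FROM_CHANGELOG(
      input => TABLE cdc_records,
      passthrough => DESCRIPTOR(ts_ms)
    );

The same option applies symmetrically to TO_CHANGELOG.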

If there are no further comments, I'd like to start the vote in the coming
days.

Kind regards,
Gustavo

On Fri, 13 Feb 2026 at 12:13, Dawid Wysakowicz <[email protected]>
wrote:

> Hi Gustavo,
> Thank you for this very well-written FLIP! I really liked the examples;
> they help understand the purpose well.
>
> +1 for this proposal.
>
> One comment from my side. I understand there are only certain combinations
> that are allowed in `op_mapping`, especially in the TO_CHANGELOG PTF. How
> do you plan to handle invalid cases? For example, what would happen if I
> specified ["INSERT, DELETE", "ID"]? I suspect it would fail during
> planning, am I right? Could we make it explicit in the FLIP?
>
> Best,
> Dawid
>
> On Fri, 13 Feb 2026 at 12:04, Timo Walther <[email protected]> wrote:
>
> > Hi Gustavo,
> >
> > thank you for this excellent design doc and coming up with all these
> > different combinations of use cases. These two PTFs will significantly
> > improve the Flink CDC story and help polish the stream/table duality
> > story.
> >
> > +1 for this proposal.
> >
> > Naming-wise I could also imagine calling it TO_CDC/FROM_CDC. But the
> > current naming fits well with other API endpoints and Flink terminology,
> > which is why I support FROM_CHANGELOG/TO_CHANGELOG.
> >
> > Cheers,
> > Timo
> >
> > On 06.02.26 18:52, Gustavo de Morais wrote:
> > > Hi everyone,
> > >
> > > I'd like to propose FLIP-564: Support *FROM_CHANGELOG* and
> > > *TO_CHANGELOG* built-in PTFs [1] for discussion.
> > >
> > > Flink's DataStream API offers flexible methods like toChangelogStream()
> > > and fromChangelogStream() to work with changelog streams, but SQL users
> > > currently lack this capability. This FLIP introduces two built-in
> > > Process Table Functions (PTFs) to bring *similar* functionality with
> > > additional features to Flink SQL:
> > >
> > >
> > >     - *FROM_CHANGELOG*: Converts an append-only stream of CDC records
> > >     into a dynamic table, enabling, for example, custom CDC connector
> > >     implementations directly in SQL.
> > >     - *TO_CHANGELOG*: Converts a dynamic table back into an append-only
> > >     changelog stream - the first operator that makes it possible to
> > >     convert a retract/upsert stream back to append in SQL.
> > >
> > >
> > > Both PTFs support flexible operation mapping (e.g., Debezium-style 'c',
> > > 'u', 'd' codes), before/after image handling, configurable state TTL,
> > > and watermark-based ordering. They are designed to work symmetrically,
> > > so FROM_CHANGELOG(TO_CHANGELOG(table)) round-trips correctly.
> > >
> > > The naming follows the existing DataStream API convention. Alternative
> > > names were considered (e.g., ENCODE_CHANGELOG, DEMATERIALIZE); they are
> > > listed under the rejected alternatives. If you have any better
> > > suggestions, feel free to share them here.
> > >
> > > Looking forward to your feedback and thoughts.
> > >
> > > Kind regards,
> > > Gustavo de Morais
> > >
> > > [1]
> > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-564%3A+Support+FROM_CHANGELOG+and+TO_CHANGELOG+built-in+PTFs
> > >
> >
> >
>
