Hi Gordon!
Thank you for publishing this FLIP! I would like to ask a few questions
and confirm my understanding of several aspects of this proposal. Although
this discussion focuses on FLIP-319, since it builds on top of KIP-939, my
questions will also cover the KIP.
1. Why does
+1 (nb).
Checked:
- checksums are correct
- source code builds (JDK 8 and 11)
- release notes are correct
Best,
Alex
On Wed, Apr 12, 2023 at 5:07 PM Tzu-Li (Gordon) Tai
wrote:
> A few important remarks about this release candidate:
>
> - As mentioned in the previous voting thread of
access_logs
> > > GROUP BY URL;
> > >
> > > This is a continuous query[1] that continuously emits a new page_views record once a new URL
> > > access entry is received. The same URLs in the log may be far away and
> be
> > > processed in di
and
> b5,
> then back a5 if jobs perform checkpoints after producing records.
>
>
> Best,
> Jark
>
>
> > On Apr 5, 2023, at 09:39, Alexander Sorokoumov wrote:
> >
> > Hello Flink community,
> >
> > I would like to discuss if it is worth adding EXACTLY_O
Hello Flink community,
I would like to discuss whether it is worth adding EXACTLY_ONCE delivery
semantics to the upsert-kafka connector. According to the upsert-kafka
docs[1] and the ReducingUpsertSink javadoc[2], the connector is correct even
with duplicate records under AT_LEAST_ONCE because the records are
/7e0d55ff3dc9fd48455b17d9a439647b0554d020/flink-table-store-connector/src/main/java/org/apache/flink/table/store/connector/source/TableStoreSource.java#L136-L141
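To illustrate why duplicates are harmless for an upsert stream, here is a minimal Python sketch (my own simplified record format, not Flink or Kafka code): materializing a keyed changelog keeps only the latest value per key, so replaying an already-delivered suffix of records after a failure converges to the same state.

```python
def materialize(records):
    """Apply upsert records (key, value) in order; value=None is a tombstone."""
    state = {}
    for key, value in records:
        if value is None:
            state.pop(key, None)  # tombstone deletes the key
        else:
            state[key] = value    # upsert overwrites the previous value
    return state

records = [("url_a", 1), ("url_b", 1), ("url_a", 2)]

# AT_LEAST_ONCE: after a failure the producer may resend records that were
# already delivered. Materializing the duplicated stream yields the same
# state, because applying an upsert twice is idempotent per key.
exactly_once_state = materialize(records)
duplicated_state = materialize(records + records[1:])  # suffix replayed
assert exactly_once_state == duplicated_state == {"url_a": 2, "url_b": 1}
```

The sketch only shows idempotence of the materialized view; consumers that read the raw log would still observe the duplicate records themselves.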
Best,
Alex
On Fri, Dec 16, 2022 at 5:28 PM Alexander Sorokoumov <
asorokou...@confluent.io> wrote:
> Hello community,
>
> I want to ask
Hello community,
I want to ask about streaming queries with Flink Table Store. After reading
the documentation on Streaming Queries [1], I was under the impression that
only tables with LogStore-over-TableStore and No Changelog Producer need
the normalization step since the Kafka log has the
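Since the question touches on why that normalization step exists at all, here is a minimal Python sketch of the idea (my own simplified record and op names, not Flink's actual ChangelogNormalize implementation): turning an upsert stream into a changelog with explicit before-images requires remembering the last value seen per key, which is why the step is stateful.

```python
def normalize(upserts):
    """Turn an upsert stream of (key, value) into a changelog with explicit
    before-images: (op, key, value), where +I is an insert and -U/+U are the
    update's before and after images. Keeping `last` is the per-key state."""
    last = {}
    out = []
    for key, value in upserts:
        if key in last:
            out.append(("-U", key, last[key]))  # retract the previous value
            out.append(("+U", key, value))      # emit the new value
        else:
            out.append(("+I", key, value))      # first record for this key
        last[key] = value
    return out

changelog = normalize([("url_a", 1), ("url_a", 2)])
# → [("+I", "url_a", 1), ("-U", "url_a", 1), ("+U", "url_a", 2)]
```

If the log already carries full before/after images, no such state is needed, which is presumably why only some table configurations require the step.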
I’m Alexander from Confluent. I am new to Flink and its community. I would
like to contribute to the Flink Table Store, but am missing certain
details. Can someone please clarify the points below for me?
- Given that there is always a single writer to a stream, in what
situations