Re: [DISCUSS] FLIP-319: Integrating with Kafka’s proper support for 2PC participation (KIP-939).

2023-08-31 Thread Alexander Sorokoumov
Hi Gordon! Thank you for publishing this FLIP! I would like to ask a few questions and confirm my understanding of several aspects of this proposal. Even though this discussion focuses on FLIP-319, since it builds on top of KIP-939, my questions will also cover the KIP. 1. Why does

Re: [VOTE] Release flink-connector-kafka 3.0.0 for Flink 1.17, release candidate #2

2023-04-13 Thread Alexander Sorokoumov
+1 (nb). Checked: - checksums are correct - source code builds (JDK 8+11) - release notes are correct Best, Alex On Wed, Apr 12, 2023 at 5:07 PM Tzu-Li (Gordon) Tai wrote: > A few important remarks about this release candidate: > > - As mentioned in the previous voting thread of
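(Editor's illustration, not part of the original vote mail: the first item in that checklist, verifying the artifact checksums, can be automated with a small Java sketch. The file names below are placeholders for the actual release candidate artifact and its published .sha512 file.)

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class ChecksumCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder file names; point these at the real RC artifact and its .sha512 file.
        Path artifact = Paths.get("flink-connector-kafka-3.0.0-src.tgz");
        Path shaFile  = Paths.get("flink-connector-kafka-3.0.0-src.tgz.sha512");

        // Recompute the SHA-512 digest of the downloaded artifact.
        byte[] digest = MessageDigest.getInstance("SHA-512").digest(Files.readAllBytes(artifact));
        StringBuilder actual = new StringBuilder();
        for (byte b : digest) {
            actual.append(String.format("%02x", b));
        }

        // .sha512 files generally start with the hex digest, followed by the file name.
        String expected = new String(Files.readAllBytes(shaFile)).trim().split("\\s+")[0];

        System.out.println(actual.toString().equalsIgnoreCase(expected)
                ? "checksum OK" : "checksum MISMATCH");
    }
}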

Re: [DISCUSS] EXACTLY_ONCE delivery semantics for upsert-kafka connector

2023-04-12 Thread Alexander Sorokoumov
access_logs GROUP BY URL; This is a continuous query[1] that continuously emits a new page_views record once a new URL access entry is received. The same URLs in the log may be far away and be processed in di
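(Editor's illustration, not part of the original message: a hedged Table API reconstruction of the kind of continuous query being discussed. The access_logs, page_views, and URL names come from the thread; the connectors, schema, topic, and addresses are assumptions added for illustration.)

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class PageViewsQuery {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Assumed source: in the thread, access_logs would be an append-only log of URL accesses.
        tEnv.executeSql(
                "CREATE TABLE access_logs (URL STRING) WITH ('connector' = 'datagen')");

        // Assumed sink: an upsert-kafka table keyed by URL; topic and servers are placeholders.
        tEnv.executeSql(
                "CREATE TABLE page_views (" +
                "  URL STRING, views BIGINT, PRIMARY KEY (URL) NOT ENFORCED" +
                ") WITH (" +
                "  'connector' = 'upsert-kafka'," +
                "  'topic' = 'page_views'," +
                "  'properties.bootstrap.servers' = 'localhost:9092'," +
                "  'key.format' = 'json'," +
                "  'value.format' = 'json'" +
                ")");

        // The continuous query: every new access_logs row updates the count for its URL,
        // so the same URL can be emitted many times as an updated (upsert) record.
        tEnv.executeSql(
                "INSERT INTO page_views SELECT URL, COUNT(*) AS views FROM access_logs GROUP BY URL");
    }
}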

Re: [DISCUSS] EXACTLY_ONCE delivery semantics for upsert-kafka connector

2023-04-07 Thread Alexander Sorokoumov
and b5, then back a5 if jobs perform checkpoints after producing records. Best, Jark On Apr 5, 2023, at 09:39, Alexander Sorokoumov wrote: Hello Flink community, I would like to discuss if it is worth adding EXACTLY_O
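(Editor's illustration, not part of the original message: the behavior described here, where transactionally produced records only become readable once the checkpoint that produced them completes, is the standard pattern of Flink's exactly-once Kafka sink. A minimal DataStream sketch follows; note this uses the plain KafkaSink rather than the upsert-kafka connector under discussion, and the servers, topic, and sample values are placeholders.)

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExactlyOnceSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Kafka transactions are committed when a checkpoint completes, so checkpointing must be on.
        env.enableCheckpointing(60_000);

        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")            // placeholder address
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("output-topic")                 // placeholder topic
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                .setTransactionalIdPrefix("exactly-once-demo")    // required for EXACTLY_ONCE
                .build();

        env.fromElements("a5", "b5").sinkTo(sink);
        env.execute("exactly-once sketch");
    }
}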

[DISCUSS] EXACTLY_ONCE delivery semantics for upsert-kafka connector

2023-04-04 Thread Alexander Sorokoumov
Hello Flink community, I would like to discuss whether it is worth adding EXACTLY_ONCE delivery semantics to the upsert-kafka connector. According to the upsert-kafka docs[1] and the ReducingUpsertSink javadoc[2], the connector is correct even with duplicate records under AT_LEAST_ONCE because the records are
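(Editor's illustration, not part of the original message: a hypothetical DDL sketch of what the proposal could look like. The 'sink.delivery-guarantee' and 'sink.transactional-id-prefix' options are borrowed from the plain kafka connector, and their availability on upsert-kafka is exactly what this thread is debating, so treat them, along with the topic and servers, as assumptions.)

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UpsertKafkaExactlyOnceSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Hypothetical: an upsert-kafka sink table with the proposed exactly-once options.
        tEnv.executeSql(
                "CREATE TABLE page_views (" +
                "  URL STRING, views BIGINT, PRIMARY KEY (URL) NOT ENFORCED" +
                ") WITH (" +
                "  'connector' = 'upsert-kafka'," +
                "  'topic' = 'page_views'," +                            // placeholder
                "  'properties.bootstrap.servers' = 'localhost:9092'," + // placeholder
                "  'key.format' = 'json'," +
                "  'value.format' = 'json'," +
                "  'sink.delivery-guarantee' = 'exactly-once'," +        // proposed option
                "  'sink.transactional-id-prefix' = 'page-views-sink'" + // proposed option
                ")");
    }
}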

Re: Streaming queries in FTS using Kafka log

2022-12-21 Thread Alexander Sorokoumov
/7e0d55ff3dc9fd48455b17d9a439647b0554d020/flink-table-store-connector/src/main/java/org/apache/flink/table/store/connector/source/TableStoreSource.java#L136-L141 Best, Alex On Fri, Dec 16, 2022 at 5:28 PM Alexander Sorokoumov <asorokou...@confluent.io> wrote: Hello community, I want to ask

Streaming queries in FTS using Kafka log

2022-12-16 Thread Alexander Sorokoumov
Hello community, I want to ask about streaming queries with Flink Table Store. After reading the documentation on Streaming Queries [1], I was under the impression that only tables with LogStore-over-TableStore and No Changelog Producer need the normalization step since the Kafka log has the
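(Editor's illustration, not part of the original message: a hedged sketch of the LogStore-over-TableStore setup the question refers to. The catalog and table options ('type' = 'table-store', 'log.system' = 'kafka', 'kafka.bootstrap.servers', 'kafka.topic') follow the Flink Table Store documentation of that period; the warehouse path, servers, and topic are placeholders.)

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class LogStoreOverTableStoreSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Table Store catalog; warehouse path is a placeholder.
        tEnv.executeSql(
                "CREATE CATALOG fts WITH (" +
                "  'type' = 'table-store'," +
                "  'warehouse' = 'file:///tmp/table_store'" +
                ")");
        tEnv.executeSql("USE CATALOG fts");

        // A managed Table Store table whose log is backed by Kafka (LogStore over TableStore).
        tEnv.executeSql(
                "CREATE TABLE page_views (" +
                "  URL STRING, views BIGINT, PRIMARY KEY (URL) NOT ENFORCED" +
                ") WITH (" +
                "  'log.system' = 'kafka'," +
                "  'kafka.bootstrap.servers' = 'localhost:9092'," +  // placeholder
                "  'kafka.topic' = 'page_views_log'" +               // placeholder
                ")");
    }
}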

Questions about Flink Table Store

2022-11-07 Thread Alexander Sorokoumov
I’m Alexander from Confluent. I am new to Flink and its community. I would like to contribute to the Flink Table Store, but I am missing certain details. Can someone please clarify the points below for me? - Given that there is always a single writer to a stream, in what situations