Hi Ravi,

The official Flink documentation has a pretty good explanation for this:
https://nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/kafka/#bounded-ending-position

And yes, I can confirm it's officially supported. The behaviour also
depends on your start reading position ("scan.startup.mode"), so you need
to specify both options together.
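For example, a minimal DDL combining the two options might look like the
sketch below (table name, topic, brokers, and schema are placeholders I
made up, not from your setup):

```sql
-- Hypothetical example: start reading from a fixed timestamp and stop at
-- the latest offset observed when the job starts, so the source is bounded
-- and the job finishes like a batch job.
CREATE TABLE orders (
    order_id STRING,
    amount   DOUBLE,
    ts       TIMESTAMP(3)
) WITH (
    'connector' = 'kafka',
    'topic' = 'orders',                                -- placeholder topic
    'properties.bootstrap.servers' = 'localhost:9092', -- placeholder broker
    'properties.group.id' = 'flink-eval',
    'format' = 'json',
    -- where to start reading:
    'scan.startup.mode' = 'timestamp',
    'scan.startup.timestamp-millis' = '1737500000000',
    -- where to stop reading (this is what makes the source bounded):
    'scan.bounded.mode' = 'latest-offset'
);
```

One thing worth noting for your test results: with
'scan.bounded.mode' = 'latest-offset', the end offsets are captured once
at job start, so the job reads everything up to that point and then stops.
That would explain why you only saw "old" events. And if I read the docs
correctly, 'scan.bounded.mode' = 'timestamp' additionally requires
'scan.bounded.timestamp-millis' to be set, otherwise there is no end
position to stop at.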

On Thu, Jan 22, 2026 at 6:14 AM ravi_suryavanshi.yahoo.com via user <
[email protected]> wrote:

> Hi,
> Does anyone have input/info regarding this?
> Regards,
> Ravi
>
> On Wednesday, 19 November 2025 at 11:04:08 am IST,
> ravi_suryavanshi.yahoo.com via user <[email protected]> wrote:
>
>
> Hi,
> We are currently evaluating the Apache Kafka SQL Connector. According to
> the Flink v2.0.1 documentation, a Kafka Scan Source can be used as an
> unbounded source. However, we also noticed the scan.bounded.mode option,
> and when this option is enabled, the Flink job runs in batch mode.
>
> We would like to confirm whether batch mode is officially supported for
> the Kafka SQL Connector. If it is supported, what behavior should we expect
> when using the different bounded modes: latest-offset, group-offsets,
> timestamp, and specific-offsets?
>
> In our tests, latest-offset returns old events, while timestamp returns no
> events.
>
> Regards,
> Ravi
>
>