Thanks everyone; I have opened a PR: https://github.com/apache/flink/pull/20526

On 10/08/2022 04:16, Xingbo Huang wrote:
I'm not sure whether this Kafka upgrade will bring other, as yet unknown problems, but if it helps us solve known critical bugs, the sooner we upgrade, the better. Before cutting the release-1.16 branch we will still have enough time to test the impact.

So I'm +1 for including this fix in 1.16.0.

Best,
Xingbo

Mason Chen <mas.chen6...@gmail.com> wrote on Wed, Aug 10, 2022, at 07:20:

+1

We don't have a reproducible test in Flink CI to tell whether the issue is definitely solved. However, bumping the version doesn't make matters worse than the current state.

Best,
Mason

On Tue, Aug 9, 2022 at 10:56 AM David Anderson <dander...@apache.org> wrote:

I'm in favor of adopting this fix in 1.16.0.

+1

On Tue, Aug 9, 2022 at 7:13 AM tison <wander4...@gmail.com> wrote:

+1

This looks reasonable.

Best,
tison.


Thomas Weise <t...@apache.org> wrote on Tue, Aug 9, 2022, at 21:33:

+1 for bumping the Kafka dependency.

Flink X.Y.0 releases require thorough testing, so, considering the severity of the problem, this is still good timing, even that close to the first RC.
Thanks for bringing this up.

Thomas

On Tue, Aug 9, 2022 at 7:51 AM Chesnay Schepler <ches...@apache.org> wrote:

Hello,

The Kafka upgrade in 1.15.0 resulted in a regression (https://issues.apache.org/jira/browse/FLINK-28060) where offsets are not committed to Kafka, impeding monitoring and the starting-offsets functionality of the connector.
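
For anyone who wants to see which connector paths are affected, here is a minimal sketch of a job exercising them; the broker address, topic, and group id are placeholders, and it assumes checkpointing is enabled, since the source only commits offsets back to Kafka when a checkpoint completes:

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.kafka.clients.consumer.OffsetResetStrategy;

    public class CommittedOffsetsExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Offsets are only committed back to Kafka on checkpoint completion;
            // FLINK-28060 is about these commits silently not happening.
            env.enableCheckpointing(10_000);

            KafkaSource<String> source = KafkaSource.<String>builder()
                    .setBootstrapServers("broker:9092")   // placeholder
                    .setTopics("input-topic")             // placeholder
                    .setGroupId("my-consumer-group")      // placeholder
                    // Start from the group's committed offsets, falling back to
                    // EARLIEST if none exist; this is the starting-offsets
                    // functionality impeded by the missing commits.
                    .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
                    .setValueOnlyDeserializer(new SimpleStringSchema())
                    .build();

            env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source").print();
            env.execute("committed-offsets-example");
        }
    }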

This was fixed about a week ago in Kafka 3.2.1.

The question is whether we want to upgrade Kafka so close to the feature freeze. I'm usually not a friend of doing that in general, but in this case there is a specific issue we'd like to get fixed, and we still have the entire duration of the feature freeze to observe the behavior.

I'd like to know what you think about this.

For reference, our current Kafka version is 3.1.1, and our CI is passing with 3.2.1.



