Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.3 #173

2023-05-02 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1814

2023-05-02 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 567195 lines...]
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[2023-05-02T22:01:28.928Z] 
[2023-05-02T22:01:28.928Z] > Task :connect:mirror:integrationTest
[2023-05-02T22:01:28.928Z] 
[2023-05-02T22:01:28.928Z] Gradle Test Run :connect:mirror:integrationTest > 
Gradle Test Executor 134 > MirrorConnectorsIntegrationSSLTest > 
testReplication() PASSED
[2023-05-02T22:01:28.928Z] 
[2023-05-02T22:01:28.928Z] Gradle Test Run :connect:mirror:integrationTest > 
Gradle Test Executor 134 > MirrorConnectorsIntegrationSSLTest > 
testReplicationWithEmptyPartition() STARTED
[2023-05-02T22:01:46.293Z] 
[2023-05-02T22:01:46.293Z] > Task :connect:runtime:integrationTest
[2023-05-02T22:01:46.293Z] 
[2023-05-02T22:01:46.293Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest > 
testMultipleWorkersRejoining PASSED
[2023-05-02T22:01:50.020Z] 
[2023-05-02T22:01:50.020Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.RestForwardingIntegrationTest > 
testRestForwardNoSslDualListener STARTED
[2023-05-02T22:02:20.782Z] 
[2023-05-02T22:02:20.782Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.RestForwardingIntegrationTest > 
testRestForwardNoSslDualListener PASSED
[2023-05-02T22:02:20.782Z] 
[2023-05-02T22:02:20.782Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.RestForwardingIntegrationTest > 
testRestForwardSslDualListener STARTED
[2023-05-02T22:02:29.148Z] 
[2023-05-02T22:02:29.148Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.RestForwardingIntegrationTest > 
testRestForwardSslDualListener PASSED
[2023-05-02T22:02:29.148Z] 
[2023-05-02T22:02:29.148Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.RestForwardingIntegrationTest > 
testRestForwardFollowerSsl STARTED
[2023-05-02T22:02:34.318Z] 
[2023-05-02T22:02:34.318Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.RestForwardingIntegrationTest > 
testRestForwardFollowerSsl PASSED
[2023-05-02T22:02:34.318Z] 
[2023-05-02T22:02:34.318Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.RestForwardingIntegrationTest > 
testRestForwardSsl STARTED
[2023-05-02T22:02:42.671Z] 
[2023-05-02T22:02:42.671Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.RestForwardingIntegrationTest > 
testRestForwardSsl PASSED
[2023-05-02T22:02:42.671Z] 
[2023-05-02T22:02:42.671Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.RestForwardingIntegrationTest > 
testRestForwardNoSsl STARTED
[2023-05-02T22:02:45.826Z] 
[2023-05-02T22:02:45.826Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.RestForwardingIntegrationTest > 
testRestForwardNoSsl PASSED
[2023-05-02T22:02:45.826Z] 
[2023-05-02T22:02:45.826Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.RestForwardingIntegrationTest > 
testRestForwardLeaderSsl STARTED
[2023-05-02T22:02:50.342Z] 
[2023-05-02T22:02:50.342Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.RestForwardingIntegrationTest > 
testRestForwardLeaderSsl PASSED
[2023-05-02T22:02:50.342Z] 
[2023-05-02T22:02:50.342Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.SinkConnectorsIntegrationTest > 
testCooperativeConsumerPartitionAssignment STARTED
[2023-05-02T22:03:28.079Z] 
[2023-05-02T22:03:28.079Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.SinkConnectorsIntegrationTest > 
testCooperativeConsumerPartitionAssignment PASSED
[2023-05-02T22:03:28.079Z] 
[2023-05-02T22:03:28.079Z] Gradle Test Run :connect:runtime:integrationTest > 
Gradle Test Executor 137 > 
org.apache.kafka.connect.integration.SinkConnectorsIntegrationTest > 
testEagerConsumerPartitionAssignment STARTED
[2023-05-02T22:03:51.110Z] 
[2023-05-02T22:03:51.110Z] > Task :connect:mirror:integrationTest
[2023-05-02T22:03:51.110Z] 
[2023-05-02T22:03:51.110Z] Gradle Test Run :connect:mirror:integrationTest > 
Gradle Test Executor 134 > MirrorConnectorsIntegrationSSLTest > 
testReplicationWithEmptyPartition() 

[jira] [Created] (KAFKA-14960) Metadata Request Manager and listTopics/partitionsFor API

2023-05-02 Thread Philip Nee (Jira)
Philip Nee created KAFKA-14960:
--

 Summary: Metadata Request Manager and listTopics/partitionsFor API
 Key: KAFKA-14960
 URL: https://issues.apache.org/jira/browse/KAFKA-14960
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Philip Nee
Assignee: Philip Nee


Implement listTopics and partitionsFor
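For context, these are the two public Consumer calls the new request manager would serve (a minimal sketch; the wrapper method is illustrative, the client API calls are standard):

```java
import java.time.Duration;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.PartitionInfo;

// Both calls resolve cluster metadata; under the new consumer threading
// model they would be routed through the metadata request manager.
static void showMetadataApis(Consumer<String, String> consumer) {
    Map<String, List<PartitionInfo>> allTopics =
            consumer.listTopics(Duration.ofSeconds(5));
    List<PartitionInfo> partitions =
            consumer.partitionsFor("my-topic", Duration.ofSeconds(5));
    System.out.println(allTopics.size() + " topics; " + partitions.size() + " partitions");
}
```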



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-923: Add A Grace Period to Stream Table Join

2023-05-02 Thread Victoria Xia
Cool KIP, Walker! Thanks for sharing this proposal.

A few clarifications:

1. Is the order in which records exit the buffer necessarily the same as the
order in which they entered it, or not? Based on the description in
the KIP, it sounds like the answer is no, i.e., records will exit the
buffer in increasing timestamp order, which means that they may be reordered
(even for the same key) relative to the input order.

2. What happens if the join grace period is nonzero, and a stream-side
record arrives with a timestamp that is older than the current stream time
minus the grace period? Will this record trigger a join result, or will it
be dropped? Based on the description for what happens when the join grace
period is set to zero, it sounds like the late record will be dropped, even
if the join grace period is nonzero. Is that true?

3. What could cause stream time to advance, for purposes of removing
records from the join buffer? For example, will new records arriving on the
table side of the join cause stream time to advance? From the KIP it sounds
like only stream-side records will advance stream time -- does that mean
that the join processor itself will have to track this stream time?

Also +1 to Lucas's question about what options will be available for
configuring the join buffer. Will users have the option to choose whether
they want the buffer to be in-memory vs persistent?

- Victoria

On Fri, Apr 28, 2023 at 11:54 AM Lucas Brutschy
 wrote:

> HI Walker,
>
> thanks for the KIP! We definitely need this. I have two questions:
>
>  - Have you considered allowing the customization of the underlying
> buffer implementation? As far as I can see, `StreamJoined` lets you
> customize the underlying store via a `WindowStoreSupplier`. Would it make sense
> for `Joined` to have this as well? I can imagine one may want to limit
> the number of records in the buffer, for example. If we hit the
> maximum, the only option would be to drop semantic guarantees, but
> users may still want to do this.
>  - With "second option on the table side" you are referring to
> versioned tables, right? Will the buffer on the stream side behave any
> different whether the table side is versioned or not?
>
> Finally, I think a simple example in the motivation section could help
> non-experts understand the KIP.
>
> Best,
> Lucas
>
> On Tue, Apr 25, 2023 at 9:13 PM Walker Carlson
>  wrote:
> >
> > Hello everybody,
> >
> > I have a stream proposal to improve the stream-table join by adding a
> > grace period and buffer to the stream side of the join to allow processing
> > in timestamp order, matching the recent improvements of the versioned tables.
> >
> > Please take a look here 
> > and share your thoughts.
> >
> > best,
> > Walker
>
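
For readers skimming the thread, a hypothetical sketch of the API under discussion (the withGracePeriod option on Joined is the KIP's proposed addition; topics, types, and the joiner are illustrative):

```java
import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

static void buildTopology(StreamsBuilder builder) {
    KStream<String, String> orders = builder.stream("orders");
    KTable<String, String> customers = builder.table("customers");

    // Proposed: buffer stream-side records for up to five minutes so the
    // join processes them in timestamp order against the (versioned) table.
    orders.join(
            customers,
            (order, customer) -> order + "/" + customer,
            Joined.<String, String, String>as("orders-customers-join")
                    .withGracePeriod(Duration.ofMinutes(5)));
}
```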


Re: Kafka client needs KAFKA-10337 to cover async commit use case

2023-05-02 Thread Philip Nee
Sorry - I dug a bit into the old PR. It seems the issue is a broken
contract: commitSync won't wait for the previous async commits to complete
if it is asked to commit an empty offset map.
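
A minimal illustration of the contract in question, based on the KafkaConsumer javadoc's ordering guarantee (the offsets map and the logging are illustrative):

```java
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

static void flushAsyncCommits(Consumer<String, String> consumer,
                              Map<TopicPartition, OffsetAndMetadata> offsets) {
    consumer.commitAsync(offsets, (committed, error) ->
            System.out.println("async commit completed, error=" + error));

    // The javadoc's ordering guarantee: a subsequent synchronous commit
    // invokes the callbacks of earlier async commits before returning.
    // Per this thread, that guarantee is skipped when the map is empty:
    consumer.commitSync(Collections.emptyMap());
}
```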

On Tue, May 2, 2023 at 12:49 PM Philip Nee  wrote:

> Hey Erik,
>
> Just a couple of questions for you: Firstly, could you explain the
> situation in which you would prefer to invoke commitAsync over commitSync in
> the rebalance listener?  Typically we would use the synchronous method to
> ensure the commits are completed before moving on with the rebalancing,
> which leads to my second comment/question: is it your concern that we
> currently don't have a way to invoke the callback, and that the user won't
> be able to correctly handle these failed/successful async commits?
>
> Thanks,
> P
>
> On Tue, May 2, 2023 at 12:22 PM Erik van Oosten
>  wrote:
>
>> Dear developers of the Kafka java client,
>>
>> It seems I have found a feature gap in the Kafka java client.
>> KAFKA-10337 and its associated pull request on Github (from 2020!) would
>> solve this, but it was closed without merging. We would love to see it
>> being reconsidered for merging. This mail has the arguments for doing so.
>>
>> The javadoc of `ConsumerRebalanceListener` method `onPartitionsRevoked`
>> recommends you commit all offsets within the method, thereby holding up
>> the rebalance until those commits are done. The (perceived) feature gap
>> is when the user is trying to do async commits from the rebalance
>> listener; there is nothing available to trigger the callbacks of
> completed commits. Without these callbacks, there is no way to know when
>> it is safe to return from onPartitionsRevoked. (We cannot call `poll`
>> because the rebalance listener is already called from inside a poll.)
>>
>> Calling `commitAsync` with an empty offsets parameter seems a perfect
>> candidate for triggering callbacks of earlier commits. Unfortunately,
> commitAsync doesn't behave that way. This is fixed by the mentioned pull
>> request.
>>
>> The pull request conversation has a comment saying that calling `commit`
>> with an empty offsets parameter is not something that should happen. I
>> found this a strange thing to say. First of all, the method does have
>> special handling for this situation, negating the comment outright. In
>> addition this special handling violates the contract of the method (as
>> specified in the javadoc section about ordering). Therefore, this pull
>> request has 2 advantages:
>>
>>  1. KafkaConsumer.commitAsync will be more in line with its javadoc,
>>  2. the feature gap is gone.
>>
>> Of course, it might be that I missed something and that there are other
>> ways to trigger the commit callbacks. I would be very happy to hear
>> about that because it means I do not have to wait for a release cycle.
>>
>> If you agree these arguments are sound, I would be happy to make the
> pull request mergeable again.
>>
>> Curious to your thoughts and kind regards,
>>  Erik.
>>
>>
>> --
>> Erik van Oosten
>> e.vanoos...@grons.nl
>> https://day-to-day-stuff.blogspot.com
>> Committer of zio-kafka: https://github.com/zio/zio-kafka
>>
>


Re: Kafka client needs KAFKA-10337 to cover async commit use case

2023-05-02 Thread Philip Nee
Hey Erik,

Just a couple of questions for you: Firstly, could you explain the situation
in which you would prefer to invoke commitAsync over commitSync in the
rebalance listener?  Typically we would use the synchronous method to
ensure the commits are completed before moving on with the rebalancing,
which leads to my second comment/question: is it your concern that we
currently don't have a way to invoke the callback, and that the user won't be
able to correctly handle these failed/successful async commits?

Thanks,
P

On Tue, May 2, 2023 at 12:22 PM Erik van Oosten
 wrote:

> Dear developers of the Kafka java client,
>
> It seems I have found a feature gap in the Kafka java client.
> KAFKA-10337 and its associated pull request on Github (from 2020!) would
> solve this, but it was closed without merging. We would love to see it
> being reconsidered for merging. This mail has the arguments for doing so.
>
> The javadoc of `ConsumerRebalanceListener` method `onPartitionsRevoked`
> recommends you commit all offsets within the method, thereby holding up
> the rebalance until those commits are done. The (perceived) feature gap
> is when the user is trying to do async commits from the rebalance
> listener; there is nothing available to trigger the callbacks of
> completed commits. Without these callbacks, there is no way to know when
> it is safe to return from onPartitionsRevoked. (We cannot call `poll`
> because the rebalance listener is already called from inside a poll.)
>
> Calling `commitAsync` with an empty offsets parameter seems a perfect
> candidate for triggering callbacks of earlier commits. Unfortunately,
> commitAsync doesn't behave that way. This is fixed by the mentioned pull
> request.
>
> The pull request conversation has a comment saying that calling `commit`
> with an empty offsets parameter is not something that should happen. I
> found this a strange thing to say. First of all, the method does have
> special handling for this situation, negating the comment outright. In
> addition, this special handling violates the contract of the method (as
> specified in the javadoc section about ordering). Therefore, this pull
> request has 2 advantages:
>
>  1. KafkaConsumer.commitAsync will be more in line with its javadoc,
>  2. the feature gap is gone.
>
> Of course, it might be that I missed something and that there are other
> ways to trigger the commit callbacks. I would be very happy to hear
> about that because it means I do not have to wait for a release cycle.
>
> If you agree these arguments are sound, I would be happy to make the
> pull request mergeable again.
>
> Curious to your thoughts and kind regards,
>  Erik.
>
>
> --
> Erik van Oosten
> e.vanoos...@grons.nl
> https://day-to-day-stuff.blogspot.com
> Committer of zio-kafka: https://github.com/zio/zio-kafka
>


Kafka client needs KAFKA-10337 to cover async commit use case

2023-05-02 Thread Erik van Oosten

Dear developers of the Kafka java client,

It seems I have found a feature gap in the Kafka java client. 
KAFKA-10337 and its associated pull request on Github (from 2020!) would 
solve this, but it was closed without merging. We would love to see it 
being reconsidered for merging. This mail has the arguments for doing so.


The javadoc of `ConsumerRebalanceListener` method `onPartitionsRevoked` 
recommends you commit all offsets within the method, thereby holding up 
the rebalance until those commits are done. The (perceived) feature gap 
is when the user is trying to do async commits from the rebalance 
listener; there is nothing available to trigger the callbacks of 
completed commits. Without these callbacks, there is no way to know when 
it is safe to return from onPartitionsRevoked. (We cannot call `poll` 
because the rebalance listener is already called from inside a poll.)


Calling `commitAsync` with an empty offsets parameter seems a perfect 
candidate for triggering callbacks of earlier commits. Unfortunately, 
commitAsync doesn't behave that way. This is fixed by the mentioned pull 
request.
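
A sketch of the listener pattern described above (the class is illustrative; the empty-commit workaround shown is exactly what does not currently work, per this thread and KAFKA-10337):

```java
import java.util.Collection;
import java.util.Collections;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

final class AsyncCommitRebalanceListener implements ConsumerRebalanceListener {
    private final Consumer<String, String> consumer;

    AsyncCommitRebalanceListener(Consumer<String, String> consumer) {
        this.consumer = consumer;
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // Desired: block until the callbacks of all earlier commitAsync calls
        // have run, so returning (and letting the rebalance proceed) is safe.
        // poll() cannot be used here (this listener runs inside poll), and,
        // as described above, an empty synchronous commit does not trigger
        // the outstanding async callbacks either:
        consumer.commitSync(Collections.emptyMap());
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // nothing to do for this example
    }
}
```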


The pull request conversation has a comment saying that calling `commit` 
with an empty offsets parameter is not something that should happen. I 
found this a strange thing to say. First of all, the method does have 
special handling for this situation, negating the comment outright. In 
addition, this special handling violates the contract of the method (as 
specified in the javadoc section about ordering). Therefore, this pull 
request has 2 advantages:


1. KafkaConsumer.commitAsync will be more in line with its javadoc,
2. the feature gap is gone.

Of course, it might be that I missed something and that there are other 
ways to trigger the commit callbacks. I would be very happy to hear 
about that because it means I do not have to wait for a release cycle.


If you agree these arguments are sound, I would be happy to make the 
pull request mergeable again.


Curious to your thoughts and kind regards,
    Erik.


--
Erik van Oosten
e.vanoos...@grons.nl
https://day-to-day-stuff.blogspot.com
Committer of zio-kafka: https://github.com/zio/zio-kafka


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1813

2023-05-02 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-14959) Remove metrics on ClientQuota Managers shutdown

2023-05-02 Thread Manyanda Chitimbo (Jira)
Manyanda Chitimbo created KAFKA-14959:
-

 Summary: Remove metrics on ClientQuota Managers shutdown
 Key: KAFKA-14959
 URL: https://issues.apache.org/jira/browse/KAFKA-14959
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Manyanda Chitimbo
Assignee: Divij Vaidya
 Fix For: 3.6.0


We register metrics with the KafkaMetricsGroup in the ClientQuota managers but 
we don't remove them on shutdown.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14958) Investigate enforcing all batches have the same producer ID

2023-05-02 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-14958:
--

 Summary: Investigate enforcing all batches have the same producer 
ID
 Key: KAFKA-14958
 URL: https://issues.apache.org/jira/browse/KAFKA-14958
 Project: Kafka
  Issue Type: Task
Reporter: Justine Olshan


KAFKA-14916 was created after I incorrectly assumed that the transaction ID in 
the produce request indicated all batches were transactional.

Originally this ticket had an action item to ensure all the producer IDs are 
the same in the batches since we send a single txn ID, but we decided this can 
be done in a follow-up, as we still need to assess whether we can enforce this 
without breaking workloads.

This ticket is that follow-up.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14957) Default value for state.dir is confusing

2023-05-02 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-14957:
--

 Summary: Default value for state.dir is confusing
 Key: KAFKA-14957
 URL: https://issues.apache.org/jira/browse/KAFKA-14957
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Mickael Maison


The default value for state.dir is documented as 
/var/folders/0t/68svdzmx1sld0mxjl8dgmmzmgq/T//kafka-streams

This is misleading: the value will be different in each environment, as it is 
computed using System.getProperty("java.io.tmpdir"). We should update the 
description to mention how the path is computed.
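
For reference, a sketch of how that default is derived (mirroring the documented behaviour; the variable name is illustrative):

```java
import java.io.File;

// The rendered default is built from the JVM's temp directory, which is
// why it differs on every machine and operating system.
String stateDirDefault =
        System.getProperty("java.io.tmpdir") + File.separator + "kafka-streams";
```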



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.4 #119

2023-05-02 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 524210 lines...]
[2023-05-02T15:41:58.270Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:147:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-02T15:41:58.270Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:101:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-02T15:41:58.270Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:167:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-02T15:41:58.270Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:62:
 warning - Tag @link: missing '#': "org.apache.kafka.streams.StreamsBuilder()"
[2023-05-02T15:41:58.270Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:62:
 warning - Tag @link: can't find org.apache.kafka.streams.StreamsBuilder() in 
org.apache.kafka.streams.TopologyConfig
[2023-05-02T15:41:58.270Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java:38:
 warning - Tag @link: reference not found: ProcessorContext#forward(Object, 
Object) forwards
[2023-05-02T15:41:59.394Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/Position.java:44:
 warning - Tag @link: can't find query(Query,
[2023-05-02T15:41:59.394Z]  PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2023-05-02T15:41:59.394Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:109:
 warning - Tag @link: reference not found: this#getResult()
[2023-05-02T15:41:59.394Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:116:
 warning - Tag @link: reference not found: this#getFailureReason()
[2023-05-02T15:41:59.394Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:116:
 warning - Tag @link: reference not found: this#getFailureMessage()
[2023-05-02T15:41:59.394Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:154:
 warning - Tag @link: reference not found: this#isSuccess()
[2023-05-02T15:41:59.394Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:154:
 warning - Tag @link: reference not found: this#isFailure()
[2023-05-02T15:41:59.394Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-02T15:41:59.394Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-02T15:41:59.394Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
[2023-05-02T15:41:59.394Z] 25 warnings
[2023-05-02T15:41:59.394Z] 
[2023-05-02T15:41:59.394Z] > Task :streams:copyDependantLibs UP-TO-DATE
[2023-05-02T15:41:59.394Z] > Task :streams:jar UP-TO-DATE
[2023-05-02T15:41:59.394Z] 
[2023-05-02T15:41:59.394Z] > Task :clients:javadoc
[2023-05-02T15:41:59.394Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/OAuthBearerLoginCallbackHandler.java:151:
 warning - Tag @link: reference not found: 
[2023-05-02T15:42:00.420Z] 
[2023-05-02T15:42:00.420Z] > Task 
:streams:generateMetadataFileForMavenJavaPublication
[2023-05-02T15:42:00.420Z] > Task :streams:javadocJar
[2023-05-02T15:42:04.477Z] 
[2023-05-02T15:42:04.477Z] > Task :clients:javadoc
[2023-05-02T15:42:04.477Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
[2023-05-02T15:42:04.477Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.4/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
[2023-05-02T15:42:04.477Z] 3 warnings
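
Most of the warnings above are malformed {@link} tags; a sketch of the fix for the "missing '#'" case:

```java
/**
 * Method references in {@link} tags need a '#' between type and member,
 * e.g. {@link org.apache.kafka.streams.StreamsBuilder#build()}
 * rather than {@link org.apache.kafka.streams.StreamsBuilder()}.
 */
```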

Re: [DISCUSS] Apache Kafka 3.5.0 release

2023-05-02 Thread Mickael Maison
Hi Luke,

Yes I think it makes sense to backport both to 3.5.

Thanks,
Mickael

On Tue, May 2, 2023 at 11:38 AM Luke Chen  wrote:
>
> Hi Mickael,
>
> There are one bug and one improvement that I'd like to backport to 3.5.
> 1. A small improvement for ZK migration based on KAFKA-14840 (mentioned
> above in David's mail). PR is already merged to trunk.
> https://issues.apache.org/jira/browse/KAFKA-14909
>
> 2. A bug that causes the KRaft controller node to shut down unexpectedly. PR
> is ready for review.
> https://issues.apache.org/jira/browse/KAFKA-14946
> https://github.com/apache/kafka/pull/13653
>
> Thanks.
> Luke
>
>
>
> On Fri, Apr 28, 2023 at 4:18 PM Mickael Maison 
> wrote:
>
> > Hi David,
> >
> > Yes you can backport these to 3.5. Let me know when you are done.
> >
> > Thanks,
> > Mickael
> >
> > On Thu, Apr 27, 2023 at 9:02 PM David Arthur
> >  wrote:
> > >
> > > Hey Mickael,
> > >
> > > I have one major ZK migration improvement (KAFKA-14805) that landed in
> > > trunk this week that I'd like to merge to 3.5 (once we fix some test
> > > failures it introduced). After that, I have another PR for KAFKA-14840
> > > which is essentially a huge bug in the ZK migration logic that needs to
> > > land in 3.5.
> > >
> > > https://issues.apache.org/jira/browse/KAFKA-14805 (done)
> > > https://issues.apache.org/jira/browse/KAFKA-14840 (nearly done)
> > >
> > > I just wanted to check with you before cherry-picking these to 3.5
> > >
> > > David
> > >
> > >
> > > On Mon, Apr 24, 2023 at 1:18 PM Mickael Maison  > >
> > > wrote:
> > >
> > > > Hi Justine,
> > > >
> > > > That makes sense. Feel free to revert that commit in 3.5.
> > > >
> > > > Thanks,
> > > > Mickael
> > > >
> > > > On Mon, Apr 24, 2023 at 7:16 PM Mickael Maison <
> > mickael.mai...@gmail.com>
> > > > wrote:
> > > > >
> > > > > Hi Josep,
> > > > >
> > > > > Thanks for letting me know!
> > > > >
> > > > > On Mon, Apr 24, 2023 at 6:58 PM Justine Olshan
> > > >  wrote:
> > > > > >
> > > > > > Hey Mickael,
> > > > > >
> > > > > > I've just opened a blocker to revert KAFKA-14561 in 3.5. There are
> > a
> > > > few
> > > > > > blocker bugs that I don't think I can fix before the code freeze,
> > so I
> > > > > > think for the quality of the release, we should just revert the
> > commit.
> > > > > >
> > > > > > Thanks,
> > > > > > Justine
> > > > > >
> > > > > > On Fri, Apr 21, 2023 at 1:23 PM Josep Prat
> >  > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Mickael,
> > > > > > >
> > > > > > > Greg Harris managed to fix a flaky test in
> > > > > > > https://github.com/apache/kafka/pull/13575, I cherry-picked it
> > to
> > > > the 3.5
> > > > > > > (and 3.4) branch. I updated the Jira to reflect that is now
> > fixed on
> > > > 3.5.0
> > > > > > > as well as 3.6.0.
> > > > > > > Let me know if I forgot anything.
> > > > > > >
> > > > > > > Best,
> > > > > > >
> > > > > > > On Fri, Apr 21, 2023 at 3:44 PM Mickael Maison <
> > > > mickael.mai...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > Just a quick reminder that code freeze is next week.
> > > > > > > > We still have 27 JIRAs targeting 3.5 [0] including quite a few
> > bugs
> > > > > > > > and flaky test issues opened recently. If you have time, take
> > one
> > > > of
> > > > > > > > these items or help with the reviews.
> > > > > > > >
> > > > > > > > I'll send another update once we've entered code freeze.
> > > > > > > >
> > > > > > > > 0:
> > > > > > > >
> > > > > > >
> > > >
> > https://issues.apache.org/jira/browse/KAFKA-13421?jql=project%20%3D%20KAFKA%20AND%20fixVersion%20%3D%203.5.0%20AND%20status%20not%20in%20(resolved%2C%20closed)%20ORDER%20BY%20priority%20DESC%2C%20status%20DESC%2C%20updated%20DESC
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Mickael
> > > > > > > >
> > > > > > > > On Thu, Apr 20, 2023 at 9:14 PM Mickael Maison <
> > > > mickael.mai...@gmail.com
> > > > > > > >
> > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > Hi Ron,
> > > > > > > > >
> > > > > > > > > Yes feel free to merge that fix. Thanks for letting me know!
> > > > > > > > >
> > > > > > > > > Mickael
> > > > > > > > >
> > > > > > > > > On Thu, Apr 20, 2023 at 8:15 PM Ron Dagostino <
> > rndg...@gmail.com
> > > > >
> > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > Hi Mickael.  I would like to merge
> > > > > > > > > > https://github.com/apache/kafka/pull/13532 (KAFKA-14887:
> > No
> > > > shutdown
> > > > > > > > > > for ZK session expiration in feature processing) to the 3.5
> > > > branch.
> > > > > > > > > > It is a very small and focused fix for an issue that can
> > > > > > > > > > cause unexpected broker shutdowns when there is instability
> > > > > > > > > > in the connectivity to ZooKeeper.
> > > > > > > > > > The risk is very low.
> > > > > > > > > >
> > > > > > > > > > Ron
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > On Tue, Apr 18, 2023 at 9:57 AM Mickael Maison <
> > > > > > > > mickael.mai...@gmail.com> wrote:
> > 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1812

2023-05-02 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-14956) Flaky test org.apache.kafka.connect.integration.OffsetsApiIntegrationTest#testGetSinkConnectorOffsetsDifferentKafkaClusterTargeted

2023-05-02 Thread Sagar Rao (Jira)
Sagar Rao created KAFKA-14956:
-

 Summary: Flaky test 
org.apache.kafka.connect.integration.OffsetsApiIntegrationTest#testGetSinkConnectorOffsetsDifferentKafkaClusterTargeted
 Key: KAFKA-14956
 URL: https://issues.apache.org/jira/browse/KAFKA-14956
 Project: Kafka
  Issue Type: Bug
Reporter: Sagar Rao


h4. Error
org.opentest4j.AssertionFailedError: Condition not met within timeout 15000. 
Sink connector consumer group offsets should catch up to the topic end offsets 
==> expected: <true> but was: <false>
h4. Stacktrace
org.opentest4j.AssertionFailedError: Condition not met within timeout 15000. 
Sink connector consumer group offsets should catch up to the topic end offsets 
==> expected: <true> but was: <false>
 at 
app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
 at 
app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
 at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
 at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
 at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
 at 
app//org.apache.kafka.test.TestUtils.lambda$waitForCondition$4(TestUtils.java:337)
 at 
app//org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:385)
 at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:334)
 at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:318)
 at app//org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:291)
 at 
app//org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.getAndVerifySinkConnectorOffsets(OffsetsApiIntegrationTest.java:150)
 at 
app//org.apache.kafka.connect.integration.OffsetsApiIntegrationTest.testGetSinkConnectorOffsetsDifferentKafkaClusterTargeted(OffsetsApiIntegrationTest.java:131)
 at 
java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
 at 
java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base@17.0.7/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base@17.0.7/java.lang.reflect.Method.invoke(Method.java:568)
 at 
app//org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
 at 
app//org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at 
app//org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
 at 
app//org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at 
app//org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
 at 
app//org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
 at app//org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
 at 
app//org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
 at app//org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
 at 
app//org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
 at 
app//org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
 at app//org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
 at app//org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
 at app//org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
 at app//org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
 at app//org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
 at app//org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
 at app//org.junit.runners.ParentRunner.run(ParentRunner.java:413)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:108)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:40)
 at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:60)
 at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:52)
 at 
java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
 at 
java.base@17.0.7/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
 at 
java.base@17.0.7/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base@17.0.7/java.lang.reflect.Method.invoke(Method.java:568)
 at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
 at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
 at 
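
The failure is TestUtils.waitForCondition timing out; for context, a sketch of the pattern (the two helper calls are hypothetical stand-ins for what the test computes):

```java
import org.apache.kafka.test.TestUtils;

// Polls the condition until it holds or 15 seconds elapse, then fails with
// the exact message seen in the stack trace above.
static void verifyOffsetsCaughtUp() throws InterruptedException {
    TestUtils.waitForCondition(
            () -> sinkConnectorGroupOffsets().equals(topicEndOffsets()), // hypothetical helpers
            15_000L,
            "Sink connector consumer group offsets should catch up to the topic end offsets");
}
```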

[jira] [Created] (KAFKA-14955) Validate that all partitions are assigned in TargetAssignmentBuilder

2023-05-02 Thread David Jacot (Jira)
David Jacot created KAFKA-14955:
---

 Summary: Validate that all partitions are assigned in 
TargetAssignmentBuilder
 Key: KAFKA-14955
 URL: https://issues.apache.org/jira/browse/KAFKA-14955
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot


We may want to ensure that all partitions are assigned when a new target 
assignment is computed by the TargetAssignmentBuilder. On the server side, 
there is no reason for a partition to not be assigned. On the client side, this 
is debatable though.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-892: Transactional Semantics for StateStores

2023-05-02 Thread Bruno Cadonna

Hi Nick,

Thanks for the updates!

I have a couple of questions/comments.

1.
Why do you propose a configuration that involves max. bytes and max. 
records? I think we are mainly concerned about memory consumption because 
we want to limit the off-heap memory used. I cannot think of a case 
where one would want to set the max. number of records.



2.
Why does

 default void commit(final Map<TopicPartition, Long> changelogOffsets) {
 flush();
 }

take a map of partitions to changelog offsets?
The mapping between state stores and partitions is a 1:1 relationship. 
Passing in a single changelog offset should suffice.
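
A sketch of the simplification suggested here (hypothetical signature, for illustration only):

```java
// Each state store is backed by exactly one changelog partition, so the
// commit hook could take just that partition's committed offset:
default void commit(final long changelogOffset) {
    flush();
}
```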



3.
Why do we need the Transaction interface? It should be possible to hide 
beginning and committing a transaction within the state store 
implementation, so that from outside the state store, it does not matter 
whether the state store is transactional or not. What would be the 
advantage of using the Transaction interface?



4.
Regarding checkpointing offsets, I think we should keep the checkpoint 
file in any case for the reason you mentioned about rebalancing. Even if 
that would not be an issue, I would propose to move the change to offset 
management to a new KIP and to not add more complexity than needed to 
this one. I would not be too concerned about the consistency violation 
you mention. As far as I understand, with transactional state stores 
Streams would write the checkpoint file during every commit even under 
EOS. In the failure case you describe, Streams would restore the state 
stores from the offsets found in the checkpoint file written during the 
penultimate commit instead of during the last commit. Basically, Streams 
would overwrite the records written to the state store between the last 
two commits with the same records read from the changelogs. While I 
understand that this is wasteful, it is -- at the same time -- 
acceptable and most importantly it does not break EOS.


Best,
Bruno


On 27.04.23 12:34, Nick Telford wrote:

Hi everyone,

I find myself (again) considering removing the offset management from
StateStores, and keeping the old checkpoint file system. The reason is that
the StreamPartitionAssignor directly reads checkpoint files in order to
determine which instance has the most up-to-date copy of the local state.
If we move offsets into the StateStore itself, then we will need to open,
initialize, read offsets and then close each StateStore (that is not
already assigned and open) for which we have *any* local state, on every
rebalance.

Generally, I don't think there are many "orphan" stores like this sitting
around on most instances, but even a few would introduce additional latency
to an already somewhat lengthy rebalance procedure.

I'm leaning towards Colt's (Slack) suggestion of just keeping things in the
checkpoint file(s) for now, and not worrying about the race. The downside
is that we wouldn't be able to remove the explicit RocksDB flush on-commit,
which likely hurts performance.

If anyone has any thoughts or ideas on this subject, I would appreciate it!

Regards,
Nick

On Wed, 19 Apr 2023 at 15:05, Nick Telford  wrote:


Hi Colt,

The issue is that if there's a crash between 2 and 3, then you still end
up with inconsistent data in RocksDB. The only way to guarantee that your
checkpoint offsets and locally stored data are consistent with each other
is to atomically commit them, which can be achieved by having the offsets
stored in RocksDB.

The offsets column family is likely to be extremely small (one
per-changelog partition + one per Topology input partition for regular
stores, one per input partition for global stores). So the overhead will be
minimal.

A major benefit of doing this is that we can remove the explicit calls to
db.flush(), which forcibly flushes memtables to disk on-commit. It turns
out, RocksDB memtable flushes are largely dictated by Kafka Streams
commits, *not* RocksDB configuration, which could be a major source of
confusion. Atomic checkpointing makes it safe to remove these explicit
flushes, because it no longer matters exactly when RocksDB flushes data to
disk; since the data and corresponding checkpoint offsets will always be
flushed together, the local store is always in a consistent state, and
on-restart, it can always safely resume restoration from the on-disk
offsets, restoring the small amount of data that hadn't been flushed when
the app exited/crashed.
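
A minimal sketch of the atomic write described above, assuming a dedicated offsets column family (names and encoding are illustrative, not the KIP's actual implementation):

```java
import java.nio.ByteBuffer;
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteBatch;
import org.rocksdb.WriteOptions;

// Persist a record and the changelog offset it came from in one atomic
// batch: after a crash, the store's data can never diverge from its
// recorded offset, so restoration can safely resume from that offset.
static void putWithOffset(RocksDB db,
                          ColumnFamilyHandle dataCf,
                          ColumnFamilyHandle offsetsCf,
                          byte[] key, byte[] value,
                          byte[] changelogPartitionKey, long offset)
        throws RocksDBException {
    try (WriteBatch batch = new WriteBatch();
         WriteOptions options = new WriteOptions()) {
        batch.put(dataCf, key, value);
        batch.put(offsetsCf, changelogPartitionKey,
                ByteBuffer.allocate(Long.BYTES).putLong(offset).array());
        db.write(options, batch); // both writes are applied atomically
    }
}
```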

Regards,
Nick

On Wed, 19 Apr 2023 at 14:35, Colt McNealy  wrote:


Nick,

Thanks for your reply. Ack to A) and B).

For item C), I see what you're referring to. Your proposed solution will
work, so no need to change it. What I was suggesting was that it might be
possible to achieve this with only one column family. So long as:

- No uncommitted records (i.e. not committed to the changelog) are
*committed* to the state store, AND
- The Checkpoint offset (which refers to the changelog topic) is less
than or equal to the last written changelog offset in rocksdb

I 

Re: [ANNOUNCE] New PMC chair: Mickael Maison

2023-05-02 Thread Edoardo Comar
Congratulations Mickael !!

On Mon, 24 Apr 2023 at 20:18, Randall Hauch  wrote:

> Thank you, Jun, for all your contributions as PMC chair.
>
> And congratulations and thanks, Mickael, for volunteering to take over this
> important role.
>
> Best regards,
> Randall
>
> On Mon, Apr 24, 2023 at 1:39 PM Bruno Cadonna  wrote:
>
> > Hi,
> >
> > Jun, Thanks a lot for all you have done for the project!
> >
> > Congrats Mickael and thank you for taking over the PMC chair!
> >
> > Best,
> > Bruno
> >
> > On 21.04.23 17:09, Jun Rao wrote:
> > > Hi, everyone,
> > >
> > > After more than 10 years, I am stepping down as the PMC chair of Apache
> > > Kafka. We now have a new chair Mickael Maison, who has been a PMC
> member
> > > since 2020. I plan to continue to contribute to Apache Kafka myself.
> > >
> > > Congratulations, Mickael!
> > >
> > > Jun
> > >
> >
>


[jira] [Created] (KAFKA-14954) Use BufferPools to optimize allocation in RemoteLogInputStream

2023-05-02 Thread Divij Vaidya (Jira)
Divij Vaidya created KAFKA-14954:


 Summary: Use BufferPools to optimize allocation in 
RemoteLogInputStream
 Key: KAFKA-14954
 URL: https://issues.apache.org/jira/browse/KAFKA-14954
 Project: Kafka
  Issue Type: Sub-task
Reporter: Divij Vaidya


ref: https://github.com/apache/kafka/pull/13535#discussion_r1180144730



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14953) Add metrics for expiration of delayed remote fetch

2023-05-02 Thread Divij Vaidya (Jira)
Divij Vaidya created KAFKA-14953:


 Summary: Add metrics for expiration of delayed remote fetch
 Key: KAFKA-14953
 URL: https://issues.apache.org/jira/browse/KAFKA-14953
 Project: Kafka
  Issue Type: Sub-task
Reporter: Divij Vaidya


ref: [https://github.com/apache/kafka/pull/13535#discussion_r1180286031] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] Apache Kafka 3.5.0 release

2023-05-02 Thread Luke Chen
Hi Mickael,

There are one bug and one improvement that I'd like to backport to 3.5.
1. A small improvement for ZK migration based on KAFKA-14840 (mentioned
above in David's mail). PR is already merged to trunk.
https://issues.apache.org/jira/browse/KAFKA-14909

2. A bug that causes the KRaft controller node to shut down unexpectedly. PR
is ready for review.
https://issues.apache.org/jira/browse/KAFKA-14946
https://github.com/apache/kafka/pull/13653

Thanks.
Luke



On Fri, Apr 28, 2023 at 4:18 PM Mickael Maison 
wrote:

> Hi David,
>
> Yes you can backport these to 3.5. Let me know when you are done.
>
> Thanks,
> Mickael
>
> On Thu, Apr 27, 2023 at 9:02 PM David Arthur
>  wrote:
> >
> > Hey Mickael,
> >
> > I have one major ZK migration improvement (KAFKA-14805) that landed in
> > trunk this week that I'd like to merge to 3.5 (once we fix some test
> > failures it introduced). After that, I have another PR for KAFKA-14840
> > which is essentially a huge bug in the ZK migration logic that needs to
> > land in 3.5.
> >
> > https://issues.apache.org/jira/browse/KAFKA-14805 (done)
> > https://issues.apache.org/jira/browse/KAFKA-14840 (nearly done)
> >
> > I just wanted to check with you before cherry-picking these to 3.5
> >
> > David
> >
> >
> > On Mon, Apr 24, 2023 at 1:18 PM Mickael Maison  >
> > wrote:
> >
> > > Hi Justine,
> > >
> > > That makes sense. Feel free to revert that commit in 3.5.
> > >
> > > Thanks,
> > > Mickael
> > >
> > > On Mon, Apr 24, 2023 at 7:16 PM Mickael Maison <
> mickael.mai...@gmail.com>
> > > wrote:
> > > >
> > > > Hi Josep,
> > > >
> > > > Thanks for letting me know!
> > > >
> > > > On Mon, Apr 24, 2023 at 6:58 PM Justine Olshan
> > >  wrote:
> > > > >
> > > > > Hey Mickael,
> > > > >
> > > > > I've just opened a blocker to revert KAFKA-14561 in 3.5. There are
> a
> > > few
> > > > > blocker bugs that I don't think I can fix before the code freeze,
> so I
> > > > > think for the quality of the release, we should just revert the
> commit.
> > > > >
> > > > > Thanks,
> > > > > Justine
> > > > >
> > > > > On Fri, Apr 21, 2023 at 1:23 PM Josep Prat
>  > > >
> > > > > wrote:
> > > > >
> > > > > > Hi Mickael,
> > > > > >
> > > > > > Greg Harris managed to fix a flaky test in
> > > > > > https://github.com/apache/kafka/pull/13575, I cherry-picked it
> to
> > > the 3.5
> > > > > > (and 3.4) branch. I updated the Jira to reflect that is now
> fixed on
> > > 3.5.0
> > > > > > as well as 3.6.0.
> > > > > > Let me know if I forgot anything.
> > > > > >
> > > > > > Best,
> > > > > >
> > > > > > On Fri, Apr 21, 2023 at 3:44 PM Mickael Maison <
> > > mickael.mai...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > Just a quick reminder that code freeze is next week.
> > > > > > > We still have 27 JIRAs targeting 3.5 [0] including quite a few
> bugs
> > > > > > > and flaky test issues opened recently. If you have time, take
> one
> > > of
> > > > > > > these items or help with the reviews.
> > > > > > >
> > > > > > > I'll send another update once we've entered code freeze.
> > > > > > >
> > > > > > > 0:
> > > > > > >
> > > > > >
> > >
> https://issues.apache.org/jira/browse/KAFKA-13421?jql=project%20%3D%20KAFKA%20AND%20fixVersion%20%3D%203.5.0%20AND%20status%20not%20in%20(resolved%2C%20closed)%20ORDER%20BY%20priority%20DESC%2C%20status%20DESC%2C%20updated%20DESC
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Mickael
> > > > > > >
> > > > > > > On Thu, Apr 20, 2023 at 9:14 PM Mickael Maison <
> > > mickael.mai...@gmail.com
> > > > > > >
> > > > > > > wrote:
> > > > > > > >
> > > > > > > > Hi Ron,
> > > > > > > >
> > > > > > > > Yes feel free to merge that fix. Thanks for letting me know!
> > > > > > > >
> > > > > > > > Mickael
> > > > > > > >
> > > > > > > > On Thu, Apr 20, 2023 at 8:15 PM Ron Dagostino <
> rndg...@gmail.com
> > > >
> > > > > > wrote:
> > > > > > > > >
> > > > > > > > > Hi Mickael.  I would like to merge
> > > > > > > > > https://github.com/apache/kafka/pull/13532 (KAFKA-14887:
> No
> > > shutdown
> > > > > > > > > for ZK session expiration in feature processing) to the 3.5
> > > branch.
> > > > > > > > > It is a very small and focused fix for an issue that can
> > > > > > > > > cause unexpected broker shutdowns when there is instability
> > > > > > > > > in the connectivity to ZooKeeper.
> > > > > > > > > The risk is very low.
> > > > > > > > >
> > > > > > > > > Ron
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On Tue, Apr 18, 2023 at 9:57 AM Mickael Maison <
> > > > > > > mickael.mai...@gmail.com> wrote:
> > > > > > > > > >
> > > > > > > > > > Hi David,
> > > > > > > > > >
> > > > > > > > > > Thanks for the update. I've marked KAFKA-14869 as fixed
> in
> > > 3.5.0, I
> > > > > > > > > > guess you'll only resolve this ticket once you merge the
> > > backports
> > > > > > to
> > > > > > > > > > earlier branches. The ticket will have to be resolved to
> run
> > > the
> > > > > > > > > > release but that should leave you enough time.
> > > > > >