[DISCUSS] KIP-944 Support async runtimes in consumer

2023-06-29 Thread Erik van Oosten

[This is a resend with the correct KIP number.]

Hello developers of the Java-based consumer,

I submitted https://github.com/apache/kafka/pull/13914 to fix a 
long-standing problem: the Kafka consumer on the JVM is not usable from 
asynchronous runtimes such as Kotlin coroutines and ZIO. However, since 
it extends the public API, I was asked to create a KIP.
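For readers unfamiliar with the constraint, a minimal sketch (topic, 
group, and class names are illustrative, and a broker at localhost:9092 
is assumed) of why a second thread cannot call into the consumer while 
it is in use -- which is exactly what happens when an async runtime 
resumes a logically-sequential step on a different carrier thread:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AsyncRuntimeClash {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        p.put("group.id", "demo");
        p.put("key.deserializer",
              "org.apache.kafka.common.serialization.StringDeserializer");
        p.put("value.deserializer",
              "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(p)) {
            consumer.subscribe(List.of("orders"));
            // Stand-in for an async runtime: poll() runs on one thread...
            Thread poller = new Thread(() -> consumer.poll(Duration.ofSeconds(10)));
            poller.start();
            Thread.sleep(500); // let poll() take ownership of the consumer
            // ...and the next step resumes on another thread. While poll()
            // is still running, this call fails with
            // java.util.ConcurrentModificationException:
            // "KafkaConsumer is not safe for multi-threaded access".
            consumer.commitSync();
        }
    }
}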


So here it is:
KIP-944 Support async runtimes in consumer 
https://cwiki.apache.org/confluence/x/chw0Dw


Any questions, comments, ideas and other additions are welcome!

The KIP should be complete except for the testing section. As far as I 
am aware there are no tests for the current behavior. Any help in this 
area would be appreciated.


Kind regards,
    Erik.


--
Erik van Oosten
e.vanoos...@grons.nl
https://day-to-day-stuff.blogspot.com



Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1962

2023-06-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 485462 lines...]
[Pipeline] echo
Skipping Kafka Streams archetype test for Java 17
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }

> Task :connect:runtime:integrationTest
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.testDeleteConnector
 failed, log available in 
/home/jenkins/jenkins-agent/712657a4/workspace/Kafka_kafka_trunk/connect/runtime/build/reports/testOutput/org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.testDeleteConnector.test.stdout

Gradle Test Run :connect:runtime:integrationTest > Gradle Test Executor 140 > 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest > 
testDeleteConnector FAILED
org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: Could 
not execute DELETE request. Error response: 
{"error_code":500,"message":"Request timed out"}
at 
org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.deleteConnector(EmbeddedConnectCluster.java:400)
at 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.testDeleteConnector(RebalanceSourceConnectorsIntegrationTest.java:207)

Gradle Test Run :connect:runtime:integrationTest > Gradle Test Executor 140 > 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest > 
testAddingWorker STARTED

Gradle Test Run :connect:runtime:integrationTest > Gradle Test Executor 140 > 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest > 
testAddingWorker PASSED

Gradle Test Run :connect:runtime:integrationTest > Gradle Test Executor 140 > 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest > 
testRemovingWorker STARTED

Gradle Test Run :connect:runtime:integrationTest > Gradle Test Executor 140 > 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest > 
testRemovingWorker PASSED

Gradle Test Run :connect:runtime:integrationTest > Gradle Test Executor 140 > 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest > 
testReconfigConnector STARTED

Gradle Test Run :connect:runtime:integrationTest > Gradle Test Executor 140 > 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest > 
testReconfigConnector PASSED

Gradle Test Run :connect:runtime:integrationTest > Gradle Test Executor 140 > 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest > 
testMultipleWorkersRejoining STARTED

> Task :connect:mirror:integrationTest
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationSSLTest.testOneWayReplicationWithFrequentOffsetSyncs()
 failed, log available in 
/home/jenkins/jenkins-agent/712657a4/workspace/Kafka_kafka_trunk/connect/mirror/build/reports/testOutput/org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationSSLTest.testOneWayReplicationWithFrequentOffsetSyncs().test.stdout

Gradle Test Run :connect:mirror:integrationTest > Gradle Test Executor 135 > 
MirrorConnectorsIntegrationSSLTest > 
testOneWayReplicationWithFrequentOffsetSyncs() FAILED
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: Call(callName=createTopics, 
deadlineMs=1688087414334, tries=1, nextAllowedTryMs=1688087415518) timed out at 
1688087415418 after 1 attempt(s)
at 
org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:428)
at 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.createTopics(MirrorConnectorsIntegrationBaseTest.java:1193)
at 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.startClusters(MirrorConnectorsIntegrationBaseTest.java:229)
at 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.startClusters(MirrorConnectorsIntegrationBaseTest.java:143)
at 
org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationSSLTest.startClusters(MirrorConnectorsIntegrationSSLTest.java:63)

Caused by:
java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: Call(callName=createTopics, 
deadlineMs=1688087414334, tries=1, nextAllowedTryMs=1688087415518) timed out at 
1688087415418 after 1 attempt(s)
at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
at 

Re: [DISCUSS] KIP-759: Unneeded repartition canceling

2023-06-29 Thread Matthias J. Sax

Shay,

thanks for picking up this KIP. It's a pity that the discussion stalled 
for such a long time.


As expressed previously, I am happy with the name `markAsPartitioned()` 
and also believe it's ok to just document the impact and leave it to the 
user to do the right thing.


If we really get a lot of users who ask about it, because they did not 
do the right thing, we could still add something (e.g., a reverse-mapper 
function) in a follow-up KIP. But we don't know if it's necessary; thus, 
making a small incremental step sounds like a good approach to me.
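For concreteness, a hedged sketch of how the proposed operator might be 
used. markAsPartitioned() is the operator under discussion in this KIP, 
not a released method, and the topic name and key mapping are 
illustrative:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class MarkAsPartitionedSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders");

        KTable<String, Long> counts = orders
            // A key change the user knows keeps records on the same
            // partition, e.g. decorating the existing key.
            .selectKey((key, value) -> "prefix-" + key)
            // Proposed operator: asserts no repartitioning is needed, so
            // the groupByKey() below does not insert a repartition topic.
            .markAsPartitioned()
            .groupByKey()
            .count();
    }
}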


Let's see if others agree or not.


-Matthias

On 6/28/23 5:29 PM, Shay Lin wrote:

Hi all,

Great discussion thread. May I take this KIP up? If it’s alright, my plan
is to update the KIP with the operator `markAsPartitioned()`.

As you have discussed and pointed out, there are implications for
downstream joins or aggregation operations. Still, the operator is
intended for advanced users, so my two cents is it would be a valuable
addition nonetheless. We could add this as a caution/consideration as
part of the Javadoc.

Let me know, thanks.
Shay



Re: [DISCUSS] KIP-941: Range queries to accept null lower and upper bounds

2023-06-29 Thread Matthias J. Sax

Thanks for the KIP. LGTM.

I believe you can start a vote.

-Matthias

On 6/26/23 11:25 AM, Lucia Cerchie wrote:

Thanks for asking for clarification, Sophie; that gives me guidance on
improving the KIP! Here's the updated version, including the JIRA link:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-941%3A+Range+queries+to+accept+null+lower+and+upper+bounds


On Thu, Jun 22, 2023 at 12:57 PM Sophie Blee-Goldman wrote:


Hey Lucia, thanks for the KIP! Just some minor notes:

I'm in favor of the proposal overall, at least I think so -- for someone
not intimately familiar with the new IQ API and *RangeQuery* class, the
KIP was a bit difficult to follow, and I had to read between the lines
to figure out what the old behavior was and what the new and improved
logic would do.

It would be good to state clearly in the beginning what happens when null
is passed in right now, and what will happen after this KIP is
implemented. For example, in the "Public Interfaces" section, I couldn't
tell if the middle sentence was describing what was changing, or what it
was changing *to.*
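To illustrate the before/after behavior as it reads in this thread (a 
sketch, assuming the Optional.ofNullable() change mentioned further down; 
the pre-KIP branching in the comments is how callers express the same 
intent today):

import org.apache.kafka.streams.query.RangeQuery;

public class RangeQuerySketch {
    // Post-KIP: a null bound is permitted and means "open-ended".
    static RangeQuery<String, Long> range(String lower, String upper) {
        return RangeQuery.withRange(lower, upper); // null lower => "<= upper"
        // Pre-KIP, the same intent required branching across factories:
        // if (lower == null && upper == null) return RangeQuery.withNoBounds();
        // if (lower == null) return RangeQuery.withUpperBound(upper);
        // if (upper == null) return RangeQuery.withLowerBound(lower);
        // return RangeQuery.withRange(lower, upper);
    }
}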

One last little thing: can you link to the Jira ticket at the top? And
please create one if it doesn't already exist -- it helps people figure out
when a KIP has been implemented and in which versions, as well as navigate
from the KIP to the actual code that was merged. Things can change during
implementation and the KIP document is how most people read up on new
features, but almost all of us are probably guilty of forgetting to update
the KIP document. So it's important to be able to find the code when in
doubt.

Otherwise nice KIP!

On Thu, Jun 22, 2023 at 8:19 AM Lucia Cerchie wrote:


Thanks Kirk and John for the valuable feedback!

John, I'll update the KIP to reflect that nuance you mention -- yes it is
just about making the withRange method more permissive. Thanks for the
testing file as well, I'll be sure to write my test cases there.

On Wed, Jun 21, 2023 at 10:50 AM Kirk True  wrote:


Hi John/Lucia,

Thanks for the feedback!

Of course I only noticed the private-ness of the RangeQuery constructor
moments after sending my email ¯\_(ツ)_/¯

Just to be clear, I’m happy with the proposed change as it conforms to
Postel’s Law ;) Apologies that it was worded tersely.

Thanks,
Kirk


On Jun 21, 2023, at 10:20 AM, John Roesler wrote:


Hi all,

Thanks for the KIP, Lucia! This is a nice change.

To Kirk's question (1), the example is a bit misleading. The typical 
case that would ease user pain is specifically using "null" to indicate 
an open-ended range, especially since null is not a valid key.


I could additionally see an empty string as being nice, but the actual 
API is generic, not String, so there's no meaningful concept of 
empty/blank/whitespace that we could check for, just null or not.


Regarding (2), there's no public factory that takes Optional parameters. 
I think you're looking at the private constructor. An alternative Lucia 
could consider is to instead propose adding a new factory like 
`withRange(Optional<K> lower, Optional<K> upper)`.
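A sketch of that alternative, assuming it would be added inside 
RangeQuery itself (where the private constructor is accessible), with 
java.util.Optional:

public static <K, V> RangeQuery<K, V> withRange(Optional<K> lower,
                                                Optional<K> upper) {
    // Optional.empty() rather than null expresses an open-ended bound.
    return new RangeQuery<>(lower, upper);
}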


FWIW, I'd be in favor of this KIP as proposed.

A couple of smaller notes:

3. In the compatibility notes, I wasn't sure what "web request" was 
referring to. I think you just mean that all existing valid API calls 
will continue to work the same, and we're only making the withRange 
method more permissive with its arguments.


4. For the Test Plan, I wrote some tests that validate these queries 
against every kind and configuration of store possible. Please add your 
new test cases to that one to make absolutely sure it'll work for every 
store. Obviously, you may also want to add some specific unit tests in 
addition.


See 
https://github.com/apache/kafka/blob/trunk/streams/src/test/java/org/apache/kafka/streams/integration/IQv2StoreIntegrationTest.java


Thanks again!
-John

On 6/21/23 12:00, Kirk True wrote:

Hi Lucia,
One question:
1. Since the proposed implementation change for the withRange() method 
uses Optional.ofNullable() (which only catches nulls and not 
blank/whitespace strings), wouldn’t users still need to have code like 
that in the example?

2. Why don't users create RangeQuery objects that use Optional directly? 
What’s the benefit of introducing what appears to be a very thin utility 
facade?

Thanks,
Kirk

On Jun 21, 2023, at 9:51 AM, Kirk True  wrote:

Hi Lucia,

Thanks for the KIP!

The KIP wasn’t in the email and I didn’t see it on the main KIP 
directory. Here it is:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-941%3A+Range+queries+to+accept+null+lower+and+upper+bounds


Can the KIP be added to the main KIP page 
(https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals)? 
That will help with discoverability and encourage discussion.


Thanks,
Kirk


On Jun 15, 2023, at 2:13 PM, Lucia Cerchie wrote:


Hi everyone,

I'd like to discuss KIP-941, which will change the behavior of

range


Re: [DISCUSS] KIP-941 Support async runtimes in consumer

2023-06-29 Thread Matthias J. Sax

Seems the KIP number is 947, not 941?

Can you maybe start a new thread to avoid confusion?

Thanks.

On 6/28/23 1:11 AM, Erik van Oosten wrote:

Hello developers of the Java-based consumer,

I submitted https://github.com/apache/kafka/pull/13914 to fix a 
long-standing problem: the Kafka consumer on the JVM is not usable from 
asynchronous runtimes such as Kotlin coroutines and ZIO. However, since 
it extends the public API, I was asked to create a KIP.


So here it is:
KIP-941 Support async runtimes in consumer 
https://cwiki.apache.org/confluence/x/chw0Dw


Any questions, comments, ideas and other additions are welcome!

The KIP should be complete except for the testing section. As far as I 
am aware there are no tests for the current behavior. Any help in this 
area would be appreciated.


Kind regards,
     Erik.




Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.3 #181

2023-06-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 423044 lines...]
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:155:
 warning - Tag @link: reference not found: this#isFailure()
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
29 warnings

> Task :streams:javadocJar

> Task :clients:javadoc
/home/jenkins/workspace/Kafka_kafka_3.3/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/OAuthBearerLoginCallbackHandler.java:147:
 warning - Tag @link: reference not found: 
1 warning

> Task :clients:javadocJar
> Task :clients:testJar
> Task :core:compileScala
> Task :clients:testSrcJar
> Task :clients:publishMavenJavaPublicationToMavenLocal
> Task :clients:publishToMavenLocal
> Task :core:classes
> Task :core:compileTestJava NO-SOURCE
> Task :core:compileTestScala
> Task :core:testClasses
> Task :streams:compileTestJava
> Task :streams:testClasses
> Task :streams:testJar
> Task :streams:testSrcJar
> Task :streams:publishMavenJavaPublicationToMavenLocal
> Task :streams:publishToMavenLocal

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

See 
https://docs.gradle.org/7.4.2/userguide/command_line_interface.html#sec:command_line_warnings

Execution optimizations have been disabled for 2 invalid unit(s) of work during 
this build to ensure correctness.
Please consult deprecation warnings for more details.

BUILD SUCCESSFUL in 5m 27s
79 actionable tasks: 37 executed, 42 up-to-date
[Pipeline] sh
+ grep ^version= gradle.properties
+ cut -d= -f 2
[Pipeline] dir
Running in /home/jenkins/workspace/Kafka_kafka_3.3/streams/quickstart
[Pipeline] {
[Pipeline] sh
+ mvn clean install -Dgpg.skip
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Kafka Streams :: Quickstart[pom]
[INFO] streams-quickstart-java[maven-archetype]
[INFO] 
[INFO] < org.apache.kafka:streams-quickstart >-
[INFO] Building Kafka Streams :: Quickstart 3.3.3-SNAPSHOT[1/2]
[INFO]   from pom.xml
[INFO] [ pom ]-
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart ---
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart ---
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart ---
[INFO] 
[INFO] --- install:2.5.2:install (default-install) @ streams-quickstart ---
[INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/quickstart/pom.xml to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart/3.3.3-SNAPSHOT/streams-quickstart-3.3.3-SNAPSHOT.pom
[INFO] 
[INFO] --< org.apache.kafka:streams-quickstart-java >--
[INFO] Building streams-quickstart-java 3.3.3-SNAPSHOT[2/2]
[INFO]   from java/pom.xml
[INFO] --[ maven-archetype ]---
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart-java ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart-java ---
[INFO] 
[INFO] --- resources:2.7:resources (default-resources) @ 
streams-quickstart-java ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 6 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- resources:2.7:testResources (default-testResources) @ 
streams-quickstart-java ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- archetype:2.2:jar (default-jar) @ streams-quickstart-java ---
[INFO] Building archetype jar: 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/quickstart/java/target/streams-quickstart-java-3.3.3-SNAPSHOT
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart-java ---
[INFO] 
[INFO] --- archetype:2.2:integration-test (default-integration-test) @ 
streams-quickstart-java ---
[INFO] 
[INFO] --- 

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.5 #28

2023-06-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 468736 lines...]
Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
VersionedKeyValueStoreIntegrationTest > shouldSetChangelogTopicProperties 
STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
VersionedKeyValueStoreIntegrationTest > shouldSetChangelogTopicProperties PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
VersionedKeyValueStoreIntegrationTest > shouldRestore STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
VersionedKeyValueStoreIntegrationTest > shouldRestore PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
VersionedKeyValueStoreIntegrationTest > shouldPutGetAndDelete STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
VersionedKeyValueStoreIntegrationTest > shouldPutGetAndDelete PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
VersionedKeyValueStoreIntegrationTest > 
shouldManualUpgradeFromNonVersionedTimestampedToVersioned STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
VersionedKeyValueStoreIntegrationTest > 
shouldManualUpgradeFromNonVersionedTimestampedToVersioned PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
HandlingSourceTopicDeletionIntegrationTest > 
shouldThrowErrorAfterSourceTopicDeleted STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
HandlingSourceTopicDeletionIntegrationTest > 
shouldThrowErrorAfterSourceTopicDeleted PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testHighAvailabilityTaskAssignorLargeNumConsumers 
STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testHighAvailabilityTaskAssignorLargeNumConsumers 
PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargePartitionCount STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorLargePartitionCount PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyThreadsPerClient PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testStickyTaskAssignorManyStandbys STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testStickyTaskAssignorManyStandbys PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testStickyTaskAssignorManyThreadsPerClient STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testStickyTaskAssignorManyThreadsPerClient PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorManyThreadsPerClient 
STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorManyThreadsPerClient 
PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorLargePartitionCount 
STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorLargePartitionCount 
PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testStickyTaskAssignorLargePartitionCount STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testStickyTaskAssignorLargePartitionCount PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorManyStandbys STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorManyStandbys PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testHighAvailabilityTaskAssignorManyStandbys 
STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testHighAvailabilityTaskAssignorManyStandbys PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 178 > 
StreamsAssignmentScaleTest > testFallbackPriorTaskAssignorLargeNumConsumers 
STARTED

Gradle Test Run 

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.4 #148

2023-06-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 439125 lines...]
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
> Task :core:compileScala
> Task :core:classes
> Task :core:compileTestJava NO-SOURCE
> Task :core:compileTestScala
> Task :core:testClasses
> Task :streams:compileTestJava
> Task :streams:testClasses
> Task :streams:testJar
> Task :streams:testSrcJar
> Task :streams:publishMavenJavaPublicationToMavenLocal
> Task :streams:publishToMavenLocal

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

See 
https://docs.gradle.org/7.6/userguide/command_line_interface.html#sec:command_line_warnings

Execution optimizations have been disabled for 2 invalid unit(s) of work during 
this build to ensure correctness.
Please consult deprecation warnings for more details.

BUILD SUCCESSFUL in 5m 8s
81 actionable tasks: 37 executed, 44 up-to-date
[Pipeline] sh
+ grep ^version= gradle.properties
+ cut -d= -f 2
[Pipeline] dir
Running in /home/jenkins/workspace/Kafka_kafka_3.4/streams/quickstart
[Pipeline] {
[Pipeline] sh
+ mvn clean install -Dgpg.skip
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Kafka Streams :: Quickstart[pom]
[INFO] streams-quickstart-java[maven-archetype]
[INFO] 
[INFO] < org.apache.kafka:streams-quickstart >-
[INFO] Building Kafka Streams :: Quickstart 3.4.1-SNAPSHOT[1/2]
[INFO]   from pom.xml
[INFO] [ pom ]-
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart ---
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart ---
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart ---
[INFO] 
[INFO] --- install:2.5.2:install (default-install) @ streams-quickstart ---
[INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/quickstart/pom.xml to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart/3.4.1-SNAPSHOT/streams-quickstart-3.4.1-SNAPSHOT.pom
[INFO] 
[INFO] --< org.apache.kafka:streams-quickstart-java >--
[INFO] Building streams-quickstart-java 3.4.1-SNAPSHOT[2/2]
[INFO]   from java/pom.xml
[INFO] --[ maven-archetype ]---
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart-java ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart-java ---
[INFO] 
[INFO] --- resources:2.7:resources (default-resources) @ 
streams-quickstart-java ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 6 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- resources:2.7:testResources (default-testResources) @ 
streams-quickstart-java ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- archetype:2.2:jar (default-jar) @ streams-quickstart-java ---
[INFO] Building archetype jar: 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/quickstart/java/target/streams-quickstart-java-3.4.1-SNAPSHOT
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart-java ---
[INFO] 
[INFO] --- archetype:2.2:integration-test (default-integration-test) @ 
streams-quickstart-java ---
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart-java ---
[INFO] 
[INFO] --- install:2.5.2:install (default-install) @ streams-quickstart-java ---
[INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/quickstart/java/target/streams-quickstart-java-3.4.1-SNAPSHOT.jar
 to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart-java/3.4.1-SNAPSHOT/streams-quickstart-java-3.4.1-SNAPSHOT.jar
[INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_3.4/streams/quickstart/java/pom.xml to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart-java/3.4.1-SNAPSHOT/streams-quickstart-java-3.4.1-SNAPSHOT.pom
[INFO] 
[INFO] --- archetype:2.2:update-local-catalog (default-update-local-catalog) @ 
streams-quickstart-java ---
[INFO] 
[INFO] Reactor Summary for 

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.3 #180

2023-06-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 337595 lines...]
Running in /home/jenkins/workspace/Kafka_kafka_3.3/streams/quickstart
[Pipeline] {
[Pipeline] sh
+ mvn clean install -Dgpg.skip
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Kafka Streams :: Quickstart[pom]
[INFO] streams-quickstart-java[maven-archetype]
[INFO] 
[INFO] < org.apache.kafka:streams-quickstart >-
[INFO] Building Kafka Streams :: Quickstart 3.3.3-SNAPSHOT[1/2]
[INFO]   from pom.xml
[INFO] [ pom ]-
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart ---
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart ---
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart ---
[INFO] 
[INFO] --- install:2.5.2:install (default-install) @ streams-quickstart ---
[INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/quickstart/pom.xml to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart/3.3.3-SNAPSHOT/streams-quickstart-3.3.3-SNAPSHOT.pom
[INFO] 
[INFO] --< org.apache.kafka:streams-quickstart-java >--
[INFO] Building streams-quickstart-java 3.3.3-SNAPSHOT[2/2]
[INFO]   from java/pom.xml
[INFO] --[ maven-archetype ]---
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart-java ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart-java ---
[INFO] 
[INFO] --- resources:2.7:resources (default-resources) @ 
streams-quickstart-java ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 6 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- resources:2.7:testResources (default-testResources) @ 
streams-quickstart-java ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- archetype:2.2:jar (default-jar) @ streams-quickstart-java ---
[INFO] Building archetype jar: 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/quickstart/java/target/streams-quickstart-java-3.3.3-SNAPSHOT
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart-java ---
[INFO] 
[INFO] --- archetype:2.2:integration-test (default-integration-test) @ 
streams-quickstart-java ---
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart-java ---
[INFO] 
[INFO] --- install:2.5.2:install (default-install) @ streams-quickstart-java ---
[INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/quickstart/java/target/streams-quickstart-java-3.3.3-SNAPSHOT.jar
 to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart-java/3.3.3-SNAPSHOT/streams-quickstart-java-3.3.3-SNAPSHOT.jar
[INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/quickstart/java/pom.xml to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart-java/3.3.3-SNAPSHOT/streams-quickstart-java-3.3.3-SNAPSHOT.pom
[INFO] 
[INFO] --- archetype:2.2:update-local-catalog (default-update-local-catalog) @ 
streams-quickstart-java ---
[INFO] 
[INFO] Reactor Summary for Kafka Streams :: Quickstart 3.3.3-SNAPSHOT:
[INFO] 
[INFO] Kafka Streams :: Quickstart  SUCCESS [  2.115 s]
[INFO] streams-quickstart-java  SUCCESS [  0.643 s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time:  3.067 s
[INFO] Finished at: 2023-06-29T20:02:54Z
[INFO] 
[WARNING] 
[WARNING] Plugin validation issues were detected in 7 plugin(s)
[WARNING] 
[WARNING]  * org.apache.maven.plugins:maven-remote-resources-plugin:1.5
[WARNING]  * org.apache.maven.plugins:maven-install-plugin:2.5.2
[WARNING]  * org.apache.maven.plugins:maven-archetype-plugin:2.2
[WARNING]  * org.apache.maven.plugins:maven-resources-plugin:2.7
[WARNING]  * org.apache.maven.plugins:maven-clean-plugin:3.0.0
[WARNING]  * org.apache.maven.plugins:maven-site-plugin:3.5.1
[WARNING]  * org.apache.maven.plugins:maven-gpg-plugin:1.6
[WARNING] 
[WARNING] For more or less details, use 'maven.plugin.validation' property with 
one of the values (case insensitive): [BRIEF, DEFAULT, VERBOSE]
[WARNING] 
[Pipeline] dir
Running in 

Re: Requesting permissions to contribute to Apache Kafka

2023-06-29 Thread Divij Vaidya
You should be all set.

--
Divij Vaidya



On Thu, Jun 29, 2023 at 8:45 PM Mayank Shekhar Narula <
mayanks.nar...@gmail.com> wrote:

>  - can someone grant these? Thanks!
>
> As requested here -
>
> https://cwiki.apache.org/confluence/display/kafka/kafka+improvement+proposals#KafkaImprovementProposals-GettingStarted
> "
>
> Wiki Id - mayanks*.*narula
> Jira  Id - mayanksnarula
>
> Notice that Jira Id doesn't have the ".", whereas Wiki id does have ".".
>
>
>
> --
> Regards,
> Mayank Shekhar Narula
>


Requesting permissions to contribute to Apache Kafka

2023-06-29 Thread Mayank Shekhar Narula
 - can someone grant these? Thanks!

As requested here -
https://cwiki.apache.org/confluence/display/kafka/kafka+improvement+proposals#KafkaImprovementProposals-GettingStarted
"

Wiki Id - mayanks*.*narula
Jira  Id - mayanksnarula

Notice that Jira Id doesn't have the ".", whereas Wiki id does have ".".



-- 
Regards,
Mayank Shekhar Narula


[jira] [Resolved] (KAFKA-15053) Regression for security.protocol validation starting from 3.3.0

2023-06-29 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya resolved KAFKA-15053.
--
  Reviewer: Divij Vaidya
Resolution: Fixed

> Regression for security.protocol validation starting from 3.3.0
> ---
>
> Key: KAFKA-15053
> URL: https://issues.apache.org/jira/browse/KAFKA-15053
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.3.0
>Reporter: Bo Gao
>Assignee: Bo Gao
>Priority: Major
>  Labels: backport
> Fix For: 3.3.3, 3.6.0, 3.5.1, 3.4.2
>
>
> [This|https://issues.apache.org/jira/browse/KAFKA-13793] Jira issue 
> introduced validations on multiple configs. As a consequence, the config 
> {{security.protocol}} now only allows upper case values such as PLAINTEXT, 
> SSL, SASL_PLAINTEXT, SASL_SSL. Before this change, lower case values like 
> sasl_ssl and ssl were also supported; there is even case-insensitive logic 
> inside 
> [SecurityProtocol|https://github.com/apache/kafka/blob/146a6976aed0d9f90c70b6f21dca8b887cc34e71/clients/src/main/java/org/apache/kafka/common/security/auth/SecurityProtocol.java#L70-L73]
>  to handle the lower case values.
> I think we should treat this as a regression bug, since lower case values 
> have not been supported since 3.3.0. For versions later than 3.3.0, we get 
> an error like this when using the lower case value sasl_ssl:
> {{Invalid value sasl_ssl for configuration security.protocol: String must be 
> one of: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL}}
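For clients stuck on an affected version, a hedged sketch of a 
workaround (the helper name is illustrative): normalize the value before 
building the client, since 3.3.0+ validation accepts only the upper-case 
names.

import java.util.Locale;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;

final class ClientConfigFixup {
    // Upper-cases security.protocol so e.g. "sasl_ssl" passes validation.
    static Properties normalizeSecurityProtocol(Properties props) {
        String key = CommonClientConfigs.SECURITY_PROTOCOL_CONFIG;
        String value = props.getProperty(key);
        if (value != null) {
            props.setProperty(key, value.toUpperCase(Locale.ROOT));
        }
        return props;
    }
}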



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Offsets: consumption and production in rollback

2023-06-29 Thread Andrew Schofield
Hi,
Rollback does result in gaps in the offsets that a read-committed consumer 
sees. Log compaction can also result in gaps in the offsets.

I am not aware of any way to force the “cleanup” of transactions.

Thanks,
Andrew
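To make the gap behavior concrete, a minimal sketch (topic, group, and 
class names are illustrative) of a read_committed consumer that expects 
offsets to increase but not necessarily be consecutive:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ReadCommittedLoop {
    public static void main(String[] args) {
        Properties p = new Properties();
        p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        p.put(ConsumerConfig.GROUP_ID_CONFIG, "demo");
        p.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        p.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
              "org.apache.kafka.common.serialization.StringDeserializer");
        p.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
              "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(p)) {
            consumer.subscribe(List.of("orders"));
            long last = -1L;
            while (true) {
                for (ConsumerRecord<String, String> r :
                         consumer.poll(Duration.ofSeconds(1))) {
                    // Gaps are normal: aborted transactions, control
                    // records, and compaction occupy offsets that are
                    // never delivered to this consumer.
                    if (last >= 0 && r.offset() != last + 1) {
                        System.out.println("gap before offset " + r.offset());
                    }
                    last = r.offset();
                }
            }
        }
    }
}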

> On 29 Jun 2023, at 09:08, Henry GALVEZ  wrote:
>
> Hi Andrew,
>
> Yes, I have been using the requireStable option in the consumer group, but I 
> consistently encounter the same issue.
>
> If I understand Kafka's logic correctly, it is essential for Kafka to retain 
> those offsets even in the case of rollbacks. This is why relying on offsets 
> within the application logic is not reliable.
>
> I need to explain this to my colleagues at work and would like to provide 
> some context to the SpringKafka developer.
>
> Do you think the scenario of a rollback is similar to the effects of Log 
> Compaction?
> https://kafka.apache.org/documentation/#design_compactionbasics
>
> Furthermore, is there a way to force the cleanup of transactions? Could this 
> potentially help address the issue?
>
> Cordially,
> Henry
>
> 
> De: Andrew Schofield 
> Enviado: miércoles, 28 de junio de 2023 14:54
> Para: dev@kafka.apache.org 
> Asunto: Re: Offsets: consumption and production in rollback
>
> Hi Henry,
> Consumers get to choose an isolation level. There’s one instance I can think 
> of where AdminClient also has
> some ability to let users choose how to deal with uncommitted data. If you’ve 
> not seen KIP-851 take a look:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-851%3A+Add+requireStable+flag+into+ListConsumerGroupOffsetsOptions
> By your question, I expect you have seen it.
>
> Control records are kind of invisibly lurking like dark matter. The approach 
> I’ve taken with this kind of thing
> is to cope with the fact that the offsets of the real records are increasing 
> but not necessarily consecutive.
> If I’m using READ_COMMITTED isolation level, there are also gaps when 
> transactions roll back.
> I design my consumers so they are not surprised by the gaps and they don’t 
> try to calculate the number
> of records. Instead, they just continually consume.
>
> Hope this helps,
> Andrew
>
>> On 28 Jun 2023, at 09:28, Henry GALVEZ  wrote:
>>
>> Hi Andrew,
>>
>> Thank you for your response.
>>
>> I understand your explanation, but in both cases, I am using an "isolation 
>> level" of READ_COMMITTED. I believe the issue lies in the 
>> AdminClient.listOffsets method, as it may not be considering the isolation 
>> level, whereas the consumer of AdminClient.listConsumerGroupOffsets does 
>> consider it.
>>
>> What are your thoughts on this?
>>
>> Additionally, would it be more suitable to implement a solution that reverts 
>> the offsets in case of transaction rollbacks? It's possible that there's a 
>> logic aspect I'm not fully grasping.
>>
>> Perhaps I need to utilize the internal control records and their offsets. 
>> Could you point me in the right direction for their documentation?
>>
>> Thank you,
>> Henry
>>
>> 
>> De: Andrew Schofield 
>> Enviado: martes, 27 de junio de 2023 13:22
>> Para: dev@kafka.apache.org 
>> Asunto: Fwd: Offsets: consumption and production in rollback
>>
>> Hi Henry,
>> Thanks for your message.
>>
>> Kafka transactions are a bit unusual. If you produce a message inside a 
>> transaction, it is assigned an offset on a topic-partition before
>> the transaction even commits. That offset is not “revoked” if the 
>> transaction rolls back.
>>
>> This is why the consumer has the concept of “isolation level”. It 
>> essentially controls whether the consumer can “see” the
>> uncommitted or even rolled back messages.
>>
>> A consumer using the committed isolation level only consumes committed 
>> messages, but the offsets that it observes do
>> reflect the uncommitted messages. So, if you observe the progress of the 
>> offsets of the records consumed, you see that they
>> skip the messages that were produced but then rolled back. There are also 
>> invisible control records that are used to achieve
>> transactional behaviour, and those also have offsets.
>>
>> I’m not sure that this is really “bogus lag” but, when you’re using 
>> transactions, there’s not a one-to-one relationship
>> between offsets and consumable records.
>>
>> Hope this helps,
>> Andrew
>>
>> Begin forwarded message:
>>
>> From: Henry GALVEZ 
>> Subject: Offsets: consumption and production in rollback
>> Date: 27 June 2023 at 10:48:31 BST
>> To: "us...@kafka.apache.org" , 
>> "dev@kafka.apache.org" 
>> Reply-To: dev@kafka.apache.org
>>
>> I have some doubts regarding message consumption and production, as well as 
>> transactional capabilities. I am using a Kafka template to produce a message 
>> within a transaction. After that, I execute another transaction that 
>> produces a message and intentionally 

Re: Permission to contribute to Apache Kafka

2023-06-29 Thread Bruno Cadonna

Hi Gaurav,

you should be all set up now!

Thanks for your interest in Apache Kafka!

Best,
Bruno

On 29.06.23 16:16, Gaurav Narula wrote:

Hi,

Can someone please have a look at this request. Please let me know if there's 
any further information required.

Thanks,
Gaurav

On 2023/06/26 19:01:08 ka...@gnarula.com wrote:

Hi,

I'd like to request permissions to contribute to Apache Kafka. My account 
details are as follows:

# Wiki

Email: gaurav_naru...@apple.com 
Username: gnarula

# JIRA

Email: gaurav_naru...@apple.com 
Username: gnarula

Regards,
Gaurav



Sent from my iPhone


RE: Permission to contribute to Apache Kafka

2023-06-29 Thread Gaurav Narula
Hi,

Can someone please have a look at this request. Please let me know if there's 
any further information required.

Thanks,
Gaurav

On 2023/06/26 19:01:08 ka...@gnarula.com wrote:
> Hi,
> 
> I'd like to request permissions to contribute to Apache Kafka. My account 
> details are as follows:
> 
> # Wiki
> 
> Email: gaurav_naru...@apple.com 
> Username: gnarula
> 
> # JIRA
> 
> Email: gaurav_naru...@apple.com 
> Username: gnarula
> 
> Regards,
> Gaurav


Sent from my iPhone

Re: [DISCUSS] KIP-793: Sink Connectors: Support topic-mutating SMTs for async connectors (preCommit users)

2023-06-29 Thread Chris Egerton
Hi Yash,

Thanks for your continued work on this tricky feature. I have no further
comments or suggestions on the KIP and am ready to vote in favor of it.

That said, I did want to quickly respond to this comment:

> On a side note, this also means that the per sink record ack API
that was proposed earlier wouldn't really work for this case since Kafka
consumers themselves don't support per message acknowledgement semantics
(and any sort of manual book-keeping based on offset linearity in a topic
partition would be affected by things like log compaction, control records
for transactional use cases etc.) right?

I believe we could still use the SubmittedRecords class [1] (with some
small tweaks) to track ack'd messages and the latest-committable offsets
per topic partition, without relying on assumptions about offsets for
consecutive records consumed from Kafka always differing by one. But at
this point I think that, although this approach does come with the
advantage of also enabling fine-grained metrics on record delivery to the
sink system, it's not worth the tradeoff in intuition since it's less clear
why users should prefer that API instead of using SinkTask::preCommit.

[1] -
https://github.com/apache/kafka/blob/12be344fdd3b20f338ccab87933b89049ce202a4/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/SubmittedRecords.java
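For illustration, a hedged sketch of that tracking idea -- not the 
SubmittedRecords code itself; the class and method names here are 
invented:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.OptionalLong;

final class PartitionAckTracker {
    private static final class Entry {
        final long offset;
        boolean acked;
        Entry(long offset) { this.offset = offset; }
    }

    private final Deque<Entry> inFlight = new ArrayDeque<>();

    // Called in consume order, so the deque stays sorted by offset.
    Entry submit(long offset) {
        Entry e = new Entry(offset);
        inFlight.addLast(e);
        return e;
    }

    void ack(Entry e) { e.acked = true; }

    // Highest acked offset with no unacked record before it; the offset
    // to commit is this value + 1. No assumption that consecutive
    // records differ by one, so compaction and control records are fine.
    OptionalLong committableOffset() {
        OptionalLong result = OptionalLong.empty();
        while (!inFlight.isEmpty() && inFlight.peekFirst().acked) {
            result = OptionalLong.of(inFlight.pollFirst().offset);
        }
        return result;
    }
}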

Cheers,

Chris

On Wed, Jun 21, 2023 at 9:46 AM Yash Mayya  wrote:

> Hi Chris,
>
> Firstly, thanks for sharing your detailed thoughts on this thorny issue!
> Point taken on Kafka Connect being a brownfield project and I guess we
> might just need to trade off elegant / "clean" interfaces for fixing this
> gap in functionality. Also, thanks for calling out all the existing
> cross-plugin interactions and also the fact that connectors are not and
> should not be developed in silos ignoring the rest of the ecosystem. That
> said, here are my thoughts:
>
> > we could replace these methods with headers that the
> > Connect runtime automatically injects into records directly
> > before dispatching them to SinkTask::put.
>
> Hm, that's an interesting idea to get around the need for connectors to
> handle potential 'NoSuchMethodError's in calls to
> SinkRecord::originalTopic/originalKafkaPartition/originalKafkaOffset.
> However, I'm inclined to agree that retrieving these values from the record
> headers seems even less intuitive and I'm okay with adding this to the
> rejected alternatives list.
>
> > we can consider eliminating the overridden
> > SinkTask::open/close methods
>
> I tried to further explore the idea of keeping just the existing
> SinkTask::open / SinkTask::close methods but only calling them with
> post-transform topic partitions and ended up coming to the same conclusion
> that you did earlier in this thread :)
>
> The overloaded SinkTask::open / SinkTask::close are currently the biggest
> sticking points with the latest iteration of this KIP and I'd prefer this
> elimination for now. The primary reasoning is that the information from
> open / close on pre-transform topic partitions can be combined with the per
> record information of both pre-transform and post-transform topic
> partitions to handle most practical use cases without significantly
> muddying the sink connector related public interfaces. The argument that
> this makes it harder for sink connectors to deal with post-transform topic
> partitions (i.e. in terms of grouping together or batching records for
> writing to the sink system) can be countered with the fact that it'll be
> similarly challenging even with the overloaded method approach of calling
> open / close with both pre-transform and post-transform topic partitions
> since the batching would be done on post-transform topic partitions whereas
> offset tracking and reporting for commits would be done on pre-transform
> topic partitions (and the two won't necessarily serially advance in
> lockstep). On a side note, this also means that the per sink record ack API
> that was proposed earlier wouldn't really work for this case since Kafka
> consumers themselves don't support per message acknowledgement semantics
> (and any sort of manual book-keeping based on offset linearity in a topic
> partition would be affected by things like log compaction, control records
> for transactional use cases etc.) right? Overall, I think that the only
> benefit of the overloaded open / close methods approach is that the
> framework can enable the eventual closure of any post-transform topic
> partition based writers created by sink tasks using the heuristics we
> discussed earlier (via a cache with a time-based eviction policy) which
> doesn't seem worth it at this point.
>
> Thanks,
> Yash
>
> On Mon, May 22, 2023 at 7:30 PM Chris Egerton wrote:
>
> > Hi Yash,
> >
> > I've been following the discussion and have some thoughts. Ultimately I'm
> > still in favor of this KIP and would hate to see it go dormant, though we
> > may end up settling for a 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1960

2023-06-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 582441 lines...]
> Task :streams:javadocJar

> Task :clients:javadoc
/home/jenkins/workspace/Kafka_kafka_trunk_2/clients/src/main/java/org/apache/kafka/clients/admin/ScramMechanism.java:32:
 warning - Tag @see: missing final '>': "https://cwiki.apache.org/confluence/display/KAFKA/KIP-554%3A+Add+Broker-side+SCRAM+Config+API;>KIP-554:
 Add Broker-side SCRAM Config API

 This code is duplicated in 
org.apache.kafka.common.security.scram.internals.ScramMechanism.
 The type field in both files must match and must not change. The type field
 is used both for passing ScramCredentialUpsertion and for the internal
 UserScramCredentialRecord. Do not change the type field."
/home/jenkins/workspace/Kafka_kafka_trunk_2/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
5 warnings

> Task :clients:javadocJar
> Task :clients:srcJar
> Task :clients:testJar
> Task :clients:testSrcJar
> Task :clients:publishMavenJavaPublicationToMavenLocal
> Task :clients:publishToMavenLocal
> Task :core:compileScala
> Task :core:classes
> Task :core:compileTestJava NO-SOURCE
> Task :core:compileTestScala
> Task :core:testClasses
> Task :streams:compileTestJava UP-TO-DATE
> Task :streams:testClasses UP-TO-DATE
> Task :streams:testJar
> Task :streams:testSrcJar
> Task :streams:publishMavenJavaPublicationToMavenLocal
> Task :streams:publishToMavenLocal

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

See 
https://docs.gradle.org/8.1.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD SUCCESSFUL in 3m 5s
89 actionable tasks: 33 executed, 56 up-to-date
[Pipeline] sh
+ grep ^version= gradle.properties
+ cut -d= -f 2
[Pipeline] dir
Running in /home/jenkins/workspace/Kafka_kafka_trunk_2/streams/quickstart
[Pipeline] {
[Pipeline] sh
+ mvn clean install -Dgpg.skip
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Kafka Streams :: Quickstart[pom]
[INFO] streams-quickstart-java[maven-archetype]
[INFO] 
[INFO] < org.apache.kafka:streams-quickstart >-
[INFO] Building Kafka Streams :: Quickstart 3.6.0-SNAPSHOT[1/2]
[INFO]   from pom.xml
[INFO] [ pom ]-
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart ---
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart ---
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart ---
[INFO] 
[INFO] --- install:2.5.2:install (default-install) @ streams-quickstart ---
[INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_trunk_2/streams/quickstart/pom.xml to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart/3.6.0-SNAPSHOT/streams-quickstart-3.6.0-SNAPSHOT.pom
[INFO] 
[INFO] --< org.apache.kafka:streams-quickstart-java >--
[INFO] Building streams-quickstart-java 3.6.0-SNAPSHOT[2/2]
[INFO]   from java/pom.xml
[INFO] --[ maven-archetype ]---
[INFO] 
[INFO] --- clean:3.0.0:clean (default-clean) @ streams-quickstart-java ---
[INFO] 
[INFO] --- remote-resources:1.5:process (process-resource-bundles) @ 
streams-quickstart-java ---
[INFO] 
[INFO] --- resources:2.7:resources (default-resources) @ 
streams-quickstart-java ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 6 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- resources:2.7:testResources (default-testResources) @ 
streams-quickstart-java ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- archetype:2.2:jar (default-jar) @ streams-quickstart-java ---
[INFO] Building archetype jar: 
/home/jenkins/workspace/Kafka_kafka_trunk_2/streams/quickstart/java/target/streams-quickstart-java-3.6.0-SNAPSHOT
[INFO] 
[INFO] --- site:3.5.1:attach-descriptor (attach-descriptor) @ 
streams-quickstart-java ---
[INFO] 
[INFO] --- archetype:2.2:integration-test (default-integration-test) @ 
streams-quickstart-java ---
[INFO] 
[INFO] --- gpg:1.6:sign (sign-artifacts) @ streams-quickstart-java ---
[INFO] 
[INFO] --- install:2.5.2:install (default-install) @ streams-quickstart-java ---
[INFO] Installing 

Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.4 #147

2023-06-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.5 #27

2023-06-29 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1959

2023-06-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 578876 lines...]
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
StreamsAssignmentScaleTest > testStickyTaskAssignorLargeNumConsumers PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
EmitOnChangeIntegrationTest > shouldEmitSameRecordAfterFailover() STARTED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':streams:unitTest'.
> Process 'Gradle Test Executor 136' finished with non-zero exit value 134
  This problem might be caused by incorrect test process configuration.
  Please refer to the test execution section in the User Manual at 
https://docs.gradle.org/8.1.1/userguide/java_testing.html#sec:test_execution

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 9.0.

You can use '--warning-mode all' to show the individual deprecation warnings 
and determine if they come from your own scripts or plugins.

See 
https://docs.gradle.org/8.1.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 3h 51m 10s
230 actionable tasks: 124 executed, 106 up-to-date

See the profiling report at: 
file:///home/jenkins/workspace/Kafka_kafka_trunk/build/reports/profile/profile-2023-06-29-06-21-38.html
A fine-grained performance profile is available: use the --scan option.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch JDK 17 and Scala 2.12

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
EmitOnChangeIntegrationTest > shouldEmitSameRecordAfterFailover() PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
HighAvailabilityTaskAssignorIntegrationTest > 
shouldScaleOutWithWarmupTasksAndPersistentStores(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
HighAvailabilityTaskAssignorIntegrationTest > 
shouldScaleOutWithWarmupTasksAndPersistentStores(TestInfo) PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
HighAvailabilityTaskAssignorIntegrationTest > 
shouldScaleOutWithWarmupTasksAndInMemoryStores(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
HighAvailabilityTaskAssignorIntegrationTest > 
shouldScaleOutWithWarmupTasksAndInMemoryStores(TestInfo) PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
KStreamAggregationDedupIntegrationTest > shouldReduce(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
KStreamAggregationDedupIntegrationTest > shouldReduce(TestInfo) PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
KStreamAggregationDedupIntegrationTest > shouldGroupByKey(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
KStreamAggregationDedupIntegrationTest > shouldGroupByKey(TestInfo) PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
KStreamAggregationDedupIntegrationTest > shouldReduceWindowed(TestInfo) STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
KStreamAggregationDedupIntegrationTest > shouldReduceWindowed(TestInfo) PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
KStreamKStreamIntegrationTest > shouldOuterJoin() STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
KStreamKStreamIntegrationTest > shouldOuterJoin() PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicNotWrittenToDuringRestoration() STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicNotWrittenToDuringRestoration() PASSED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 
KTableSourceTopicRestartIntegrationTest > 
shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosAlphaEnabled()
 STARTED

Gradle Test Run :streams:integrationTest > Gradle Test Executor 188 > 

[jira] [Created] (KAFKA-15134) Enrich the prompt reason in CommitFailedException

2023-06-29 Thread hudeqi (Jira)
hudeqi created KAFKA-15134:
--

 Summary: Enrich the prompt reason in CommitFailedException
 Key: KAFKA-15134
 URL: https://issues.apache.org/jira/browse/KAFKA-15134
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 3.5.0
Reporter: hudeqi
Assignee: hudeqi






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14498) flaky org.apache.kafka.tools.MetadataQuorumCommandTest

2023-06-29 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya resolved KAFKA-14498.
--
Resolution: Fixed

> flaky org.apache.kafka.tools.MetadataQuorumCommandTest
> --
>
> Key: KAFKA-14498
> URL: https://issues.apache.org/jira/browse/KAFKA-14498
> Project: Kafka
>  Issue Type: Test
>  Components: admin
>Affects Versions: 3.3.1
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Major
> Fix For: 3.4.0
>
>
> Build / JDK 11 and Scala 2.13 / 
> org.apache.kafka.tools.MetadataQuorumCommandTest.[3] Type=Raft-CoReside, 
> Name=testDescribeQuorumReplicationSuccessful, MetadataVersion=3.4-IV0, 
> Security=PLAINTEXT
> Failing for the past 1 build (Since 
> [#33|https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-12753/33/] )
> [Took 1 min 10 
> sec.|https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-12753/33/testReport/junit/org.apache.kafka.tools/MetadataQuorumCommandTest/Build___JDK_11_and_Scala_2_133__Type_Raft_CoReside__Name_testDescribeQuorumReplicationSuccessful__MetadataVersion_3_4_IV0__Security_PLAINTEXT/history]
>  
>  
> Error Message
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: Received
> a fatal error while waiting for the broker to catch up with the current
> cluster metadata.
> Stacktrace
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: Received
> a fatal error while waiting for the broker to catch up with the current
> cluster metadata.
>   at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
>   at kafka.testkit.KafkaClusterTestKit.startup(KafkaClusterTestKit.java:421)
>   at kafka.test.junit.RaftClusterInvocationContext.lambda$getAdditionalExtensions$5(RaftClusterInvocationContext.java:107)
>   at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeTestExecutionCallbacks$5(TestMethodTestDescriptor.java:191)
>   at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:202)
>   at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeMethodsOrCallbacksUntilExceptionOccurs(TestMethodTestDescriptor.java:202)
>   at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeBeforeTestExecutionCallbacks(TestMethodTestDescriptor.java:190)
>   at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:136)
>   at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68)
>   at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
>   at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
>   at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
>   at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
>   at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>   at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
>   at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
>   at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
>   at org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:226)
>   at org.junit.platform.engine.support.hierarchical.NodeTestTask$DefaultDynamicTestExecutor.execute(NodeTestTask.java:204)
>   at org.junit.jupiter.engine.descriptor.TestTemplateTestDescriptor.execute(TestTemplateTestDescriptor.java:142)
>   at org.junit.jupiter.engine.descriptor.TestTemplateTestDescriptor.lambda$execute$2(TestTemplateTestDescriptor.java:110)
>   at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
>   at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
>   at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
>   at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)

RE: Offsets: consumption and production in rollback

2023-06-29 Thread Henry GALVEZ
Hi Andrew,

Yes, I have been using the requireStable option when listing consumer group 
offsets, but I consistently encounter the same issue.

If I understand Kafka's logic correctly, Kafka must retain those offsets even 
when a transaction rolls back, which is why relying on offsets within the 
application logic is not reliable.
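
For reference, this is roughly the lookup I mean. A minimal sketch with the 
plain Java AdminClient, assuming Kafka clients 3.5+ where KIP-851 is 
available; the group id and bootstrap address are placeholders:

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListConsumerGroupOffsetsOptions;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class StableOffsetsLookup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // requireStable(true) tells the broker to flag offsets that an
            // open transaction may still change, instead of silently
            // returning a value that could move after a commit or abort.
            ListConsumerGroupOffsetsOptions options =
                    new ListConsumerGroupOffsetsOptions().requireStable(true);
            Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets("my-group", options)
                         .partitionsToOffsetAndMetadata()
                         .get();
            committed.forEach((tp, oam) -> {
                if (oam != null) { // partitions without a commit map to null
                    System.out.printf("%s -> %d%n", tp, oam.offset());
                }
            });
        }
    }
}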

I need to explain this to my colleagues at work and would like to provide 
some context to the Spring Kafka developers.

Do you think the scenario of a rollback is similar to the effects of Log 
Compaction?
https://kafka.apache.org/documentation/#design_compactionbasics

Furthermore, is there a way to force the cleanup of transactions? Could this 
potentially help address the issue?

Cordially,
Henry


From: Andrew Schofield 
Sent: Wednesday, 28 June 2023 14:54
To: dev@kafka.apache.org 
Subject: Re: Offsets: consumption and production in rollback

Hi Henry,
Consumers get to choose an isolation level. There’s one instance I can think of 
where AdminClient also has
some ability to let users choose how to deal with uncommitted data. If you've 
not seen KIP-851, take a look:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-851%3A+Add+requireStable+flag+into+ListConsumerGroupOffsetsOptions
By your question, I expect you have seen it.
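
To make the isolation-level point concrete, here is a minimal sketch that 
asks AdminClient.listOffsets for a partition's end offset under each 
isolation level; the topic, partition and bootstrap address are placeholders. 
Under READ_COMMITTED the broker answers with the last stable offset rather 
than the log end offset, which may well be the discrepancy you are seeing:

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsOptions;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.IsolationLevel;
import org.apache.kafka.common.TopicPartition;

public class EndOffsetComparison {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        TopicPartition tp = new TopicPartition("my-topic", 0);
        try (Admin admin = Admin.create(props)) {
            for (IsolationLevel level : IsolationLevel.values()) {
                // READ_UNCOMMITTED returns the log end offset;
                // READ_COMMITTED returns the last stable offset (LSO), which
                // excludes open transactions but still counts aborted records
                // and transaction markers.
                ListOffsetsResult result = admin.listOffsets(
                        Map.of(tp, OffsetSpec.latest()),
                        new ListOffsetsOptions(level));
                System.out.printf("%s end offset: %d%n",
                        level, result.partitionResult(tp).get().offset());
            }
        }
    }
}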

Control records are kind of invisibly lurking like dark matter. The approach 
I’ve taken with this kind of thing
is to cope with the fact that the offsets of the real records are increasing 
but not necessarily consecutive.
If I’m using READ_COMMITTED isolation level, there are also gaps when 
transactions roll back.
I design my consumers so they are not surprised by the gaps and they don’t try 
to calculate the number
of records. Instead, they just continually consume.
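
In code, that pattern is just the normal poll loop; a minimal sketch (group, 
topic and bootstrap address are placeholders):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GapTolerantConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            while (true) {
                // Offsets can jump over aborted batches and transaction
                // markers, so never derive record counts from offset deltas;
                // just process whatever poll() returns.
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n",
                            record.offset(), record.value());
                }
            }
        }
    }
}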

Hope this helps,
Andrew

> On 28 Jun 2023, at 09:28, Henry GALVEZ  wrote:
>
> Hi Andrew,
>
> Thank you for your response.
>
> I understand your explanation, but in both cases, I am using an "isolation 
> level" of READ_COMMITTED. I believe the issue lies in the 
> AdminClient.listOffsets method, as it may not be considering the isolation 
> level, whereas AdminClient.listConsumerGroupOffsets does consider it.
>
> What are your thoughts on this?
>
> Additionally, would it be more suitable to implement a solution that reverts 
> the offsets in case of transaction rollbacks? It's possible that there's a 
> logic aspect I'm not fully grasping.
>
> Perhaps I need to utilize the internal control records and their offsets. 
> Could you point me in the right direction for their documentation?
>
> Thank you,
> Henry
>
> 
> From: Andrew Schofield 
> Sent: Tuesday, 27 June 2023 13:22
> To: dev@kafka.apache.org 
> Subject: Fwd: Offsets: consumption and production in rollback
>
> Hi Henry,
> Thanks for your message.
>
> Kafka transactions are a bit unusual. If you produce a message inside a 
> transaction, it is assigned an offset on a topic-partition before
> the transaction even commits. That offset is not “revoked” if the transaction 
> rolls back.
>
> This is why the consumer has the concept of “isolation level”. It essentially 
> controls whether the consumer can “see” the
> uncommitted or even rolled back messages.
>
> A consumer using the READ_COMMITTED isolation level only consumes committed 
> messages, but the offsets that it observes do
> reflect the uncommitted messages. So, if you observe the progress of the 
> offsets of the records consumed, you see that they
> skip the messages that were produced but then rolled back. There are also 
> invisible control records that are used to achieve
> transactional behaviour, and those also have offsets.
>
> I’m not sure that this is really “bogus lag” but, when you’re using 
> transactions, there’s not a one-to-one relationship
> between offsets and consumable records.
>
> Hope this helps,
> Andrew
>
> Begin forwarded message:
>
> From: Henry GALVEZ 
> Subject: Offsets: consumption and production in rollback
> Date: 27 June 2023 at 10:48:31 BST
> To: "us...@kafka.apache.org" , "dev@kafka.apache.org" 
> 
> Reply-To: dev@kafka.apache.org
>
> I have some doubts regarding message consumption and production, as well as 
> transactional capabilities. I am using a KafkaTemplate to produce a message 
> within a transaction. After that, I execute another transaction that produces 
> a message and intentionally throws a runtime exception to simulate a 
> transaction rollback.
>
> 
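
The scenario described above can be reproduced with the plain Java clients 
too; Spring's KafkaTemplate drives the same transactional producer API 
underneath. A minimal sketch, with the topic and transactional.id as 
placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class RollbackDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "rollback-demo");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();

            producer.beginTransaction(); // first transaction: committed
            producer.send(new ProducerRecord<>("my-topic", "k1", "committed"));
            producer.commitTransaction();

            producer.beginTransaction(); // second transaction: rolled back
            producer.send(new ProducerRecord<>("my-topic", "k2", "aborted"));
            // The aborted record keeps the offset it was assigned, but a
            // read_committed consumer will never see it.
            producer.abortTransaction();
        }
    }
}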

[jira] [Reopened] (KAFKA-14498) flaky org.apache.kafka.tools.MetadataQuorumCommandTest

2023-06-29 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya reopened KAFKA-14498:
--
  Assignee: (was: Luke Chen)

> flaky org.apache.kafka.tools.MetadataQuorumCommandTest
> --
>
> Key: KAFKA-14498
> URL: https://issues.apache.org/jira/browse/KAFKA-14498
> Project: Kafka
>  Issue Type: Test
>  Components: admin
>Affects Versions: 3.3.1
>Reporter: Luke Chen
>Priority: Major
> Fix For: 3.4.0

Re: [DISCUSS] Apache Kafka 3.5.1 release

2023-06-29 Thread Tom Bentley
SGTM, thanks Divij

On Thu, 29 Jun 2023 at 03:16, Luke Chen  wrote:

> Hi Divij,
>
> Thanks for volunteering!
>
> Luke
>
> On Wed, Jun 28, 2023 at 11:54 PM Manyanda Chitimbo <
> manyanda.chiti...@gmail.com> wrote:
>
> > Thank you Divij for volunteering to perform the release.
> >
> > On Wed 28 Jun 2023 at 13:52, Divij Vaidya 
> wrote:
> >
> > > Hey folks
> > >
> > > Looks like we are ready to perform a release for 3.5.1 to provide a fix
> > for
> > > the vulnerability in snappy-java [1]
> > >
> > > I would like to volunteer as release manager for the 3.5.1 release.
> > >
> > > If there are no objections, I will start a release plan next Monday, on
> > 3rd
> > > July.
> > >
> > > [1] https://nvd.nist.gov/vuln/detail/CVE-2023-34455
> > >
> > > --
> > > Divij Vaidya
> > >
> > --
> > Manyanda Chitimbo.
> >
>