[jira] [Resolved] (KAFKA-12380) Executor in Connect's Worker is not shut down when the worker is stopped

2022-04-28 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-12380.
---
Fix Version/s: 3.3.0
   Resolution: Fixed

> Executor in Connect's Worker is not shut down when the worker is stopped
> 
>
> Key: KAFKA-12380
> URL: https://issues.apache.org/jira/browse/KAFKA-12380
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Randall Hauch
>Priority: Minor
>  Labels: newbie
> Fix For: 3.3.0
>
>
> The `Worker` class has an [`executor` 
> field|https://github.com/apache/kafka/blob/02226fa090513882b9229ac834fd493d71ae6d96/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java#L100]
>  that the public constructor initializes with a new cached thread pool 
> (https://github.com/apache/kafka/blob/02226fa090513882b9229ac834fd493d71ae6d96/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java#L127).
> When the worker is stopped, it does not shut down this executor. This is 
> normally okay in the Connect runtime and MirrorMaker 2 runtimes, because the 
> worker is stopped only when the JVM is stopped (via the shutdown hook in the 
> herders).
> However, we instantiate and stop the herder many times in our integration 
> tests, and this means we're not necessarily shutting down the worker's 
> executor. Normally this won't hurt, as long as all of the runnables that the 
> executor threads run actually do terminate. But it's possible those threads 
> *might* not terminate in all tests. TBH, I don't know that such cases 
> actually exist.
>  
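
A minimal sketch of the kind of orderly shutdown being asked for (illustrative 
only, not Kafka's actual Worker code):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: the usual shutdown sequence for a cached thread
// pool like Worker's `executor`.
public class WorkerExecutorShutdownSketch {
    private final ExecutorService executor = Executors.newCachedThreadPool();

    public void stop() {
        executor.shutdown();                      // reject new tasks
        try {
            // Give in-flight runnables a bounded time to finish.
            if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
                executor.shutdownNow();           // interrupt stragglers
            }
        } catch (InterruptedException e) {
            executor.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}
{code}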



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (KAFKA-13859) SCRAM authentication issues with kafka-clients 3.0.1

2022-04-28 Thread dengziming (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dengziming resolved KAFKA-13859.

Resolution: Not A Problem

In KIP-679 
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-679%3A+Producer+will+enable+the+strongest+delivery+guarantee+by+default#KIP679:Producerwillenablethestrongestdeliveryguaranteebydefault-%60IDEMPOTENT_WRITE%60Deprecation):

We are relaxing the ACL restriction from {{IDEMPOTENT_WRITE}} to {{WRITE}} 
earlier (release version 2.8) and changing the producer defaults later (release 
version 3.0) in order to give community users enough time to upgrade their 
brokers first, so that their later client-side upgrade, which enables 
idempotence by default, won't get blocked by the {{IDEMPOTENT_WRITE}} ACL 
required by old-version brokers.

So this is intentional by design; we should help users make this change.
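
Until the broker-side ACLs are updated (granting {{WRITE}}, or 
{{IDEMPOTENT_WRITE}} on the cluster resource), one possible client-side 
workaround is to disable the new idempotence default explicitly. A minimal 
sketch, with a placeholder bootstrap address:

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch: opt out of the 3.0 idempotence default so the producer no
// longer needs the IDEMPOTENT_WRITE ACL on older/locked-down brokers.
public class NonIdempotentProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "false");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // produce as before; no idempotence-related ACL is required
        }
    }
}
{code}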

> SCRAM authentication issues with kafka-clients 3.0.1
> 
>
> Key: KAFKA-13859
> URL: https://issues.apache.org/jira/browse/KAFKA-13859
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.0.1
>Reporter: Oliver Payne
>Assignee: dengziming
>Priority: Major
>
> When attempting to produce records to Kafka using a client configured with 
> SCRAM authentication, the authentication is being rejected, and the following 
> exception is thrown:
> {{org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster 
> authorization failed.}}
> I am seeing this happen with a Spring Boot service that was recently upgraded 
> to 2.6.5. After looking into this, I learned that Spring Boot moved to 
> kafka-clients 3.0.1 from 3.0.0 in that version. And sure enough, downgrading 
> to kafka-clients 3.0.0 resolved the issue, with no changes made to the configs.
> I have also attempted to connect to a separate server with kafka-clients 
> 3.0.1, using plaintext authentication. That works fine. So the issue appears 
> to be with SCRAM authentication.
> I will note that I am attempting to connect to an AWS MSK instance. We use 
> SCRAM-SHA-512 as our SASL mechanism, using the basic {{ScramLoginModule}}.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


Re: Possible Bug: kafka-reassign-partitions causing the data retention time to be reset

2022-04-28 Thread Fares Oueslati
Thanks for your help!

I'm not sure how that would help me though. I'm not actually trying to
decommission a Kafka broker.
I would like to move all the data from one disk (log.dir) to another within
the same broker while keeping the original modification time of the moved
segment files.
After that I would like to delete the disk, not the broker.

Kind Regards,
Fares
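
For reference, a minimal Java sketch of the keep-and-reapply workaround
mentioned in the quoted message below (the class and paths are placeholders,
and touching Kafka's segment files by hand is not a supported procedure, so
treat this as an illustration only):

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileTime;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Stream;

public class SegmentMtimeSketch {
    // Record the last-modified time of every file under the old log.dir.
    static Map<Path, FileTime> snapshot(Path logDir) throws IOException {
        Map<Path, FileTime> times = new HashMap<>();
        try (Stream<Path> files = Files.walk(logDir)) {
            files.filter(Files::isRegularFile).forEach(p -> {
                try {
                    times.put(logDir.relativize(p), Files.getLastModifiedTime(p));
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
        return times;
    }

    // Re-apply the recorded times to the matching files in the new log.dir.
    static void restore(Path newLogDir, Map<Path, FileTime> times) throws IOException {
        for (Map.Entry<Path, FileTime> e : times.entrySet()) {
            Path target = newLogDir.resolve(e.getKey());
            if (Files.exists(target)) {
                Files.setLastModifiedTime(target, e.getValue());
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Placeholder paths; snapshot before the move, restore after the
        // kafka-reassign-partitions move has completed.
        Map<Path, FileTime> times = snapshot(Paths.get("/data/disk1/kafka-logs"));
        restore(Paths.get("/data/disk2/kafka-logs"), times);
    }
}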

On Thu, Apr 28, 2022 at 7:05 PM lqjacklee  wrote:

> The resource (https://mike.seid.io/blog/decommissiong-a-kafka-node.html)
> may help you.
> I have created (https://issues.apache.org/jira/browse/KAFKA-13860) to
> reproduce the case.
>
> On Thu, Apr 28, 2022 at 10:33 PM Fares Oueslati 
> wrote:
>
>> Hello,
>>
>> I'm not sure how to report this properly but I didn't get any answer in
>> the
>> user mailing list.
>>
>> In order to remove a disk in a JBOD setup, I moved all data from one disk
>> to another on every Kafka broker using kafka-reassign-partitions, then I
>> went through some weird behaviour.
>> Basically, the disk storage kept increasing even though there is no change
>> in the bytes-in metric per broker.
>> After investigation, I’ve seen that all segment log files in the new
>> log.dir had a modification date set to the moment when the move had been
>> done.
>> So I guess the process applying the retention policy (log cleaner?) uses
>> that timestamp to check whether the segment file should be deleted or not.
>> So I ended up with a lot more data than we were supposed to store, since
>> we
>> are basically doubling the retention time of all the freshly moved data.
>>
>> This seems to me to be buggy behavior of the command. Is it possible to
>> create a JIRA to track and eventually fix this?
>> The only option I see to fix it is to keep the modification date before
>> moving the data and apply it manually afterwards for every segment
>> file; touching those files manually doesn't seem very safe, imho.
>>
>> Thanks
>> Fares Oueslati
>>
>


[jira] [Created] (KAFKA-13861) validateOnly request field does not work for CreatePartition requests in Kraft mode.

2022-04-28 Thread Akhilesh Chaganti (Jira)
Akhilesh Chaganti created KAFKA-13861:
-

 Summary: validateOnly request field does not work for 
CreatePartition requests in Kraft mode.
 Key: KAFKA-13861
 URL: https://issues.apache.org/jira/browse/KAFKA-13861
 Project: Kafka
  Issue Type: Bug
Reporter: Akhilesh Chaganti
Assignee: Akhilesh Chaganti


`ControllerApis` ignores the `validateOnly` field and the `QuorumController` does 
not have any logic to handle `validateOnly` requests for `CreatePartitions`.
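
For context, a minimal sketch of the client call affected here: with 
validateOnly(true), the controller should only validate the request, never 
apply it (topic, partition count, and bootstrap address are placeholders):

{code}
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.CreatePartitionsOptions;
import org.apache.kafka.clients.admin.NewPartitions;

// Sketch: a validateOnly CreatePartitions call. Against a KRaft controller
// affected by this bug, the partition count may actually be changed rather
// than merely validated.
public class ValidateOnlyCreatePartitionsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            admin.createPartitions(
                    Map.of("my-topic", NewPartitions.increaseTo(6)), // placeholders
                    new CreatePartitionsOptions().validateOnly(true))
                .all().get();
        }
    }
}
{code}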



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.1 #109

2022-04-28 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 505321 lines...]
[2022-04-28T17:46:51.752Z] 
[2022-04-28T17:46:51.752Z] ZooKeeperClientTest > 
testZNodeChangeHandlerForDeletion() STARTED
[2022-04-28T17:46:51.752Z] 
[2022-04-28T17:46:51.752Z] ZooKeeperClientTest > 
testZNodeChangeHandlerForDeletion() PASSED
[2022-04-28T17:46:51.752Z] 
[2022-04-28T17:46:51.752Z] ZooKeeperClientTest > testGetAclNonExistentZNode() 
STARTED
[2022-04-28T17:46:51.752Z] 
[2022-04-28T17:46:51.752Z] ZooKeeperClientTest > testGetAclNonExistentZNode() 
PASSED
[2022-04-28T17:46:51.752Z] 
[2022-04-28T17:46:51.752Z] ZooKeeperClientTest > 
testStateChangeHandlerForAuthFailure() STARTED
[2022-04-28T17:46:52.773Z] 
[2022-04-28T17:46:52.773Z] ZooKeeperClientTest > 
testStateChangeHandlerForAuthFailure() PASSED
[2022-04-28T17:46:53.286Z] 
[2022-04-28T17:46:53.286Z] ServerGenerateBrokerIdTest > 
testDisableGeneratedBrokerId() PASSED
[2022-04-28T17:46:53.286Z] 
[2022-04-28T17:46:53.286Z] ServerGenerateBrokerIdTest > 
testUserConfigAndGeneratedBrokerId() STARTED
[2022-04-28T17:46:53.798Z] 
[2022-04-28T17:46:53.798Z] 1378 tests completed, 1 failed, 8 skipped
[2022-04-28T17:46:53.798Z] There were failing tests. See the report at: 
file:///home/jenkins/workspace/Kafka_kafka_3.1@2/core/build/reports/tests/integrationTest/index.html
[2022-04-28T17:46:54.823Z] 
[2022-04-28T17:46:54.823Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 8.0.
[2022-04-28T17:46:54.823Z] 
[2022-04-28T17:46:54.823Z] You can use '--warning-mode all' to show the 
individual deprecation warnings and determine if they come from your own 
scripts or plugins.
[2022-04-28T17:46:54.823Z] 
[2022-04-28T17:46:54.823Z] See 
https://docs.gradle.org/7.2/userguide/command_line_interface.html#sec:command_line_warnings
[2022-04-28T17:46:54.823Z] 
[2022-04-28T17:46:54.823Z] BUILD SUCCESSFUL in 2h 2m 57s
[2022-04-28T17:46:54.823Z] 202 actionable tasks: 109 executed, 93 up-to-date
[2022-04-28T17:46:54.823Z] 
[2022-04-28T17:46:54.823Z] See the profiling report at: 
file:///home/jenkins/workspace/Kafka_kafka_3.1@2/build/reports/profile/profile-2022-04-28-15-44-01.html
[2022-04-28T17:46:54.823Z] A fine-grained performance profile is available: use 
the --scan option.
[Pipeline] junit
[2022-04-28T17:46:55.844Z] Recording test results
[2022-04-28T17:46:58.555Z] 
[2022-04-28T17:46:58.555Z] ServerGenerateBrokerIdTest > 
testUserConfigAndGeneratedBrokerId() PASSED
[2022-04-28T17:46:58.555Z] 
[2022-04-28T17:46:58.555Z] ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps() STARTED
[2022-04-28T17:47:03.920Z] 
[2022-04-28T17:47:03.920Z] ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps() PASSED
[2022-04-28T17:47:03.920Z] 
[2022-04-28T17:47:03.920Z] MultipleListenersWithDefaultJaasContextTest > 
testProduceConsume() STARTED
[2022-04-28T17:47:06.815Z] [Checks API] No suitable checks publisher found.
[Pipeline] echo
[2022-04-28T17:47:06.816Z] Skipping Kafka Streams archetype test for Java 17
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[2022-04-28T17:47:34.470Z] 
[2022-04-28T17:47:34.470Z] MultipleListenersWithDefaultJaasContextTest > 
testProduceConsume() PASSED
[2022-04-28T17:47:34.470Z] 
[2022-04-28T17:47:34.470Z] ZooKeeperClientTest > 
testZNodeChangeHandlerForDataChange() STARTED
[2022-04-28T17:47:34.470Z] 
[2022-04-28T17:47:34.470Z] ZooKeeperClientTest > 
testZNodeChangeHandlerForDataChange() PASSED
[2022-04-28T17:47:34.470Z] 
[2022-04-28T17:47:34.470Z] ZooKeeperClientTest > 
testZooKeeperSessionStateMetric() STARTED
[2022-04-28T17:47:34.470Z] 
[2022-04-28T17:47:34.470Z] ZooKeeperClientTest > 
testZooKeeperSessionStateMetric() PASSED
[2022-04-28T17:47:34.470Z] 
[2022-04-28T17:47:34.470Z] ZooKeeperClientTest > 
testExceptionInBeforeInitializingSession() STARTED
[2022-04-28T17:47:34.470Z] 
[2022-04-28T17:47:34.470Z] ZooKeeperClientTest > 
testExceptionInBeforeInitializingSession() PASSED
[2022-04-28T17:47:34.470Z] 
[2022-04-28T17:47:34.470Z] ZooKeeperClientTest > testGetChildrenExistingZNode() 
STARTED
[2022-04-28T17:47:34.470Z] 
[2022-04-28T17:47:34.470Z] ZooKeeperClientTest > testGetChildrenExistingZNode() 
PASSED
[2022-04-28T17:47:34.470Z] 
[2022-04-28T17:47:34.470Z] ZooKeeperClientTest > testConnection() STARTED
[2022-04-28T17:47:34.470Z] 
[2022-04-28T17:47:34.470Z] ZooKeeperClientTest > testConnection() PASSED
[2022-04-28T17:47:34.470Z] 
[2022-04-28T17:47:34.470Z] ZooKeeperClientTest > 
testZNodeChangeHandlerForCreation() STARTED
[2022-04-28T17:47:34.470Z] 
[2022-04-28T17:47:34.470Z] ZooKeeperClientTest > 
testZNodeChangeHandlerForCreation() PASSED
[2022-04-28T17:47:34.470Z] 

Re: Possible Bug: kafka-reassign-partitions causing the data retention time to be reset

2022-04-28 Thread lqjacklee
The resource (https://mike.seid.io/blog/decommissiong-a-kafka-node.html)
may help you.
I have created (https://issues.apache.org/jira/browse/KAFKA-13860) to
reproduce the case.

On Thu, Apr 28, 2022 at 10:33 PM Fares Oueslati 
wrote:

> Hello,
>
> I'm not sure how to report this properly but I didn't get any answer in the
> user mailing list.
>
> In order to remove a disk in a JBOD setup, I moved all data from one disk
> to another on every Kafka broker using kafka-reassign-partitions, then I
> went through some weird behaviour.
> Basically, the disk storage kept increasing even though there is no change
> in the bytes-in metric per broker.
> After investigation, I’ve seen that all segment log files in the new
> log.dir had a modification date set to the moment when the move had been
> done.
> So I guess the process applying the retention policy (log cleaner?) uses
> that timestamp to check whether the segment file should be deleted or not.
> So I ended up with a lot more data than we were supposed to store, since we
> are basically doubling the retention time of all the freshly moved data.
>
> This seems to me to be buggy behavior of the command. Is it possible to
> create a JIRA to track and eventually fix this?
> The only option I see to fix it is to keep the modification date before
> moving the data and apply it manually afterwards for every segment
> file; touching those files manually doesn't seem very safe, imho.
>
> Thanks
> Fares Oueslati
>


[jira] [Created] (KAFKA-13860) add Decommissioning feature to kafka-reassign-partitions.sh

2022-04-28 Thread lqjacklee (Jira)
lqjacklee created KAFKA-13860:
-

 Summary: add Decommissioning feature to 
kafka-reassign-partitions.sh 
 Key: KAFKA-13860
 URL: https://issues.apache.org/jira/browse/KAFKA-13860
 Project: Kafka
  Issue Type: Task
Reporter: lqjacklee


Steps to reproduce the issue:


 * 1. Start up a cluster with brokers 1 and 2.
 * 2. Create topic "a", assigned to brokers 1 and 2.
 * 3. Add new broker 3 to the cluster.
 * 4. Reassign topic "a" from brokers 1 and 2 to broker 3:

 # Step 1 (generate reassignment JSON; this script):
 # $ kafka-move-leadership.sh --broker-id 4 --first-broker-id 0 
 #   --last-broker-id 8 --zookeeper zookeeper1:2181 > partitions-to-move.json
 #
 # Step 2 (start reassignment process; Kafka built-in script):
 # $ kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 
 #   --reassignment-json-file partitions-to-move.json --execute
 #
 # Step 3 (monitor progress of reassignment process; Kafka built-in script):
 # $ kafka-reassign-partitions.sh --zookeeper zookeeper1:2181 
 #   --reassignment-json-file partitions-to-move.json --verify

 * 5. Assert that no replicas remain on brokers 1 and 2.
 * 6. Finally, shut down brokers 1 and 2.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.2 #41

2022-04-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #896

2022-04-28 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.1 #108

2022-04-28 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 439762 lines...]
[2022-04-28T15:31:14.935Z] > Task :metadata:testClasses UP-TO-DATE
[2022-04-28T15:31:14.935Z] > Task 
:clients:generateMetadataFileForMavenJavaPublication
[2022-04-28T15:31:14.935Z] > Task 
:clients:generatePomFileForMavenJavaPublication
[2022-04-28T15:31:15.880Z] 
[2022-04-28T15:31:15.880Z] > Task :streams:processMessages
[2022-04-28T15:31:15.880Z] Execution optimizations have been disabled for task 
':streams:processMessages' to ensure correctness due to the following reasons:
[2022-04-28T15:31:15.880Z]   - Gradle detected a problem with the following 
location: 
'/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.1/streams/src/generated/java/org/apache/kafka/streams/internals/generated'.
 Reason: Task ':streams:srcJar' uses this output of task 
':streams:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2022-04-28T15:31:15.880Z] MessageGenerator: processed 1 Kafka message JSON 
files(s).
[2022-04-28T15:31:15.880Z] 
[2022-04-28T15:31:15.880Z] > Task :streams:compileJava UP-TO-DATE
[2022-04-28T15:31:15.880Z] > Task :streams:classes UP-TO-DATE
[2022-04-28T15:31:15.880Z] > Task :streams:test-utils:compileJava UP-TO-DATE
[2022-04-28T15:31:15.880Z] > Task :streams:copyDependantLibs
[2022-04-28T15:31:15.880Z] > Task :streams:jar UP-TO-DATE
[2022-04-28T15:31:15.880Z] > Task 
:streams:generateMetadataFileForMavenJavaPublication
[2022-04-28T15:31:19.490Z] > Task :connect:api:javadoc
[2022-04-28T15:31:19.490Z] > Task :connect:api:copyDependantLibs UP-TO-DATE
[2022-04-28T15:31:19.490Z] > Task :connect:api:jar UP-TO-DATE
[2022-04-28T15:31:19.490Z] > Task 
:connect:api:generateMetadataFileForMavenJavaPublication
[2022-04-28T15:31:19.490Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2022-04-28T15:31:19.490Z] > Task :connect:json:jar UP-TO-DATE
[2022-04-28T15:31:19.490Z] > Task 
:connect:json:generateMetadataFileForMavenJavaPublication
[2022-04-28T15:31:19.490Z] > Task :connect:api:javadocJar
[2022-04-28T15:31:19.490Z] > Task 
:connect:json:publishMavenJavaPublicationToMavenLocal
[2022-04-28T15:31:19.490Z] > Task :connect:json:publishToMavenLocal
[2022-04-28T15:31:19.490Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2022-04-28T15:31:19.490Z] > Task :connect:api:testClasses UP-TO-DATE
[2022-04-28T15:31:19.490Z] > Task :connect:api:testJar
[2022-04-28T15:31:19.490Z] > Task :connect:api:testSrcJar
[2022-04-28T15:31:19.490Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2022-04-28T15:31:19.490Z] > Task :connect:api:publishToMavenLocal
[2022-04-28T15:31:22.139Z] > Task :streams:javadoc
[2022-04-28T15:31:22.139Z] > Task :streams:javadocJar
[2022-04-28T15:31:23.084Z] 
[2022-04-28T15:31:23.084Z] > Task :clients:javadoc
[2022-04-28T15:31:23.084Z] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.1/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/OAuthBearerLoginCallbackHandler.java:147:
 warning - Tag @link: reference not found: 
[2022-04-28T15:31:24.029Z] 1 warning
[2022-04-28T15:31:24.974Z] 
[2022-04-28T15:31:24.974Z] > Task :clients:javadocJar
[2022-04-28T15:31:24.974Z] 
[2022-04-28T15:31:24.974Z] > Task :clients:srcJar
[2022-04-28T15:31:24.974Z] Execution optimizations have been disabled for task 
':clients:srcJar' to ensure correctness due to the following reasons:
[2022-04-28T15:31:24.974Z]   - Gradle detected a problem with the following 
location: 
'/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.1/clients/src/generated/java'.
 Reason: Task ':clients:srcJar' uses this output of task 
':clients:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2022-04-28T15:31:25.918Z] 
[2022-04-28T15:31:25.919Z] > Task :clients:testJar
[2022-04-28T15:31:25.919Z] > Task :clients:testSrcJar
[2022-04-28T15:31:26.864Z] > Task 
:clients:publishMavenJavaPublicationToMavenLocal
[2022-04-28T15:31:26.864Z] > Task :clients:publishToMavenLocal
[2022-04-28T15:31:45.933Z] > Task :core:compileScala
[2022-04-28T15:32:43.061Z] > Task :core:classes
[2022-04-28T15:32:43.061Z] > Task :core:compileTestJava NO-SOURCE
[2022-04-28T15:33:13.360Z] > Task :core:compileTestScala
[2022-04-28T15:33:54.982Z] > Task :core:testClasses
[2022-04-28T15:34:05.041Z] > Task :streams:compileTestJava
[2022-04-28T15:34:05.041Z] > Task :streams:testClasses
[2022-04-28T15:34:05.041Z] > Task :streams:testJar
[2022-04-28T15:34:05.985Z] > 

Possible Bug: kafka-reassign-partitions causing the data retention time to be reset

2022-04-28 Thread Fares Oueslati
Hello,

I'm not sure how to report this properly but I didn't get any answer in the
user mailing list.

In order to remove a disk in a JBOD setup, I moved all data from one disk
to another on every Kafka broker using kafka-reassign-partitions, then I
went through some weird behaviour.
Basically, the disk storage kept increasing even though there is no change
in the bytes-in metric per broker.
After investigation, I’ve seen that all segment log files in the new
log.dir had a modification date set to the moment when the move had been
done.
So I guess the process applying the retention policy (log cleaner?) uses
that timestamp to check whether the segment file should be deleted or not.
So I ended up with a lot more data than we were supposed to store, since we
are basically doubling the retention time of all the freshly moved data.

This seems to me to be buggy behavior of the command. Is it possible to
create a JIRA to track and eventually fix this?
The only option I see to fix it is to keep the modification date before
moving the data and apply it manually afterwards for every segment
file; touching those files manually doesn't seem very safe, imho.

Thanks
Fares Oueslati


[jira] [Resolved] (KAFKA-6084) ReassignPartitionsCommand should propagate JSON parsing failures

2022-04-28 Thread Viktor Somogyi-Vass (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viktor Somogyi-Vass resolved KAFKA-6084.

Fix Version/s: 2.8.0
   Resolution: Fixed

> ReassignPartitionsCommand should propagate JSON parsing failures
> 
>
> Key: KAFKA-6084
> URL: https://issues.apache.org/jira/browse/KAFKA-6084
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 0.11.0.0
>Reporter: Viktor Somogyi-Vass
>Assignee: Viktor Somogyi-Vass
>Priority: Minor
>  Labels: easyfix, newbie
> Fix For: 2.8.0
>
> Attachments: Screen Shot 2017-10-18 at 23.31.22.png
>
>
> Basically looking at Json.scala it will always swallow any parsing errors:
> {code}
>   def parseFull(input: String): Option[JsonValue] =
> try Option(mapper.readTree(input)).map(JsonValue(_))
> catch { case _: JsonProcessingException => None }
> {code}
> However, while it is sometimes easy to figure out the problem by simply 
> looking at the JSON, in other cases it is not trivial: some invisible 
> characters (like a byte order mark) won't be displayed by most text editors, 
> and people can spend a long time figuring out what the problem is.
> As Jackson provides a really detailed exception about what failed and how, it 
> is easy to propagate the failure to the user.
> As an example I attached a BOM-prefixed JSON file, which fails with the 
> following very counterintuitive error:
> {noformat}
> [root@localhost ~]# kafka-reassign-partitions --zookeeper localhost:2181 
> --reassignment-json-file /root/increase-replication-factor.json --execute
> Partitions reassignment failed due to Partition reassignment data file 
> /root/increase-replication-factor.json is empty
> kafka.common.AdminCommandFailedException: Partition reassignment data file 
> /root/increase-replication-factor.json is empty
> at 
> kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:120)
> at 
> kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:52)
> at kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
> ...
> {noformat}
> In case of the above error it would be much better to see what fails exactly:
> {noformat}
> kafka.common.AdminCommandFailedException: Admin command failed
>   at 
> kafka.admin.ReassignPartitionsCommand$.parsePartitionReassignmentData(ReassignPartitionsCommand.scala:267)
>   at 
> kafka.admin.ReassignPartitionsCommand$.parseAndValidate(ReassignPartitionsCommand.scala:275)
>   at 
> kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:197)
>   at 
> kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:193)
>   at 
> kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:64)
>   at 
> kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
> Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected 
> character ('' (code 65279 / 0xfeff)): expected a valid value (number, 
> String, array, object, 'true', 'false' or 'null')
>  at [Source: (String)"{"version":1,
>   "partitions":[
>{"topic": "test1", "partition": 0, "replicas": [1,2]},
>{"topic": "test2", "partition": 1, "replicas": [2,3]}
> ]}"; line: 1, column: 2]
>   at 
> com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1798)
>   at 
> com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:663)
>   at 
> com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:561)
>   at 
> com.fasterxml.jackson.core.json.ReaderBasedJsonParser._handleOddValue(ReaderBasedJsonParser.java:1892)
>   at 
> com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:747)
>   at 
> com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4030)
>   at 
> com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2539)
>   at kafka.utils.Json$.kafka$utils$Json$$doParseFull(Json.scala:46)
>   at kafka.utils.Json$$anonfun$tryParseFull$1.apply(Json.scala:44)
>   at kafka.utils.Json$$anonfun$tryParseFull$1.apply(Json.scala:44)
>   at scala.util.Try$.apply(Try.scala:192)
>   at kafka.utils.Json$.tryParseFull(Json.scala:44)
>   at 
> kafka.admin.ReassignPartitionsCommand$.parsePartitionReassignmentData(ReassignPartitionsCommand.scala:241)
>   ... 5 more
> {noformat}
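
A hedged sketch of the kind of propagation the ticket asks for, based on the 
parseFull snippet above (shown in Java for illustration; the actual fix in 
Json.scala may differ):

{code}
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Sketch: surface Jackson's detailed parse error to the caller instead
// of swallowing every failure into None.
public class JsonParseErrorSketch {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static JsonNode parseFullOrThrow(String input) {
        try {
            return MAPPER.readTree(input);
        } catch (JsonProcessingException e) {
            // e.getMessage() carries the location-aware detail shown above,
            // e.g. "Unexpected character (code 65279 / 0xfeff) ... line: 1, column: 2"
            throw new IllegalArgumentException("Failed to parse JSON: " + e.getMessage(), e);
        }
    }
}
{code}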



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


Not Able to Update Replication Factor of Kafka Topic

2022-04-28 Thread Ankit Bhalla
Hi Team,

We have a 3-node Kafka/ZooKeeper cluster, with Kafka and ZooKeeper
communicating over SSL.
We are currently using Apache Kafka 2.5 and ZooKeeper 3.5.7. We are
trying to increase the replication factor of Kafka topics using the
method below:

To increase the number of replicas for a given topic you have to:

1. Specify the extra replicas in a custom reassignment JSON file.
For example, you could create increase-replication-factor.json and put
this content in it:

{"version":1,
  "partitions":[
 {"topic":"signals","partition":0,"replicas":[0,1,2]},
 {"topic":"signals","partition":1,"replicas":[0,1,2]},
 {"topic":"signals","partition":2,"replicas":[0,1,2]}
]}
2. Use the file with the --execute option of the kafka-reassign-partitions tool
[or kafka-reassign-partitions.sh - depending on the kafka package]

For example:

$ kafka-reassign-partitions --zookeeper localhost:2182
--reassignment-json-file increase-replication-factor.json --execute
--command-config zookeeper_client.properties

But we are facing a problem while running kafka-reassign-partitions: when
running this command, the connection to ZooKeeper fails with the error
below:

2022-04-28 05:56:46,963 [myid:1] - ERROR
   [nioEventLoopGroup-7-3:NettyServerCnxnFactory$CertificateVerifier@363]
   - Unsuccessful handshake with session 0x0
2022-04-28 05:56:46,963 [myid:1] - WARN
   [nioEventLoopGroup-7-3:NettyServerCnxnFactory$CnxnChannelHandler@220]
   - Exception caught io.netty.handler.codec.DecoderException:
   io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record:
   
002d7530001000
   at
   
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
   at
   
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
   at
   
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)
   at
   
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
   at
   
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355)
   at
   
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
   at
   
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)
   at
   
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
   at
   
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
   at
   
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
   at
   io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
   at
   
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
   at
   io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) at
   
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
   at
   io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
   at
   
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
   at java.base/java.lang.Thread.run(Unknown Source) Caused by:
   io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record:
   
002d7530001000
   at
   io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1198)
   at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1266) at
   
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:498)
   at
   
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:437)

We are passing all the certificate and keystore data through
--command-config , the zookeeper_client.properties is as below:

zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.client.enable=true
zookeeper.ssl.protocol=TLSv1.2
zookeeper.ssl.truststore.location=kafka.truststore.jks
zookeeper.ssl.truststore.password=changeme
zookeeper.ssl.keystore.location=kafka.keystore.jks
zookeeper.ssl.keystore.password=changeme
zookeeper.ssl.endpoint.identification.algorithm=
zookeeper.ssl.hostnameVerification=false

We have also tried to set CLIENT_JVMFLAGS and KAFKA_OPTS with same jvm
arguments but that doesn't help.

The option of passing zookeeper_client.properties via
-zk-tls-config-file  is not available in kafka-reassign-partitions.sh.
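
One avenue that may be worth trying: since Kafka 2.4 the same reassignment
can be submitted through the AdminClient API, which connects to the brokers
rather than ZooKeeper and so would side-step the ZooKeeper TLS handshake.
A minimal sketch (placeholder bootstrap address; assumes the brokers'
listener is reachable with your client credentials):

import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitionReassignment;
import org.apache.kafka.common.TopicPartition;

// Sketch: the same replication-factor increase for topic "signals",
// submitted directly to the brokers via the AdminClient.
public class IncreaseReplicationFactorSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            Optional<NewPartitionReassignment> replicas =
                    Optional.of(new NewPartitionReassignment(List.of(0, 1, 2)));
            admin.alterPartitionReassignments(Map.of(
                    new TopicPartition("signals", 0), replicas,
                    new TopicPartition("signals", 1), replicas,
                    new TopicPartition("signals", 2), replicas))
                .all().get();
        }
    }
}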

Can someone please help with how we can solve this issue?

Thanks


-- 

Ankit Bhalla
Senior Software Engineer
abha...@pingidentity.com

Re: [VOTE] 3.1.1 RC0

2022-04-28 Thread Ismael Juma
Hi Tom,

I merged and cherry-picked the fix.

Ismael

On Thu, Apr 28, 2022 at 12:33 AM Tom Bentley  wrote:

> Hi Dongjoon,
>
> I apologise, I should have been a bit more communicative. I was waiting for
> a better fix to the issue previously highlighted by David. Ismael has
> kindly provided a patch [1], so I will roll RC1 once this is merged and
> cherry-picked.
>
> Kind regards,
>
> Tom
>
> [1]: https://github.com/apache/kafka/pull/12096
>
>
> On Thu, 28 Apr 2022 at 04:38, Luke Chen  wrote:
>
> > Hi Dongjoon,
> >
> > The Apache Kafka community doesn't recommend which version of Kafka users
> > should use.
> > The two releases, v3.1.1 and v3.2.0, are being prepared in parallel, and we
> > don't guarantee which version will be released earlier.
> >
> > Thank you.
> > Luke
> >
> >
> >
> >
> > On Thu, Apr 28, 2022 at 4:54 AM Dongjoon Hyun 
> wrote:
> >
> > > Hi, All.
> > >
> > > It seems that the Apache Kafka 3.2.0 RC0 vote has already started, instead
> > > of an Apache Kafka 3.1.1 release.
> > >
> > > Does the Apache Kafka community recommend using Apache Kafka 3.2.0 instead
> > > of Apache Kafka 3.1.1?
> > >
> > > Dongjoon.
> > >
> > > On 2022/04/14 01:00:40 Ismael Juma wrote:
> > > > I added a comment to that PR. Let's figure out if we need an
> additional
> > > > change before doing the next RC.
> > > >
> > > > Ismael
> > > >
> > > > On Tue, Apr 12, 2022 at 7:47 PM Luke Chen  wrote:
> > > >
> > > > > Thanks for pointing that out, David.
> > > > > +1 to include this PR since we've already included the first fix
> for
> > > > > KAFKA-13794, and this is a follow up fix for it.
> > > > >
> > > > > Thank you.
> > > > > Luke
> > > > >
> > > > > On Wed, Apr 13, 2022 at 2:31 AM David Jacot
> > > 
> > > > > wrote:
> > > > >
> > > > > > Hi Tom,
> > > > > >
> > > > > > Thanks for running the release. I wonder if we should include:
> > > > > >
> > > > > >
> > > > >
> > >
> >
> https://github.com/apache/kafka/commit/134c432d6452de1bfb99d0f6b455a58c16bc626a
> > > > > > .
> > > > > >
> > > > > > This is a follow up of KAFKA-13794. What do you think?
> > > > > >
> > > > > > Best,
> > > > > > David
> > > > > >
> > > > > > On Fri, Apr 8, 2022 at 6:18 PM Tom Bentley 
> > > wrote:
> > > > > > >
> > > > > > > Hello Kafka users, developers and client-developers,
> > > > > > >
> > > > > > > This is the first candidate for release of Apache Kafka 3.1.1.
> > > > > > >
> > > > > > > Apache Kafka 3.1.1 is a bugfix release and 29 issues have been
> > > fixed
> > > > > > > since 3.1.0.
> > > > > > >
> > > > > > > Release notes for the 3.1.1 release:
> > > > > > >
> > > https://home.apache.org/~tombentley/kafka-3.1.1-rc0/RELEASE_NOTES.html
> > > > > > >
> > > > > > > *** Please download, test and vote by Friday 15 April, 12:00
> UTC
> > > > > > >
> > > > > > > Kafka's KEYS file containing PGP keys we use to sign the
> release:
> > > > > > > https://kafka.apache.org/KEYS
> > > > > > >
> > > > > > > * Release artifacts to be voted upon (source and binary):
> > > > > > > https://home.apache.org/~tombentley/kafka-3.1.1-rc0/
> > > > > > >
> > > > > > > * Maven artifacts to be voted upon:
> > > > > > >
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > > > > > >
> > > > > > > * Javadoc:
> > > > > > > https://home.apache.org/~tombentley/kafka-3.1.1-rc0/javadoc/
> > > > > > >
> > > > > > > * Tag to be voted upon (off 3.1 branch) is the 3.1.1 tag:
> > > > > > > https://github.com/apache/kafka/releases/tag/3.1.1-rc0
> > > > > > >
> > > > > > > * Documentation:
> > > > > > > https://kafka.apache.org/31/documentation.html
> > > > > > >
> > > > > > > * Protocol:
> > > > > > > https://kafka.apache.org/31/protocol.html
> > > > > > >
> > > > > > > * Successful Jenkins builds for the 3.1 branch:
> > > > > > > I will share a link once the build is complete.
> > > > > > >
> > > > > > > /**
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > Tom
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: [VOTE] 3.1.1 RC0

2022-04-28 Thread Ismael Juma
Hi Dongjoon,

I think it makes sense to go with 3.1.1 for the Spark release you are
currently stabilizing.

Ismael

On Wed, Apr 27, 2022 at 1:54 PM Dongjoon Hyun  wrote:

> Hi, All.
>
> It seems that the Apache Kafka 3.2.0 RC0 vote has already started, instead of
> an Apache Kafka 3.1.1 release.
>
> Does the Apache Kafka community recommend using Apache Kafka 3.2.0 instead of
> Apache Kafka 3.1.1?
>
> Dongjoon.
>
> On 2022/04/14 01:00:40 Ismael Juma wrote:
> > I added a comment to that PR. Let's figure out if we need an additional
> > change before doing the next RC.
> >
> > Ismael
> >
> > On Tue, Apr 12, 2022 at 7:47 PM Luke Chen  wrote:
> >
> > > Thanks for pointing that out, David.
> > > +1 to include this PR since we've already included the first fix for
> > > KAFKA-13794, and this is a follow up fix for it.
> > >
> > > Thank you.
> > > Luke
> > >
> > > On Wed, Apr 13, 2022 at 2:31 AM David Jacot
> 
> > > wrote:
> > >
> > > > Hi Tom,
> > > >
> > > > Thanks for running the release. I wonder if we should include:
> > > >
> > > >
> > >
> https://github.com/apache/kafka/commit/134c432d6452de1bfb99d0f6b455a58c16bc626a
> > > > .
> > > >
> > > > This is a follow up of KAFKA-13794. What do you think?
> > > >
> > > > Best,
> > > > David
> > > >
> > > > On Fri, Apr 8, 2022 at 6:18 PM Tom Bentley 
> wrote:
> > > > >
> > > > > Hello Kafka users, developers and client-developers,
> > > > >
> > > > > This is the first candidate for release of Apache Kafka 3.1.1.
> > > > >
> > > > > Apache Kafka 3.1.1 is a bugfix release and 29 issues have been
> fixed
> > > > > since 3.1.0.
> > > > >
> > > > > Release notes for the 3.1.1 release:
> > > > >
> https://home.apache.org/~tombentley/kafka-3.1.1-rc0/RELEASE_NOTES.html
> > > > >
> > > > > *** Please download, test and vote by Friday 15 April, 12:00 UTC
> > > > >
> > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > > https://kafka.apache.org/KEYS
> > > > >
> > > > > * Release artifacts to be voted upon (source and binary):
> > > > > https://home.apache.org/~tombentley/kafka-3.1.1-rc0/
> > > > >
> > > > > * Maven artifacts to be voted upon:
> > > > >
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > > > >
> > > > > * Javadoc:
> > > > > https://home.apache.org/~tombentley/kafka-3.1.1-rc0/javadoc/
> > > > >
> > > > > * Tag to be voted upon (off 3.1 branch) is the 3.1.1 tag:
> > > > > https://github.com/apache/kafka/releases/tag/3.1.1-rc0
> > > > >
> > > > > * Documentation:
> > > > > https://kafka.apache.org/31/documentation.html
> > > > >
> > > > > * Protocol:
> > > > > https://kafka.apache.org/31/protocol.html
> > > > >
> > > > > * Successful Jenkins builds for the 3.1 branch:
> > > > > I will share a link once the build is complete.
> > > > >
> > > > > /**
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Tom
> > > >
> > >
> >
>


[jira] [Resolved] (KAFKA-13442) REST API endpoint for fetching a connector's config definition

2022-04-28 Thread Viktor Somogyi-Vass (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viktor Somogyi-Vass resolved KAFKA-13442.
-
Resolution: Duplicate

> REST API endpoint for fetching a connector's config definition
> --
>
> Key: KAFKA-13442
> URL: https://issues.apache.org/jira/browse/KAFKA-13442
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 3.2.0
>Reporter: Viktor Somogyi-Vass
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>
> To enhance UI-based applications' ability to help users create new 
> connectors from default configurations, it would be very useful to have an 
> API that can fetch a connector type's configuration definition, which users 
> would fill out and send back for validation before creating a new connector 
> from it.
> The API should be placed under {{connector-plugins}}, and since 
> {{connector-plugins/\{connectorType\}/config/validate}} already exists, 
> {{connector-plugins/\{connectorType\}/config}} might be a good option for the 
> new API.
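
As an illustration of how a UI could consume the proposed endpoint (the path 
follows the ticket's suggestion and is hypothetical, as are the host and 
plugin name; the ticket was resolved as a duplicate of the KIP-769 work):

{code}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: fetch the (proposed) config definition for a connector plugin.
public class ConnectorConfigFetchSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://connect:8083/connector-plugins/FileStreamSource/config")) // placeholders
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // config definition to present to the user
    }
}
{code}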



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (KAFKA-13452) MM2 creates invalid checkpoint when offset mapping is not available

2022-04-28 Thread Viktor Somogyi-Vass (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viktor Somogyi-Vass resolved KAFKA-13452.
-
Resolution: Duplicate

> MM2 creates invalid checkpoint when offset mapping is not available
> ---
>
> Key: KAFKA-13452
> URL: https://issues.apache.org/jira/browse/KAFKA-13452
> Project: Kafka
>  Issue Type: Improvement
>  Components: mirrormaker
>Reporter: Daniel Urban
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>
> MM2 checkpointing reads the offset-syncs topic to create offset mappings for 
> committed consumer group offsets. In some corner cases, it is possible that a 
> mapping is not available in offset-syncs - in that case, MM2 simply copies 
> the source offset, which might not be a valid offset in the replica topic at 
> all.
> One possible situation is if there is an empty topic in the source cluster 
> with a non-zero end offset (e.g. retention already removed the records), and a 
> consumer group which has a committed offset set to the end offset. If 
> replication is configured to start replicating this topic, it will not have 
> an offset mapping available in offset-syncs (as the topic is empty), causing 
> MM2 to copy the source offset.
> This can cause issues when auto offset sync is enabled, as the consumer group 
> offset can be potentially set to a high number. MM2 never rewinds these 
> offsets, so even when there is a correct offset mapping available, the offset 
> will not be updated correctly.
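
A sketch of the guard this ticket implies (illustrative only, not MM2's 
actual code): emit a checkpoint only when offset-syncs yields a real mapping, 
instead of copying the source offset verbatim:

{code}
import java.util.OptionalLong;

// Illustrative guard: without a mapping from offset-syncs, skip the
// checkpoint rather than writing a possibly-invalid source offset.
public class CheckpointGuardSketch {
    static OptionalLong translateOffset(Long mappedTargetOffset) {
        if (mappedTargetOffset == null) {
            return OptionalLong.empty(); // no mapping: skip this checkpoint
        }
        return OptionalLong.of(mappedTargetOffset);
    }
}
{code}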



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


Re: [kafka-clients] Re: [VOTE] 3.2.0 RC0

2022-04-28 Thread Bruno Cadonna

Hi Luke,

Thanks for your message!

I agree and I will include this PR in the next RC.

Best,
Bruno

On 28.04.22 09:38, Luke Chen wrote:

Hi Bruno,

I think this PR might also need to be included in next release candidate
because it's a bug fix/optimization for an issue introduced in v3.2.0.
https://github.com/apache/kafka/pull/12096

cc Ismael

Thank you.
Luke

On Thu, Apr 28, 2022 at 3:36 AM Guozhang Wang  wrote:


Hi Bruno,

Could I also have this commit (

https://github.com/apache/kafka/commit/e026384ffb3170a2e71053a4163db58b9bd8fba6
)
in the next release candidate? It's fixing a performance regression that
was just introduced, and not yet released in older versions.


Guozhang

On Tue, Apr 26, 2022 at 11:01 AM Jun Rao  wrote:


Hi, Bruno.

Thanks for the reply. Your understanding is correct. This is a regression
introduced only in the 3.2 branch.

Sorry for the late notice.

Jun

On Tue, Apr 26, 2022 at 10:04 AM Bruno Cadonna 

wrote:



Hi Jun,

Thank you for your message!

Now I see how this issue was introduced in 3.2.0. The fix for the bug
described in KAFKA-12841 introduced it, right? I initially understood
that the PR you want to include is the fix for the bug described in
KAFKA-12841 which dates back to 2.6.

I think that classifies as a regression.

I will abort the voting and create a new release candidate.

Best,
Bruno

On 26.04.22 18:09, 'Jun Rao' via kafka-clients wrote:

Hi, Bruno,

Could we include https://github.com/apache/kafka/pull/12064
 in 3.2.0? This fixes an
issue introduced in 3.2.0 where in some of the error cases, the

producer

interceptor is called twice for the same record.

Thanks,

Jun

On Tue, Apr 26, 2022 at 6:34 AM Bruno Cadonna <cado...@apache.org> wrote:

 Hi all,

 This is a gentle reminder to vote for the first candidate for
 release of
 Apache Kafka 3.2.0.

 I added the 3.2 documentation to the kafka site. That means
 https://kafka.apache.org/32/documentation.html
  works now.

 A successful system tests run can be found here:
 https://jenkins.confluent.io/job/system-test-kafka/job/3.2/24/
 

 Thank you to Michal for voting on the release candidate.

 Best,
 Bruno

 On 15.04.22 21:05, Bruno Cadonna wrote:
  > Hello Kafka users, developers and client-developers,
  >
  > This is the first candidate for release of Apache Kafka 3.2.0.
  >
  > * log4j 1.x is replaced with reload4j (KAFKA-9366)
  > * StandardAuthorizer for KRaft (KIP-801)
  > * Send a hint to the partition leader to recover the partition
 (KIP-704)
  > * Top-level error code field in DescribeLogDirsResponse

(KIP-784)

  > * kafka-console-producer writes headers and null values

(KIP-798

and

  > KIP-810)
  > * JoinGroupRequest and LeaveGroupRequest have a reason

attached

 (KIP-800)
  > * Static membership protocol lets the leader skip assignment
 (KIP-814)
  > * Rack-aware standby task assignment in Kafka Streams

(KIP-708)

  > * Interactive Query v2 (KIP-796, KIP-805, and KIP-806)
  > * Connect APIs list all connector plugins and retrieve their
  > configuration (KIP-769)
  > * TimestampConverter SMT supports different unix time

precisions

 (KIP-808)
  > * Connect source tasks handle producer exceptions (KIP-779)
  >
  > Release notes for the 3.2.0 release:
  >


https://home.apache.org/~cadonna/kafka-3.2.0-rc0/RELEASE_NOTES.html

 <

https://home.apache.org/~cadonna/kafka-3.2.0-rc0/RELEASE_NOTES.html


  >
  > *** Please download, test and vote by Monday, April 25, 9am

CEST

  >
  > Kafka's KEYS file containing PGP keys we use to sign the

release:

  > https://kafka.apache.org/KEYS 
  >
  > * Release artifacts to be voted upon (source and binary):
  > https://home.apache.org/~cadonna/kafka-3.2.0-rc0/
 
  >
  > * Maven artifacts to be voted upon:
  >


https://repository.apache.org/content/groups/staging/org/apache/kafka/

 <

https://repository.apache.org/content/groups/staging/org/apache/kafka/



  >
  > * Javadoc:
  > https://home.apache.org/~cadonna/kafka-3.2.0-rc0/javadoc/
 
  >
  > * Tag to be voted upon (off 3.2 branch) is the 3.2.0 tag:
  > https://github.com/apache/kafka/releases/tag/3.2.0-rc0
 
  >
  > * Documentation (not yet ported to kafka-site):
  > https://kafka.apache.org/32/documentation.html
 
  >
  > * Protocol:
  > https://kafka.apache.org/32/protocol.html
 

Re: [kafka-clients] Re: [VOTE] 3.2.0 RC0

2022-04-28 Thread Luke Chen
Hi Bruno,

I think this PR might also need to be included in next release candidate
because it's a bug fix/optimization for an issue introduced in v3.2.0.
https://github.com/apache/kafka/pull/12096

cc Ismael

Thank you.
Luke

On Thu, Apr 28, 2022 at 3:36 AM Guozhang Wang  wrote:

> Hi Bruno,
>
> Could I also have this commit (
>
> https://github.com/apache/kafka/commit/e026384ffb3170a2e71053a4163db58b9bd8fba6
> )
> in the next release candidate? It's fixing a performance regression that
> was just introduced, and not yet released in older versions.
>
>
> Guozhang
>
> On Tue, Apr 26, 2022 at 11:01 AM Jun Rao  wrote:
>
> > Hi, Bruno.
> >
> > Thanks for the reply. Your understanding is correct. This is a regression
> > introduced only in the 3.2 branch.
> >
> > Sorry for the late notice.
> >
> > Jun
> >
> > On Tue, Apr 26, 2022 at 10:04 AM Bruno Cadonna 
> wrote:
> >
> > > Hi Jun,
> > >
> > > Thank you for your message!
> > >
> > > Now I see how this issue was introduced in 3.2.0. The fix for the bug
> > > described in KAFKA-12841 introduced it, right? I initially understood
> > > that the PR you want to include is the fix for the bug described in
> > > KAFKA-12841 which dates back to 2.6.
> > >
> > > I think that classifies as a regression.
> > >
> > > I will abort the voting and create a new release candidate.
> > >
> > > Best,
> > > Bruno
> > >
> > > On 26.04.22 18:09, 'Jun Rao' via kafka-clients wrote:
> > > > Hi, Bruno,
> > > >
> > > > Could we include https://github.com/apache/kafka/pull/12064
> > > >  in 3.2.0? This fixes an
> > > > issue introduced in 3.2.0 where in some of the error cases, the
> > producer
> > > > interceptor is called twice for the same record.
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Tue, Apr 26, 2022 at 6:34 AM Bruno Cadonna <cado...@apache.org> wrote:
> > > >
> > > > Hi all,
> > > >
> > > > This is a gentle reminder to vote for the first candidate for
> > > > release of
> > > > Apache Kafka 3.2.0.
> > > >
> > > > I added the 3.2 documentation to the kafka site. That means
> > > > https://kafka.apache.org/32/documentation.html
> > > >  works now.
> > > >
> > > > A successful system tests run can be found here:
> > > > https://jenkins.confluent.io/job/system-test-kafka/job/3.2/24/
> > > > 
> > > >
> > > > Thank you to Michal for voting on the release candidate.
> > > >
> > > > Best,
> > > > Bruno
> > > >
> > > > On 15.04.22 21:05, Bruno Cadonna wrote:
> > > >  > Hello Kafka users, developers and client-developers,
> > > >  >
> > > >  > This is the first candidate for release of Apache Kafka 3.2.0.
> > > >  >
> > > >  > * log4j 1.x is replaced with reload4j (KAFKA-9366)
> > > >  > * StandardAuthorizer for KRaft (KIP-801)
> > > >  > * Send a hint to the partition leader to recover the partition
> > > > (KIP-704)
> > > >  > * Top-level error code field in DescribeLogDirsResponse
> > (KIP-784)
> > > >  > * kafka-console-producer writes headers and null values
> (KIP-798
> > > and
> > > >  > KIP-810)
> > > >  > * JoinGroupRequest and LeaveGroupRequest have a reason
> attached
> > > > (KIP-800)
> > > >  > * Static membership protocol lets the leader skip assignment
> > > > (KIP-814)
> > > >  > * Rack-aware standby task assignment in Kafka Streams
> (KIP-708)
> > > >  > * Interactive Query v2 (KIP-796, KIP-805, and KIP-806)
> > > >  > * Connect APIs list all connector plugins and retrieve their
> > > >  > configuration (KIP-769)
> > > >  > * TimestampConverter SMT supports different unix time
> precisions
> > > > (KIP-808)
> > > >  > * Connect source tasks handle producer exceptions (KIP-779)
> > > >  >
> > > >  > Release notes for the 3.2.0 release:
> > > >  >
> > > >
> > https://home.apache.org/~cadonna/kafka-3.2.0-rc0/RELEASE_NOTES.html
> > > > <
> > https://home.apache.org/~cadonna/kafka-3.2.0-rc0/RELEASE_NOTES.html
> > > >
> > > >  >
> > > >  > *** Please download, test and vote by Monday, April 25, 9am
> CEST
> > > >  >
> > > >  > Kafka's KEYS file containing PGP keys we use to sign the
> > release:
> > > >  > https://kafka.apache.org/KEYS 
> > > >  >
> > > >  > * Release artifacts to be voted upon (source and binary):
> > > >  > https://home.apache.org/~cadonna/kafka-3.2.0-rc0/
> > > > 
> > > >  >
> > > >  > * Maven artifacts to be voted upon:
> > > >  >
> > > >
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > > > <
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > > >  >
> > > >  > * Javadoc:
> > > >  > 

Re: [VOTE] 3.1.1 RC0

2022-04-28 Thread Tom Bentley
Hi Dongjoon,

I apologise, I should have been a bit more communicative. I was waiting for
a better fix to the issue previously highlighted by David. Ismael has
kindly provided a patch [1], so I will roll RC1 once this is merged and
cherry-picked.

Kind regards,

Tom

[1]: https://github.com/apache/kafka/pull/12096


On Thu, 28 Apr 2022 at 04:38, Luke Chen  wrote:

> Hi Dongjoon,
>
> The Apache Kafka community doesn't recommend which version of Kafka users
> should use.
> The two releases, v3.1.1 and v3.2.0, are being prepared in parallel, and we
> don't guarantee which version will be released earlier.
>
> Thank you.
> Luke
>
>
>
>
> On Thu, Apr 28, 2022 at 4:54 AM Dongjoon Hyun  wrote:
>
> > Hi, All.
> >
> > It seems that the Apache Kafka 3.2.0 RC0 vote has already started, instead
> > of an Apache Kafka 3.1.1 release.
> >
> > Does the Apache Kafka community recommend using Apache Kafka 3.2.0 instead
> > of Apache Kafka 3.1.1?
> >
> > Dongjoon.
> >
> > On 2022/04/14 01:00:40 Ismael Juma wrote:
> > > I added a comment to that PR. Let's figure out if we need an additional
> > > change before doing the next RC.
> > >
> > > Ismael
> > >
> > > On Tue, Apr 12, 2022 at 7:47 PM Luke Chen  wrote:
> > >
> > > > Thanks for pointing that out, David.
> > > > +1 to include this PR since we've already included the first fix for
> > > > KAFKA-13794, and this is a follow up fix for it.
> > > >
> > > > Thank you.
> > > > Luke
> > > >
> > > > On Wed, Apr 13, 2022 at 2:31 AM David Jacot
> > 
> > > > wrote:
> > > >
> > > > > Hi Tom,
> > > > >
> > > > > Thanks for running the release. I wonder if we should include:
> > > > >
> > > > >
> > > >
> >
> https://github.com/apache/kafka/commit/134c432d6452de1bfb99d0f6b455a58c16bc626a
> > > > > .
> > > > >
> > > > > This is a follow up of KAFKA-13794. What do you think?
> > > > >
> > > > > Best,
> > > > > David
> > > > >
> > > > > On Fri, Apr 8, 2022 at 6:18 PM Tom Bentley 
> > wrote:
> > > > > >
> > > > > > Hello Kafka users, developers and client-developers,
> > > > > >
> > > > > > This is the first candidate for release of Apache Kafka 3.1.1.
> > > > > >
> > > > > > Apache Kafka 3.1.1 is a bugfix release and 29 issues have been
> > fixed
> > > > > > since 3.1.0.
> > > > > >
> > > > > > Release notes for the 3.1.1 release:
> > > > > >
> > https://home.apache.org/~tombentley/kafka-3.1.1-rc0/RELEASE_NOTES.html
> > > > > >
> > > > > > *** Please download, test and vote by Friday 15 April, 12:00 UTC
> > > > > >
> > > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > > > https://kafka.apache.org/KEYS
> > > > > >
> > > > > > * Release artifacts to be voted upon (source and binary):
> > > > > > https://home.apache.org/~tombentley/kafka-3.1.1-rc0/
> > > > > >
> > > > > > * Maven artifacts to be voted upon:
> > > > > >
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > > > > >
> > > > > > * Javadoc:
> > > > > > https://home.apache.org/~tombentley/kafka-3.1.1-rc0/javadoc/
> > > > > >
> > > > > > * Tag to be voted upon (off 3.1 branch) is the 3.1.1 tag:
> > > > > > https://github.com/apache/kafka/releases/tag/3.1.1-rc0
> > > > > >
> > > > > > * Documentation:
> > > > > > https://kafka.apache.org/31/documentation.html
> > > > > >
> > > > > > * Protocol:
> > > > > > https://kafka.apache.org/31/protocol.html
> > > > > >
> > > > > > * Successful Jenkins builds for the 3.1 branch:
> > > > > > I will share a link once the build is complete.
> > > > > >
> > > > > > /**
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Tom
> > > > >
> > > >
> > >
> >
>