[GitHub] [kafka] omkreddy merged pull request #14445: KAFKA-15502: Update SslEngineValidator to handle large stores

2023-09-25 Thread via GitHub


omkreddy merged PR #14445:
URL: https://github.com/apache/kafka/pull/14445


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka] omkreddy commented on pull request #14445: KAFKA-15502: Update SslEngineValidator to handle large stores

2023-09-25 Thread via GitHub


omkreddy commented on PR #14445:
URL: https://github.com/apache/kafka/pull/14445#issuecomment-1734866919

   Test failures are not related. Merging to trunk and older branches.





[jira] [Commented] (KAFKA-15169) Add tests for RemoteIndexCache

2023-09-25 Thread Arpit Goyal (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17769006#comment-17769006
 ] 

Arpit Goyal commented on KAFKA-15169:
-

[~divijvaidya] [~satish.duggana] In test case 1, which you mentioned for the 
corrupted-file scenario, I discovered one more use case we are not handling, 
which can be problematic:
Example 
1. We call getIndexEntry.
2. It succeeds.
3. Somehow the offsetIndex file is corrupted.
4. We call getIndexEntry again.
5. As the entry is already in the cache, it always returns the corrupted file 
without running the sanity check.
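The failure mode described above can be sketched with a toy cache (a hypothetical stand-in, not the real RemoteIndexCache API): because the sanity check runs only inside the cache loader, an entry whose backing file goes bad after a successful load keeps being served without re-validation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy sketch of the scenario above; names are illustrative, not the real
// RemoteIndexCache API. The sanity check lives inside the loader, so it only
// runs on a cache miss.
class CacheSketch {
    static final Map<String, String> cache = new HashMap<>();

    static String getIndexEntry(String key, Function<String, String> loadAndSanityCheck) {
        // computeIfAbsent invokes the loader (and its sanity check) only when
        // the key is absent; a cached entry is returned unvalidated.
        return cache.computeIfAbsent(key, loadAndSanityCheck);
    }

    public static void main(String[] args) {
        // Steps 1-2: the first call loads and validates successfully.
        String first = getIndexEntry("offset-index-0", k -> "loaded-and-validated");
        // Step 3: the on-disk file is corrupted afterwards (invisible to the cache).
        // Steps 4-5: the second call hits the cache; the stale entry is returned
        // and the failing sanity check below is never invoked.
        String second = getIndexEntry("offset-index-0", k -> {
            throw new IllegalStateException("sanity check failed: corrupt file");
        });
        System.out.println(first.equals(second)); // prints "true"
    }
}
```

A fix along the lines discussed here would either re-validate entries on cache hits or invalidate the cached entry once corruption is detected.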



> Add tests for RemoteIndexCache
> --
>
> Key: KAFKA-15169
> URL: https://issues.apache.org/jira/browse/KAFKA-15169
> Project: Kafka
>  Issue Type: Test
>Reporter: Satish Duggana
>Assignee: Arpit Goyal
>Priority: Major
>  Labels: KIP-405
> Fix For: 3.7.0
>
>
> Follow-up from 
> https://github.com/apache/kafka/pull/13275#discussion_r1257490978



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] tombentley commented on a diff in pull request #13862: KAFKA-15050: format the prompts in the quickstart

2023-09-25 Thread via GitHub


tombentley commented on code in PR #13862:
URL: https://github.com/apache/kafka/pull/13862#discussion_r1336627061


##
docs/quickstart.html:
##
@@ -154,9 +154,9 @@ 
 By default, each line you enter will result in a separate event 
being written to the topic.
 
 
-$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
-This is my first event
-This is my second event
+$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
+> This is my first event
+> This is my second event

Review Comment:
   The console producer doesn't actually include a space after the `>`.



##
docs/quickstart.html:
##
@@ -32,7 +32,7 @@ 
 the latest Kafka release and extract it:
 
 
-$ tar -xzf kafka_{{scalaVersion}}-{{fullDotVersion}}.tgz
+$ tar -xzf kafka_{{scalaVersion}}-{{fullDotVersion}}.tgz

Review Comment:
   It works fine for me with ``, using both Firefox 
and a Chrome-based browser. 



##
docs/streams/quickstart.html:
##
@@ -152,8 +150,8 @@ Step 3
 
 Next, we create the input topic named streams-plaintext-input and the 
output topic named streams-wordcount-output:
 
- bin/kafka-topics.sh --create \
---bootstrap-server localhost:9092 \
+$ bin/kafka-topics.sh --create \
+--bootstrap-server localhost:9092 \

Review Comment:
   We should keep the indentation of this line (as we do for the following 
continuation lines).






[GitHub] [kafka] ijuma opened a new pull request, #14451: MINOR: Replace Java 20 with Java 21 in `README.md`

2023-09-25 Thread via GitHub


ijuma opened a new pull request, #14451:
URL: https://github.com/apache/kafka/pull/14451

   I intended to include this in 99e6f12dd099, but somehow missed it.
   
   I will wait for KAFKA-15943 before updating the site docs with Java 21 (we 
never added Java 20 there).
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Commented] (KAFKA-13882) Dockerfile for previewing website

2023-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17768985#comment-17768985
 ] 

ASF GitHub Bot commented on KAFKA-13882:


tombentley commented on PR #410:
URL: https://github.com/apache/kafka-site/pull/410#issuecomment-173498

   @mimaison are you happy with the latest changes, which don't touch the 
`.htaccess`? While I think the duplication is not _ideal_, this change would 
make things easier for docs contributors. 




> Dockerfile for previewing website
> -
>
> Key: KAFKA-13882
> URL: https://issues.apache.org/jira/browse/KAFKA-13882
> Project: Kafka
>  Issue Type: Task
>  Components: docs, website
>Reporter: Tom Bentley
>Assignee: Lim Qing Wei
>Priority: Trivial
>  Labels: newbie
>
> Previewing changes to the website/documentation is rather difficult because 
> you either have to [hack with the 
> HTML|https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+Documentation+Changes#ContributingWebsiteDocumentationChanges-KafkaWebsiteRepository]
>  or [install 
> httpd|https://cwiki.apache.org/confluence/display/KAFKA/Setup+Kafka+Website+on+Local+Apache+Server].
>  This is a barrier to contribution.
> Having a Dockerfile for previewing the Kafka website (i.e. with httpd 
> properly set up) would make it easier for people to contribute website/docs 
> changes.





[jira] [Commented] (KAFKA-13882) Dockerfile for previewing website

2023-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17768982#comment-17768982
 ] 

ASF GitHub Bot commented on KAFKA-13882:


tombentley commented on code in PR #410:
URL: https://github.com/apache/kafka-site/pull/410#discussion_r1336588112


##
start-preview.sh:
##
@@ -0,0 +1,7 @@
+#!/usr/bin/env bash
+
+set -euxo pipefail
+
+docker build -t kafka-site-preview .
+
+docker run -it --rm --name mypreview -p 8080:80 -v 
"$PWD":/usr/local/apache2/htdocs/ kafka-site-preview

Review Comment:
   I needed to add `:z` to the volume mount for this to work with podman (as 
opposed to docker) on Fedora.
   
   ```suggestion
   docker run -it --rm --name mypreview -p 8080:80 -v 
"$PWD":/usr/local/apache2/htdocs/:z kafka-site-preview
   ```





> Dockerfile for previewing website
> -
>
> Key: KAFKA-13882
> URL: https://issues.apache.org/jira/browse/KAFKA-13882
> Project: Kafka
>  Issue Type: Task
>  Components: docs, website
>Reporter: Tom Bentley
>Assignee: Lim Qing Wei
>Priority: Trivial
>  Labels: newbie
>
> Previewing changes to the website/documentation is rather difficult because 
> you either have to [hack with the 
> HTML|https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+Documentation+Changes#ContributingWebsiteDocumentationChanges-KafkaWebsiteRepository]
>  or [install 
> httpd|https://cwiki.apache.org/confluence/display/KAFKA/Setup+Kafka+Website+on+Local+Apache+Server].
>  This is a barrier to contribution.
> Having a Dockerfile for previewing the Kafka website (i.e. with httpd 
> properly set up) would make it easier for people to contribute website/docs 
> changes.





[GitHub] [kafka] github-actions[bot] commented on pull request #13852: KAFKA-15086:Set a reasonable segment size upper limit for MM2 internal topics

2023-09-25 Thread via GitHub


github-actions[bot] commented on PR #13852:
URL: https://github.com/apache/kafka/pull/13852#issuecomment-1734772707

   This PR is being marked as stale since it has not had any activity in 90 
days. If you would like to keep this PR alive, please ask a committer for 
review. If the PR has merge conflicts, please update it with the latest from 
trunk (or the appropriate release branch). If this PR is no longer valid or 
desired, please feel free to close it. If no activity occurs in the next 30 
days, it will be automatically closed.





[jira] [Updated] (KAFKA-15505) MM2 consumer group subscription different on target cluster

2023-09-25 Thread Srinivas Boga (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srinivas Boga updated KAFKA-15505:
--
Description: 
I am running MirrorMaker 2 in distributed mode with 3 nodes and 10 tasks each; 
Kafka version: 3.5.1.

On the source cluster I see the consumer group has subscriptions to topic-A 
and topic-B, but on the target cluster I see only the subscription to topic-A.

Restarting did not help. Also, the consumed offsets of topic-A differ on the 
target cluster: I see a lag of zero for all partitions on the source cluster, 
but the lag keeps building up on the target cluster.
{code:java}
sync.group.offsets.enabled = true
sync.group.offsets.interval.seconds = 1{code}
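For context, the two settings above belong to MirrorMaker 2's consumer-group offset sync feature (KIP-545). A minimal sketch of the relevant MM2 properties follows; the cluster aliases and topic list are placeholders, and only the `sync.group.offsets.*` lines are taken from the report above:

```properties
# Placeholder cluster aliases and topic list (illustrative only).
clusters = source, target
source->target.enabled = true
source->target.topics = topic-A, topic-B
sync.group.offsets.enabled = true
sync.group.offsets.interval.seconds = 1
emit.checkpoints.enabled = true
```

One thing worth checking for the missing topic-B subscription is whether topic-B actually matches the replication topic filter, since offsets are only synced for topics that MM2 replicates and checkpoints.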
 

Any help on this would be appreciated.

 

Thanks,

-srini

 

 

  was:
I am running mirrormaker2 in distributed mode with 3 nodes and 10 tasks each , 
kafka version: 3.5.1

On the source cluster , i see the consumer group has subscriptions of topic-A 
and topic-B, but on the target cluster i am seeing only the subscription of 
topic-A

restarting did not help, Also the the consumed offsets of the topic-A are 
different on the target cluster, i see the lag of zero for all partitions on 
source cluster, but the lag keeps building up on the target cluster

 

Any help on this would be appreciated

 

Thanks,

-srini

 

 


> MM2 consumer group subscription different on target cluster
> ---
>
> Key: KAFKA-15505
> URL: https://issues.apache.org/jira/browse/KAFKA-15505
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Reporter: Srinivas Boga
>Priority: Major
>
> I am running mirrormaker2 in distributed mode with 3 nodes and 10 tasks each 
> , kafka version: 3.5.1
> On the source cluster , i see the consumer group has subscriptions of topic-A 
> and topic-B, but on the target cluster i am seeing only the subscription of 
> topic-A
> restarting did not help, Also the the consumed offsets of the topic-A are 
> different on the target cluster, i see the lag of zero for all partitions on 
> source cluster, but the lag keeps building up on the target cluster
> {code:java}
> sync.group.offsets.enabled = true
> sync.group.offsets.interval.seconds = 1{code}
>  
> Any help on this would be appreciated
>  
> Thanks,
> -srini
>  
>  





[jira] [Created] (KAFKA-15505) MM2 consumer group subscription different on target cluster

2023-09-25 Thread Srinivas Boga (Jira)
Srinivas Boga created KAFKA-15505:
-

 Summary: MM2 consumer group subscription different on target 
cluster
 Key: KAFKA-15505
 URL: https://issues.apache.org/jira/browse/KAFKA-15505
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Reporter: Srinivas Boga


I am running MirrorMaker 2 in distributed mode with 3 nodes and 10 tasks each; 
Kafka version: 3.5.1.

On the source cluster I see the consumer group has subscriptions to topic-A 
and topic-B, but on the target cluster I see only the subscription to topic-A.

Restarting did not help. Also, the consumed offsets of topic-A differ on the 
target cluster: I see a lag of zero for all partitions on the source cluster, 
but the lag keeps building up on the target cluster.

Any help on this would be appreciated.

 

Thanks,

-srini

 

 





[GitHub] [kafka] hgeraldino commented on pull request #14093: KAFKA-15248 Add BooleanConverter

2023-09-25 Thread via GitHub


hgeraldino commented on PR #14093:
URL: https://github.com/apache/kafka/pull/14093#issuecomment-1734640996

   > Thanks for the updates @hgeraldino, the build looks good now. My apologies 
for not noticing these earlier, but I just had a couple more minor comments. 
This should be good to be merged once they're addressed. Thanks for your 
continued patience!
   
   Addressed both comments, good catch!





[GitHub] [kafka] jeffkbkim commented on a diff in pull request #14408: KAFKA-14506: Implement DeleteGroups API and OffsetDelete API

2023-09-25 Thread via GitHub


jeffkbkim commented on code in PR #14408:
URL: https://github.com/apache/kafka/pull/14408#discussion_r1336348269


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/Group.java:
##
@@ -90,4 +92,29 @@ void validateOffsetFetch(
 int memberEpoch,
 long lastCommittedOffset
 ) throws KafkaException;
+
+/**
+ * Validates the OffsetDelete request.
+ */
+void validateOffsetDelete() throws KafkaException;
+
+/**
+ * Validates the GroupDelete request.
+ */
+void validateGroupDelete() throws KafkaException;
+
+/**
+ * Returns true if the group is actively subscribed to the topic.
+ *
+ * @param topic The topic name.
+ * @return whether the group is subscribed to the topic.
+ */
+boolean isSubscribedToTopic(String topic);
+
+/**
+ * Creates tombstone(s) for deleting the group.
+ *
+ * @return The list of tombstone record(s).
+ */
+List createMetadataTombstoneRecords();

Review Comment:
   I wonder if `createGroupTombstoneRecords()` makes more sense.



##
group-coordinator/src/test/java/org/apache/kafka/coordinator/group/GroupCoordinatorShardTest.java:
##
@@ -105,6 +114,70 @@ public void testCommitOffset() {
 assertEquals(result, coordinator.commitOffset(context, request));
 }
 
+@Test
+public void testDeleteGroup() {

Review Comment:
   nit: testDeleteGroups
   
   also, can we verify the number of method invocations and also test that we 
append records correctly for multiple groups?



##
group-coordinator/src/test/java/org/apache/kafka/coordinator/group/OffsetMetadataManagerTest.java:
##
@@ -1561,6 +1567,156 @@ public void 
testConsumerGroupOffsetFetchWithStaleMemberEpoch() {
 () -> context.fetchAllOffsets("group", "member", 10, 
Long.MAX_VALUE));
 }
 
+private void testOffsetDeleteWith(

Review Comment:
   should this be a static method?



##
group-coordinator/src/test/java/org/apache/kafka/coordinator/group/generic/GenericGroupTest.java:
##
@@ -1026,6 +1028,30 @@ public void testValidateOffsetCommit() {
 () -> group.validateOffsetCommit("member-id", "new-instance-id", 
1));
 }
 
+@Test
+public void testValidateOffsetDelete() {
+group.transitionTo(PREPARING_REBALANCE);
+assertThrows(GroupNotEmptyException.class, () -> 
group.validateOffsetDelete());
+group.transitionTo(COMPLETING_REBALANCE);
+assertThrows(GroupNotEmptyException.class, () -> 
group.validateOffsetDelete());
+group.transitionTo(STABLE);
+assertThrows(GroupNotEmptyException.class, () -> 
group.validateOffsetDelete());
+group.transitionTo(DEAD);
+assertThrows(GroupIdNotFoundException.class, () -> 
group.validateOffsetDelete());

Review Comment:
   should we add an EMPTY test case? Also for testValidateGroupDelete.
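To illustrate the suggested EMPTY case, here is a toy sketch of the state-dependent validation (the states and exception types are illustrative stand-ins, not the real GenericGroup implementation): an Empty group is the only live state in which deleting offsets should succeed.

```java
// Toy model of the validation under review; illustrative only, not the
// actual GenericGroup code.
class OffsetDeleteSketch {
    enum State { EMPTY, PREPARING_REBALANCE, COMPLETING_REBALANCE, STABLE, DEAD }

    static void validateOffsetDelete(State state) {
        if (state == State.DEAD) {
            throw new IllegalStateException("group id not found");
        }
        if (state != State.EMPTY) {
            throw new IllegalStateException("group is not empty");
        }
        // EMPTY: no active members, so offset deletion is safe.
    }

    public static void main(String[] args) {
        // The case the review asks to cover: EMPTY should not throw.
        validateOffsetDelete(State.EMPTY);
        try {
            validateOffsetDelete(State.STABLE);
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage()); // prints "rejected: group is not empty"
        }
    }
}
```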



##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/GroupCoordinatorShard.java:
##
@@ -262,6 +267,44 @@ public HeartbeatResponseData genericGroupHeartbeat(
 );
 }
 
+/**
+ * Handles a GroupDelete request.

Review Comment:
   nit: "DeleteGroups" request.
   
   This should reflect the actual ApiKeys#DELETE_GROUPS name



##
group-coordinator/src/test/java/org/apache/kafka/coordinator/group/GroupCoordinatorShardTest.java:
##
@@ -105,6 +114,70 @@ public void testCommitOffset() {
 assertEquals(result, coordinator.commitOffset(context, request));
 }
 
+@Test
+public void testDeleteGroup() {
+GroupMetadataManager groupMetadataManager = 
mock(GroupMetadataManager.class);
+OffsetMetadataManager offsetMetadataManager = 
mock(OffsetMetadataManager.class);
+GroupCoordinatorShard coordinator = new GroupCoordinatorShard(
+groupMetadataManager,
+offsetMetadataManager
+);
+
+RequestContext context = requestContext(ApiKeys.DELETE_GROUPS);
+List groupIds = Collections.singletonList("group-id");
+DeleteGroupsResponseData.DeletableGroupResultCollection 
expectedResultCollection = new 
DeleteGroupsResponseData.DeletableGroupResultCollection();
+expectedResultCollection.add(new 
DeleteGroupsResponseData.DeletableGroupResult().setGroupId("group-id"));
+List expectedRecords = Arrays.asList(
+RecordHelpers.newOffsetCommitTombstoneRecord("group-id", 
"topic-name", 0),
+RecordHelpers.newGroupMetadataTombstoneRecord("group-id")
+);
+
CoordinatorResult expectedResult = new CoordinatorResult<>(
+expectedRecords,
+expectedResultCollection
+);
+
+
doNothing().when(groupMetadataManager).validateGroupDelete(ArgumentMatchers.eq("group-id"));
+doAnswer(invocation -> {
+List records = invocation.getArgument(1);
+

[GitHub] [kafka] jolshan commented on a diff in pull request #14370: KAFKA-15449: Verify transactional offset commits (KIP-890 part 1)

2023-09-25 Thread via GitHub


jolshan commented on code in PR #14370:
URL: https://github.com/apache/kafka/pull/14370#discussion_r1336455099


##
core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala:
##
@@ -449,6 +452,10 @@ class GroupMetadataManager(brokerId: Int,
| Errors.INVALID_FETCH_SIZE =>
 Errors.INVALID_COMMIT_OFFSET_SIZE
 
+  case Errors.INVALID_PRODUCER_ID_MAPPING
+   | Errors.INVALID_TXN_STATE =>
+Errors.UNKNOWN_MEMBER_ID

Review Comment:
   Sorry I resolved the PR comment, but I had a discussion with myself here: 
https://github.com/apache/kafka/pull/14370#discussion_r135993
   
   Similar to the produce request, we wanted to have a non-fatal error here. (I 
suppose there is an argument for the invalid PID mapping being fatal) 
   What do you think? There are only 2 abortable errors returned by this 
request. 
   
   ```
   } else if (error == Errors.UNKNOWN_MEMBER_ID
   || error == Errors.ILLEGAL_GENERATION) {
   abortableError(new CommitFailedException("Transaction 
offset Commit failed " +
   "due to consumer group metadata mismatch: " + 
error.exception().getMessage()));
```






[GitHub] [kafka] dajac commented on a diff in pull request #14370: KAFKA-15449: Verify transactional offset commits (KIP-890 part 1)

2023-09-25 Thread via GitHub


dajac commented on code in PR #14370:
URL: https://github.com/apache/kafka/pull/14370#discussion_r1336429881


##
core/src/main/scala/kafka/coordinator/group/GroupCoordinator.scala:
##
@@ -958,7 +959,8 @@ private[group] class GroupCoordinator(
  producerEpoch: Short,
  offsetMetadata: 
immutable.Map[TopicIdPartition, OffsetAndMetadata],
  requestLocal: RequestLocal,
- responseCallback: 
immutable.Map[TopicIdPartition, Errors] => Unit): Unit = {
+ responseCallback: 
immutable.Map[TopicIdPartition, Errors] => Unit,
+ transactionalId: String): Unit = {

Review Comment:
   nit: ditto about the position.



##
core/src/test/scala/unit/kafka/coordinator/group/GroupCoordinatorTest.scala:
##
@@ -4072,6 +4072,8 @@ class GroupCoordinatorTest {
 
 val capturedArgument: ArgumentCaptor[scala.collection.Map[TopicPartition, 
PartitionResponse] => Unit] = 
ArgumentCaptor.forClass(classOf[scala.collection.Map[TopicPartition, 
PartitionResponse] => Unit])
 
+// Since transactional ID is only used in appendRecords, we can use a 
dummy value. Ensure it passes through.
+val transactionalId = producerId.toString

Review Comment:
   ditto.



##
core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala:
##
@@ -449,6 +452,10 @@ class GroupMetadataManager(brokerId: Int,
| Errors.INVALID_FETCH_SIZE =>
 Errors.INVALID_COMMIT_OFFSET_SIZE
 
+  case Errors.INVALID_PRODUCER_ID_MAPPING
+   | Errors.INVALID_TXN_STATE =>
+Errors.UNKNOWN_MEMBER_ID

Review Comment:
   Could you please explain the rational behind using UNKNOWN_MEMBER_ID here? 
We should also add a comment about it because this is not obvious.



##
core/src/main/scala/kafka/coordinator/group/GroupCoordinator.scala:
##
@@ -901,6 +901,7 @@ private[group] class GroupCoordinator(
  generationId: Int,
  offsetMetadata: immutable.Map[TopicIdPartition, 
OffsetAndMetadata],
  responseCallback: immutable.Map[TopicIdPartition, 
Errors] => Unit,
+ transactionalId: String,

Review Comment:
   nit: Would it make sense to move `transactionalId` to right before 
`producerId`/`producerEpoch`?



##
core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala:
##
@@ -349,7 +351,8 @@ class GroupMetadataManager(brokerId: Int,
responseCallback: immutable.Map[TopicIdPartition, Errors] 
=> Unit,
producerId: Long = RecordBatch.NO_PRODUCER_ID,
producerEpoch: Short = RecordBatch.NO_PRODUCER_EPOCH,
-   requestLocal: RequestLocal = RequestLocal.NoCaching): Unit 
= {
+   requestLocal: RequestLocal = RequestLocal.NoCaching,
+   transactionalId: String = null): Unit = {

Review Comment:
   nit: ditto about the position.



##
core/src/test/scala/unit/kafka/coordinator/group/GroupCoordinatorConcurrencyTest.scala:
##
@@ -311,9 +311,10 @@ class GroupCoordinatorConcurrencyTest extends 
AbstractCoordinatorConcurrencyTest
   }
   lock.foreach(_.lock())
   try {
+// Since the replica manager is mocked we can use a dummy value for 
transactionalId.
 groupCoordinator.handleTxnCommitOffsets(member.group.groupId, 
producerId, producerEpoch,
   JoinGroupRequest.UNKNOWN_MEMBER_ID, Option.empty, 
JoinGroupRequest.UNKNOWN_GENERATION_ID,
-  offsets, callbackWithTxnCompletion)
+  offsets, callbackWithTxnCompletion, producerId.toString)

Review Comment:
   nit: Would using `dummy-transaction-id` be better than using the producer id?






[GitHub] [kafka] philipnee commented on pull request #14364: KAFKA-15278: Implement HeartbeatRequestManager to handle heartbeat requests

2023-09-25 Thread via GitHub


philipnee commented on PR #14364:
URL: https://github.com/apache/kafka/pull/14364#issuecomment-1734529330

   Hey @lianetm - I made some updates based on your last comments. Let me know 
your thoughts!





[GitHub] [kafka] philipnee commented on a diff in pull request #14364: KAFKA-15278: Implement HeartbeatRequestManager to handle heartbeat requests

2023-09-25 Thread via GitHub


philipnee commented on code in PR #14364:
URL: https://github.com/apache/kafka/pull/14364#discussion_r1336429379


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/HeartbeatRequestManager.java:
##
@@ -0,0 +1,294 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.clients.consumer.internals;
+
+import org.apache.kafka.clients.CommonClientConfigs;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.common.KafkaException;
+import org.apache.kafka.common.errors.GroupAuthorizationException;
+import org.apache.kafka.common.message.ConsumerGroupHeartbeatRequestData;
+import org.apache.kafka.common.protocol.Errors;
+import org.apache.kafka.common.requests.ConsumerGroupHeartbeatRequest;
+import org.apache.kafka.common.requests.ConsumerGroupHeartbeatResponse;
+import org.apache.kafka.common.utils.LogContext;
+import org.apache.kafka.common.utils.Time;
+import org.apache.kafka.common.utils.Timer;
+import org.slf4j.Logger;
+
+import java.util.ArrayList;
+import java.util.Collections;
+
+/**
+ * Manages the request creation and response handling for the heartbeat. The 
module creates a {@link ConsumerGroupHeartbeatRequest}
+ * using the state stored in the {@link MembershipManager} and enqueue it to 
the network queue to be sent out. Once
+ * the response is received, the module will update the state in the {@link 
MembershipManager} and handle any errors.
+ *
+ * The manager only emits a heartbeat when the member is in a group, tries to join a group, or tries to rejoin the group.
+ * If the member does not have a groupId configured, has left the group, or encounters a fatal exception, the heartbeat will
+ * not be sent. If the coordinator is not found, we will skip sending the heartbeat and try to find a coordinator first.
+ *
+ * If the heartbeat fails due to a retriable error, such as a TimeoutException, the subsequent attempt will be backed off
+ * exponentially.
+ *
+ * If the member completes the partition revocation process, a heartbeat 
request will be sent in the next event loop.
+ *
+ * See {@link HeartbeatRequestState} for more details.
+ */
+public class HeartbeatRequestManager implements RequestManager {
+private final Logger logger;
+
+private final int rebalanceTimeoutMs;
+
+private final CoordinatorRequestManager coordinatorRequestManager;
+private final SubscriptionState subscriptions;
+private final HeartbeatRequestState heartbeatRequestState;
+private final MembershipManager membershipManager;
+private final ErrorEventHandler nonRetriableErrorHandler;
+
+public HeartbeatRequestManager(
+final Time time,
+final LogContext logContext,
+final ConsumerConfig config,
+final CoordinatorRequestManager coordinatorRequestManager,
+final SubscriptionState subscriptions,
+final MembershipManager membershipManager,
+final ErrorEventHandler nonRetriableErrorHandler) {
+this.coordinatorRequestManager = coordinatorRequestManager;
+this.logger = logContext.logger(getClass());
+this.subscriptions = subscriptions;
+this.membershipManager = membershipManager;
+this.nonRetriableErrorHandler = nonRetriableErrorHandler;
+this.rebalanceTimeoutMs = 
config.getInt(CommonClientConfigs.MAX_POLL_INTERVAL_MS_CONFIG);
+long retryBackoffMs = 
config.getLong(ConsumerConfig.RETRY_BACKOFF_MS_CONFIG);
+long retryBackoffMaxMs = 
config.getLong(ConsumerConfig.RETRY_BACKOFF_MAX_MS_CONFIG);
+this.heartbeatRequestState = new HeartbeatRequestState(logContext, 
time, 0, retryBackoffMs,
+retryBackoffMaxMs, rebalanceTimeoutMs);
+}
+
+// Visible for testing
+HeartbeatRequestManager(
+final Time time,
+final LogContext logContext,
+final ConsumerConfig config,
+final CoordinatorRequestManager coordinatorRequestManager,
+final SubscriptionState subscriptions,
+final MembershipManager membershipManager,
+final HeartbeatRequestState heartbeatRequestState,
+final ErrorEventHandler nonRetriableErrorHandler) {
+this.logger = logContext.logger(this.getClass());
+ 

[GitHub] [kafka] zhengyd2014 commented on a diff in pull request #14444: KIP-951: Server side and protocol changes for KIP-951

2023-09-25 Thread via GitHub


zhengyd2014 commented on code in PR #14444:
URL: https://github.com/apache/kafka/pull/14444#discussion_r1336412576


##
clients/src/main/java/org/apache/kafka/common/requests/FetchResponse.java:
##
@@ -276,11 +289,16 @@ private static FetchResponseData toMessage(Errors error,
 .setPartitions(partitionResponses));
 }
 }
-
-return new FetchResponseData()
-.setThrottleTimeMs(throttleTimeMs)
-.setErrorCode(error.code())
-.setSessionId(sessionId)
-.setResponses(topicResponseList);
+data.setThrottleTimeMs(throttleTimeMs)
+.setErrorCode(error.code())
+.setSessionId(sessionId)
+.setResponses(topicResponseList);
+nodeEndpoints.forEach(endpoint -> data.nodeEndpoints().add(

Review Comment:
   I don't think we need to populate the nodeEndpoints field for all 
Fetch/Produce responses, which has a certain overhead. To my understanding, the 
info is needed only for error conditions.
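The suggestion can be sketched as a conditional population step (the types and the error-code check are illustrative assumptions, not the real FetchResponseData/KIP-951 code): endpoint info is attached only when the partition error implies the client needs fresh leader information.

```java
import java.util.List;

// Illustrative sketch only; not the actual KIP-951 server-side code.
class EndpointPopulationSketch {
    record Endpoint(int nodeId, String host, int port) { }

    // NOT_LEADER_OR_FOLLOWER is error code 6 in the Kafka protocol; treat it
    // here as the "client should refresh its leader" signal.
    static List<Endpoint> endpointsFor(short errorCode, List<Endpoint> newLeaders) {
        if (errorCode == 6) {
            return newLeaders;   // error path: pay the serialization cost
        }
        return List.of();        // success path: skip the overhead
    }
}
```

On the happy path the response carries no endpoint payload, matching the reviewer's point that the information is only needed on errors.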






[GitHub] [kafka] nizhikov commented on pull request #14064: KAFKA-15030: Add connect-plugin-path command-line tool.

2023-09-25 Thread via GitHub


nizhikov commented on PR #14064:
URL: https://github.com/apache/kafka/pull/14064#issuecomment-1734469315

   Hello, @gharris1727 , @C0urante 
   I'm working on #13247 and this commit somehow interferes with my change :)
   `ReassignPartitionsIntegrationTest`, which I am rewriting in Java, works before this 
commit and hangs after.
   So far I haven't found the reason why this breaks my test, but removing all 
connect dependencies from the classpath of tools makes the test pass without any 
additional changes.
   
   Do you have any idea how the `connect:api` dependency can affect the test cluster 
or similar code?





[GitHub] [kafka] philipnee commented on a diff in pull request #14364: KAFKA-15278: Implement HeartbeatRequestManager to handle heartbeat requests

2023-09-25 Thread via GitHub


philipnee commented on code in PR #14364:
URL: https://github.com/apache/kafka/pull/14364#discussion_r1336390801


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/HeartbeatRequestManager.java:
##
@@ -0,0 +1,294 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.kafka.clients.consumer.internals;
+
+import org.apache.kafka.clients.CommonClientConfigs;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.common.KafkaException;
+import org.apache.kafka.common.errors.GroupAuthorizationException;
+import org.apache.kafka.common.message.ConsumerGroupHeartbeatRequestData;
+import org.apache.kafka.common.protocol.Errors;
+import org.apache.kafka.common.requests.ConsumerGroupHeartbeatRequest;
+import org.apache.kafka.common.requests.ConsumerGroupHeartbeatResponse;
+import org.apache.kafka.common.utils.LogContext;
+import org.apache.kafka.common.utils.Time;
+import org.apache.kafka.common.utils.Timer;
+import org.slf4j.Logger;
+
+import java.util.ArrayList;
+import java.util.Collections;
+
+/**
+ * Manages request creation and response handling for the heartbeat. The 
module creates a {@link ConsumerGroupHeartbeatRequest}
+ * using the state stored in the {@link MembershipManager} and enqueues it to 
the network queue to be sent out. Once
+ * the response is received, the module updates the state in the {@link 
MembershipManager} and handles any errors.
+ *
+ * The manager only emits a heartbeat when the member is in a group, tries to 
join a group, or tries to rejoin the group.
+ * If the member has no groupId configured, has left the group, or has 
encountered a fatal exception, the heartbeat is
+ * not sent. If the coordinator is not found, we skip sending the heartbeat 
and try to find a coordinator first.
+ *
+ * If the heartbeat fails due to a retriable error, such as a TimeoutException,
+ * the subsequent attempts are backed off exponentially.
+ *
+ * If the member completes the partition revocation process, a heartbeat 
request is sent in the next event loop.
+ *
+ * See {@link HeartbeatRequestState} for more details.
+ */
+public class HeartbeatRequestManager implements RequestManager {
+private final Logger logger;
+
+private final int rebalanceTimeoutMs;
+
+private final CoordinatorRequestManager coordinatorRequestManager;
+private final SubscriptionState subscriptions;
+private final HeartbeatRequestState heartbeatRequestState;
+private final MembershipManager membershipManager;
+private final ErrorEventHandler nonRetriableErrorHandler;
+
+public HeartbeatRequestManager(
+final Time time,
+final LogContext logContext,
+final ConsumerConfig config,
+final CoordinatorRequestManager coordinatorRequestManager,
+final SubscriptionState subscriptions,
+final MembershipManager membershipManager,
+final ErrorEventHandler nonRetriableErrorHandler) {
+this.coordinatorRequestManager = coordinatorRequestManager;
+this.logger = logContext.logger(getClass());
+this.subscriptions = subscriptions;
+this.membershipManager = membershipManager;
+this.nonRetriableErrorHandler = nonRetriableErrorHandler;
+this.rebalanceTimeoutMs = 
config.getInt(CommonClientConfigs.MAX_POLL_INTERVAL_MS_CONFIG);
+long retryBackoffMs = 
config.getLong(ConsumerConfig.RETRY_BACKOFF_MS_CONFIG);
+long retryBackoffMaxMs = 
config.getLong(ConsumerConfig.RETRY_BACKOFF_MAX_MS_CONFIG);
+this.heartbeatRequestState = new HeartbeatRequestState(logContext, 
time, 0, retryBackoffMs,
+retryBackoffMaxMs, rebalanceTimeoutMs);
+}
+
+// Visible for testing
+HeartbeatRequestManager(
+final Time time,
+final LogContext logContext,
+final ConsumerConfig config,
+final CoordinatorRequestManager coordinatorRequestManager,
+final SubscriptionState subscriptions,
+final MembershipManager membershipManager,
+final HeartbeatRequestState heartbeatRequestState,
+final ErrorEventHandler nonRetriableErrorHandler) {
+this.logger = logContext.logger(this.getClass());
+ 
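
The exponential backoff described in the javadoc above can be sketched roughly 
like this (a simplified illustration, not the actual `HeartbeatRequestState` 
logic; the real implementation also applies jitter, which is omitted here):

```java
// Sketch of capped exponential backoff: each consecutive failure doubles the
// wait, up to a configured maximum.
public class BackoffSketch {
    static long backoffMs(long baseMs, long maxMs, int failedAttempts) {
        // base * 2^attempts, capped at maxMs
        double exp = baseMs * Math.pow(2, failedAttempts);
        return (long) Math.min(exp, maxMs);
    }

    public static void main(String[] args) {
        System.out.println(backoffMs(100, 1000, 0)); // 100
        System.out.println(backoffMs(100, 1000, 2)); // 400
        System.out.println(backoffMs(100, 1000, 5)); // capped at 1000
    }
}
```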

[GitHub] [kafka] philipnee commented on a diff in pull request #14364: KAFKA-15278: Implement HeartbeatRequestManager to handle heartbeat requests

2023-09-25 Thread via GitHub


philipnee commented on code in PR #14364:
URL: https://github.com/apache/kafka/pull/14364#discussion_r1336389075


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/MembershipManagerImpl.java:
##
@@ -115,25 +115,37 @@ public int memberEpoch() {
 
 @Override
 public void updateState(ConsumerGroupHeartbeatResponseData response) {
-if (response.errorCode() == Errors.NONE.code()) {
-this.memberId = response.memberId();
-this.memberEpoch = response.memberEpoch();
-ConsumerGroupHeartbeatResponseData.Assignment assignment = 
response.assignment();
-if (assignment != null) {
-setTargetAssignment(assignment);
-}
-maybeTransitionToStable();
-} else {
-if (response.errorCode() == Errors.FENCED_MEMBER_EPOCH.code() || 
response.errorCode() == Errors.UNKNOWN_MEMBER_ID.code()) {
-resetEpoch();
-transitionTo(MemberState.FENCED);
-} else if (response.errorCode() == 
Errors.UNRELEASED_INSTANCE_ID.code()) {
-transitionTo(MemberState.FAILED);
-}
-// TODO: handle other errors here to update state accordingly, 
mainly making the
-//  distinction between the recoverable errors and the fatal ones, 
that should FAILED
-//  the member
+if (response.errorCode() != Errors.NONE.code()) {
+String errorMessage = String.format(
+"Unexpected error in Heartbeat response. Expected no 
error, but received: %s",
+Errors.forCode(response.errorCode())
+);
+throw new IllegalStateException(errorMessage);
+}
+this.memberId = response.memberId();
+this.memberEpoch = response.memberEpoch();
+ConsumerGroupHeartbeatResponseData.Assignment assignment = 
response.assignment();
+if (assignment != null) {
+setTargetAssignment(assignment);
 }
+maybeTransitionToStable();
+}
+
+@Override
+public void fenceMember() {
+resetEpoch();
+transitionTo(MemberState.FENCED);
+}
+
+@Override
+public void transitionToFailure() {
+transitionTo(MemberState.FAILED);
+}
+
+@Override
+public boolean shouldSendHeartbeat() {

Review Comment:
   i think you are right.
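
   The fenced/failed transitions in the diff above can be pictured with a toy 
state machine (a hypothetical sketch; the real `MembershipManagerImpl` tracks 
much more state than this):

   ```java
   // Hypothetical sketch: on a fencing error the epoch is reset and the member
   // must rejoin; a fatal error moves the member to a terminal FAILED state.
   public class MemberStateSketch {
       enum State { STABLE, FENCED, FAILED }

       State state = State.STABLE;
       int epoch = 5;

       void fenceMember() {
           epoch = 0;            // reset the epoch so the member rejoins from scratch
           state = State.FENCED;
       }

       void transitionToFailure() {
           state = State.FAILED; // fatal error: no recovery without restart
       }

       public static void main(String[] args) {
           MemberStateSketch m = new MemberStateSketch();
           m.fenceMember();
           System.out.println(m.state + " epoch=" + m.epoch); // FENCED epoch=0
       }
   }
   ```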






[GitHub] [kafka] bmscomp opened a new pull request, #14450: KAFKA-15504: Upgrade snappy java to version 1.1.10.4

2023-09-25 Thread via GitHub


bmscomp opened a new pull request, #14450:
URL: https://github.com/apache/kafka/pull/14450

   Version 1.1.10.4 contains a fix for 
[CVE-2023-43642](https://github.com/xerial/snappy-java/security/advisories/GHSA-55g7-9cwv-5qfv).

   As mentioned in the release notes of the library, 
https://github.com/xerial/snappy-java/releases/tag/v1.1.10.4, it fixed 
SnappyInputStream so as not to allocate too much memory when decompressing 
data with an extremely large chunk size.
   
   It also brings many dependency updates.
   
   
   ### Committer Checklist (excluded from commit message)
   - [x] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Created] (KAFKA-15504) Upgrade snappy java to version 1.1.10.4

2023-09-25 Thread Said BOUDJELDA (Jira)
Said BOUDJELDA created KAFKA-15504:
--

 Summary: Upgrade snappy java to version 1.1.10.4
 Key: KAFKA-15504
 URL: https://issues.apache.org/jira/browse/KAFKA-15504
 Project: Kafka
  Issue Type: Improvement
Reporter: Said BOUDJELDA
Assignee: Said BOUDJELDA


Version 1.1.10.4 contains a fix for 
[CVE-2023-43642|https://github.com/xerial/snappy-java/security/advisories/GHSA-55g7-9cwv-5qfv].
 As mentioned in the release notes of the library, 
[https://github.com/xerial/snappy-java/releases/tag/v1.1.10.4], it fixed 
SnappyInputStream so as not to allocate too much memory when decompressing 
data with an extremely large chunk size.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15493) Ensure system tests work with Java 21

2023-09-25 Thread Said BOUDJELDA (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17768902#comment-17768902
 ] 

Said BOUDJELDA commented on KAFKA-15493:


[~ijuma]  I took your comment as a description of this ticket 

 

> Ensure system tests work with Java 21
> -
>
> Key: KAFKA-15493
> URL: https://issues.apache.org/jira/browse/KAFKA-15493
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Said BOUDJELDA
>Priority: Major
> Fix For: 3.7.0
>
>
> Run the system tests as described below with Java 21:
> [https://github.com/apache/kafka/tree/trunk/tests]
> One relevant portion:
> Run tests with a different JVM (it may be as easy as replacing 11 with 21)
> {code:java}
> bash tests/docker/ducker-ak up -j 'openjdk:11'; 
> tests/docker/run_tests.sh{code}





[jira] [Updated] (KAFKA-15493) Ensure system tests work with Java 21

2023-09-25 Thread Said BOUDJELDA (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Said BOUDJELDA updated KAFKA-15493:
---
Description: 
Run the system tests as described below with Java 21:

[https://github.com/apache/kafka/tree/trunk/tests]

One relevant portion:

Run tests with a different JVM (it may be as easy as replacing 11 with 21)
{code:java}
bash tests/docker/ducker-ak up -j 'openjdk:11'; tests/docker/run_tests.sh{code}

> Ensure system tests work with Java 21
> -
>
> Key: KAFKA-15493
> URL: https://issues.apache.org/jira/browse/KAFKA-15493
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Said BOUDJELDA
>Priority: Major
> Fix For: 3.7.0
>
>
> Run the system tests as described below with Java 21:
> [https://github.com/apache/kafka/tree/trunk/tests]
> One relevant portion:
> Run tests with a different JVM (it may be as easy as replacing 11 with 21)
> {code:java}
> bash tests/docker/ducker-ak up -j 'openjdk:11'; 
> tests/docker/run_tests.sh{code}





[GitHub] [kafka] bmscomp opened a new pull request, #14449: MINOR: Upgrade version of zstd-jni to the latest stable version 1.5.5-5

2023-09-25 Thread via GitHub


bmscomp opened a new pull request, #14449:
URL: https://github.com/apache/kafka/pull/14449

   Upgrading the zstd-jni library to version 1.5.5-5 brings many bug fixes. 
There are no detailed release notes on the official repository, but the changes 
made since the current stable version, along with the bugs fixed and 
improvements, can be reviewed in the comparison linked here.
   
   For more details about the great job done on the library to bring those 
enhancements, please refer to 
   https://github.com/luben/zstd-jni/compare/v1.5.5-1...v.1.5.5-5 
   
   
   
   ### Committer Checklist (excluded from commit message)
   - [x] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] Cerchie commented on pull request #14448: KAFKA-15307: Update/errors for deprecated config

2023-09-25 Thread via GitHub


Cerchie commented on PR #14448:
URL: https://github.com/apache/kafka/pull/14448#issuecomment-1734371164

   tagging @mjsax for review





[GitHub] [kafka] Cerchie opened a new pull request, #14448: KAFKA-15307: Update/errors for deprecated config

2023-09-25 Thread via GitHub


Cerchie opened a new pull request, #14448:
URL: https://github.com/apache/kafka/pull/14448

   This PR is a follow-up on https://github.com/apache/kafka/pull/14360. 
   It ensures that warnings are shown if the deprecated configs outlined in that 
PR are used (note that topology.optimization's variable has changed rather than 
the configuration itself; this is already addressed in the code via KIP-626). 
   
   Tests have been added to StreamsConfigTest.java to ensure the proper 
warnings show up. 
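
   As an illustration of the pattern such a deprecation check typically follows 
(hypothetical key names and helper; not the actual StreamsConfig code):

   ```java
   import java.util.Map;

   // Hypothetical sketch: produce a warning message when a caller supplies a
   // deprecated config key, pointing at its replacement.
   public class DeprecationCheckSketch {
       static String warnIfDeprecated(Map<String, Object> configs,
                                      String deprecatedKey, String replacementKey) {
           if (configs.containsKey(deprecatedKey)) {
               return "Config '" + deprecatedKey + "' is deprecated; use '"
                       + replacementKey + "' instead.";
           }
           return null; // nothing to warn about
       }

       public static void main(String[] args) {
           String msg = warnIfDeprecated(Map.of("old.config", 1), "old.config", "new.config");
           System.out.println(msg);
       }
   }
   ```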
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] dajac commented on pull request #14418: MINOR: Move TopicIdPartition class to server-common

2023-09-25 Thread via GitHub


dajac commented on PR #14418:
URL: https://github.com/apache/kafka/pull/14418#issuecomment-1734321777

   Re-triggered jenkins.





[GitHub] [kafka] antonagestam commented on a diff in pull request #14124: KAFKA-14509; [1/2] Define ConsumerGroupDescribe API request and response schemas and classes.

2023-09-25 Thread via GitHub


antonagestam commented on code in PR #14124:
URL: https://github.com/apache/kafka/pull/14124#discussion_r1336283242


##
clients/src/main/resources/common/message/ConsumerGroupDescribeResponse.json:
##
@@ -0,0 +1,99 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+{
+  "apiKey": 69,
+  "type": "response",
+  "name": "ConsumerGroupDescribeResponse",
+  "validVersions": "0",
+  "flexibleVersions": "0+",
+  // Supported errors:
+  // - GROUP_AUTHORIZATION_FAILED (version 0+)
+  // - NOT_COORDINATOR (version 0+)
+  // - COORDINATOR_NOT_AVAILABLE (version 0+)
+  // - COORDINATOR_LOAD_IN_PROGRESS (version 0+)
+  // - INVALID_REQUEST (version 0+)
+  // - INVALID_GROUP_ID (version 0+)
+  // - GROUP_ID_NOT_FOUND (version 0+)
+  "fields": [
+{ "name": "ThrottleTimeMs", "type": "int32", "versions": "0+",
+  "about": "The duration in milliseconds for which the request was 
throttled due to a quota violation, or zero if the request did not violate any 
quota." },
+{ "name": "Groups", "type": "[]DescribedGroup", "versions": "0+",
+  "about": "Each described group.",
+  "fields": [
+{ "name": "ErrorCode", "type": "int16", "versions": "0+",
+  "about": "The describe error, or 0 if there was no error." },
+{ "name": "ErrorMessage", "type": "string", "versions": "0+", 
"nullableVersions": "0+", "default": "null",
+  "about": "The top-level error message, or null if there was no 
error." },
+{ "name": "GroupId", "type": "string", "versions": "0+", "entityType": 
"groupId",
+  "about": "The group ID string." },
+{ "name": "GroupState", "type": "string", "versions": "0+",
+  "about": "The group state string, or the empty string." },
+{ "name": "GroupEpoch", "type": "int32", "versions": "0+",
+  "about": "The group epoch." },
+{ "name": "AssignmentEpoch", "type": "int32", "versions": "0+",
+  "about": "The assignment epoch." },
+{ "name": "AssignorName", "type": "string", "versions": "0+",
+  "about": "The selected assignor." },
+{ "name": "Members", "type": "[]Member", "versions": "0+",
+  "about": "The members.",
+  "fields": [
+{ "name": "MemberId", "type": "uuid", "versions": "0+",
+  "about": "The member ID." },
+{ "name": "InstanceId", "type": "string", "versions": "0+", 
"nullableVersions": "0+", "default": "null",
+  "about": "The member instance ID." },
+{ "name": "RackId", "type": "string", "versions": "0+", 
"nullableVersions": "0+", "default": "null",
+  "about": "The member rack ID." },
+{ "name": "MemberEpoch", "type": "int32", "versions": "0+",
+  "about": "The current member epoch." },
+{ "name": "ClientId", "type": "string", "versions": "0+",
+  "about": "The client ID." },
+{ "name": "ClientHost", "type": "string", "versions": "0+",
+  "about": "The client host." },
+{ "name": "SubscribedTopicNames", "type": "[]string", "versions": 
"0+",
+  "about": "The subscribed topic names." },
+{ "name": "SubscribedTopicRegex", "type": "string", "versions": 
"0+", "nullableVersions": "0+", "default": "null",
+  "about": "the subscribed topic regex otherwise or null of not 
provided." },
+{ "name": "Assignment", "type": "Assignment", "versions": "0+",
+  "about": "The current assignment." },
+{ "name": "TargetAssignment", "type": "Assignment", "versions": 
"0+",
+  "about": "The target assignment." }
+  ]},
+{ "name": "AuthorizedOperations", "type": "int32", "versions": "3+", 
"default": "-2147483648",

Review Comment:
   Sure! Here's a PR: https://github.com/apache/kafka/pull/14447
   
   I found this by running the code generator for 
https://github.com/Aiven-Open/kio on Kafka trunk. Is this something it'd be 
worthwhile to have a linter for?




[GitHub] [kafka] antonagestam opened a new pull request, #14447: MINOR: Fix incorrect versions in ConsumerGroupDescribeResponse schema

2023-09-25 Thread via GitHub


antonagestam opened a new pull request, #14447:
URL: https://github.com/apache/kafka/pull/14447

   See discussion in 
https://github.com/apache/kafka/pull/14124#discussion_r1320822476
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] dajac commented on a diff in pull request #14124: KAFKA-14509; [1/2] Define ConsumerGroupDescribe API request and response schemas and classes.

2023-09-25 Thread via GitHub


dajac commented on code in PR #14124:
URL: https://github.com/apache/kafka/pull/14124#discussion_r1336257819


##
clients/src/main/resources/common/message/ConsumerGroupDescribeResponse.json:
##
@@ -0,0 +1,99 @@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements.  See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License.  You may obtain a copy of the License at
+//
+//http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+{
+  "apiKey": 69,
+  "type": "response",
+  "name": "ConsumerGroupDescribeResponse",
+  "validVersions": "0",
+  "flexibleVersions": "0+",
+  // Supported errors:
+  // - GROUP_AUTHORIZATION_FAILED (version 0+)
+  // - NOT_COORDINATOR (version 0+)
+  // - COORDINATOR_NOT_AVAILABLE (version 0+)
+  // - COORDINATOR_LOAD_IN_PROGRESS (version 0+)
+  // - INVALID_REQUEST (version 0+)
+  // - INVALID_GROUP_ID (version 0+)
+  // - GROUP_ID_NOT_FOUND (version 0+)
+  "fields": [
+{ "name": "ThrottleTimeMs", "type": "int32", "versions": "0+",
+  "about": "The duration in milliseconds for which the request was 
throttled due to a quota violation, or zero if the request did not violate any 
quota." },
+{ "name": "Groups", "type": "[]DescribedGroup", "versions": "0+",
+  "about": "Each described group.",
+  "fields": [
+{ "name": "ErrorCode", "type": "int16", "versions": "0+",
+  "about": "The describe error, or 0 if there was no error." },
+{ "name": "ErrorMessage", "type": "string", "versions": "0+", 
"nullableVersions": "0+", "default": "null",
+  "about": "The top-level error message, or null if there was no 
error." },
+{ "name": "GroupId", "type": "string", "versions": "0+", "entityType": 
"groupId",
+  "about": "The group ID string." },
+{ "name": "GroupState", "type": "string", "versions": "0+",
+  "about": "The group state string, or the empty string." },
+{ "name": "GroupEpoch", "type": "int32", "versions": "0+",
+  "about": "The group epoch." },
+{ "name": "AssignmentEpoch", "type": "int32", "versions": "0+",
+  "about": "The assignment epoch." },
+{ "name": "AssignorName", "type": "string", "versions": "0+",
+  "about": "The selected assignor." },
+{ "name": "Members", "type": "[]Member", "versions": "0+",
+  "about": "The members.",
+  "fields": [
+{ "name": "MemberId", "type": "uuid", "versions": "0+",
+  "about": "The member ID." },
+{ "name": "InstanceId", "type": "string", "versions": "0+", 
"nullableVersions": "0+", "default": "null",
+  "about": "The member instance ID." },
+{ "name": "RackId", "type": "string", "versions": "0+", 
"nullableVersions": "0+", "default": "null",
+  "about": "The member rack ID." },
+{ "name": "MemberEpoch", "type": "int32", "versions": "0+",
+  "about": "The current member epoch." },
+{ "name": "ClientId", "type": "string", "versions": "0+",
+  "about": "The client ID." },
+{ "name": "ClientHost", "type": "string", "versions": "0+",
+  "about": "The client host." },
+{ "name": "SubscribedTopicNames", "type": "[]string", "versions": 
"0+",
+  "about": "The subscribed topic names." },
+{ "name": "SubscribedTopicRegex", "type": "string", "versions": 
"0+", "nullableVersions": "0+", "default": "null",
+  "about": "the subscribed topic regex otherwise or null of not 
provided." },
+{ "name": "Assignment", "type": "Assignment", "versions": "0+",
+  "about": "The current assignment." },
+{ "name": "TargetAssignment", "type": "Assignment", "versions": 
"0+",
+  "about": "The target assignment." }
+  ]},
+{ "name": "AuthorizedOperations", "type": "int32", "versions": "3+", 
"default": "-2147483648",

Review Comment:
   Good catch @aiven-anton! It should be `0+` here. Are you interested in doing 
a small PR to fix this?






[GitHub] [kafka] rajinisivaram commented on a diff in pull request #14445: KAFKA-15502: Update SslEngineValidator to handle large stores

2023-09-25 Thread via GitHub


rajinisivaram commented on code in PR #14445:
URL: https://github.com/apache/kafka/pull/14445#discussion_r1336233705


##
clients/src/main/java/org/apache/kafka/common/security/ssl/SslFactory.java:
##
@@ -419,15 +419,15 @@ void handshake(SslEngineValidator peerValidator) throws 
SSLException {
 while (true) {
 switch (handshakeStatus) {
 case NEED_WRAP:
-if (netBuffer.position() != 0) // Wait for peer to 
consume previously wrapped data
-return;
 handshakeResult = sslEngine.wrap(EMPTY_BUF, netBuffer);
 switch (handshakeResult.getStatus()) {
 case OK: break;
 case BUFFER_OVERFLOW:
 netBuffer.compact();
 netBuffer = Utils.ensureCapacity(netBuffer, 
sslEngine.getSession().getPacketBufferSize());
 netBuffer.flip();
+if (netBuffer.position() != 0) // Wait for 
peer to consume previously wrapped data

Review Comment:
   I think we can do this return at the start of `case BUFFER_OVERFLOW` since 
we don't need to expand the buffer at this time (we are waiting for peer to 
consume the data in the buffer by returning).
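
   The review hinges on `ByteBuffer` read/write-mode semantics. A small 
standalone demonstration of why a non-zero position after `compact()` signals 
unconsumed data (a generic NIO illustration, not the SslFactory code itself):

   ```java
   import java.nio.ByteBuffer;

   public class BufferDemo {
       public static void main(String[] args) {
           ByteBuffer buf = ByteBuffer.allocate(8);
           buf.put((byte) 1).put((byte) 2); // write two bytes: position=2
           buf.flip();                      // read mode: position=0, limit=2
           buf.get();                       // peer consumed one byte: position=1
           buf.compact();                   // unread byte moved to index 0: position=1
           // position != 0 here means wrapped data is still waiting to be consumed
           System.out.println(buf.position()); // 1
           buf.flip();                      // back to read mode: position=0, limit=1
           System.out.println(buf.position()); // 0
       }
   }
   ```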






[GitHub] [kafka] lqxshay opened a new pull request, #14446: Initial commit: new DSL operation on KStreams interface

2023-09-25 Thread via GitHub


lqxshay opened a new pull request, #14446:
URL: https://github.com/apache/kafka/pull/14446

   *More detailed description of your change,
   if necessary. The PR title and PR message become
   the squashed commit message, so use a separate
   comment to ping reviewers.*
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Updated] (KAFKA-15001) CVE vulnerabilities in Jetty

2023-09-25 Thread Satish Duggana (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana updated KAFKA-15001:
---
Fix Version/s: (was: 3.6.0)

> CVE vulnerabilities in Jetty 
> -
>
> Key: KAFKA-15001
> URL: https://issues.apache.org/jira/browse/KAFKA-15001
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 3.4.0, 3.3.2
>Reporter: Arushi Rai
>Priority: Critical
> Fix For: 3.5.1, 3.4.2
>
>
> Kafka is using org.eclipse.jetty_jetty-server and org.eclipse.jetty_jetty-io 
> version 9.4.48.v20220622 where 3 moderate and medium vulnerabilities have 
> been reported. 
> Moderate [CVE-2023-26048|https://nvd.nist.gov/vuln/detail/CVE-2023-26048] in 
> org.eclipse.jetty_jetty-server
> Medium [CVE-2023-26049|https://nvd.nist.gov/vuln/detail/CVE-2023-26049] in 
> org.eclipse.jetty_jetty-io
> Medium [CVE-2023-26048|https://nvd.nist.gov/vuln/detail/CVE-2023-26048] in 
> org.eclipse.jetty_jetty-io
> These are fixed in jetty versions 11.0.14, 10.0.14, 9.4.51 and Kafka should 
> use the same. 





[jira] [Commented] (KAFKA-15503) CVE-2023-40167, CVE-2023-36479 - Upgrade jetty to 9.4.52, 10.0.16, 11.0.16, 12.0.1

2023-09-25 Thread Satish Duggana (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17768810#comment-17768810
 ] 

Satish Duggana commented on KAFKA-15503:


https://github.com/apache/kafka/pull/10526 is cherrypicked to 3.6 branch.

> CVE-2023-40167, CVE-2023-36479 - Upgrade jetty to 9.4.52, 10.0.16, 11.0.16, 
> 12.0.1
> --
>
> Key: KAFKA-15503
> URL: https://issues.apache.org/jira/browse/KAFKA-15503
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.0
>Reporter: Rafael Rios Saavedra
>Assignee: Divij Vaidya
>Priority: Major
>  Labels: CVE, security
> Fix For: 3.6.0
>
>
> CVE-2023-40167 and CVE-2023-36479 vulnerabilities affects Jetty version 
> {*}9.4.51{*}. For more information see 
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-40167] 
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-364749] 
> Upgrading to Jetty version *9.4.52, 10.0.16, 11.0.16, 12.0.1* should address 
> this issue.





[jira] [Updated] (KAFKA-15503) CVE-2023-40167, CVE-2023-36479 - Upgrade jetty to 9.4.52, 10.0.16, 11.0.16, 12.0.1

2023-09-25 Thread Satish Duggana (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana updated KAFKA-15503:
---
Fix Version/s: 3.6.0
   (was: 3.0.0)
   (was: 2.8.0)
   (was: 2.7.1)
   (was: 2.6.2)

> CVE-2023-40167, CVE-2023-36479 - Upgrade jetty to 9.4.52, 10.0.16, 11.0.16, 
> 12.0.1
> --
>
> Key: KAFKA-15503
> URL: https://issues.apache.org/jira/browse/KAFKA-15503
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.0
>Reporter: Rafael Rios Saavedra
>Assignee: Divij Vaidya
>Priority: Major
>  Labels: CVE, security
> Fix For: 3.6.0
>
>
> CVE-2023-40167 and CVE-2023-36479 vulnerabilities affects Jetty version 
> {*}9.4.51{*}. For more information see 
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-40167] 
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-364749] 
> Upgrading to Jetty version *9.4.52, 10.0.16, 11.0.16, 12.0.1* should address 
> this issue.





[jira] [Updated] (KAFKA-15503) CVE-2023-40167, CVE-2023-36479 - Upgrade jetty to 9.4.52, 10.0.16, 11.0.16, 12.0.1

2023-09-25 Thread Satish Duggana (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana updated KAFKA-15503:
---
Affects Version/s: (was: 2.7.0)
   (was: 2.6.1)
   (was: 3.4.1)
   (was: 3.5.1)

> CVE-2023-40167, CVE-2023-36479 - Upgrade jetty to 9.4.52, 10.0.16, 11.0.16, 
> 12.0.1
> --
>
> Key: KAFKA-15503
> URL: https://issues.apache.org/jira/browse/KAFKA-15503
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.0
>Reporter: Rafael Rios Saavedra
>Assignee: Divij Vaidya
>Priority: Major
>  Labels: CVE, security
> Fix For: 2.8.0, 2.7.1, 2.6.2, 3.0.0
>
>
> CVE-2023-40167 and CVE-2023-36479 vulnerabilities affect Jetty version 
> {*}9.4.51{*}. For more information see 
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-40167] 
> [https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-36479] 
> Upgrading to Jetty version *9.4.52, 10.0.16, 11.0.16, 12.0.1* should address 
> this issue.





[jira] [Created] (KAFKA-15503) CVE-2023-40167, CVE-2023-36479 - Upgrade jetty to 9.4.52, 10.0.16, 11.0.16, 12.0.1

2023-09-25 Thread Satish Duggana (Jira)
Satish Duggana created KAFKA-15503:
--

 Summary: CVE-2023-40167, CVE-2023-36479 - Upgrade jetty to 9.4.52, 
10.0.16, 11.0.16, 12.0.1
 Key: KAFKA-15503
 URL: https://issues.apache.org/jira/browse/KAFKA-15503
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.7.0, 2.6.1, 3.4.1, 3.6.0, 3.5.1
Reporter: Rafael Rios Saavedra
Assignee: Divij Vaidya
 Fix For: 2.8.0, 2.7.1, 2.6.2, 3.0.0


CVE-2023-40167 and CVE-2023-36479 vulnerabilities affect Jetty version 
{*}9.4.51{*}. For more information see 
[https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-40167] 
[https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-36479] 

Upgrading to Jetty version *9.4.52, 10.0.16, 11.0.16, 12.0.1* should address 
this issue.





[jira] [Commented] (KAFKA-15495) KRaft partition truncated when the only ISR member restarts with an empty disk

2023-09-25 Thread Ron Dagostino (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17768787#comment-17768787
 ] 

Ron Dagostino commented on KAFKA-15495:
---

Yes.  The Disk UUID from JBOD is not needed for this because ELR takes care of 
it as described above.  It feels right to have the Disk UUID anyway since it is 
a simple concept that is easy to understand and reason about, and we can 
consider it as part of "defense in depth" -- it doesn't hurt to get multiple 
signals.  Also we will probably need the Disk UUID for Raft, which doesn't use 
ISR, and ELR won't help there.

This ticket will likely get resolved when [Eligible Leader 
Replicas|https://issues.apache.org/jira/browse/KAFKA-15332] gets resolved.

> KRaft partition truncated when the only ISR member restarts with an empty disk
> --
>
> Key: KAFKA-15495
> URL: https://issues.apache.org/jira/browse/KAFKA-15495
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.3.2, 3.4.1, 3.6.0, 3.5.1
>Reporter: Ron Dagostino
>Priority: Critical
>
> Assume a topic-partition in KRaft has just a single leader replica in the 
> ISR.  Assume next that this replica goes offline.  This replica's log will 
> define the contents of that partition when the replica restarts, which is 
> correct behavior.  However, assume now that the replica has a disk failure, 
> and we then replace the failed disk with a new, empty disk that we also 
> format with the storage tool so it has the correct cluster ID.  If we then 
> restart the broker, the topic-partition will have no data in it, and any 
> other replicas that might exist will truncate their logs to match, which 
> results in data loss.  See below for a step-by-step demo of how to reproduce 
> this.
> [KIP-858: Handle JBOD broker disk failure in 
> KRaft|https://cwiki.apache.org/confluence/display/KAFKA/KIP-858%3A+Handle+JBOD+broker+disk+failure+in+KRaft]
>  introduces the concept of a Disk UUID that we can use to solve this problem. 
>  Specifically, when the leader restarts with an empty (but 
> correctly-formatted) disk, the actual UUID associated with the disk will be 
> different.  The controller will notice upon broker re-registration that its 
> disk UUID differs from what was previously registered.  Right now we have no 
> way of detecting this situation, but the disk UUID gives us that capability.
> STEPS TO REPRODUCE:
> Create a single broker cluster with single controller.  The standard files 
> under config/kraft work well:
> bin/kafka-storage.sh random-uuid
> J8qXRwI-Qyi2G0guFTiuYw
> #ensure we start clean
> /bin/rm -rf /tmp/kraft-broker-logs /tmp/kraft-controller-logs
> bin/kafka-storage.sh format --cluster-id J8qXRwI-Qyi2G0guFTiuYw --config 
> config/kraft/controller.properties
> bin/kafka-storage.sh format --cluster-id J8qXRwI-Qyi2G0guFTiuYw --config 
> config/kraft/broker.properties
> bin/kafka-server-start.sh config/kraft/controller.properties
> bin/kafka-server-start.sh config/kraft/broker.properties
> bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic foo1 
> --partitions 1 --replication-factor 1
> #create __consumer_offsets topic
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic foo1 
> --from-beginning
> ^C
> #confirm that __consumer_offsets topic partitions are all created and on 
> broker with node id 2
> bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe
> Now create 2 more brokers, with node IDs 11 and 12
> cat config/kraft/broker.properties | sed 's/node.id=2/node.id=11/' | sed 
> 's/localhost:9092/localhost:9011/g' |  sed 
> 's#log.dirs=/tmp/kraft-broker-logs#log.dirs=/tmp/kraft-broker-logs11#' > 
> config/kraft/broker11.properties
> cat config/kraft/broker.properties | sed 's/node.id=2/node.id=12/' | sed 
> 's/localhost:9092/localhost:9012/g' |  sed 
> 's#log.dirs=/tmp/kraft-broker-logs#log.dirs=/tmp/kraft-broker-logs12#' > 
> config/kraft/broker12.properties
> #ensure we start clean
> /bin/rm -rf /tmp/kraft-broker-logs11 /tmp/kraft-broker-logs12
> bin/kafka-storage.sh format --cluster-id J8qXRwI-Qyi2G0guFTiuYw --config 
> config/kraft/broker11.properties
> bin/kafka-storage.sh format --cluster-id J8qXRwI-Qyi2G0guFTiuYw --config 
> config/kraft/broker12.properties
> bin/kafka-server-start.sh config/kraft/broker11.properties
> bin/kafka-server-start.sh config/kraft/broker12.properties
> #create a topic with a single partition replicated on two brokers
> bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic foo2 
> --partitions 1 --replication-factor 2
> #reassign partitions onto brokers with Node IDs 11 and 12
> echo '{"partitions":[{"topic": "foo2","partition": 0,"replicas": [11,12]}], 
> "version":1}' > /tmp/reassign.json
> bin/kafka-reassign-partitions.sh --bootstrap-server 
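The disk-UUID idea from the description above can be sketched as a toy controller-side check (hypothetical class and method names; the real KIP-858 controller logic is more involved): on broker re-registration, compare the reported disk UUID with the one recorded previously, and flag a mismatch so the broker's now-empty log is not trusted as the partition's source of truth.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Toy sketch of the KIP-858 disk-UUID check (illustrative names only).
// A mismatch between the registered and the reported disk UUID means the
// broker came back with a fresh, re-formatted disk.
public class DiskUuidCheck {
    private final Map<Integer, UUID> lastSeenDiskUuid = new HashMap<>();

    /** Records the registration and returns true if the disk UUID changed. */
    public boolean registerAndDetectDiskChange(int brokerId, UUID diskUuid) {
        UUID previous = lastSeenDiskUuid.put(brokerId, diskUuid);
        return previous != null && !previous.equals(diskUuid);
    }

    public static void main(String[] args) {
        DiskUuidCheck controller = new DiskUuidCheck();
        UUID original = UUID.randomUUID();
        // First registration: nothing to compare against.
        System.out.println(controller.registerAndDetectDiskChange(2, original)); // false
        // Restart with the same disk: no change detected.
        System.out.println(controller.registerAndDetectDiskChange(2, original)); // false
        // Disk replaced and re-formatted: a new UUID is generated.
        System.out.println(controller.registerAndDetectDiskChange(2, UUID.randomUUID())); // true
    }
}
```

The check is intentionally stateless beyond the last-seen map; in the scenario above it would let the controller refuse to treat the restarted replica's empty log as authoritative.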

[GitHub] [kafka] satishd merged pull request #14438: KAFKA-15487: Upgrade Jetty to 9.4.52.v20230823

2023-09-25 Thread via GitHub


satishd merged PR #14438:
URL: https://github.com/apache/kafka/pull/14438





[GitHub] [kafka] satishd commented on pull request #14438: KAFKA-15487: Upgrade Jetty to 9.4.52.v20230823

2023-09-25 Thread via GitHub


satishd commented on PR #14438:
URL: https://github.com/apache/kafka/pull/14438#issuecomment-1734142844

   Merging it to trunk and 3.6 as the failed tests are unrelated to the change.





[GitHub] [kafka] rreddy-22 commented on a diff in pull request #14408: KAFKA-14506: Implement DeleteGroups API and OffsetDelete API

2023-09-25 Thread via GitHub


rreddy-22 commented on code in PR #14408:
URL: https://github.com/apache/kafka/pull/14408#discussion_r1336165006


##
group-coordinator/src/test/java/org/apache/kafka/coordinator/group/GroupCoordinatorServiceTest.java:
##
@@ -936,4 +939,204 @@ public void 
testLeaveGroupThrowsUnknownMemberIdException() throws Exception {
 
 assertEquals(expectedResponse, future.get());
 }
+
+@Test
+public void testDeleteOffsets() throws Exception {
+CoordinatorRuntime runtime = 
mockRuntime();
+GroupCoordinatorService service = new GroupCoordinatorService(
+new LogContext(),
+createConfig(),
+runtime
+);
+service.startup(() -> 1);
+
+OffsetDeleteRequestData.OffsetDeleteRequestTopicCollection 
requestTopicCollection =
+new OffsetDeleteRequestData.OffsetDeleteRequestTopicCollection();
+requestTopicCollection.add(
+new OffsetDeleteRequestData.OffsetDeleteRequestTopic()
+.setName("topic")
+.setPartitions(Collections.singletonList(
+new 
OffsetDeleteRequestData.OffsetDeleteRequestPartition().setPartitionIndex(0)
+))
+);
+OffsetDeleteRequestData request = new 
OffsetDeleteRequestData().setGroupId("group")
+.setTopics(requestTopicCollection);
+
+OffsetDeleteResponseData.OffsetDeleteResponsePartitionCollection 
responsePartitionCollection =
+new 
OffsetDeleteResponseData.OffsetDeleteResponsePartitionCollection();
+responsePartitionCollection.add(
+new 
OffsetDeleteResponseData.OffsetDeleteResponsePartition().setPartitionIndex(0)
+);
+OffsetDeleteResponseData.OffsetDeleteResponseTopicCollection 
responseTopicCollection =
+new OffsetDeleteResponseData.OffsetDeleteResponseTopicCollection();
+responseTopicCollection.add(
+new 
OffsetDeleteResponseData.OffsetDeleteResponseTopic().setPartitions(responsePartitionCollection)
+);
+OffsetDeleteResponseData response = new OffsetDeleteResponseData()
+.setTopics(responseTopicCollection);
+
+
+when(runtime.scheduleWriteOperation(
+ArgumentMatchers.eq("delete-offset"),
+ArgumentMatchers.eq(new TopicPartition("__consumer_offsets", 0)),
+ArgumentMatchers.any()
+)).thenReturn(CompletableFuture.completedFuture(response));
+
+CompletableFuture future = 
service.deleteOffsets(
+requestContext(ApiKeys.OFFSET_DELETE),
+request,
+BufferSupplier.NO_CACHING
+);
+
+assertTrue(future.isDone());
+assertEquals(response, future.get());
+}

Review Comment:
   nit: can we add new lines between the tests






[GitHub] [kafka] rreddy-22 commented on a diff in pull request #14408: KAFKA-14506: Implement DeleteGroups API and OffsetDelete API

2023-09-25 Thread via GitHub


rreddy-22 commented on code in PR #14408:
URL: https://github.com/apache/kafka/pull/14408#discussion_r1336168777


##
group-coordinator/src/test/java/org/apache/kafka/coordinator/group/GroupCoordinatorServiceTest.java:
##
@@ -936,4 +939,204 @@ public void 
testLeaveGroupThrowsUnknownMemberIdException() throws Exception {
 
 assertEquals(expectedResponse, future.get());
 }
+
+@Test
+public void testDeleteOffsets() throws Exception {
+CoordinatorRuntime runtime = 
mockRuntime();
+GroupCoordinatorService service = new GroupCoordinatorService(
+new LogContext(),
+createConfig(),
+runtime
+);
+service.startup(() -> 1);
+
+OffsetDeleteRequestData.OffsetDeleteRequestTopicCollection 
requestTopicCollection =
+new OffsetDeleteRequestData.OffsetDeleteRequestTopicCollection();
+requestTopicCollection.add(
+new OffsetDeleteRequestData.OffsetDeleteRequestTopic()
+.setName("topic")
+.setPartitions(Collections.singletonList(
+new 
OffsetDeleteRequestData.OffsetDeleteRequestPartition().setPartitionIndex(0)
+))
+);
+OffsetDeleteRequestData request = new 
OffsetDeleteRequestData().setGroupId("group")
+.setTopics(requestTopicCollection);
+
+OffsetDeleteResponseData.OffsetDeleteResponsePartitionCollection 
responsePartitionCollection =
+new 
OffsetDeleteResponseData.OffsetDeleteResponsePartitionCollection();
+responsePartitionCollection.add(
+new 
OffsetDeleteResponseData.OffsetDeleteResponsePartition().setPartitionIndex(0)
+);
+OffsetDeleteResponseData.OffsetDeleteResponseTopicCollection 
responseTopicCollection =
+new OffsetDeleteResponseData.OffsetDeleteResponseTopicCollection();
+responseTopicCollection.add(
+new 
OffsetDeleteResponseData.OffsetDeleteResponseTopic().setPartitions(responsePartitionCollection)
+);
+OffsetDeleteResponseData response = new OffsetDeleteResponseData()
+.setTopics(responseTopicCollection);
+
+
+when(runtime.scheduleWriteOperation(
+ArgumentMatchers.eq("delete-offset"),
+ArgumentMatchers.eq(new TopicPartition("__consumer_offsets", 0)),
+ArgumentMatchers.any()
+)).thenReturn(CompletableFuture.completedFuture(response));
+
+CompletableFuture future = 
service.deleteOffsets(
+requestContext(ApiKeys.OFFSET_DELETE),
+request,
+BufferSupplier.NO_CACHING
+);
+
+assertTrue(future.isDone());
+assertEquals(response, future.get());
+}
+@Test
+public void testDeleteOffsetsInvalidGroupId() throws Exception {
+CoordinatorRuntime runtime = 
mockRuntime();
+GroupCoordinatorService service = new GroupCoordinatorService(
+new LogContext(),
+createConfig(),
+runtime
+);
+service.startup(() -> 1);
+
+OffsetDeleteRequestData.OffsetDeleteRequestTopicCollection 
requestTopicCollection =
+new OffsetDeleteRequestData.OffsetDeleteRequestTopicCollection();
+requestTopicCollection.add(
+new OffsetDeleteRequestData.OffsetDeleteRequestTopic()
+.setName("topic")
+.setPartitions(Collections.singletonList(
+new 
OffsetDeleteRequestData.OffsetDeleteRequestPartition().setPartitionIndex(0)
+))
+);
+OffsetDeleteRequestData request = new 
OffsetDeleteRequestData().setGroupId("")
+.setTopics(requestTopicCollection);
+
+OffsetDeleteResponseData response = new OffsetDeleteResponseData()
+.setErrorCode(Errors.INVALID_GROUP_ID.code());
+
+when(runtime.scheduleWriteOperation(
+ArgumentMatchers.eq("delete-offset"),
+ArgumentMatchers.eq(new TopicPartition("__consumer_offsets", 0)),
+ArgumentMatchers.any()
+)).thenReturn(CompletableFuture.completedFuture(response));
+
+CompletableFuture future = 
service.deleteOffsets(
+requestContext(ApiKeys.OFFSET_DELETE),
+request,
+BufferSupplier.NO_CACHING
+);
+
+assertTrue(future.isDone());
+assertEquals(response, future.get());
+}
+@Test
+public void testDeleteOffsetsCoordinatorNotAvailableException() throws 
Exception {
+CoordinatorRuntime runtime = 
mockRuntime();
+GroupCoordinatorService service = new GroupCoordinatorService(
+new LogContext(),
+createConfig(),
+runtime
+);
+service.startup(() -> 1);
+
+OffsetDeleteRequestData.OffsetDeleteRequestTopicCollection 
requestTopicCollection =
+new OffsetDeleteRequestData.OffsetDeleteRequestTopicCollection();
+

[GitHub] [kafka] omkreddy commented on pull request #14445: KAFKA-15502: Update SslEngineValidator to handle large stores

2023-09-25 Thread via GitHub


omkreddy commented on PR #14445:
URL: https://github.com/apache/kafka/pull/14445#issuecomment-1734140374

   > @omkreddy Thanks for the PR, looks good. This doesn't include the change 
to process data for WRAP when there is already data in the buffer?
   
   @rajinisivaram I missed the change. Updated the PR.





[GitHub] [kafka] rreddy-22 commented on a diff in pull request #14408: KAFKA-14506: Implement DeleteGroups API and OffsetDelete API

2023-09-25 Thread via GitHub


rreddy-22 commented on code in PR #14408:
URL: https://github.com/apache/kafka/pull/14408#discussion_r1336165306


##
group-coordinator/src/test/java/org/apache/kafka/coordinator/group/GroupCoordinatorServiceTest.java:
##
@@ -936,4 +939,204 @@ public void 
testLeaveGroupThrowsUnknownMemberIdException() throws Exception {
 
 assertEquals(expectedResponse, future.get());
 }
+
+@Test
+public void testDeleteOffsets() throws Exception {
+CoordinatorRuntime runtime = 
mockRuntime();
+GroupCoordinatorService service = new GroupCoordinatorService(
+new LogContext(),
+createConfig(),
+runtime
+);
+service.startup(() -> 1);
+
+OffsetDeleteRequestData.OffsetDeleteRequestTopicCollection 
requestTopicCollection =
+new OffsetDeleteRequestData.OffsetDeleteRequestTopicCollection();
+requestTopicCollection.add(
+new OffsetDeleteRequestData.OffsetDeleteRequestTopic()
+.setName("topic")
+.setPartitions(Collections.singletonList(
+new 
OffsetDeleteRequestData.OffsetDeleteRequestPartition().setPartitionIndex(0)
+))
+);
+OffsetDeleteRequestData request = new 
OffsetDeleteRequestData().setGroupId("group")
+.setTopics(requestTopicCollection);
+
+OffsetDeleteResponseData.OffsetDeleteResponsePartitionCollection 
responsePartitionCollection =
+new 
OffsetDeleteResponseData.OffsetDeleteResponsePartitionCollection();
+responsePartitionCollection.add(
+new 
OffsetDeleteResponseData.OffsetDeleteResponsePartition().setPartitionIndex(0)
+);
+OffsetDeleteResponseData.OffsetDeleteResponseTopicCollection 
responseTopicCollection =
+new OffsetDeleteResponseData.OffsetDeleteResponseTopicCollection();
+responseTopicCollection.add(
+new 
OffsetDeleteResponseData.OffsetDeleteResponseTopic().setPartitions(responsePartitionCollection)
+);
+OffsetDeleteResponseData response = new OffsetDeleteResponseData()
+.setTopics(responseTopicCollection);
+
+
+when(runtime.scheduleWriteOperation(
+ArgumentMatchers.eq("delete-offset"),
+ArgumentMatchers.eq(new TopicPartition("__consumer_offsets", 0)),
+ArgumentMatchers.any()
+)).thenReturn(CompletableFuture.completedFuture(response));
+
+CompletableFuture future = 
service.deleteOffsets(
+requestContext(ApiKeys.OFFSET_DELETE),
+request,
+BufferSupplier.NO_CACHING
+);
+
+assertTrue(future.isDone());
+assertEquals(response, future.get());
+}
+@Test
+public void testDeleteOffsetsInvalidGroupId() throws Exception {
+CoordinatorRuntime runtime = 
mockRuntime();
+GroupCoordinatorService service = new GroupCoordinatorService(
+new LogContext(),
+createConfig(),
+runtime
+);
+service.startup(() -> 1);
+
+OffsetDeleteRequestData.OffsetDeleteRequestTopicCollection 
requestTopicCollection =
+new OffsetDeleteRequestData.OffsetDeleteRequestTopicCollection();
+requestTopicCollection.add(
+new OffsetDeleteRequestData.OffsetDeleteRequestTopic()
+.setName("topic")
+.setPartitions(Collections.singletonList(
+new 
OffsetDeleteRequestData.OffsetDeleteRequestPartition().setPartitionIndex(0)
+))
+);
+OffsetDeleteRequestData request = new 
OffsetDeleteRequestData().setGroupId("")
+.setTopics(requestTopicCollection);
+
+OffsetDeleteResponseData response = new OffsetDeleteResponseData()
+.setErrorCode(Errors.INVALID_GROUP_ID.code());
+
+when(runtime.scheduleWriteOperation(
+ArgumentMatchers.eq("delete-offset"),
+ArgumentMatchers.eq(new TopicPartition("__consumer_offsets", 0)),
+ArgumentMatchers.any()
+)).thenReturn(CompletableFuture.completedFuture(response));
+
+CompletableFuture future = 
service.deleteOffsets(
+requestContext(ApiKeys.OFFSET_DELETE),
+request,
+BufferSupplier.NO_CACHING
+);
+
+assertTrue(future.isDone());
+assertEquals(response, future.get());
+}
+@Test

Review Comment:
   nit: line






[GitHub] [kafka] rreddy-22 commented on a diff in pull request #14408: KAFKA-14506: Implement DeleteGroups API and OffsetDelete API

2023-09-25 Thread via GitHub


rreddy-22 commented on code in PR #14408:
URL: https://github.com/apache/kafka/pull/14408#discussion_r1336165006


##
group-coordinator/src/test/java/org/apache/kafka/coordinator/group/GroupCoordinatorServiceTest.java:
##
@@ -936,4 +939,204 @@ public void 
testLeaveGroupThrowsUnknownMemberIdException() throws Exception {
 
 assertEquals(expectedResponse, future.get());
 }
+
+@Test
+public void testDeleteOffsets() throws Exception {
+CoordinatorRuntime runtime = 
mockRuntime();
+GroupCoordinatorService service = new GroupCoordinatorService(
+new LogContext(),
+createConfig(),
+runtime
+);
+service.startup(() -> 1);
+
+OffsetDeleteRequestData.OffsetDeleteRequestTopicCollection 
requestTopicCollection =
+new OffsetDeleteRequestData.OffsetDeleteRequestTopicCollection();
+requestTopicCollection.add(
+new OffsetDeleteRequestData.OffsetDeleteRequestTopic()
+.setName("topic")
+.setPartitions(Collections.singletonList(
+new 
OffsetDeleteRequestData.OffsetDeleteRequestPartition().setPartitionIndex(0)
+))
+);
+OffsetDeleteRequestData request = new 
OffsetDeleteRequestData().setGroupId("group")
+.setTopics(requestTopicCollection);
+
+OffsetDeleteResponseData.OffsetDeleteResponsePartitionCollection 
responsePartitionCollection =
+new 
OffsetDeleteResponseData.OffsetDeleteResponsePartitionCollection();
+responsePartitionCollection.add(
+new 
OffsetDeleteResponseData.OffsetDeleteResponsePartition().setPartitionIndex(0)
+);
+OffsetDeleteResponseData.OffsetDeleteResponseTopicCollection 
responseTopicCollection =
+new OffsetDeleteResponseData.OffsetDeleteResponseTopicCollection();
+responseTopicCollection.add(
+new 
OffsetDeleteResponseData.OffsetDeleteResponseTopic().setPartitions(responsePartitionCollection)
+);
+OffsetDeleteResponseData response = new OffsetDeleteResponseData()
+.setTopics(responseTopicCollection);
+
+
+when(runtime.scheduleWriteOperation(
+ArgumentMatchers.eq("delete-offset"),
+ArgumentMatchers.eq(new TopicPartition("__consumer_offsets", 0)),
+ArgumentMatchers.any()
+)).thenReturn(CompletableFuture.completedFuture(response));
+
+CompletableFuture future = 
service.deleteOffsets(
+requestContext(ApiKeys.OFFSET_DELETE),
+request,
+BufferSupplier.NO_CACHING
+);
+
+assertTrue(future.isDone());
+assertEquals(response, future.get());
+}

Review Comment:
   nit: add line between tests






[GitHub] [kafka] rreddy-22 commented on a diff in pull request #14408: KAFKA-14506: Implement DeleteGroups API and OffsetDelete API

2023-09-25 Thread via GitHub


rreddy-22 commented on code in PR #14408:
URL: https://github.com/apache/kafka/pull/14408#discussion_r1336162536


##
group-coordinator/src/main/java/org/apache/kafka/coordinator/group/Group.java:
##
@@ -90,4 +90,29 @@ void validateOffsetFetch(
 int memberEpoch,
 long lastCommittedOffset
 ) throws KafkaException;
+
+/**
+ * Validates the OffsetDelete request.
+ */
+void validateOffsetDelete() throws KafkaException;
+
+/**
+ * Validates the GroupDelete request
+ */
+void validateGroupDelete() throws KafkaException;
+
+/**
+ * Returns true if the group is actively subscribed to the topic.
+ *
+ * @param topic the topic name.
+ * @return whether the group is subscribed to the topic.
+ */
+boolean isSubscribedToTopic(String topic);

Review Comment:
   got it okie!






[GitHub] [kafka] jlprat commented on pull request #14434: KAFKA-15498: bump snappy-java version to 1.1.10.4

2023-09-25 Thread via GitHub


jlprat commented on PR #14434:
URL: https://github.com/apache/kafka/pull/14434#issuecomment-1734119638

   To add an extra data point, it seems I can reproduce this failure 
consistently on my machine. I'll try to allocate some time tomorrow to find the 
root cause. But if somebody in another time zone wants to take a stab at it, go 
ahead!





[jira] [Resolved] (KAFKA-15117) SslTransportLayerTest.testValidEndpointIdentificationCN fails with Java 20 & 21

2023-09-25 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-15117.

  Reviewer: Rajini Sivaram
Resolution: Fixed

> SslTransportLayerTest.testValidEndpointIdentificationCN fails with Java 20 & 
> 21
> ---
>
> Key: KAFKA-15117
> URL: https://issues.apache.org/jira/browse/KAFKA-15117
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Purshotam Chauhan
>Priority: Major
> Fix For: 3.7.0
>
>
> All variations fail as seen below. These tests have been disabled when run 
> with Java 20 & 21 for now.
> {code:java}
> Gradle Test Run :clients:test > Gradle Test Executor 12 > 
> SslTransportLayerTest > testValidEndpointIdentificationCN(Args) > [1] 
> tlsProtocol=TLSv1.2, useInlinePem=false FAILED
>     org.opentest4j.AssertionFailedError: Channel 0 was not ready after 30 
> seconds ==> expected: <true> but was: <false>
>         at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>         at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>         at 
> app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
>         at 
> app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
>         at 
> app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
>         at 
> app//org.apache.kafka.common.network.NetworkTestUtils.waitForChannelReady(NetworkTestUtils.java:107)
>         at 
> app//org.apache.kafka.common.network.NetworkTestUtils.checkClientConnection(NetworkTestUtils.java:70)
>         at 
> app//org.apache.kafka.common.network.SslTransportLayerTest.verifySslConfigs(SslTransportLayerTest.java:1296)
>         at 
> app//org.apache.kafka.common.network.SslTransportLayerTest.testValidEndpointIdentificationCN(SslTransportLayerTest.java:202)
> org.apache.kafka.common.network.SslTransportLayerTest.testValidEndpointIdentificationCN(Args)[2]
>  failed, log available in 
> /home/ijuma/src/kafka/clients/build/reports/testOutput/org.apache.kafka.common.network.SslTransportLayerTest.testValidEndpointIdentificationCN(Args)[2].test.stdout
> Gradle Test Run :clients:test > Gradle Test Executor 12 > 
> SslTransportLayerTest > testValidEndpointIdentificationCN(Args) > [2] 
> tlsProtocol=TLSv1.2, useInlinePem=true FAILED
>     org.opentest4j.AssertionFailedError: Channel 0 was not ready after 30 
> seconds ==> expected: <true> but was: <false>
>         at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>         at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>         at 
> app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
>         at 
> app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
>         at 
> app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
>         at 
> app//org.apache.kafka.common.network.NetworkTestUtils.waitForChannelReady(NetworkTestUtils.java:107)
>         at 
> app//org.apache.kafka.common.network.NetworkTestUtils.checkClientConnection(NetworkTestUtils.java:70)
>         at 
> app//org.apache.kafka.common.network.SslTransportLayerTest.verifySslConfigs(SslTransportLayerTest.java:1296)
>         at 
> app//org.apache.kafka.common.network.SslTransportLayerTest.testValidEndpointIdentificationCN(SslTransportLayerTest.java:202)
> org.apache.kafka.common.network.SslTransportLayerTest.testValidEndpointIdentificationCN(Args)[3]
>  failed, log available in 
> /home/ijuma/src/kafka/clients/build/reports/testOutput/org.apache.kafka.common.network.SslTransportLayerTest.testValidEndpointIdentificationCN(Args)[3].test.stdout
> Gradle Test Run :clients:test > Gradle Test Executor 12 > 
> SslTransportLayerTest > testValidEndpointIdentificationCN(Args) > [3] 
> tlsProtocol=TLSv1.3, useInlinePem=false FAILED
>     org.opentest4j.AssertionFailedError: Channel 0 was not ready after 30 
> seconds ==> expected: <true> but was: <false>
>         at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>         at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>         at 
> app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
>         at 
> app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
>         at 
> app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211)
>         at 
> app//org.apache.kafka.common.network.NetworkTestUtils.waitForChannelReady(NetworkTestUtils.java:107)
>         at 
> app//org.apache.kafka.common.network.NetworkTestUtils.checkClientConnection(NetworkTestUtils.java:70)
>  

[GitHub] [kafka] rajinisivaram merged pull request #14440: [KAFKA-15117] In TestSslUtils set SubjectAlternativeNames to null if there are no hostnames

2023-09-25 Thread via GitHub


rajinisivaram merged PR #14440:
URL: https://github.com/apache/kafka/pull/14440





[jira] [Updated] (KAFKA-15502) Handle large keystores in SslEngineValidator

2023-09-25 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-15502:
--
Description: 
We have observed an issue where the inter-broker SSL listener is not coming up 
for large keystores (size >16K).

1. Currently the validator code doesn't work well with large stores. Right now, 
WRAP returns if there is already data in the buffer. But if we need more data 
to be wrapped for UNWRAP to succeed, we end up looping forever.

2. Observed that large TLSv1.3 post-handshake messages are not getting read, 
causing the validator code to loop forever. This is observed with JDK17+.
 

  was:
We have observed an issue where inter broker SSL listener is not coming up for 
large keystores (size >16K)

1. Currently validator code doesn't work well with large stores. Right now, 
WRAP returns if there is already data in the buffer. But if we need more data 
to be wrapped for UNWRAP to succeed, we end up looping forever.

2. Observed large TLSv3 post handshake messages are not getting read and 
causing UNWRAP loop forever. This is observed with JDK17+
 


> Handle large keystores in SslEngineValidator
> 
>
> Key: KAFKA-15502
> URL: https://issues.apache.org/jira/browse/KAFKA-15502
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.0
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
>
> We have observed an issue where the inter-broker SSL listener is not coming up 
> for large keystores (size >16K).
> 1. Currently the validator code doesn't work well with large stores. Right now, 
> WRAP returns if there is already data in the buffer. But if we need more data 
> to be wrapped for UNWRAP to succeed, we end up looping forever.
> 2. Observed that large TLSv1.3 post-handshake messages are not getting read, 
> causing the validator code to loop forever. This is observed with JDK17+.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka] dajac closed pull request #14429: ignore

2023-09-25 Thread via GitHub


dajac closed pull request #14429: ignore
URL: https://github.com/apache/kafka/pull/14429





[GitHub] [kafka] dajac merged pull request #14402: MINOR: Push logic to resolve the transaction coordinator into the AddPartitionsToTxnManager

2023-09-25 Thread via GitHub


dajac merged PR #14402:
URL: https://github.com/apache/kafka/pull/14402





[GitHub] [kafka] dajac commented on pull request #14402: MINOR: Push logic to resolve the transaction coordinator into the AddPartitionsToTxnManager

2023-09-25 Thread via GitHub


dajac commented on PR #14402:
URL: https://github.com/apache/kafka/pull/14402#issuecomment-1734049140

   I've got two good builds for this PR:
   * 
https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14402/17/tests
   * 
https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14429/1/tests/
 (same code ran in another PR)
   
   I am going to merge it.





[GitHub] [kafka] omkreddy opened a new pull request, #14445: KAFKA-15502: Update SslEngineValidator to handle large stores

2023-09-25 Thread via GitHub


omkreddy opened a new pull request, #14445:
URL: https://github.com/apache/kafka/pull/14445

   We have observed an issue where the inter-broker SSL listener does not come up 
when running with TLSv1.3 on JDK 17.
   SSL debug logs show that TLSv1.3 post-handshake messages larger than 16K are 
not getting read, causing the SslEngineValidator process to get stuck while 
validating the provided trust/key store.
   
   1. Right now, WRAP returns if there is already data in the buffer, but if we 
need more data to be wrapped for UNWRAP to succeed, we end up looping forever. 
To fix this, we now always attempt WRAP and only return early on 
BUFFER_OVERFLOW.
   2. Update SslEngineValidator to unwrap post-handshake messages from the peer 
when the local handshake status is FINISHED. 
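
   The WRAP early-return problem can be illustrated with a toy model. This is 
not the real `javax.net.ssl.SSLEngine` API — the `WrapLoopSketch` class, the 
chunk sizes, and the capacity check are invented for illustration, with 
integers standing in for TLS records: a loop that refuses to wrap while the 
buffer already holds data stalls on multi-record handshake messages, whereas 
always attempting to wrap and returning early only on overflow drains all 
chunks.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class WrapLoopSketch {
    // Toy stand-in for an engine that must emit several handshake chunks
    // (e.g. a large certificate chain split into multiple TLS records).
    static Deque<Integer> pendingChunks(int... sizes) {
        Deque<Integer> d = new ArrayDeque<>();
        for (int s : sizes) d.add(s);
        return d;
    }

    // Old behavior: return as soon as the buffer is non-empty.
    // Only the first chunk is ever produced, so the peer waits forever.
    static int wrapOldBehavior(Deque<Integer> chunks) {
        int buffered = 0;
        while (!chunks.isEmpty()) {
            if (buffered > 0) return buffered;   // early return: loop stalls here
            buffered += chunks.poll();
        }
        return buffered;
    }

    // Fixed behavior: always attempt to wrap; return early only when the
    // destination buffer would overflow (simulated by a capacity check,
    // the analogue of SSLEngineResult.Status.BUFFER_OVERFLOW).
    static int wrapFixedBehavior(Deque<Integer> chunks, int capacity) {
        int buffered = 0;
        while (!chunks.isEmpty()) {
            if (buffered + chunks.peek() > capacity) break; // BUFFER_OVERFLOW analogue
            buffered += chunks.poll();
        }
        return buffered;
    }

    public static void main(String[] args) {
        // Old loop hands only the first 8K chunk to the peer.
        int old = wrapOldBehavior(pendingChunks(8192, 8192, 4096));
        // Fixed loop drains everything that fits in a 32K buffer.
        int fixed = wrapFixedBehavior(pendingChunks(8192, 8192, 4096), 32768);
        System.out.println(old + " " + fixed); // 8192 20480
    }
}
```

   With stores under 16K the old loop happens to work because the handshake 
data fits in a single record, which is why the bug only surfaces for large 
trust/key stores.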
   





[jira] [Updated] (KAFKA-15498) Upgrade Snappy-Java to 1.1.10.4

2023-09-25 Thread Satish Duggana (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana updated KAFKA-15498:
---
Priority: Blocker  (was: Major)

> Upgrade Snappy-Java to 1.1.10.4
> ---
>
> Key: KAFKA-15498
> URL: https://issues.apache.org/jira/browse/KAFKA-15498
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.4.1, 3.5.1
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Blocker
> Fix For: 3.6.0
>
>
> Snappy-java published a new vulnerability
> <[https://github.com/xerial/snappy-java/security/advisories/GHSA-55g7-9cwv-5qfv]>
> that will cause OOM error in the server.
> Kafka is also impacted by this vulnerability since it is similar to CVE-2023-34455
> <[https://nvd.nist.gov/vuln/detail/CVE-2023-34455]>.
> We'd better bump the snappy-java version to bypass this vulnerability.
> PR <[https://github.com/apache/kafka/pull/14434]> is created to run the CI
> build.





[jira] [Created] (KAFKA-15502) Handle large keystores in SslEngineValidator

2023-09-25 Thread Manikumar (Jira)
Manikumar created KAFKA-15502:
-

 Summary: Handle large keystores in SslEngineValidator
 Key: KAFKA-15502
 URL: https://issues.apache.org/jira/browse/KAFKA-15502
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.6.0
Reporter: Manikumar
Assignee: Manikumar


We have observed an issue where the inter-broker SSL listener is not coming up for 
large keystores (size >16K).

1. Currently the validator code doesn't work well with large stores. Right now, 
WRAP returns if there is already data in the buffer, but if we need more data 
to be wrapped for UNWRAP to succeed, we end up looping forever.

2. Observed that large TLSv1.3 post-handshake messages are not getting read, 
causing UNWRAP to loop forever. This is observed with JDK 17+.
 





[jira] [Updated] (KAFKA-15001) CVE vulnerabilities in Jetty

2023-09-25 Thread Satish Duggana (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana updated KAFKA-15001:
---
Fix Version/s: 3.6.0

> CVE vulnerabilities in Jetty 
> -
>
> Key: KAFKA-15001
> URL: https://issues.apache.org/jira/browse/KAFKA-15001
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 3.4.0, 3.3.2
>Reporter: Arushi Rai
>Priority: Critical
> Fix For: 3.6.0, 3.5.1, 3.4.2
>
>
> Kafka is using org.eclipse.jetty_jetty-server and org.eclipse.jetty_jetty-io 
> version 9.4.48.v20220622 where 3 moderate and medium vulnerabilities have 
> been reported. 
> Moderate [CVE-2023-26048|https://nvd.nist.gov/vuln/detail/CVE-2023-26048] in 
> org.eclipse.jetty_jetty-server
> Medium [CVE-2023-26049|https://nvd.nist.gov/vuln/detail/CVE-2023-26049] in 
> org.eclipse.jetty_jetty-io
> Medium [CVE-2023-26048|https://nvd.nist.gov/vuln/detail/CVE-2023-26048] in 
> org.eclipse.jetty_jetty-io
> These are fixed in jetty versions 11.0.14, 10.0.14, 9.4.51 and Kafka should 
> use the same. 





[jira] [Commented] (KAFKA-15497) Refactor build.gradle and split each module configuration on the module itself

2023-09-25 Thread Bertalan Kondrat (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17768753#comment-17768753
 ] 

Bertalan Kondrat commented on KAFKA-15497:
--

Hi [~bmscomp] 
I am not able to assign the ticket to myself, but I have created a PR with the 
change.

> Refactor build.gradle and split each module configuration on the module 
> itself 
> ---
>
> Key: KAFKA-15497
> URL: https://issues.apache.org/jira/browse/KAFKA-15497
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Said BOUDJELDA
>Priority: Major
>
> The *build.gradle* file is getting too big and hard to maintain, which is a good 
> reason to split this file across the modules of the project and let the root 
> *build.gradle* file manage just the common parts of the project. This will 
> increase readability.
>  
>  
>  





[GitHub] [kafka] chb2ab opened a new pull request, #14444: KIP-951: Server side and protocol changes for KIP-951

2023-09-25 Thread via GitHub


chb2ab opened a new pull request, #1:
URL: https://github.com/apache/kafka/pull/1

   These are the protocol and server changes to populate the fields in KIP-951. 
On NOT_LEADER_OR_FOLLOWER errors in both FETCH and PRODUCE, the new leader ID 
and epoch are retrieved from the local cache through ReplicaManager and 
included in the response, falling back to the metadata cache if they are 
unavailable there. The endpoint for the new leader is retrieved from the 
metadata cache. The new fields are all optional (tagged), and an IBP bump is not 
required.
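   
   The lookup order described above (prefer the partition's local state, fall 
back to the metadata cache) can be sketched in plain Java. The 
`PartitionStateCache` and `MetadataCache` interfaces and the `LeaderAndEpoch` 
record here are invented stand-ins for illustration, not Kafka classes:
   
```java
import java.util.Optional;

public class LeaderLookupSketch {
    record LeaderAndEpoch(int leaderId, int epoch) {}

    // Hypothetical stand-ins for the partition's local state and the
    // broker-side metadata cache.
    interface PartitionStateCache { Optional<LeaderAndEpoch> currentLeader(String topicPartition); }
    interface MetadataCache { Optional<LeaderAndEpoch> leaderFor(String topicPartition); }

    // On NOT_LEADER_OR_FOLLOWER, prefer the local partition state and fall
    // back to the metadata cache when it is unavailable there.
    static Optional<LeaderAndEpoch> resolveNewLeader(String tp,
                                                     PartitionStateCache local,
                                                     MetadataCache metadata) {
        Optional<LeaderAndEpoch> fromLocal = local.currentLeader(tp);
        return fromLocal.isPresent() ? fromLocal : metadata.leaderFor(tp);
    }

    public static void main(String[] args) {
        PartitionStateCache local = tp -> Optional.empty();                   // local state unavailable
        MetadataCache metadata = tp -> Optional.of(new LeaderAndEpoch(2, 7)); // fallback hit
        System.out.println(resolveNewLeader("topic-0", local, metadata).get().leaderId()); // 2
    }
}
```
   
   Because the fields are tagged (optional), older clients simply ignore them, 
which is why no inter-broker protocol bump is needed.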
   
   
https://cwiki.apache.org/confluence/display/KAFKA/KIP-951%3A+Leader+discovery+optimisations+for+the+client
   
   ### Testing
   Benchmarking described here 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-951%3A+Leader+discovery+optimisations+for+the+client#KIP951:Leaderdiscoveryoptimisationsfortheclient-BenchmarkResults
   `./gradlew core:test --tests kafka.server.KafkaApisTest`
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] k0b3rIT opened a new pull request, #14443: KAFKA-15497 Refactor build.gradle and split each module configuration…

2023-09-25 Thread via GitHub


k0b3rIT opened a new pull request, #14443:
URL: https://github.com/apache/kafka/pull/14443

   (no comment)





[jira] [Commented] (KAFKA-15491) RackId doesn't exist error while running WordCountDemo

2023-09-25 Thread Hao Li (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17768739#comment-17768739
 ] 

Hao Li commented on KAFKA-15491:


Fixing it in https://github.com/apache/kafka/pull/14415.

> RackId doesn't exist error while running WordCountDemo
> --
>
> Key: KAFKA-15491
> URL: https://issues.apache.org/jira/browse/KAFKA-15491
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Luke Chen
>Priority: Major
>
> While running the WordCountDemo following the 
> [docs|https://kafka.apache.org/documentation/streams/quickstart], I saw the 
> following error logs in the stream application output. Though everything 
> still works fine, it'd be better if there were no ERROR logs in the demo app.
> {code:java}
> [2023-09-24 14:15:11,723] ERROR RackId doesn't exist for process 
> e2391098-23e8-47eb-8d5e-ff6e697c33f5 and consumer 
> streams-wordcount-e2391098-23e8-47eb-8d5e-ff6e697c33f5-StreamThread-1-consumer-adae58be-f5f5-429b-a2b4-67bf732726e8
>  
> (org.apache.kafka.streams.processor.internals.assignment.RackAwareTaskAssignor)
> [2023-09-24 14:15:11,757] ERROR RackId doesn't exist for process 
> e2391098-23e8-47eb-8d5e-ff6e697c33f5 and consumer 
> streams-wordcount-e2391098-23e8-47eb-8d5e-ff6e697c33f5-StreamThread-1-consumer-adae58be-f5f5-429b-a2b4-67bf732726e8
>  
> (org.apache.kafka.streams.processor.internals.assignment.RackAwareTaskAssignor)
> {code}





[GitHub] [kafka] lihaosky commented on pull request #14415: MINOR: only log error when rack aware assignment is enabled

2023-09-25 Thread via GitHub


lihaosky commented on PR #14415:
URL: https://github.com/apache/kafka/pull/14415#issuecomment-1733908720

   cc @showuon 





[jira] [Comment Edited] (KAFKA-15417) JoinWindow does not seem to work properly with a KStream - KStream - LeftJoin()

2023-09-25 Thread Victor van den Hoven (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17768550#comment-17768550
 ] 

Victor van den Hoven edited comment on KAFKA-15417 at 9/25/23 2:22 PM:
---

What to do with "Flaky tests"?

 

I do not think my PR has anything to do with it.

 

Just commit and try again and pray?

!Afbeelding 1.png|width=133,height=156!

!Afbeelding 1-1.png|width=483,height=99!

 


was (Author: victorvandenhoven):
What to do with "Flaky tests"?

 

I do not think my PR has anything to do with it.

 

Just commit and try again and pray?

!Afbeelding 1.png|width=133,height=156!

 

> JoinWindow does not  seem to work properly with a KStream - KStream - 
> LeftJoin()
> 
>
> Key: KAFKA-15417
> URL: https://issues.apache.org/jira/browse/KAFKA-15417
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.4.0
>Reporter: Victor van den Hoven
>Assignee: Victor van den Hoven
>Priority: Major
> Attachments: Afbeelding 1-1.png, Afbeelding 1.png, 
> SimpleStreamTopology.java, SimpleStreamTopologyTest.java
>
>
> In Kafka-streams 3.4.0 :
> According to the javadoc of JoinWindows:
> _There are three different window configurations supported:_
>  * _before = after = time-difference_
>  * _before = 0 and after = time-difference_
>  * _*before = time-difference and after = 0*_
>  
> However if I use a joinWindow with *before = time-difference and after = 0* 
> on a kstream-kstream-leftjoin the *after=0* part does not seem to work.
> When using _stream1.leftjoin(stream2, joinWindow)_ with 
> {_}joinWindow{_}.{_}after=0 and joinWindow.before=30s{_}, any new message on 
> stream 1 that can not be joined with any messages on stream2 should be joined 
> with a null-record after the _joinWindow.after_ has ended and a new message 
> has arrived on stream1.
> It does not.
> Only if the new message arrives after the value of _joinWindow.before_ the 
> previous message will be joined with a null-record.
>  
> Attached you can find two files with a TopologyTestDriver Unit test to 
> reproduce.
> topology:   stream1.leftjoin( stream2, joiner, joinwindow)
> joinWindow has before=5000ms and after=0
> message1(key1) ->  stream1
> after 4000ms message2(key2) -> stream1  ->  NO null-record join was made, but 
> the after period was expired.
> after 4900ms message2(key2) -> stream1  ->  NO null-record join was made, but 
> the after period was expired.
> after 5000ms message2(key2) -> stream1  ->  A null-record join was made,  
> before period was expired.
> after 6000ms message2(key2) -> stream1  ->  A null-record join was made,  
> before period was expired.
>  
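
The timing in the repro above can be restated as a plain-Java sketch. This is 
not Kafka Streams code — the `LeftJoinTimingSketch` class and its methods are 
invented for illustration. "Expected" follows the reporter's reading of the 
javadoc (an unmatched left record should be joined with null once a later 
record moves stream time past leftTs + after); "observed" is the reported 
behavior, where the null join only happens once stream time reaches 
leftTs + before:

```java
public class LeftJoinTimingSketch {
    // Expected per the javadoc reading: null-join once stream time passes
    // the left record's timestamp plus the `after` interval.
    static boolean expectedNullJoin(long leftTs, long streamTime, long afterMs) {
        return streamTime > leftTs + afterMs;
    }

    // Observed per the bug report: null-join only once stream time reaches
    // the left record's timestamp plus the `before` interval.
    static boolean observedNullJoin(long leftTs, long streamTime, long beforeMs) {
        return streamTime >= leftTs + beforeMs;
    }

    public static void main(String[] args) {
        long leftTs = 0, before = 5000, after = 0;
        // Reproduces the four probe times from the report above.
        for (long t : new long[] {4000, 4900, 5000, 6000}) {
            System.out.println(t + "ms expected=" + expectedNullJoin(leftTs, t, after)
                    + " observed=" + observedNullJoin(leftTs, t, before));
        }
    }
}
```

At 4000 ms and 4900 ms the expectation and the observation diverge, which is 
exactly the discrepancy the attached TopologyTestDriver test demonstrates.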





[jira] [Resolved] (KAFKA-15485) Support building with Java 21 (LTS release)

2023-09-25 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-15485.
-
Resolution: Fixed

> Support building with Java 21 (LTS release)
> ---
>
> Key: KAFKA-15485
> URL: https://issues.apache.org/jira/browse/KAFKA-15485
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Divij Vaidya
>Assignee: Ismael Juma
>Priority: Major
> Fix For: 3.7.0
>
>
> JDK-21 is the latest LTS release, which reached GA on 19th Sept 2023. This 
> ticket aims to upgrade the JDK used by Kafka to JDK-21 (currently it's JDK-20).
> Thanks to proactive work done by [~ijuma] earlier [1][2][3], I do not 
> anticipate major hiccups while upgrading to JDK-21.
> As part of this JIRA we want to:
> 1. Upgrade Kafka to JDK 21
> 2. Replace the CI build for JDK 20 with JDK 21 (similar to [3] below)
> 3. Update the README (see[4]) to mention Kafka's support for JDK-21
> [1] [https://github.com/apache/kafka/pull/13840]
> [2] [https://github.com/apache/kafka/pull/13582]
> [3] [https://github.com/apache/kafka/pull/12948] 
> [4] [https://github.com/apache/kafka/pull/14061] 





[jira] [Updated] (KAFKA-15417) JoinWindow does not seem to work properly with a KStream - KStream - LeftJoin()

2023-09-25 Thread Victor van den Hoven (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victor van den Hoven updated KAFKA-15417:
-
Attachment: Afbeelding 1-1.png

> JoinWindow does not  seem to work properly with a KStream - KStream - 
> LeftJoin()
> 
>
> Key: KAFKA-15417
> URL: https://issues.apache.org/jira/browse/KAFKA-15417
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.4.0
>Reporter: Victor van den Hoven
>Assignee: Victor van den Hoven
>Priority: Major
> Attachments: Afbeelding 1-1.png, Afbeelding 1.png, 
> SimpleStreamTopology.java, SimpleStreamTopologyTest.java
>
>
> In Kafka-streams 3.4.0 :
> According to the javadoc of JoinWindows:
> _There are three different window configurations supported:_
>  * _before = after = time-difference_
>  * _before = 0 and after = time-difference_
>  * _*before = time-difference and after = 0*_
>  
> However if I use a joinWindow with *before = time-difference and after = 0* 
> on a kstream-kstream-leftjoin the *after=0* part does not seem to work.
> When using _stream1.leftjoin(stream2, joinWindow)_ with 
> {_}joinWindow{_}.{_}after=0 and joinWindow.before=30s{_}, any new message on 
> stream 1 that can not be joined with any messages on stream2 should be joined 
> with a null-record after the _joinWindow.after_ has ended and a new message 
> has arrived on stream1.
> It does not.
> Only if the new message arrives after the value of _joinWindow.before_ the 
> previous message will be joined with a null-record.
>  
> Attached you can find two files with a TopologyTestDriver Unit test to 
> reproduce.
> topology:   stream1.leftjoin( stream2, joiner, joinwindow)
> joinWindow has before=5000ms and after=0
> message1(key1) ->  stream1
> after 4000ms message2(key2) -> stream1  ->  NO null-record join was made, but 
> the after period was expired.
> after 4900ms message2(key2) -> stream1  ->  NO null-record join was made, but 
> the after period was expired.
> after 5000ms message2(key2) -> stream1  ->  A null-record join was made,  
> before period was expired.
> after 6000ms message2(key2) -> stream1  ->  A null-record join was made,  
> before period was expired.
>  





[GitHub] [kafka] lianetm commented on a diff in pull request #14406: KAFKA-14274 [6, 7]: Introduction of fetch request manager

2023-09-25 Thread via GitHub


lianetm commented on code in PR #14406:
URL: https://github.com/apache/kafka/pull/14406#discussion_r1335942472


##
clients/src/main/java/org/apache/kafka/clients/consumer/internals/PrototypeAsyncConsumer.java:
##
@@ -636,42 +857,148 @@ public void assign(Collection 
partitions) {
 }
 
 @Override
-public void subscribe(Pattern pattern, ConsumerRebalanceListener callback) 
{
-throw new KafkaException("method not implemented");
+public void subscribe(Pattern pattern, ConsumerRebalanceListener listener) 
{
+maybeThrowInvalidGroupIdException();
+if (pattern == null || pattern.toString().equals(""))
+throw new IllegalArgumentException("Topic pattern to subscribe to 
cannot be " + (pattern == null ?
+"null" : "empty"));
+
+throwIfNoAssignorsConfigured();
+log.info("Subscribed to pattern: '{}'", pattern);
+this.subscriptions.subscribe(pattern, listener);
+this.updatePatternSubscription(metadata.fetch());
+this.metadata.requestUpdateForNewTopics();
+}
+
+/**
+ * TODO: remove this when we implement the KIP-848 protocol.
+ *
+ * 
+ * The contents of this method are shamelessly stolen from
+ * {@link ConsumerCoordinator#updatePatternSubscription(Cluster)} and are 
used here because we won't have access
+ * to a {@link ConsumerCoordinator} in this code. Perhaps it could be 
moved to a ConsumerUtils class?
+ *
+ * @param cluster Cluster from which we get the topics
+ */
+private void updatePatternSubscription(Cluster cluster) {
+final Set topicsToSubscribe = cluster.topics().stream()
+.filter(subscriptions::matchesSubscribedPattern)
+.collect(Collectors.toSet());
+if (subscriptions.subscribeFromPattern(topicsToSubscribe))
+metadata.requestUpdateForNewTopics();
 }
 
 @Override
 public void subscribe(Pattern pattern) {
-throw new KafkaException("method not implemented");
+subscribe(pattern, new NoOpConsumerRebalanceListener());
 }
 
 @Override
 public void unsubscribe() {
-throw new KafkaException("method not implemented");
+fetchBuffer.retainAll(Collections.emptySet());
+this.subscriptions.unsubscribe();
 }
 
 @Override
 @Deprecated
-public ConsumerRecords poll(long timeout) {
-throw new KafkaException("method not implemented");
+public ConsumerRecords poll(final long timeoutMs) {
+return poll(Duration.ofMillis(timeoutMs));
 }
 
 // Visible for testing
 WakeupTrigger wakeupTrigger() {
 return wakeupTrigger;
 }
 
-private static  ClusterResourceListeners 
configureClusterResourceListeners(
-final Deserializer keyDeserializer,
-final Deserializer valueDeserializer,
-final List... candidateLists) {
-ClusterResourceListeners clusterResourceListeners = new 
ClusterResourceListeners();
-for (List candidateList: candidateLists)
-clusterResourceListeners.maybeAddAll(candidateList);
+private void sendFetches() {
+FetchEvent event = new FetchEvent();
+eventHandler.add(event);
+
+event.future().whenComplete((completedFetches, error) -> {
+if (completedFetches != null && !completedFetches.isEmpty()) {
+fetchBuffer.addAll(completedFetches);
+}
+});
+}
+
+/**
+ * @throws KafkaException if the rebalance callback throws exception
+ */
+private Fetch pollForFetches(Timer timer) {
+long pollTimeout = timer.remainingMs();
+
+// if data is available already, return it immediately
+final Fetch fetch = fetchCollector.collectFetch(fetchBuffer);
+if (!fetch.isEmpty()) {
+return fetch;
+}
+
+// send any new fetches (won't resend pending fetches)
+sendFetches();
+
+// We do not want to be stuck blocking in poll if we are missing some 
positions
+// since the offset lookup may be backing off after a failure
+
+// NOTE: the use of cachedSubscriptionHasAllFetchPositions means we 
MUST call
+// updateAssignmentMetadataIfNeeded before this method.
+if (!cachedSubscriptionHasAllFetchPositions && pollTimeout > 
retryBackoffMs) {
+pollTimeout = retryBackoffMs;
+}
+
+log.trace("Polling for fetches with timeout {}", pollTimeout);
+
+Timer pollTimer = time.timer(pollTimeout);
+
+// Attempt to fetch any data. It's OK if we time out here; it's a best 
case effort. The
+// data may not be immediately available, but the calling method 
(poll) will correctly
+// handle the overall timeout.
+try {
+Queue completedFetches = 
eventHandler.addAndGet(new FetchEvent(), pollTimer);
+if (completedFetches != null && !completedFetches.isEmpty()) {
+  

[GitHub] [kafka] jlprat commented on pull request #14434: KAFKA-15498: bump snappy-java version to 1.1.10.4

2023-09-25 Thread via GitHub


jlprat commented on PR #14434:
URL: https://github.com/apache/kafka/pull/14434#issuecomment-1733780756

   @ijuma I agree that it is extremely suspicious that it is starting to fail now; 
I just wanted to point out that the failure does not seem to be 100% reproducible 
when I look at the CI test results:
   
![image](https://github.com/apache/kafka/assets/3337739/60dc2ff6-f07a-434d-b6e1-db930de62d1c)
   





[GitHub] [kafka] ijuma commented on pull request #14434: KAFKA-15498: bump snappy-java version to 1.1.10.4

2023-09-25 Thread via GitHub


ijuma commented on PR #14434:
URL: https://github.com/apache/kafka/pull/14434#issuecomment-1733772288

   @jlprat The fact that the snappy variant is failing is suspicious though - I 
suspect we're running into the newly introduced limit in that test. Perhaps 
during compaction? I haven't checked though.





[jira] [Created] (KAFKA-15501) Kafka to KRaft combined mode migration v3.5

2023-09-25 Thread Ritvik Gupta (Jira)
Ritvik Gupta created KAFKA-15501:


 Summary: Kafka to KRaft combined mode migration v3.5
 Key: KAFKA-15501
 URL: https://issues.apache.org/jira/browse/KAFKA-15501
 Project: Kafka
  Issue Type: Improvement
  Components: controller, kraft
Reporter: Ritvik Gupta


This is regarding the 
[KIP-866|https://cwiki.apache.org/confluence/display/KAFKA/KIP-866+ZooKeeper+to+KRaft+Migration]
 KRaft migration steps for {*}Kafka v3.5.1{*}.

We want to migrate our Kafka ZK mode cluster to KRaft mode and are following 
the steps mentioned 
[here|https://cwiki.apache.org/confluence/display/KAFKA/KIP-866+ZooKeeper+to+KRaft+Migration#KIP866ZooKeepertoKRaftMigration-MigrationOverview]
 for separate KRaft brokers and controllers migration.

As mentioned in the KIP-866, currently the [combined mode 
migration|https://cwiki.apache.org/confluence/display/KAFKA/KIP-866+ZooKeeper+to+KRaft+Migration#KIP866ZooKeepertoKRaftMigration-CombinedModeMigrationSupport]
 is not supported and is only suitable for dev environments, but we want to 
utilize *combined mode* for our production environments and continue with our 
current set of broker machines (as otherwise we would need to provision 
additional controller node machines).

By which version can we expect the *KRaft combined mode* migration to be 
available?





[jira] [Comment Edited] (KAFKA-15417) JoinWindow does not seem to work properly with a KStream - KStream - LeftJoin()

2023-09-25 Thread Victor van den Hoven (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17768550#comment-17768550
 ] 

Victor van den Hoven edited comment on KAFKA-15417 at 9/25/23 1:22 PM:
---

What to do with "Flaky tests"?

 

I do not think my PR has anything to do with it.

 

Just commit and try again and pray?

!Afbeelding 1.png|width=133,height=156!

 


was (Author: victorvandenhoven):
What to do with "Flaky tests"?

 

I do not think my PR has anything to do with it.

 

Just commit and try again and pray?

 

 

> JoinWindow does not  seem to work properly with a KStream - KStream - 
> LeftJoin()
> 
>
> Key: KAFKA-15417
> URL: https://issues.apache.org/jira/browse/KAFKA-15417
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.4.0
>Reporter: Victor van den Hoven
>Assignee: Victor van den Hoven
>Priority: Major
> Attachments: Afbeelding 1.png, SimpleStreamTopology.java, 
> SimpleStreamTopologyTest.java
>
>
> In Kafka-streams 3.4.0 :
> According to the javadoc of JoinWindows:
> _There are three different window configurations supported:_
>  * _before = after = time-difference_
>  * _before = 0 and after = time-difference_
>  * _*before = time-difference and after = 0*_
>  
> However if I use a joinWindow with *before = time-difference and after = 0* 
> on a kstream-kstream-leftjoin the *after=0* part does not seem to work.
> When using _stream1.leftjoin(stream2, joinWindow)_ with 
> {_}joinWindow{_}.{_}after=0 and joinWindow.before=30s{_}, any new message on 
> stream 1 that can not be joined with any messages on stream2 should be joined 
> with a null-record after the _joinWindow.after_ has ended and a new message 
> has arrived on stream1.
> It does not.
> Only if the new message arrives after the value of _joinWindow.before_ the 
> previous message will be joined with a null-record.
>  
> Attached you can find two files with a TopologyTestDriver Unit test to 
> reproduce.
> topology:   stream1.leftjoin( stream2, joiner, joinwindow)
> joinWindow has before=5000ms and after=0
> message1(key1) ->  stream1
> after 4000ms message2(key2) -> stream1  ->  NO null-record join was made, but 
> the after period was expired.
> after 4900ms message2(key2) -> stream1  ->  NO null-record join was made, but 
> the after period was expired.
> after 5000ms message2(key2) -> stream1  ->  A null-record join was made,  
> before period was expired.
> after 6000ms message2(key2) -> stream1  ->  A null-record join was made,  
> before period was expired.
>  





[jira] [Updated] (KAFKA-15417) JoinWindow does not seem to work properly with a KStream - KStream - LeftJoin()

2023-09-25 Thread Victor van den Hoven (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victor van den Hoven updated KAFKA-15417:
-
Attachment: Afbeelding 1.png

> JoinWindow does not  seem to work properly with a KStream - KStream - 
> LeftJoin()
> 
>
> Key: KAFKA-15417
> URL: https://issues.apache.org/jira/browse/KAFKA-15417
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.4.0
>Reporter: Victor van den Hoven
>Assignee: Victor van den Hoven
>Priority: Major
> Attachments: Afbeelding 1.png, SimpleStreamTopology.java, 
> SimpleStreamTopologyTest.java
>
>
> In Kafka-streams 3.4.0 :
> According to the javadoc of JoinWindows:
> _There are three different window configurations supported:_
>  * _before = after = time-difference_
>  * _before = 0 and after = time-difference_
>  * _*before = time-difference and after = 0*_
>  
> However if I use a joinWindow with *before = time-difference and after = 0* 
> on a kstream-kstream-leftjoin the *after=0* part does not seem to work.
> When using _stream1.leftjoin(stream2, joinWindow)_ with 
> {_}joinWindow{_}.{_}after=0 and joinWindow.before=30s{_}, any new message on 
> stream 1 that can not be joined with any messages on stream2 should be joined 
> with a null-record after the _joinWindow.after_ has ended and a new message 
> has arrived on stream1.
> It does not.
> Only if the new message arrives after the value of _joinWindow.before_ the 
> previous message will be joined with a null-record.
>  
> Attached you can find two files with a TopologyTestDriver Unit test to 
> reproduce.
> topology:   stream1.leftjoin( stream2, joiner, joinwindow)
> joinWindow has before=5000ms and after=0
> message1(key1) ->  stream1
> after 4000ms message2(key2) -> stream1  ->  NO null-record join was made, but 
> the after period was expired.
> after 4900ms message2(key2) -> stream1  ->  NO null-record join was made, but 
> the after period was expired.
> after 5000ms message2(key2) -> stream1  ->  A null-record join was made,  
> before period was expired.
> after 6000ms message2(key2) -> stream1  ->  A null-record join was made,  
> before period was expired.
>  





[jira] [Commented] (KAFKA-15492) Enable spotbugs when building with Java 21

2023-09-25 Thread Ismael Juma (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17768681#comment-17768681
 ] 

Ismael Juma commented on KAFKA-15492:
-

Relevant issue upstream: https://github.com/spotbugs/spotbugs/issues/2567

> Enable spotbugs when building with Java 21
> --
>
> Key: KAFKA-15492
> URL: https://issues.apache.org/jira/browse/KAFKA-15492
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Priority: Major
>
> The latest version of spotbugs (4.7.3) doesn't support Java 21. In order not 
> to delay Java 21 support, we disabled spotbugs when building with Java 21. 
> This should be reverted once we upgrade to a version of spotbugs that 
> supports Java 21.





[jira] [Updated] (KAFKA-15492) Enable spotbugs when building with Java 21

2023-09-25 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-15492:

Fix Version/s: 3.7.0

> Enable spotbugs when building with Java 21
> --
>
> Key: KAFKA-15492
> URL: https://issues.apache.org/jira/browse/KAFKA-15492
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Priority: Major
> Fix For: 3.7.0
>
>
> The latest version of spotbugs (4.7.3) doesn't support Java 21. In order not 
> to delay Java 21 support, we disabled spotbugs when building with Java 21. 
> This should be reverted once we upgrade to a version of spotbugs that 
> supports Java 21.





[jira] [Commented] (KAFKA-15495) KRaft partition truncated when the only ISR member restarts with an empty disk

2023-09-25 Thread Ron Dagostino (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17768674#comment-17768674
 ] 

Ron Dagostino commented on KAFKA-15495:
---

Thanks, Ismael.  Yes, I read [the 
KIP|https://cwiki.apache.org/confluence/display/KAFKA/KIP-966%3A+Eligible+Leader+Replicas]
 and [Jack's blog post about 
it|https://jack-vanlightly.com/blog/2023/8/17/kafka-kip-966-fixing-the-last-replica-standing-issue]
 last night after I posted this, and I have asked some folks who have been 
involved in that discussion if the broker epoch communicated in the broker 
registration request via the clean shutdown file might also serve to give the 
controller a signal that the disk is new.  I'll comment more here soon.

> KRaft partition truncated when the only ISR member restarts with an empty disk
> --
>
> Key: KAFKA-15495
> URL: https://issues.apache.org/jira/browse/KAFKA-15495
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.3.2, 3.4.1, 3.6.0, 3.5.1
>Reporter: Ron Dagostino
>Priority: Critical
>
> Assume a topic-partition in KRaft has just a single leader replica in the 
> ISR.  Assume next that this replica goes offline.  This replica's log will 
> define the contents of that partition when the replica restarts, which is 
> correct behavior.  However, assume now that the replica has a disk failure, 
> and we then replace the failed disk with a new, empty disk that we also 
> format with the storage tool so it has the correct cluster ID.  If we then 
> restart the broker, the topic-partition will have no data in it, and any 
> other replicas that might exist will truncate their logs to match, which 
> results in data loss.  See below for a step-by-step demo of how to reproduce 
> this.
> [KIP-858: Handle JBOD broker disk failure in 
> KRaft|https://cwiki.apache.org/confluence/display/KAFKA/KIP-858%3A+Handle+JBOD+broker+disk+failure+in+KRaft]
>  introduces the concept of a Disk UUID that we can use to solve this problem. 
>  Specifically, when the leader restarts with an empty (but 
> correctly-formatted) disk, the actual UUID associated with the disk will be 
> different.  The controller will notice upon broker re-registration that its 
> disk UUID differs from what was previously registered.  Right now we have no 
> way of detecting this situation, but the disk UUID gives us that capability.
> STEPS TO REPRODUCE:
> Create a single broker cluster with single controller.  The standard files 
> under config/kraft work well:
> bin/kafka-storage.sh random-uuid
> J8qXRwI-Qyi2G0guFTiuYw
> #ensure we start clean
> /bin/rm -rf /tmp/kraft-broker-logs /tmp/kraft-controller-logs
> bin/kafka-storage.sh format --cluster-id J8qXRwI-Qyi2G0guFTiuYw --config 
> config/kraft/controller.properties
> bin/kafka-storage.sh format --cluster-id J8qXRwI-Qyi2G0guFTiuYw --config 
> config/kraft/broker.properties
> bin/kafka-server-start.sh config/kraft/controller.properties
> bin/kafka-server-start.sh config/kraft/broker.properties
> bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic foo1 
> --partitions 1 --replication-factor 1
> #create __consumer-offsets topics
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic foo1 
> --from-beginning
> ^C
> #confirm that __consumer_offsets topic partitions are all created and on 
> broker with node id 2
> bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe
> Now create 2 more brokers, with node IDs 11 and 12
> cat config/kraft/broker.properties | sed 's/node.id=2/node.id=11/' | sed 
> 's/localhost:9092/localhost:9011/g' |  sed 
> 's#log.dirs=/tmp/kraft-broker-logs#log.dirs=/tmp/kraft-broker-logs11#' > 
> config/kraft/broker11.properties
> cat config/kraft/broker.properties | sed 's/node.id=2/node.id=12/' | sed 
> 's/localhost:9092/localhost:9012/g' |  sed 
> 's#log.dirs=/tmp/kraft-broker-logs#log.dirs=/tmp/kraft-broker-logs12#' > 
> config/kraft/broker12.properties
> #ensure we start clean
> /bin/rm -rf /tmp/kraft-broker-logs11 /tmp/kraft-broker-logs12
> bin/kafka-storage.sh format --cluster-id J8qXRwI-Qyi2G0guFTiuYw --config 
> config/kraft/broker11.properties
> bin/kafka-storage.sh format --cluster-id J8qXRwI-Qyi2G0guFTiuYw --config 
> config/kraft/broker12.properties
> bin/kafka-server-start.sh config/kraft/broker11.properties
> bin/kafka-server-start.sh config/kraft/broker12.properties
> #create a topic with a single partition replicated on two brokers
> bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic foo2 
> --partitions 1 --replication-factor 2
> #reassign partitions onto brokers with Node IDs 11 and 12
> echo '{"partitions":[{"topic": "foo2","partition": 0,"replicas": [11,12]}], 
> "version":1}' > /tmp/reassign.json
> bin/kafka-reassign-partitions.sh --bootstrap-server 

[GitHub] [kafka] jlprat commented on pull request #14434: KAFKA-15498: bump snappy-java version to 1.1.10.4

2023-09-25 Thread via GitHub


jlprat commented on PR #14434:
URL: https://github.com/apache/kafka/pull/14434#issuecomment-1733618819

   If I read the tests right, it seems it is now flaky





[jira] [Comment Edited] (KAFKA-15417) JoinWindow does not seem to work properly with a KStream - KStream - LeftJoin()

2023-09-25 Thread Victor van den Hoven (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17768550#comment-17768550
 ] 

Victor van den Hoven edited comment on KAFKA-15417 at 9/25/23 12:26 PM:


What to do with "Flaky tests"?

 

I do not think my PR has anything to do with it.

 

Just commit and try again and pray?

 

 


was (Author: victorvandenhoven):
Not sure what I can do about this:

> Task :streams:test 
> org.apache.kafka.streams.integration.EOSUncleanShutdownIntegrationTest.shouldWorkWithUncleanShutdownWipeOutStateStore[exactly_once]
>  failed, log available in 
> /home/jenkins/jenkins-agent/workspace/Kafka_kafka-pr_PR-14426@2/streams/build/reports/testOutput/org.apache.kafka.streams.integration.EOSUncleanShutdownIntegrationTest.shouldWorkWithUncleanShutdownWipeOutStateStore[exactly_once].test.stdout
>  Gradle Test Run :streams:test > Gradle Test Executor 85 > 
> EOSUncleanShutdownIntegrationTest > [exactly_once] > 
> shouldWorkWithUncleanShutdownWipeOutStateStore[exactly_once] FAILED

:(

> JoinWindow does not  seem to work properly with a KStream - KStream - 
> LeftJoin()
> 
>
> Key: KAFKA-15417
> URL: https://issues.apache.org/jira/browse/KAFKA-15417
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 3.4.0
>Reporter: Victor van den Hoven
>Assignee: Victor van den Hoven
>Priority: Major
> Attachments: SimpleStreamTopology.java, SimpleStreamTopologyTest.java
>
>
> In Kafka-streams 3.4.0 :
> According to the javadoc of the Joinwindow:
> _There are three different window configuration supported:_
>  * _before = after = time-difference_
>  * _before = 0 and after = time-difference_
>  * _*before = time-difference and after = 0*_
>  
> However if I use a joinWindow with *before = time-difference and after = 0* 
> on a kstream-kstream-leftjoin the *after=0* part does not seem to work.
> When using _stream1.leftjoin(stream2, joinWindow)_ with 
> {_}joinWindow{_}.{_}after=0 and joinWindow.before=30s{_}, any new message on 
> stream 1 that can not be joined with any messages on stream2 should be joined 
> with a null-record after the _joinWindow.after_ has ended and a new message 
> has arrived on stream1.
> It does not.
> Only if the new message arrives after the value of _joinWindow.before_ the 
> previous message will be joined with a null-record.
>  
> Attached you can find two files with a TopologyTestDriver Unit test to 
> reproduce.
> topology:   stream1.leftjoin( stream2, joiner, joinwindow)
> joinWindow has before=5000ms and after=0
> message1(key1) ->  stream1
> after 4000ms message2(key2) -> stream1  ->  NO null-record join was made, but 
> the after period was expired.
> after 4900ms message2(key2) -> stream1  ->  NO null-record join was made, but 
> the after period was expired.
> after 5000ms message2(key2) -> stream1  ->  A null-record join was made,  
> before period was expired.
> after 6000ms message2(key2) -> stream1  ->  A null-record join was made,  
> before period was expired.
>  





[GitHub] [kafka] tinaselenge closed pull request #14184: KAFKA-15201: When git fails, the script goes into a loop

2023-09-25 Thread via GitHub


tinaselenge closed pull request #14184: KAFKA-15201: When git fails, the script 
goes into a loop
URL: https://github.com/apache/kafka/pull/14184





[GitHub] [kafka] ijuma commented on pull request #14440: [KAFKA-15117] In TestSslUtils set SubjectAlternativeNames to null if there are no hostnames

2023-09-25 Thread via GitHub


ijuma commented on PR #14440:
URL: https://github.com/apache/kafka/pull/14440#issuecomment-1733599894

   I merged a PR that introduced a conflict - I resolved it since it was 
trivial.





[jira] [Assigned] (KAFKA-15201) When git fails, script goes into a loop

2023-09-25 Thread Gantigmaa Selenge (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gantigmaa Selenge reassigned KAFKA-15201:
-

Assignee: (was: Gantigmaa Selenge)

> When git fails, script goes into a loop
> ---
>
> Key: KAFKA-15201
> URL: https://issues.apache.org/jira/browse/KAFKA-15201
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Divij Vaidya
>Priority: Major
>
> When the git push to remote fails (let's say with unauthenticated exception), 
> then the script runs into a loop. It should not retry and fail gracefully 
> instead.





[GitHub] [kafka] ijuma commented on pull request #14434: KAFKA-15498: bump snappy-java version to 1.1.10.4

2023-09-25 Thread via GitHub


ijuma commented on PR #14434:
URL: https://github.com/apache/kafka/pull/14434#issuecomment-1733595038

   Looks like this regressed? So we'll have to work with upstream to figure out 
what happened.





[GitHub] [kafka] ijuma merged pull request #14433: KAFKA-15485: Support building with Java 21 (3/3)

2023-09-25 Thread via GitHub


ijuma merged PR #14433:
URL: https://github.com/apache/kafka/pull/14433





[GitHub] [kafka] ijuma commented on a diff in pull request #14433: KAFKA-15485: Support building with Java 21 (3/3)

2023-09-25 Thread via GitHub


ijuma commented on code in PR #14433:
URL: https://github.com/apache/kafka/pull/14433#discussion_r1335774192


##
build.gradle:
##
@@ -232,7 +232,10 @@ subprojects {
 
   apply plugin: 'java-library'
   apply plugin: 'checkstyle'
-  apply plugin: "com.github.spotbugs"
+
+  // spotbugs doesn't support Java 21 yet
+  if (!JavaVersion.current().isCompatibleWith(JavaVersion.VERSION_21))

Review Comment:
   I see this as a short term thing, so didn't think it was worth it. I expect 
spotbugs to have a release that works with Java 21 shortly. If it doesn't 
happen in the next few weeks, happy to add the warning. Is that a reasonable 
compromise?






[GitHub] [kafka] satishd merged pull request #14407: KAFKA-15479: Remote log segments should be considered once for retention breach

2023-09-25 Thread via GitHub


satishd merged PR #14407:
URL: https://github.com/apache/kafka/pull/14407





[GitHub] [kafka] satishd commented on pull request #14407: KAFKA-15479: Remote log segments should be considered once for retention breach

2023-09-25 Thread via GitHub


satishd commented on PR #14407:
URL: https://github.com/apache/kafka/pull/14407#issuecomment-1733562853

   Merging it to trunk as the failed tests are not related to the changes 
introduced in this PR.





[jira] [Created] (KAFKA-15500) Code bug in SslPrincipalMapper.java

2023-09-25 Thread Svyatoslav (Jira)
Svyatoslav created KAFKA-15500:
--

 Summary: Code bug in SslPrincipalMapper.java
 Key: KAFKA-15500
 URL: https://issues.apache.org/jira/browse/KAFKA-15500
 Project: Kafka
  Issue Type: Bug
  Components: clients, security
Affects Versions: 3.5.1
Reporter: Svyatoslav


Code bug in:

if (toLowerCase && result != null) {
    result = result.toLowerCase(Locale.ENGLISH);
} else if (toUpperCase & result != null) {
    result = result.toUpperCase(Locale.ENGLISH);
}
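For context on why the single `&` reads as a bug: in Java, `&` on booleans evaluates both operands, while `&&` short-circuits. A minimal stdlib-only demo (hypothetical class name) makes the difference observable through a side-effecting operand:

```java
public class AndOperatorDemo {
    static int calls = 0;

    // side-effecting operand so we can observe whether it was evaluated
    static boolean touched() {
        calls++;
        return true;
    }

    static int[] runDemo() {
        calls = 0;
        boolean a = false && touched(); // short-circuits: touched() is skipped
        int afterLogicalAnd = calls;    // still 0
        boolean b = false & touched();  // eager: touched() runs anyway
        int afterEagerAnd = calls;      // now 1
        return new int[]{afterLogicalAnd, afterEagerAnd};
    }

    public static void main(String[] args) {
        int[] r = runDemo();
        System.out.println(r[0]); // prints 0
        System.out.println(r[1]); // prints 1
    }
}
```

In the snippet above the `&` variant happens to be harmless, since `result != null` has no side effects, so the proposed change is a consistency fix rather than a behavior change.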





[GitHub] [kafka] divijvaidya commented on a diff in pull request #14433: KAFKA-15485: Support building with Java 21 (3/3)

2023-09-25 Thread via GitHub


divijvaidya commented on code in PR #14433:
URL: https://github.com/apache/kafka/pull/14433#discussion_r1335783017


##
build.gradle:
##
@@ -232,7 +232,10 @@ subprojects {
 
   apply plugin: 'java-library'
   apply plugin: 'checkstyle'
-  apply plugin: "com.github.spotbugs"
+
+  // spotbugs doesn't support Java 21 yet
+  if (!JavaVersion.current().isCompatibleWith(JavaVersion.VERSION_21))

Review Comment:
   Sure. As I mentioned, I am fine with not adding this as well.






[GitHub] [kafka] cadonna opened a new pull request, #14442: KAFKA-10199: Do not unlock state directories of tasks in state updater

2023-09-25 Thread via GitHub


cadonna opened a new pull request, #14442:
URL: https://github.com/apache/kafka/pull/14442

   When Streams completes a rebalance, it unlocks the state directories of all 
unassigned tasks. Unfortunately, when the state updater is enabled, Streams 
does not look into the state updater to determine the unassigned tasks.
   This commit corrects this by adding the check.
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] rykovsi opened a new pull request, #14441: Correct SslPrincipalMapper.java

2023-09-25 Thread via GitHub


rykovsi opened a new pull request, #14441:
URL: https://github.com/apache/kafka/pull/14441

   Correct this "&" to "&&"
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[GitHub] [kafka] jlprat commented on pull request #14434: KAFKA-15498: bump snappy-java version to 1.1.10.4

2023-09-25 Thread via GitHub


jlprat commented on PR #14434:
URL: https://github.com/apache/kafka/pull/14434#issuecomment-1733502161

   This one seems to be a failing one: 
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14434/1/testReport/junit/kafka.log/LogCleanerParameterizedIntegrationTest/Build___JDK_11_and_Scala_2_133__codec_snappy/





[GitHub] [kafka] jlprat commented on pull request #14434: KAFKA-15498: bump snappy-java version to 1.1.10.4

2023-09-25 Thread via GitHub


jlprat commented on PR #14434:
URL: https://github.com/apache/kafka/pull/14434#issuecomment-1733500635

   @divijvaidya the link you pasted states "Passed", maybe it was on one of the 
other runs that failed?





[GitHub] [kafka] emissionnebula commented on a diff in pull request #14440: [KAFKA-15117] In TestSslUtils set SubjectAlternativeNames to null if there are no hostnames

2023-09-25 Thread via GitHub


emissionnebula commented on code in PR #14440:
URL: https://github.com/apache/kafka/pull/14440#discussion_r1335745349


##
clients/src/test/java/org/apache/kafka/test/TestSslUtils.java:
##
@@ -399,10 +399,12 @@ public CertificateBuilder(int days, String algorithm) {
 }
 
 public CertificateBuilder sanDnsNames(String... hostNames) throws 
IOException {
-GeneralName[] altNames = new GeneralName[hostNames.length];
-for (int i = 0; i < hostNames.length; i++)
-altNames[i] = new GeneralName(GeneralName.dNSName, 
hostNames[i]);
-subjectAltName = GeneralNames.getInstance(new 
DERSequence(altNames)).getEncoded();
+if (hostNames.length > 0) {
+GeneralName[] altNames = new GeneralName[hostNames.length];
+for (int i = 0; i < hostNames.length; i++)
+altNames[i] = new GeneralName(GeneralName.dNSName, 
hostNames[i]);
+subjectAltName = GeneralNames.getInstance(new 
DERSequence(altNames)).getEncoded();
+}

Review Comment:
   Makes sense. Added the `else` clause. Thank you!






[GitHub] [kafka] rajinisivaram commented on a diff in pull request #14440: [KAFKA-15117] In TestSslUtils set SubjectAlternativeNames to null if there are no hostnames

2023-09-25 Thread via GitHub


rajinisivaram commented on code in PR #14440:
URL: https://github.com/apache/kafka/pull/14440#discussion_r1335740613


##
clients/src/test/java/org/apache/kafka/test/TestSslUtils.java:
##
@@ -399,10 +399,12 @@ public CertificateBuilder(int days, String algorithm) {
 }
 
 public CertificateBuilder sanDnsNames(String... hostNames) throws 
IOException {
-GeneralName[] altNames = new GeneralName[hostNames.length];
-for (int i = 0; i < hostNames.length; i++)
-altNames[i] = new GeneralName(GeneralName.dNSName, 
hostNames[i]);
-subjectAltName = GeneralNames.getInstance(new 
DERSequence(altNames)).getEncoded();
+if (hostNames.length > 0) {
+GeneralName[] altNames = new GeneralName[hostNames.length];
+for (int i = 0; i < hostNames.length; i++)
+altNames[i] = new GeneralName(GeneralName.dNSName, 
hostNames[i]);
+subjectAltName = GeneralNames.getInstance(new 
DERSequence(altNames)).getEncoded();
+}

Review Comment:
   Should we also add an `else` clause that sets `subjectAltName = null` to 
preserve the override semantics if the method is called twice? Just in case 
some test relies on overriding existing hostname in future.
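The override semantics under discussion can be sketched with a stdlib-only stand-in (hypothetical class name; the real builder encodes a BouncyCastle `GeneralNames` structure rather than storing the array):

```java
import java.util.Arrays;

// Stand-in for the CertificateBuilder discussed above. With the suggested
// else clause, an empty varargs call resets the field, so a second
// invocation fully overrides the first instead of keeping stale names.
public class SanBuilderSketch {
    private String[] subjectAltName;

    public SanBuilderSketch sanDnsNames(String... hostNames) {
        if (hostNames.length > 0) {
            subjectAltName = Arrays.copyOf(hostNames, hostNames.length);
        } else {
            subjectAltName = null; // the else clause suggested in review
        }
        return this;
    }

    public String[] subjectAltName() {
        return subjectAltName;
    }

    public static void main(String[] args) {
        SanBuilderSketch b = new SanBuilderSketch();
        b.sanDnsNames("localhost");
        b.sanDnsNames(); // second call overrides with "no names"
        System.out.println(b.subjectAltName() == null); // prints true
    }
}
```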






[GitHub] [kafka] emissionnebula opened a new pull request, #14440: [KAFKA-15117] In TestSslUtils set SubjectAlternativeNames to null if there are no hostnames

2023-09-25 Thread via GitHub


emissionnebula opened a new pull request, #14440:
URL: https://github.com/apache/kafka/pull/14440

   We are currently encoding an empty hostNames array into `subjectAltName` in 
the keystore, which causes a parsing error when the certificates are parsed in 
tests. Up to Java 17, this parsing error was ignored. This PR assigns 
`subjectAltName` to `null` if hostnames are empty.
   





[GitHub] [kafka] divijvaidya commented on pull request #14434: KAFKA-15498: bump snappy-java version to 1.1.10.4

2023-09-25 Thread via GitHub


divijvaidya commented on PR #14434:
URL: https://github.com/apache/kafka/pull/14434#issuecomment-1733446288

   Some snappy related tests are failing: 
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14434/1/testReport/junit/kafka.log/LogCleanerParameterizedIntegrationTest/Build___JDK_20_and_Scala_2_133__codec_snappy_2/
 





[jira] [Updated] (KAFKA-15498) Upgrade Snappy-Java to 1.1.10.4

2023-09-25 Thread Divij Vaidya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Divij Vaidya updated KAFKA-15498:
-
Summary: Upgrade Snappy-Java to 1.1.10.4  (was: [CVE fix] Upgrade 
Snappy-Java to 1.1.10.4)

> Upgrade Snappy-Java to 1.1.10.4
> ---
>
> Key: KAFKA-15498
> URL: https://issues.apache.org/jira/browse/KAFKA-15498
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.4.1, 3.5.1
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Major
> Fix For: 3.6.0
>
>
> Snappy-java published a new vulnerability
> <[https://github.com/xerial/snappy-java/security/advisories/GHSA-55g7-9cwv-5qfv]>
> that will cause OOM error in the server.
> Kafka is also impacted by this vulnerability since it's like CVE-2023-34455
> <[https://nvd.nist.gov/vuln/detail/CVE-2023-34455]>.
> We'd better bump the snappy-java version to bypass this vulnerability.
> PR <[https://github.com/apache/kafka/pull/14434]> is created to run the CI
> build.





[GitHub] [kafka] kamalcph commented on pull request #14330: KAFKA-15410: Delete records with tiered storage integration test (4/4)

2023-09-25 Thread via GitHub


kamalcph commented on PR #14330:
URL: https://github.com/apache/kafka/pull/14330#issuecomment-1733441920

   > > Folks the test introduced in this PR has been flaky lately - 
https://ge.apache.org/scans/tests?search.rootProjectNames=kafka=Europe/Berlin=org.apache.kafka.tiered.storage.integration.DeleteSegmentsDueToLogStartOffsetBreachTest=executeTieredStorageTest(String)%5B2%5D
   > > CI link - 
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14365/2/testReport/junit/org.apache.kafka.tiered.storage.integration/DeleteSegmentsDueToLogStartOffsetBreachTest/Build___JDK_17_and_Scala_2_13___executeTieredStorageTest_String__quorum_kraft/
   > 
   > Will look into it later today.
   
   Opened #14439 to fix this flaky test.





[GitHub] [kafka] kamalcph opened a new pull request, #14439: KAFKA-15499: Fix the flaky DeleteSegmentsDueToLogStartOffsetBreach test.

2023-09-25 Thread via GitHub


kamalcph opened a new pull request, #14439:
URL: https://github.com/apache/kafka/pull/14439

   DeleteSegmentsDueToLogStartOffsetBreach configures the segment such that it 
can hold at most two record batches, and it asserts the expected 
local-log-start-offset based on the assumption that each segment will contain 
exactly two messages.
   
   During leader switch, the segment can get rotated and may not always contain 
two records. Previously, we were checking whether the expected 
local-log-start-offset is equal to the 
base-offset-of-the-first-local-log-segment. With this patch, we will scan the 
first local-log-segment for the expected offset.
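A minimal sketch of the relaxed check described above (hypothetical names; not the actual patch): rather than requiring the first local segment's base offset to equal the expected local-log-start-offset, accept any first segment whose offset range covers the expected offset, since segments may rotate early on leader switch.

```java
// Hypothetical sketch of the relaxed assertion: a first local segment
// holding offsets [baseOffset, nextOffset) is accepted if the expected
// local-log-start-offset falls inside that range.
public class SegmentOffsetCheck {
    static boolean containsExpectedOffset(long baseOffset, long nextOffset, long expected) {
        return expected >= baseOffset && expected < nextOffset;
    }

    public static void main(String[] args) {
        // after an early rotation the first segment holds offsets [7, 9);
        // the expected local-log-start-offset is 8
        System.out.println(containsExpectedOffset(7L, 9L, 8L)); // prints true
        // the previous strict equality check (base offset == 8) fails here
        System.out.println(7L == 8L); // prints false
    }
}
```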
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   





[jira] [Created] (KAFKA-15499) Fix the flaky DeleteSegmentsDueToLogStartOffsetBreachTest

2023-09-25 Thread Kamal Chandraprakash (Jira)
Kamal Chandraprakash created KAFKA-15499:


 Summary: Fix the flaky DeleteSegmentsDueToLogStartOffsetBreachTest
 Key: KAFKA-15499
 URL: https://issues.apache.org/jira/browse/KAFKA-15499
 Project: Kafka
  Issue Type: Test
Reporter: Kamal Chandraprakash
Assignee: Kamal Chandraprakash


Flaky test failure is reported in 
[https://github.com/apache/kafka/pull/14330#issuecomment-1717195473]



{code:java}
java.lang.AssertionError: [BrokerId=0] The base offset of the first log segment 
of topicA-0 in the log directory is 7 which is smaller than the expected offset 
8. The directory of topicA-0 is made of the following files: 
leader-epoch-checkpoint
0009.timeindex
remote_log_snapshot
0009.index
0007.timeindex
0007.index
0007.snapshot
0005.snapshot
0009.log
partition.metadata
0009.snapshot
0007.log
at 
org.apache.kafka.tiered.storage.utils.BrokerLocalStorage.waitForOffset(BrokerLocalStorage.java:118)
at 
org.apache.kafka.tiered.storage.utils.BrokerLocalStorage.waitForEarliestLocalOffset(BrokerLocalStorage.java:77)
at 
org.apache.kafka.tiered.storage.actions.ProduceAction.doExecute(ProduceAction.java:121)
at 
org.apache.kafka.tiered.storage.TieredStorageTestAction.execute(TieredStorageTestAction.java:25)
at 
org.apache.kafka.tiered.storage.TieredStorageTestHarness.executeTieredStorageTest(TieredStorageTestHarness.java:177){code}




