[jira] [Resolved] (KAFKA-16722) Add ConsumerGroupPartitionAssignor interface

2024-05-29 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16722.
-
  Reviewer: David Jacot
Resolution: Fixed

> Add ConsumerGroupPartitionAssignor interface
> 
>
> Key: KAFKA-16722
> URL: https://issues.apache.org/jira/browse/KAFKA-16722
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Major
> Fix For: 3.8.0
>
>
> Adds the interface 
> `org.apache.kafka.coordinator.group.assignor.ConsumerGroupPartitionAssignor` 
> as described in KIP-932.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16569) Target Assignment Format Change

2024-05-29 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16569.
-
Resolution: Won't Do

> Target Assignment Format Change
> ---
>
> Key: KAFKA-16569
> URL: https://issues.apache.org/jira/browse/KAFKA-16569
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ritika Reddy
>Assignee: Ritika Reddy
>Priority: Major
>
> Currently the assignment is stored as a Map>; we want to change it to a 
> list.
>  





[jira] [Resolved] (KAFKA-16832) LeaveGroup API for upgrading ConsumerGroup

2024-05-29 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16832.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> LeaveGroup API for upgrading ConsumerGroup
> --
>
> Key: KAFKA-16832
> URL: https://issues.apache.org/jira/browse/KAFKA-16832
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dongnuo Lyu
>Assignee: Dongnuo Lyu
>Priority: Major
> Fix For: 3.8.0
>
>






[jira] [Created] (KAFKA-16860) Introduce `group.version` feature flag

2024-05-29 Thread David Jacot (Jira)
David Jacot created KAFKA-16860:
---

 Summary: Introduce `group.version` feature flag
 Key: KAFKA-16860
 URL: https://issues.apache.org/jira/browse/KAFKA-16860
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot
Assignee: David Jacot
 Fix For: 3.8.0








[jira] [Resolved] (KAFKA-16371) Unstable committed offsets after triggering commits where metadata for some partitions are over the limit

2024-05-27 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16371.
-
Fix Version/s: 3.8.0
   3.7.1
 Assignee: David Jacot
   Resolution: Fixed

> Unstable committed offsets after triggering commits where metadata for some 
> partitions are over the limit
> -
>
> Key: KAFKA-16371
> URL: https://issues.apache.org/jira/browse/KAFKA-16371
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 3.7.0
>Reporter: mlowicki
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.8.0, 3.7.1
>
>
> Issue is reproducible with simple CLI tool - 
> [https://gist.github.com/mlowicki/c3b942f5545faced93dc414e01a2da70]
> {code:java}
> #!/usr/bin/env bash
> for i in {1..100}
> do
> kafka-committer --bootstrap "ADDR:9092" --topic "TOPIC" --group foo 
> --metadata-min 6000 --metadata-max 1 --partitions 72 --fetch
> done{code}
> What it does is that it initially fetches committed offsets and then tries to 
> commit for multiple partitions. If some of the commits have metadata over the 
> allowed limit, then:
> 1. I see errors about too-large commits (expected)
> 2. On another run, the tool fails at the stage of fetching commits with (this 
> is the problem):
> {code:java}
> config: ClientConfig { conf_map: { "group.id": "bar", "bootstrap.servers": 
> "ADDR:9092", }, log_level: Error, }
> fetching committed offsets..
> Error: Meta data fetch error: OperationTimedOut (Local: Timed out) Caused by: 
> OperationTimedOut (Local: Timed out){code}
> On the Kafka side I see _unstable_offset_commits_ errors reported by our 
> internal metric which is derived from:
> {noformat}
>  
> kafka.network:type=RequestMetrics,name=ErrorsPerSec,request=X,error=Y{noformat}
> Increasing the timeout doesn't help, and the only solution I've found is to 
> trigger commits for all partitions with metadata below the limit, or to use: 
> {code:java}
> isolation.level=read_uncommitted{code}
>  
> I don't know that code very well but 
> [https://github.com/apache/kafka/blob/3.7/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L492-L496]
>  seems fishy:
> {code:java}
>     if (isTxnOffsetCommit) {
>       addProducerGroup(producerId, group.groupId)
>       group.prepareTxnOffsetCommit(producerId, offsetMetadata)
>     } else {
>       group.prepareOffsetCommit(offsetMetadata)
>     }{code}
> as it's using _offsetMetadata_ and not _filteredOffsetMetadata_, while I see 
> that when removing those pending commits we use the filtered offset metadata 
> around 
> [https://github.com/apache/kafka/blob/3.7/core/src/main/scala/kafka/coordinator/group/GroupMetadataManager.scala#L397-L422]
>  
> {code:java}
>       val responseError = group.inLock {
>         if (status.error == Errors.NONE) {
>           if (!group.is(Dead)) {
>             filteredOffsetMetadata.forKeyValue { (topicIdPartition, 
> offsetAndMetadata) =>
>               if (isTxnOffsetCommit)
>                 group.onTxnOffsetCommitAppend(producerId, topicIdPartition, 
> CommitRecordMetadataAndOffset(Some(status.baseOffset), offsetAndMetadata))
>               else
>                 group.onOffsetCommitAppend(topicIdPartition, 
> CommitRecordMetadataAndOffset(Some(status.baseOffset), offsetAndMetadata))
>             }
>           }
>           // Record the number of offsets committed to the log
>           offsetCommitsSensor.record(records.size)
>           Errors.NONE
>         } else {
>           if (!group.is(Dead)) {
>             if (!group.hasPendingOffsetCommitsFromProducer(producerId))
>               removeProducerGroup(producerId, group.groupId)
>             filteredOffsetMetadata.forKeyValue { (topicIdPartition, 
> offsetAndMetadata) =>
>               if (isTxnOffsetCommit)
>                 group.failPendingTxnOffsetCommit(producerId, topicIdPartition)
>               else
>                 group.failPendingOffsetWrite(topicIdPartition, 
> offsetAndMetadata)
>             }
>           }
> {code}
> so the problem might be related to not cleaning up the data structure with 
> pending commits properly.
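The suspected bug class above can be sketched in a few lines. This is an illustrative model, not the actual GroupMetadataManager code: if pending commits are prepared from the unfiltered offset map but later completed or failed using only the filtered map, oversize entries stay pending forever, which would keep committed offsets "unstable" for read_committed fetchers. The size limit and map shapes here are assumptions for the demonstration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class PendingCommitSketch {
    static final int MAX_METADATA_SIZE = 4096; // hypothetical limit

    public static void main(String[] args) {
        Map<String, String> offsetMetadata = new HashMap<>();
        offsetMetadata.put("topic-0", "small");
        offsetMetadata.put("topic-1", "x".repeat(MAX_METADATA_SIZE + 1)); // too large

        // Offsets whose metadata passes the size check.
        Map<String, String> filtered = offsetMetadata.entrySet().stream()
            .filter(e -> e.getValue().length() <= MAX_METADATA_SIZE)
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));

        // Buggy pattern: prepare pending commits from the UNFILTERED map...
        Map<String, String> pending = new HashMap<>(offsetMetadata);
        // ...but clean up afterwards using only the FILTERED map.
        filtered.keySet().forEach(pending::remove);

        // The oversize partition is left dangling in the pending set.
        if (!pending.containsKey("topic-1"))
            throw new AssertionError("expected a dangling pending commit");
        System.out.println("dangling pending commits: " + pending.keySet());
    }
}
```
Preparing and cleaning up from the same (filtered) map would leave the pending set empty here, which matches the fix direction the analysis suggests.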





[jira] [Created] (KAFKA-16846) Should TxnOffsetCommit API fail all the offsets if any fails the validation?

2024-05-27 Thread David Jacot (Jira)
David Jacot created KAFKA-16846:
---

 Summary: Should TxnOffsetCommit API fail all the offsets if any 
fails the validation?
 Key: KAFKA-16846
 URL: https://issues.apache.org/jira/browse/KAFKA-16846
 Project: Kafka
  Issue Type: Improvement
Reporter: David Jacot


While working on KAFKA-16371, we realized that the handling of 
INVALID_COMMIT_OFFSET_SIZE errors while committing transaction offsets is a bit 
inconsistent between the server and the client. On the server, the offsets are 
validated independently from each other. Hence if two offsets A and B are 
committed and A fails the validation, B is still written to the log as part of 
the transaction. On the client, when INVALID_COMMIT_OFFSET_SIZE is received, 
the transaction transitions to the fatal state. Hence the transaction will 
eventually be aborted.

The client side API is quite limiting here because it does not return an error 
per committed offset. It is all or nothing. From this point of view, the 
current behaviour is correct. It seems that we could either change the API and 
let the user decide what to do; or we could strengthen the validation on the 
server to fail all the offsets if any of them fails (all or nothing). We could 
also leave it as it is.
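The stricter server-side option can be sketched as follows. This is a hypothetical helper illustrating the "all or nothing" validation discussed above, not an actual Kafka API; the metadata size limit is an assumption.

```java
import java.util.Map;

public class TxnCommitValidation {
    static final int MAX_METADATA_SIZE = 4096; // hypothetical limit

    // Either every offset passes validation, or the entire batch is rejected.
    static boolean validateAllOrNothing(Map<String, String> offsets) {
        return offsets.values().stream().allMatch(m -> m.length() <= MAX_METADATA_SIZE);
    }

    public static void main(String[] args) {
        Map<String, String> offsets = Map.of(
            "A", "ok",
            "B", "y".repeat(MAX_METADATA_SIZE + 1));
        // Current behavior would persist A and fail only B; the stricter
        // variant fails the whole batch so the client's fatal-error handling
        // matches what actually hit the log.
        if (validateAllOrNothing(offsets))
            throw new AssertionError("batch should be rejected as a whole");
        System.out.println("batch rejected as a whole");
    }
}
```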





[jira] [Resolved] (KAFKA-16625) Reverse Lookup Partition to Member in Assignors

2024-05-25 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16625.
-
Fix Version/s: 3.8.0
 Reviewer: David Jacot
   Resolution: Fixed

> Reverse Lookup Partition to Member in Assignors
> ---
>
> Key: KAFKA-16625
> URL: https://issues.apache.org/jira/browse/KAFKA-16625
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ritika Reddy
>Assignee: Ritika Reddy
>Priority: Major
> Fix For: 3.8.0
>
>
> Calculating unassigned partitions within the Uniform assignor is a costly 
> process; this can be improved by using a reverse lookup map between 
> topicIdPartition and the member.
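The reverse-lookup idea can be sketched like this. It is illustrative only, not the actual assignor code; partition ids and member names are made up. Instead of scanning every member's assignment to find unassigned partitions, a partition-to-member map is built once and probed directly.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReverseLookupSketch {
    public static void main(String[] args) {
        Map<String, List<Integer>> memberToPartitions = Map.of(
            "member-1", List.of(0, 1),
            "member-2", List.of(2));

        // Build the reverse map once, O(total assigned partitions).
        Map<Integer, String> partitionToMember = new HashMap<>();
        memberToPartitions.forEach((member, parts) ->
            parts.forEach(p -> partitionToMember.put(p, member)));

        // "Is partition 3 assigned?" is now a single map lookup instead of
        // a scan over every member's assignment.
        if (partitionToMember.containsKey(3))
            throw new AssertionError("partition 3 should be unassigned");
        if (!"member-2".equals(partitionToMember.get(2)))
            throw new AssertionError("partition 2 should belong to member-2");
        System.out.println("partition 3 is unassigned");
    }
}
```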





[jira] [Resolved] (KAFKA-16831) CoordinatorRuntime should initialize MemoryRecordsBuilder with max batch size write limit

2024-05-24 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16831.
-
Fix Version/s: 3.8.0
 Reviewer: David Jacot
   Resolution: Fixed

> CoordinatorRuntime should initialize MemoryRecordsBuilder with max batch size 
> write limit
> -
>
> Key: KAFKA-16831
> URL: https://issues.apache.org/jira/browse/KAFKA-16831
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jeff Kim
>Assignee: Jeff Kim
>Priority: Major
> Fix For: 3.8.0
>
>
> Otherwise, we default to the min buffer size of 16384 for the write limit. 
> This causes the coordinator to throw RecordTooLargeException even when it's 
> under the 1MB max batch size limit.
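The mismatch can be shown in a simplified form. This is not the actual MemoryRecordsBuilder API, just a sketch of the limit check: if the write limit is taken from the initial buffer size rather than the max batch size, records between 16 KiB and 1 MiB are wrongly rejected even though the buffer itself can grow.

```java
public class WriteLimitSketch {
    static final int MIN_BUFFER_SIZE = 16384;      // 16 KiB initial buffer
    static final int MAX_BATCH_SIZE = 1024 * 1024; // 1 MiB batch size limit

    // Simplified stand-in for the builder's room check.
    static boolean hasRoomFor(int writeLimit, int recordSize) {
        return recordSize <= writeLimit;
    }

    public static void main(String[] args) {
        int recordSize = 100_000; // well under 1 MiB
        // Buggy: limit derived from the initial buffer size rejects the record.
        if (hasRoomFor(MIN_BUFFER_SIZE, recordSize))
            throw new AssertionError("record should be rejected under the buggy limit");
        // Fixed: limit derived from the max batch size accepts it.
        if (!hasRoomFor(MAX_BATCH_SIZE, recordSize))
            throw new AssertionError("record should fit under the max batch size");
        System.out.println("record accepted under the max-batch-size limit");
    }
}
```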





[jira] [Assigned] (KAFKA-16503) getOrMaybeCreateClassicGroup should not throw GroupIdNotFoundException

2024-05-24 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot reassigned KAFKA-16503:
---

Assignee: David Jacot

> getOrMaybeCreateClassicGroup should not throw GroupIdNotFoundException
> ---
>
> Key: KAFKA-16503
> URL: https://issues.apache.org/jira/browse/KAFKA-16503
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>
> It looks like the `getOrMaybeCreateClassicGroup` method throws a 
> `GroupIdNotFoundException` when the group exists but with the wrong 
> type. As `getOrMaybeCreateClassicGroup` is mainly used in the 
> join-group/sync-group APIs, this seems incorrect. We need to double-check and 
> fix it.





[jira] [Resolved] (KAFKA-16815) Handle FencedInstanceId on heartbeat for new consumer

2024-05-24 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16815.
-
Fix Version/s: 3.8.0
 Reviewer: David Jacot
   Resolution: Fixed

> Handle FencedInstanceId on heartbeat for new consumer
> -
>
> Key: KAFKA-16815
> URL: https://issues.apache.org/jira/browse/KAFKA-16815
> Project: Kafka
>  Issue Type: Task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: kip-848-client-support
> Fix For: 3.8.0
>
>
> With the new consumer group protocol, a member could receive a 
> FencedInstanceIdError in the heartbeat response. This could be the case when 
> an active member using a group instance id is removed from the group by an 
> admin client. If a second member joins with the same instance id, the first 
> member will receive a FencedInstanceId on the next heartbeat response. This 
> should be treated as a fatal error (consumer should not attempt to rejoin). 
> Currently, the FencedInstanceId is not explicitly handled by the client in 
> the HeartbeatRequestManager. It ends up being treated as a fatal error, see 
> [here|https://github.com/apache/kafka/blob/5552f5c26df4eb07b2d6ee218e4a29e4ca790d5c/clients/src/main/java/org/apache/kafka/clients/consumer/internals/HeartbeatRequestManager.java#L417]
>  (just because it lands in the "unexpected" error category). We should handle 
> it explicitly, just to make sure that we express that it is an expected 
> error: log a proper message for it and fail (handleFatalFailure). We should 
> also ensure that the error is included in the tests that cover the HB request 
> error handling 
> ([here|https://github.com/apache/kafka/blob/5552f5c26df4eb07b2d6ee218e4a29e4ca790d5c/clients/src/test/java/org/apache/kafka/clients/consumer/internals/HeartbeatRequestManagerTest.java#L798]).
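The explicit handling proposed above might look roughly like this. All names here are hypothetical; the real logic lives in HeartbeatRequestManager. FENCED_INSTANCE_ID gets its own branch with a descriptive log message before the fatal failure, rather than falling into the generic "unexpected error" bucket.

```java
public class HeartbeatErrorSketch {
    enum Error { NONE, FENCED_INSTANCE_ID, UNKNOWN_SERVER_ERROR }

    static String handle(Error error) {
        switch (error) {
            case NONE:
                return "ok";
            case FENCED_INSTANCE_ID:
                // Expected when another member joined with the same
                // group.instance.id; fatal, do not attempt to rejoin.
                return "fatal: fenced instance id, member will not rejoin";
            default:
                // Everything else still lands in the generic fatal bucket.
                return "fatal: unexpected error " + error;
        }
    }

    public static void main(String[] args) {
        if (!handle(Error.FENCED_INSTANCE_ID).contains("fenced"))
            throw new AssertionError("fenced instance id should be handled explicitly");
        if (!handle(Error.NONE).equals("ok"))
            throw new AssertionError("NONE should be a no-op");
        System.out.println(handle(Error.FENCED_INSTANCE_ID));
    }
}
```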





[jira] [Resolved] (KAFKA-16626) Uuid to String for subscribed topic names in assignment spec

2024-05-24 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16626.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Uuid to String for subscribed topic names in assignment spec
> 
>
> Key: KAFKA-16626
> URL: https://issues.apache.org/jira/browse/KAFKA-16626
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ritika Reddy
>Assignee: Jeff Kim
>Priority: Major
> Fix For: 3.8.0
>
>
> In creating the assignment spec from the existing consumer subscription 
> metadata, quite some time is spent converting the String to a Uuid.
> Change from Uuid to String for the subscribed topics in the assignment spec 
> and convert on the fly.





[jira] [Resolved] (KAFKA-16793) Heartbeat API for upgrading ConsumerGroup

2024-05-23 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16793.
-
Fix Version/s: 3.8.0
 Reviewer: David Jacot
   Resolution: Fixed

> Heartbeat API for upgrading ConsumerGroup
> -
>
> Key: KAFKA-16793
> URL: https://issues.apache.org/jira/browse/KAFKA-16793
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dongnuo Lyu
>Assignee: Dongnuo Lyu
>Priority: Major
> Fix For: 3.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16762) SyncGroup API for upgrading ConsumerGroup

2024-05-17 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16762.
-
Fix Version/s: 3.8.0
 Reviewer: David Jacot
   Resolution: Fixed

> SyncGroup API for upgrading ConsumerGroup
> -
>
> Key: KAFKA-16762
> URL: https://issues.apache.org/jira/browse/KAFKA-16762
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dongnuo Lyu
>Assignee: Dongnuo Lyu
>Priority: Major
> Fix For: 3.8.0
>
>






[jira] [Created] (KAFKA-16770) Coalesce records into bigger batches

2024-05-15 Thread David Jacot (Jira)
David Jacot created KAFKA-16770:
---

 Summary: Coalesce records into bigger batches
 Key: KAFKA-16770
 URL: https://issues.apache.org/jira/browse/KAFKA-16770
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot
Assignee: David Jacot
 Fix For: 3.8.0








[jira] [Commented] (KAFKA-16758) Extend Consumer#close with option to leave the group or not

2024-05-14 Thread David Jacot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846237#comment-17846237
 ] 

David Jacot commented on KAFKA-16758:
-

[~ableegoldman] It will be straightforward in the new consumer too. Adding the 
"newbie" label makes sense. I also think that this one would be a perfect KIP 
for [~lianetm] too :).

> Extend Consumer#close with option to leave the group or not
> ---
>
> Key: KAFKA-16758
> URL: https://issues.apache.org/jira/browse/KAFKA-16758
> Project: Kafka
>  Issue Type: New Feature
>  Components: consumer
>Reporter: A. Sophie Blee-Goldman
>Priority: Major
>  Labels: needs-kip
>
> See comments on https://issues.apache.org/jira/browse/KAFKA-16514 for the 
> full context.
> Essentially we would get rid of the "internal.leave.group.on.close" config 
> that is used as a backdoor by Kafka Streams right now to prevent closed 
> consumers from leaving the group, thus reducing unnecessary task movements 
> after a simple bounce. 
> This would be replaced by an actual public API that would allow the caller to 
> opt in or out to the LeaveGroup when close is called. This would be similar 
> to the KafkaStreams#close(CloseOptions) API, and in fact would be how that 
> API will be implemented (since it only works for static groups at the moment 
> as noted in KAFKA-16514 )
> This has several benefits over the current situation:
>  # It allows plain consumer apps to opt-out of leaving the group when closed, 
> which is currently not possible through any public API (only an internal 
> backdoor config)
>  # It enables the caller to dynamically select the appropriate action 
> depending on why the client is being closed – for example, you would not want 
> the consumer to leave the group during a simple restart, but would want it to 
> leave the group when shutting down the app or if scaling down the node. This 
> is not possible today, even with the internal config, since configs are 
> immutable
>  # It can be leveraged to "fix" the KafkaStreams#close(closeOptions) API so 
> that the user's choice to leave the group during close will be respected for 
> non-static members
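A close-with-options API could look roughly like the sketch below. This shape is purely hypothetical: no such Consumer#close overload exists yet, and a KIP would settle the real names and semantics. The point is that the leave-group decision becomes a per-call argument rather than an immutable config.

```java
public class CloseOptionsSketch {
    static final class CloseOptions {
        final boolean leaveGroup;
        private CloseOptions(boolean leaveGroup) { this.leaveGroup = leaveGroup; }
        // Opt in or out of sending LeaveGroup on close, per call site,
        // replacing the internal.leave.group.on.close backdoor config.
        static CloseOptions leaveGroup(boolean leave) { return new CloseOptions(leave); }
    }

    // Stand-in for the consumer's close path.
    static String close(CloseOptions options) {
        return options.leaveGroup ? "sent LeaveGroup" : "kept membership";
    }

    public static void main(String[] args) {
        // Simple restart: stay in the group to avoid task movement.
        if (!close(CloseOptions.leaveGroup(false)).equals("kept membership"))
            throw new AssertionError();
        // Scale-down or shutdown: leave so partitions rebalance promptly.
        if (!close(CloseOptions.leaveGroup(true)).equals("sent LeaveGroup"))
            throw new AssertionError();
        System.out.println("close semantics chosen per call site");
    }
}
```
Because the option is passed at call time, the same application can keep membership on a rolling restart but leave on scale-down, which an immutable config cannot express.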





[jira] [Resolved] (KAFKA-16694) Remove rack aware code in assignors temporarily due to performance

2024-05-14 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16694.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Remove rack aware code in assignors temporarily due to performance
> --
>
> Key: KAFKA-16694
> URL: https://issues.apache.org/jira/browse/KAFKA-16694
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ritika Reddy
>Assignee: Ritika Reddy
>Priority: Minor
> Fix For: 3.8.0
>
>






[jira] [Resolved] (KAFKA-15578) Run System Tests for Old protocol in the New Coordinator

2024-05-13 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15578.
-
Resolution: Fixed

> Run System Tests for Old protocol in the New Coordinator
> 
>
> Key: KAFKA-15578
> URL: https://issues.apache.org/jira/browse/KAFKA-15578
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ritika Reddy
>Assignee: Ritika Reddy
>Priority: Major
>  Labels: kip-848-preview
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> Change existing system tests related to the consumer group protocol and group 
> coordinator to test the old protocol running with the new coordinator.





[jira] [Assigned] (KAFKA-14701) Move broker side partition assignor to common

2024-05-13 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot reassigned KAFKA-14701:
---

Affects Version/s: 3.8.0
 Assignee: David Jacot
 Priority: Blocker  (was: Major)

> Move broker side partition assignor to common
> -
>
> Key: KAFKA-14701
> URL: https://issues.apache.org/jira/browse/KAFKA-14701
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.8.0
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Blocker
>
> Before releasing KIP-848, we need to move the server side partition assignor 
> to its final location in common.





[jira] [Resolved] (KAFKA-16117) Add Integration test for checking if the correct assignor is chosen

2024-05-13 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16117.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Add Integration test for checking if the correct assignor is chosen
> ---
>
> Key: KAFKA-16117
> URL: https://issues.apache.org/jira/browse/KAFKA-16117
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ritika Reddy
>Priority: Minor
> Fix For: 3.8.0
>
>
> h4. We are trying to test this section of KIP-848
> h4. Assignor Selection
> The group coordinator has to determine which assignment strategy must be used 
> for the group. The group's members may not have exactly the same assignors at 
> any given point in time - e.g. they may migrate from one assignor to another. 
> The group coordinator will choose the assignor as follows:
>  * A client side assignor is used if possible. This means that a client side 
> assignor must be supported by all the members. If multiple are, it will 
> respect the precedence defined by the members when they advertise their 
> supported client side assignors.
>  * A server side assignor is used otherwise. If multiple server side 
> assignors are specified in the group, the group coordinator uses the most 
> common one. If a member does not provide an assignor, the group coordinator 
> will default to the first one in {{{}group.consumer.assignors{}}}.
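The client-side part of that selection rule can be sketched as follows. This is a simplified illustration, not the coordinator's implementation, and it resolves precedence ties from the first member's preference order as one plausible reading of the rule.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Optional;
import java.util.Set;

public class AssignorSelectionSketch {
    // Each inner list is one member's supported assignors in preference order.
    static Optional<String> selectClientSide(List<List<String>> memberPreferences) {
        if (memberPreferences.isEmpty()) return Optional.empty();
        // Candidates must be supported by every member.
        Set<String> common = new LinkedHashSet<>(memberPreferences.get(0));
        memberPreferences.forEach(common::retainAll);
        // Among the common ones, respect the advertised preference order.
        return memberPreferences.get(0).stream().filter(common::contains).findFirst();
    }

    public static void main(String[] args) {
        List<List<String>> prefs = List.of(
            List.of("sticky", "range"),
            List.of("range", "sticky"),
            List.of("sticky"));
        // "range" is not supported by the third member, so "sticky" wins.
        if (!selectClientSide(prefs).equals(Optional.of("sticky")))
            throw new AssertionError("expected sticky to be selected");
        System.out.println("selected: sticky");
    }
}
```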





[jira] [Resolved] (KAFKA-16735) Deprecate offsets.commit.required.acks in 3.8

2024-05-13 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16735.
-
Resolution: Fixed

> Deprecate offsets.commit.required.acks in 3.8
> -
>
> Key: KAFKA-16735
> URL: https://issues.apache.org/jira/browse/KAFKA-16735
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Blocker
> Fix For: 3.8.0
>
>






[jira] [Updated] (KAFKA-16735) Deprecate offsets.commit.required.acks in 3.8

2024-05-13 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-16735:

Fix Version/s: 3.8.0

> Deprecate offsets.commit.required.acks in 3.8
> -
>
> Key: KAFKA-16735
> URL: https://issues.apache.org/jira/browse/KAFKA-16735
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Blocker
> Fix For: 3.8.0
>
>






[jira] [Assigned] (KAFKA-16736) Remove offsets.commit.required.acks in 4.0

2024-05-13 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot reassigned KAFKA-16736:
---

Assignee: David Jacot

> Remove offsets.commit.required.acks in 4.0
> --
>
> Key: KAFKA-16736
> URL: https://issues.apache.org/jira/browse/KAFKA-16736
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 4.0.0
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Blocker
>






[jira] [Created] (KAFKA-16735) Deprecate offsets.commit.required.acks in 3.8

2024-05-13 Thread David Jacot (Jira)
David Jacot created KAFKA-16735:
---

 Summary: Deprecate offsets.commit.required.acks in 3.8
 Key: KAFKA-16735
 URL: https://issues.apache.org/jira/browse/KAFKA-16735
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot
Assignee: David Jacot








[jira] [Created] (KAFKA-16736) Remove offsets.commit.required.acks in 4.0

2024-05-13 Thread David Jacot (Jira)
David Jacot created KAFKA-16736:
---

 Summary: Remove offsets.commit.required.acks in 4.0
 Key: KAFKA-16736
 URL: https://issues.apache.org/jira/browse/KAFKA-16736
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 4.0.0
Reporter: David Jacot








[jira] [Resolved] (KAFKA-16587) Store subscription model for consumer group in group state

2024-05-13 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16587.
-
Fix Version/s: 3.8.0
 Reviewer: David Jacot
   Resolution: Fixed

> Store subscription model for consumer group in group state
> --
>
> Key: KAFKA-16587
> URL: https://issues.apache.org/jira/browse/KAFKA-16587
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ritika Reddy
>Assignee: Ritika Reddy
>Priority: Major
> Fix For: 3.8.0
>
>
> Currently we iterate through all the subscribed topics for each member in the 
> consumer group to determine whether all the members are subscribed to the 
> same set of topics, i.e. whether the group has a homogeneous subscription 
> model.
> Instead of iterating and comparing the topicIds on every rebalance, we want 
> to maintain this information in the group state.
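One way to keep that information in group state can be sketched as below. This is illustrative only, not the coordinator's data model: counting members per distinct topic set lets the homogeneity check become a trivial lookup maintained on membership changes instead of a scan on every rebalance.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class SubscriptionModelSketch {
    // Number of members per distinct subscribed-topic set.
    private final Map<Set<String>, Integer> counts = new HashMap<>();

    void addMember(Set<String> topics) {
        counts.merge(topics, 1, Integer::sum);
    }

    // Homogeneous iff every member shares one topic set: a constant-time
    // check on state updated when members join, not during rebalance.
    boolean isHomogeneous() {
        return counts.size() <= 1;
    }

    public static void main(String[] args) {
        SubscriptionModelSketch group = new SubscriptionModelSketch();
        group.addMember(Set.of("orders"));
        group.addMember(Set.of("orders"));
        if (!group.isHomogeneous())
            throw new AssertionError("identical subscriptions should be homogeneous");
        group.addMember(Set.of("payments"));
        if (group.isHomogeneous())
            throw new AssertionError("mixed subscriptions should be heterogeneous");
        System.out.println("homogeneity tracked incrementally");
    }
}
```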





[jira] [Resolved] (KAFKA-16663) CoordinatorRuntime write timer tasks should be cancelled once HWM advances

2024-05-13 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16663.
-
Fix Version/s: 3.8.0
 Reviewer: David Jacot
   Resolution: Fixed

> CoordinatorRuntime write timer tasks should be cancelled once HWM advances
> --
>
> Key: KAFKA-16663
> URL: https://issues.apache.org/jira/browse/KAFKA-16663
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jeff Kim
>Assignee: Jeff Kim
>Priority: Major
> Fix For: 3.8.0
>
>
> Otherwise, timer tasks pile up as no-ops when replication was successful. 
> They stay in memory for 15 seconds, and as the write rate increases, this may 
> heavily impact memory usage.





[jira] [Resolved] (KAFKA-16307) fix EventAccumulator thread idle ratio metric

2024-05-07 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16307.
-
Fix Version/s: 3.8.0
 Reviewer: David Jacot
   Resolution: Fixed

> fix EventAccumulator thread idle ratio metric
> -
>
> Key: KAFKA-16307
> URL: https://issues.apache.org/jira/browse/KAFKA-16307
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jeff Kim
>Assignee: Jeff Kim
>Priority: Major
> Fix For: 3.8.0
>
>
> The metric does not seem to be accurate, nor does it report at every 
> interval. Requires investigation.





[jira] [Resolved] (KAFKA-16615) JoinGroup API for upgrading ConsumerGroup

2024-05-07 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16615.
-
Fix Version/s: 3.8.0
 Reviewer: David Jacot
 Assignee: Dongnuo Lyu
   Resolution: Fixed

> JoinGroup API for upgrading ConsumerGroup
> -
>
> Key: KAFKA-16615
> URL: https://issues.apache.org/jira/browse/KAFKA-16615
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dongnuo Lyu
>Assignee: Dongnuo Lyu
>Priority: Major
> Fix For: 3.8.0
>
>






[jira] [Created] (KAFKA-16658) Drop `offsets.commit.required.acks` config in 4.0 (deprecate in 3.8)

2024-05-02 Thread David Jacot (Jira)
David Jacot created KAFKA-16658:
---

 Summary: Drop `offsets.commit.required.acks` config in 4.0 
(deprecate in 3.8)
 Key: KAFKA-16658
 URL: https://issues.apache.org/jira/browse/KAFKA-16658
 Project: Kafka
  Issue Type: New Feature
Reporter: David Jacot
Assignee: David Jacot








[jira] [Resolved] (KAFKA-16568) Add JMH Benchmarks for assignor performance testing

2024-04-25 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16568.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Add JMH Benchmarks for assignor performance testing 
> 
>
> Key: KAFKA-16568
> URL: https://issues.apache.org/jira/browse/KAFKA-16568
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ritika Reddy
>Assignee: Ritika Reddy
>Priority: Major
> Fix For: 3.8.0
>
>
> Three benchmarks are used to test the performance and efficiency of the 
> consumer group rebalance process:
>  * Client Assignors (assign method)
>  * Server Assignors (assign method)
>  * Target Assignment Builder (build method)





[jira] [Commented] (KAFKA-15089) Consolidate all the group coordinator configs

2024-04-18 Thread David Jacot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17838754#comment-17838754
 ] 

David Jacot commented on KAFKA-15089:
-

The goal was to also define the “AbstractConfig” part here like we did for the 
storage and raft modules. Are you interested in giving it a try?

> Consolidate all the group coordinator configs
> -
>
> Key: KAFKA-15089
> URL: https://issues.apache.org/jira/browse/KAFKA-15089
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Priority: Major
>
> The group coordinator configurations are defined in KafkaConfig at the 
> moment. As KafkaConfig is defined in the core module, we can't pass it to the 
> new Java modules to carry the configurations along.
> A suggestion here is to centralize all the configurations of a module in the 
> module itself, similarly to what we have done for RemoteLogManagerConfig and 
> RaftConfig. We also need a mechanism to add all the properties defined in the 
> module to the KafkaConfig's ConfigDef.





[jira] [Resolved] (KAFKA-16294) Add group protocol migration enabling config

2024-04-10 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16294.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Add group protocol migration enabling config
> 
>
> Key: KAFKA-16294
> URL: https://issues.apache.org/jira/browse/KAFKA-16294
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dongnuo Lyu
>Assignee: Dongnuo Lyu
>Priority: Major
> Fix For: 3.8.0
>
>
> The online upgrade is triggered when a consumer group heartbeat request is 
> received in a classic group. The downgrade is triggered when any old-protocol 
> request is received in a consumer group. We only accept an upgrade/downgrade 
> if the corresponding group migration config policy is enabled.
> This is the first part of the implementation of online group protocol 
> migration: adding the Kafka config for group protocol migration. The config 
> has four valid values: both (both upgrade and downgrade are allowed), 
> upgrade (only upgrade is allowed), downgrade (only downgrade is allowed) and 
> none (neither is allowed).
> At present the default value is NONE. When we start enabling the migration, 
> we expect to make BOTH the default so that it's easier to roll back to the old 
> protocol as a quick fix for anything wrong in the new protocol; when using 
> consumer groups becomes the default and the migration is nearly finished, we 
> will set the default policy to UPGRADE to prevent unwanted downgrades causing 
> too frequent migrations. DOWNGRADE could be useful for rollback or debugging 
> purposes.
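The four-valued policy described above maps naturally onto an enum. This is an illustrative sketch; the enum and method names are assumptions, not necessarily the identifiers used in the actual implementation.

```java
// Sketch of the four-valued migration policy described in the ticket:
// BOTH, UPGRADE, DOWNGRADE and NONE, each enabling a direction of migration.
public enum GroupProtocolMigrationPolicySketch {
    BOTH(true, true),        // both upgrade and downgrade are allowed
    UPGRADE(true, false),    // only upgrade is allowed
    DOWNGRADE(false, true),  // only downgrade is allowed
    NONE(false, false);      // neither is allowed

    private final boolean upgradeEnabled;
    private final boolean downgradeEnabled;

    GroupProtocolMigrationPolicySketch(boolean upgradeEnabled, boolean downgradeEnabled) {
        this.upgradeEnabled = upgradeEnabled;
        this.downgradeEnabled = downgradeEnabled;
    }

    public boolean isUpgradeEnabled() {
        return upgradeEnabled;
    }

    public boolean isDowngradeEnabled() {
        return downgradeEnabled;
    }
}
```

With this shape, the coordinator would consult `isUpgradeEnabled()` when a consumer group heartbeat arrives in a classic group, and `isDowngradeEnabled()` when an old-protocol request arrives in a consumer group.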



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16503) getOrMaybeCreateClassicGroup should not thrown GroupIdNotFoundException

2024-04-10 Thread David Jacot (Jira)
David Jacot created KAFKA-16503:
---

 Summary: getOrMaybeCreateClassicGroup should not thrown 
GroupIdNotFoundException
 Key: KAFKA-16503
 URL: https://issues.apache.org/jira/browse/KAFKA-16503
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot


It looks like the `getOrMaybeCreateClassicGroup` method throws a 
`GroupIdNotFoundException` when the group exists but has the wrong type. 
As `getOrMaybeCreateClassicGroup` is mainly used in the join-group/sync-group 
APIs, this seems incorrect. We need to double-check and fix it.
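The distinction at stake can be illustrated with a toy lookup. This is a hypothetical simplification, not the actual GroupMetadataManager code: the point is that "group absent" and "group exists with the wrong type" are different conditions and should surface different errors.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the distinction the ticket calls out: a lookup used
// by join-group should not report "group not found" when the group exists
// but has the wrong type. Types and names are simplified stand-ins.
public final class GroupLookupSketch {
    enum GroupType { CLASSIC, CONSUMER }

    private final Map<String, GroupType> groups = new HashMap<>();

    void addGroup(String groupId, GroupType type) {
        groups.put(groupId, type);
    }

    // Returns the classic group, creating it if absent. An existing group of
    // another type is a distinct error condition, not "not found".
    GroupType getOrMaybeCreateClassicGroup(String groupId) {
        GroupType type = groups.get(groupId);
        if (type == null) {
            // Absent: create it, as the join-group path expects.
            groups.put(groupId, GroupType.CLASSIC);
            return GroupType.CLASSIC;
        }
        if (type != GroupType.CLASSIC) {
            // Wrong type: a different, explicit error.
            throw new IllegalStateException(
                "Group " + groupId + " exists but is not a classic group");
        }
        return type;
    }
}
```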



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16470) kafka-dump-log --offsets-decoder should support new records

2024-04-04 Thread David Jacot (Jira)
David Jacot created KAFKA-16470:
---

 Summary: kafka-dump-log --offsets-decoder should support new 
records
 Key: KAFKA-16470
 URL: https://issues.apache.org/jira/browse/KAFKA-16470
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot
Assignee: David Jacot






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16148) Implement GroupMetadataManager#onUnloaded

2024-04-02 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16148.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Implement GroupMetadataManager#onUnloaded
> -
>
> Key: KAFKA-16148
> URL: https://issues.apache.org/jira/browse/KAFKA-16148
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jeff Kim
>Assignee: Jeff Kim
>Priority: Major
> Fix For: 3.8.0
>
>
> complete all awaiting futures with NOT_COORDINATOR (for classic group)
> transition all groups to DEAD.
> Cancel all timers related to the unloaded group metadata manager



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16353) Offline protocol migration integration tests

2024-03-27 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16353.
-
Resolution: Fixed

> Offline protocol migration integration tests
> 
>
> Key: KAFKA-16353
> URL: https://issues.apache.org/jira/browse/KAFKA-16353
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dongnuo Lyu
>Assignee: Dongnuo Lyu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16374) High watermark updates should have a higher priority

2024-03-25 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16374.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> High watermark updates should have a higher priority
> 
>
> Key: KAFKA-16374
> URL: https://issues.apache.org/jira/browse/KAFKA-16374
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15989) Upgrade existing generic group to consumer group

2024-03-20 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15989.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Upgrade existing generic group to consumer group
> 
>
> Key: KAFKA-15989
> URL: https://issues.apache.org/jira/browse/KAFKA-15989
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Emanuele Sabellico
>Assignee: David Jacot
>Priority: Minor
> Fix For: 3.8.0
>
>
> It should be possible to upgrade an existing generic group to a new consumer 
> group, in case it was using either the previous generic protocol or manual 
> partition assignment and commit.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15763) Group Coordinator should not deliver new assignment before previous one is acknowledged

2024-03-20 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15763.
-
Resolution: Won't Fix

We went with another approach.

> Group Coordinator should not deliver new assignment before previous one is 
> acknowledged
> ---
>
> Key: KAFKA-15763
> URL: https://issues.apache.org/jira/browse/KAFKA-15763
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>
> In the initial implementation of the new consumer group protocol, the group 
> coordinator waits for an acknowledgement from the consumer only when 
> there are partitions to be revoked. In the case of newly assigned partitions, 
> a new assignment can be delivered at any time (e.g. in two subsequent 
> heartbeats).
> While implementing the state machine on the client side, we found out that 
> this caused confusion because the protocol does not treat revocation and 
> assignment in the same way. We also found out that changing the assignment 
> before the previous one is fully processed by the member makes the client 
> side logic more complicated than it should be because the consumer can't 
> process any new assignment until it has completed the previous one.
> In the end, it is better to change the server side to not deliver a new 
> assignment before the current one is acknowledged by the consumer.
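The server-side rule the ticket converges on can be sketched with a small epoch-gated state machine. All field and method names here are illustrative assumptions, not the real coordinator code: the invariant is simply that a newer target assignment is held back until the member acknowledges the one previously delivered.

```java
// Minimal sketch, assuming a per-member epoch model: the coordinator only
// delivers a new target assignment once the member has acknowledged the
// previously delivered one. Names are illustrative, not the real API.
public final class AssignmentDeliverySketch {
    private int deliveredEpoch = 0; // epoch of the assignment last sent
    private int ackedEpoch = 0;     // epoch the member has acknowledged
    private int targetEpoch = 0;    // latest computed target assignment

    // A rebalance computed a new target assignment.
    public void newTargetAssignment() {
        targetEpoch++;
    }

    // The member acknowledged processing the assignment at this epoch.
    public void ack(int epoch) {
        ackedEpoch = Math.max(ackedEpoch, epoch);
    }

    // Deliver a new assignment only if the previous delivery was acked;
    // returns the epoch of the assignment currently delivered to the member.
    public int maybeDeliver() {
        if (ackedEpoch == deliveredEpoch && targetEpoch > deliveredEpoch) {
            deliveredEpoch = targetEpoch;
        }
        return deliveredEpoch;
    }
}
```

Under this rule, two target assignments computed in quick succession never reach the member back-to-back: the second waits until the first is acknowledged.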



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16313) Offline group protocol migration

2024-03-20 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16313.
-
Fix Version/s: 3.8.0
 Assignee: Dongnuo Lyu
   Resolution: Fixed

> Offline group protocol migration
> 
>
> Key: KAFKA-16313
> URL: https://issues.apache.org/jira/browse/KAFKA-16313
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dongnuo Lyu
>Assignee: Dongnuo Lyu
>Priority: Major
> Fix For: 3.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16367) Full ConsumerGroupHeartbeat response must be sent when full request is received

2024-03-19 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16367.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Full ConsumerGroupHeartbeat response must be sent when full request is 
> received
> ---
>
> Key: KAFKA-16367
> URL: https://issues.apache.org/jira/browse/KAFKA-16367
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16374) High watermark updates should have a higher priority

2024-03-14 Thread David Jacot (Jira)
David Jacot created KAFKA-16374:
---

 Summary: High watermark updates should have a higher priority
 Key: KAFKA-16374
 URL: https://issues.apache.org/jira/browse/KAFKA-16374
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot
Assignee: David Jacot






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14048) The Next Generation of the Consumer Rebalance Protocol

2024-03-14 Thread David Jacot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17827070#comment-17827070
 ] 

David Jacot commented on KAFKA-14048:
-

[~aratz_ws] We are working with the current timeline in mind: Preview in AK 3.8 
and GA in AK 4.0 (without the client side assignor).

> The Next Generation of the Consumer Rebalance Protocol
> --
>
> Key: KAFKA-14048
> URL: https://issues.apache.org/jira/browse/KAFKA-14048
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>
> This Jira tracks the development of KIP-848: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16249) Improve reconciliation state machine

2024-03-14 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16249.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Improve reconciliation state machine
> 
>
> Key: KAFKA-16249
> URL: https://issues.apache.org/jira/browse/KAFKA-16249
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15997) Ensure fairness in the uniform assignor

2024-03-14 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15997.
-
Resolution: Fixed

This issue got resolved by https://issues.apache.org/jira/browse/KAFKA-16249.

> Ensure fairness in the uniform assignor
> ---
>
> Key: KAFKA-15997
> URL: https://issues.apache.org/jira/browse/KAFKA-15997
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Emanuele Sabellico
>Assignee: David Jacot
>Priority: Minor
>
>  
>  
> Fairness has to be ensured in the uniform assignor, as it was in the 
> cooperative-sticky one.
> There's a test, 0113 subtest u_multiple_subscription_changes in librdkafka, 
> where 8 consumers subscribe to the same topic, and it verifies that 
> all of them get 2 partitions assigned. But with the new protocol, 
> two consumers get assigned 3 partitions and one has zero partitions. The test 
> doesn't configure any client.rack.
> {code:java}
> [0113_cooperative_rebalance  /478.183s] Consumer assignments 
> (subscription_variation 0) (stabilized) (no rebalance cb):
> [0113_cooperative_rebalance  /478.183s] Consumer C_0#consumer-3 assignment 
> (2): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [5] (2000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [8] (4000msgs)
> [0113_cooperative_rebalance  /478.183s] Consumer C_1#consumer-4 assignment 
> (3): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [0] (1000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [3] (2000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [13] (1000msgs)
> [0113_cooperative_rebalance  /478.184s] Consumer C_2#consumer-5 assignment 
> (2): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [6] (1000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [10] (2000msgs)
> [0113_cooperative_rebalance  /478.184s] Consumer C_3#consumer-6 assignment 
> (2): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [7] (1000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [9] (2000msgs)
> [0113_cooperative_rebalance  /478.184s] Consumer C_4#consumer-7 assignment 
> (2): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [11] (1000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [14] (3000msgs)
> [0113_cooperative_rebalance  /478.184s] Consumer C_5#consumer-8 assignment 
> (3): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [1] (2000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [2] (2000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [4] (1000msgs)
> [0113_cooperative_rebalance  /478.184s] Consumer C_6#consumer-9 assignment 
> (0): 
> [0113_cooperative_rebalance  /478.184s] Consumer C_7#consumer-10 assignment 
> (2): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [12] (1000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [15] (1000msgs)
> [0113_cooperative_rebalance  /478.184s] 16/32 partitions assigned
> [0113_cooperative_rebalance  /478.184s] Consumer C_0#consumer-3 has 2 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_1#consumer-4 has 3 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_2#consumer-5 has 2 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_3#consumer-6 has 2 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_4#consumer-7 has 2 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_5#consumer-8 has 3 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_6#consumer-9 has 0 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_7#consumer-10 has 2 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [                      /479.057s] 1 test(s) running: 
> 0113_cooperative_rebalance
> [                      /480.057s] 1 test(s) running: 
> 0113_cooperative_rebalance
> [                      /481.057s] 1 test(s) running: 
> 0113_cooperative_rebalance
> [0113_cooperative_rebalance  /482.498s] TEST FAILURE
> ### Test "0113_cooperative_rebalance (u_multiple_subscription_changes:2390: 
> use_rebalance_cb: 0, subscription_variation: 0)" failed at 
> test.c:1243:check_test_timeouts() at Thu Dec  7 15:52:15 2023: ###
> Test 0113_cooperative_rebalance (u_multiple_subscription_changes:2390: 
> use_rebalance_cb: 0, subscription_variation: 0) timed out (timeout set to 480 
> seconds)
> ./run-test.sh: line 62: 3512920 Killed                  $TEST $ARGS
> ###
> ### Test ./test-runner in bare mode FAILED! (return code 137) ###
> ###{code}
>  
>  
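The fairness property the test expects can be sketched with a simple round-robin distribution. This is a deliberate simplification (no stickiness, no racks), not the actual uniform assignor: it only demonstrates the invariant that assignment sizes differ by at most one, so 16 partitions over 8 consumers yields exactly 2 each.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the fairness invariant: distribute P partitions over C consumers
// so that per-consumer counts differ by at most one. Round-robin only; the
// real uniform assignor additionally preserves stickiness and rack awareness.
public final class FairAssignmentSketch {
    public static List<List<Integer>> assign(int numPartitions, int numConsumers) {
        List<List<Integer>> assignment = new ArrayList<>();
        for (int c = 0; c < numConsumers; c++) {
            assignment.add(new ArrayList<>());
        }
        for (int p = 0; p < numPartitions; p++) {
            // Round-robin guarantees max(count) - min(count) <= 1.
            assignment.get(p % numConsumers).add(p);
        }
        return assignment;
    }
}
```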




[jira] [Assigned] (KAFKA-15997) Ensure fairness in the uniform assignor

2024-03-14 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot reassigned KAFKA-15997:
---

Assignee: David Jacot  (was: Ritika Reddy)

> Ensure fairness in the uniform assignor
> ---
>
> Key: KAFKA-15997
> URL: https://issues.apache.org/jira/browse/KAFKA-15997
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Emanuele Sabellico
>Assignee: David Jacot
>Priority: Minor
>
>  
>  
> Fairness has to be ensured in the uniform assignor, as it was in the 
> cooperative-sticky one.
> There's a test, 0113 subtest u_multiple_subscription_changes in librdkafka, 
> where 8 consumers subscribe to the same topic, and it verifies that 
> all of them get 2 partitions assigned. But with the new protocol, 
> two consumers get assigned 3 partitions and one has zero partitions. The test 
> doesn't configure any client.rack.
> {code:java}
> [0113_cooperative_rebalance  /478.183s] Consumer assignments 
> (subscription_variation 0) (stabilized) (no rebalance cb):
> [0113_cooperative_rebalance  /478.183s] Consumer C_0#consumer-3 assignment 
> (2): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [5] (2000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [8] (4000msgs)
> [0113_cooperative_rebalance  /478.183s] Consumer C_1#consumer-4 assignment 
> (3): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [0] (1000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [3] (2000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [13] (1000msgs)
> [0113_cooperative_rebalance  /478.184s] Consumer C_2#consumer-5 assignment 
> (2): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [6] (1000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [10] (2000msgs)
> [0113_cooperative_rebalance  /478.184s] Consumer C_3#consumer-6 assignment 
> (2): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [7] (1000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [9] (2000msgs)
> [0113_cooperative_rebalance  /478.184s] Consumer C_4#consumer-7 assignment 
> (2): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [11] (1000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [14] (3000msgs)
> [0113_cooperative_rebalance  /478.184s] Consumer C_5#consumer-8 assignment 
> (3): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [1] (2000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [2] (2000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [4] (1000msgs)
> [0113_cooperative_rebalance  /478.184s] Consumer C_6#consumer-9 assignment 
> (0): 
> [0113_cooperative_rebalance  /478.184s] Consumer C_7#consumer-10 assignment 
> (2): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [12] (1000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [15] (1000msgs)
> [0113_cooperative_rebalance  /478.184s] 16/32 partitions assigned
> [0113_cooperative_rebalance  /478.184s] Consumer C_0#consumer-3 has 2 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_1#consumer-4 has 3 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_2#consumer-5 has 2 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_3#consumer-6 has 2 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_4#consumer-7 has 2 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_5#consumer-8 has 3 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_6#consumer-9 has 0 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_7#consumer-10 has 2 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [                      /479.057s] 1 test(s) running: 
> 0113_cooperative_rebalance
> [                      /480.057s] 1 test(s) running: 
> 0113_cooperative_rebalance
> [                      /481.057s] 1 test(s) running: 
> 0113_cooperative_rebalance
> [0113_cooperative_rebalance  /482.498s] TEST FAILURE
> ### Test "0113_cooperative_rebalance (u_multiple_subscription_changes:2390: 
> use_rebalance_cb: 0, subscription_variation: 0)" failed at 
> test.c:1243:check_test_timeouts() at Thu Dec  7 15:52:15 2023: ###
> Test 0113_cooperative_rebalance (u_multiple_subscription_changes:2390: 
> use_rebalance_cb: 0, subscription_variation: 0) timed out (timeout set to 480 
> seconds)
> ./run-test.sh: line 62: 3512920 Killed                  $TEST $ARGS
> ###
> ### Test ./test-runner in bare mode FAILED! (return code 137) ###
> ###{code}
>  
>  



--
This message was sent by Atlassian Jira

[jira] [Created] (KAFKA-16367) Full ConsumerGroupHeartbeat response must be sent when full request is received

2024-03-12 Thread David Jacot (Jira)
David Jacot created KAFKA-16367:
---

 Summary: Full ConsumerGroupHeartbeat response must be sent when 
full request is received
 Key: KAFKA-16367
 URL: https://issues.apache.org/jira/browse/KAFKA-16367
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot
Assignee: David Jacot






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15462) Add group type filter to the admin client

2024-02-29 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15462.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Add group type filter to the admin client
> -
>
> Key: KAFKA-15462
> URL: https://issues.apache.org/jira/browse/KAFKA-15462
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: Ritika Reddy
>Priority: Major
> Fix For: 3.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16306) GroupCoordinatorService logger is not configured

2024-02-27 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16306.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> GroupCoordinatorService logger is not configured
> 
>
> Key: KAFKA-16306
> URL: https://issues.apache.org/jira/browse/KAFKA-16306
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jeff Kim
>Assignee: Jeff Kim
>Priority: Minor
> Fix For: 3.8.0
>
>
> The GroupCoordinatorService constructor initializes with the wrong logger 
> class:
> ```
> GroupCoordinatorService(
> LogContext logContext,
> GroupCoordinatorConfig config,
> CoordinatorRuntime runtime,
> GroupCoordinatorMetrics groupCoordinatorMetrics
> ) {
>     this.log = logContext.logger(CoordinatorLoader.class);
>     this.config = config;
>     this.runtime = runtime;
>     this.groupCoordinatorMetrics = groupCoordinatorMetrics;
> }
> ```
> change this to GroupCoordinatorService.class
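The impact of the bug can be shown with a toy example. This sketch uses `java.util.logging` rather than Kafka's `LogContext` (an assumption for the sake of a self-contained example): a logger created with the wrong class carries that class's name, so log lines from the service would appear to come from the loader.

```java
import java.util.logging.Logger;

// Toy illustration (java.util.logging, not Kafka's LogContext) of why the
// bug matters: log lines from GroupCoordinatorService would be attributed
// to CoordinatorLoader, making them hard to grep and misleading to read.
class CoordinatorLoaderSketch {}

class GroupCoordinatorServiceSketch {
    // Mirrors the bug: logger named after the wrong class.
    final Logger wrong = Logger.getLogger(CoordinatorLoaderSketch.class.getName());
    // The fix: logger named after the owning class.
    final Logger right = Logger.getLogger(GroupCoordinatorServiceSketch.class.getName());
}
```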



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16307) fix EventAccumulator thread idle ratio metric

2024-02-26 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot reassigned KAFKA-16307:
---

Assignee: Jeff Kim

> fix EventAccumulator thread idle ratio metric
> -
>
> Key: KAFKA-16307
> URL: https://issues.apache.org/jira/browse/KAFKA-16307
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jeff Kim
>Assignee: Jeff Kim
>Priority: Major
>
> The metric does not seem to be accurate, nor does it report at every 
> interval. This requires investigation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16306) GroupCoordinatorService logger is not configured

2024-02-26 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot reassigned KAFKA-16306:
---

Assignee: Jeff Kim

> GroupCoordinatorService logger is not configured
> 
>
> Key: KAFKA-16306
> URL: https://issues.apache.org/jira/browse/KAFKA-16306
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jeff Kim
>Assignee: Jeff Kim
>Priority: Minor
>
> The GroupCoordinatorService constructor initializes with the wrong logger 
> class:
> ```
> GroupCoordinatorService(
> LogContext logContext,
> GroupCoordinatorConfig config,
> CoordinatorRuntime runtime,
> GroupCoordinatorMetrics groupCoordinatorMetrics
> ) {
>     this.log = logContext.logger(CoordinatorLoader.class);
>     this.config = config;
>     this.runtime = runtime;
>     this.groupCoordinatorMetrics = groupCoordinatorMetrics;
> }
> ```
> change this to GroupCoordinatorService.class



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16249) Improve reconciliation state machine

2024-02-13 Thread David Jacot (Jira)
David Jacot created KAFKA-16249:
---

 Summary: Improve reconciliation state machine
 Key: KAFKA-16249
 URL: https://issues.apache.org/jira/browse/KAFKA-16249
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot
Assignee: David Jacot






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16244) Move code style exceptions from suppressions.xml to the code

2024-02-12 Thread David Jacot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816873#comment-17816873
 ] 

David Jacot commented on KAFKA-16244:
-

[~ijuma] I actually learnt yesterday that we are already doing it: 
[https://github.com/apache/kafka/pull/15139#discussion_r1451037707.] [~jsancio] 
Did you check the license when you added support for it?

> Move code style exceptions from suppressions.xml to the code
> 
>
> Key: KAFKA-16244
> URL: https://issues.apache.org/jira/browse/KAFKA-16244
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16244) Move code style exceptions from suppressions.xml to the code

2024-02-12 Thread David Jacot (Jira)
David Jacot created KAFKA-16244:
---

 Summary: Move code style exceptions from suppressions.xml to the 
code
 Key: KAFKA-16244
 URL: https://issues.apache.org/jira/browse/KAFKA-16244
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot
Assignee: David Jacot






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16178) AsyncKafkaConsumer doesn't retry joining the group after rediscovering group coordinator

2024-02-11 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16178.
-
Resolution: Fixed

> AsyncKafkaConsumer doesn't retry joining the group after rediscovering group 
> coordinator
> 
>
> Key: KAFKA-16178
> URL: https://issues.apache.org/jira/browse/KAFKA-16178
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: Dongnuo Lyu
>Assignee: Lianet Magrans
>Priority: Blocker
>  Labels: client-transitions-issues, consumer-threading-refactor
> Fix For: 3.8.0
>
> Attachments: pkc-devc63jwnj_jan19_0_debug
>
>
> {code:java}
> [2024-01-17 21:34:59,500] INFO [Consumer 
> clientId=consumer.7e26597f-0285-4e13-88d6-31500a500275-0, 
> groupId=consumer-groups-test-0] Discovered group coordinator 
> Coordinator(key='consumer-groups-test-0', nodeId=3, 
> host='b3-pkc-devc63jwnj.us-west-2.aws.devel.cpdev.cloud', port=9092, 
> errorCode=0, errorMessage='') 
> (org.apache.kafka.clients.consumer.internals.CoordinatorRequestManager:162)
> [2024-01-17 21:34:59,681] INFO [Consumer 
> clientId=consumer.7e26597f-0285-4e13-88d6-31500a500275-0, 
> groupId=consumer-groups-test-0] GroupHeartbeatRequest failed because the 
> group coordinator 
> Optional[b3-pkc-devc63jwnj.us-west-2.aws.devel.cpdev.cloud:9092 (id: 
> 2147483644 rack: null)] is incorrect. Will attempt to find the coordinator 
> again and retry in 0ms: This is not the correct coordinator. 
> (org.apache.kafka.clients.consumer.internals.HeartbeatRequestManager:407)
> [2024-01-17 21:34:59,681] INFO [Consumer 
> clientId=consumer.7e26597f-0285-4e13-88d6-31500a500275-0, 
> groupId=consumer-groups-test-0] Group coordinator 
> b3-pkc-devc63jwnj.us-west-2.aws.devel.cpdev.cloud:9092 (id: 2147483644 rack: 
> null) is unavailable or invalid due to cause: This is not the correct 
> coordinator.. Rediscovery will be attempted. 
> (org.apache.kafka.clients.consumer.internals.CoordinatorRequestManager:136)
> [2024-01-17 21:34:59,882] INFO [Consumer 
> clientId=consumer.7e26597f-0285-4e13-88d6-31500a500275-0, 
> groupId=consumer-groups-test-0] Discovered group coordinator 
> Coordinator(key='consumer-groups-test-0', nodeId=3, 
> host='b3-pkc-devc63jwnj.us-west-2.aws.devel.cpdev.cloud', port=9092, 
> errorCode=0, errorMessage='') 
> (org.apache.kafka.clients.consumer.internals.CoordinatorRequestManager:162){code}
> Some of the consumers don't consume any message. The logs show that after the 
> consumer starts up and successfully logs in,
>  # The consumer discovers the group coordinator.
>  # The heartbeat to join the group fails because "This is not the correct 
> coordinator"
>  # The consumer rediscovers the group coordinator.
> Another heartbeat should follow the rediscovery of the group coordinator, but 
> there are no logs showing any sign of a heartbeat request. 
> On the server side, there are no logs at all about the group id. A 
> suspicion is that the consumer doesn't send a heartbeat request after 
> rediscovering the group coordinator.
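The suspected missing behavior can be sketched as a small flag-based state machine. All names here are hypothetical, not the real `HeartbeatRequestManager` API: the idea is that losing the coordinator must leave a "heartbeat needed" flag set, so a heartbeat fires as soon as the coordinator is rediscovered.

```java
// Sketch of the client-side behavior the ticket suspects is missing: a
// coordinator loss should leave "heartbeat needed" pending, so a heartbeat
// is sent immediately once the coordinator is rediscovered. Illustrative
// names only; not the actual HeartbeatRequestManager implementation.
public final class HeartbeatStateSketch {
    private boolean coordinatorKnown = false;
    private boolean heartbeatNeeded = true; // initial join also needs one

    public void onCoordinatorLost() {
        coordinatorKnown = false;
        heartbeatNeeded = true; // must retry joining once rediscovered
    }

    public void onCoordinatorDiscovered() {
        coordinatorKnown = true;
    }

    // Polled by the background thread: fires at most once per pending need.
    public boolean shouldSendHeartbeat() {
        if (coordinatorKnown && heartbeatNeeded) {
            heartbeatNeeded = false;
            return true;
        }
        return false;
    }
}
```

With this shape, the log sequence in the ticket (discover, fail, rediscover) would always end with a heartbeat attempt, because rediscovery re-enables the pending flag set at loss time.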



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16227) Console consumer fails with `IllegalStateException`

2024-02-06 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-16227:

Affects Version/s: 3.7.0

> Console consumer fails with `IllegalStateException`
> ---
>
> Key: KAFKA-16227
> URL: https://issues.apache.org/jira/browse/KAFKA-16227
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Affects Versions: 3.7.0
>Reporter: David Jacot
>Assignee: Kirk True
>Priority: Major
>
> I have seen a few occurrences like the following one. There is a race between 
> the background thread and the foreground thread. I imagine the following 
> steps:
>  * quickstart-events-2 is assigned by the background thread;
>  * the foreground thread starts the initialization of the partition (e.g. 
> reset offset);
>  * quickstart-events-2 is removed by the background thread;
>  * the initialization completes and quickstart-events-2 does not exist 
> anymore.
>  
> {code:java}
> [2024-02-06 16:21:57,375] ERROR Error processing message, terminating 
> consumer process:  (kafka.tools.ConsoleConsumer$)
> java.lang.IllegalStateException: No current assignment for partition 
> quickstart-events-2
>   at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:367)
>   at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.updateHighWatermark(SubscriptionState.java:579)
>   at 
> org.apache.kafka.clients.consumer.internals.FetchCollector.handleInitializeSuccess(FetchCollector.java:283)
>   at 
> org.apache.kafka.clients.consumer.internals.FetchCollector.initialize(FetchCollector.java:226)
>   at 
> org.apache.kafka.clients.consumer.internals.FetchCollector.collectFetch(FetchCollector.java:110)
>   at 
> org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer.collectFetch(AsyncKafkaConsumer.java:1540)
>   at 
> org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer.pollForFetches(AsyncKafkaConsumer.java:1525)
>   at 
> org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer.poll(AsyncKafkaConsumer.java:711)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:874)
>   at 
> kafka.tools.ConsoleConsumer$ConsumerWrapper.receive(ConsoleConsumer.scala:473)
>   at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:103)
>   at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:77)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:54)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala) {code}
>  
>  
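The race described above can be made concrete with a deterministic, single-threaded sketch. The class and method names are simplified stand-ins for `SubscriptionState` (an assumption, not the real API): the completion step re-checks the assignment instead of assuming the partition is still there, which avoids the `IllegalStateException`.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Deterministic illustration of the race: the "background thread" revokes a
// partition between the start and end of foreground initialization, so the
// completion step must re-check the assignment rather than throw. Names are
// simplified stand-ins for SubscriptionState and friends.
public final class AssignmentRaceSketch {
    private final Map<String, Long> highWatermarks = new ConcurrentHashMap<>();

    public void assign(String tp) {
        highWatermarks.put(tp, 0L);
    }

    public void revoke(String tp) {
        highWatermarks.remove(tp);
    }

    // Guarded completion: updates only if the partition is still assigned,
    // returning false (rather than throwing) when it has been revoked.
    public boolean maybeUpdateHighWatermark(String tp, long hwm) {
        return highWatermarks.computeIfPresent(tp, (k, v) -> hwm) != null;
    }
}
```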



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16227) Console consumer fails with `IllegalStateException`

2024-02-06 Thread David Jacot (Jira)
David Jacot created KAFKA-16227:
---

 Summary: Console consumer fails with `IllegalStateException`
 Key: KAFKA-16227
 URL: https://issues.apache.org/jira/browse/KAFKA-16227
 Project: Kafka
  Issue Type: Sub-task
  Components: clients
Reporter: David Jacot
Assignee: Kirk True


I have seen a few occurrences like the following one. There is a race between 
the background thread and the foreground thread. I imagine the following steps:
 * quickstart-events-2 is assigned by the background thread;
 * the foreground thread starts the initialization of the partition (e.g. reset 
offset);
 * quickstart-events-2 is removed by the background thread;
 * the initialization completes and quickstart-events-2 does not exist anymore.

 
{code:java}
[2024-02-06 16:21:57,375] ERROR Error processing message, terminating consumer 
process:  (kafka.tools.ConsoleConsumer$)
java.lang.IllegalStateException: No current assignment for partition 
quickstart-events-2
at 
org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:367)
at 
org.apache.kafka.clients.consumer.internals.SubscriptionState.updateHighWatermark(SubscriptionState.java:579)
at 
org.apache.kafka.clients.consumer.internals.FetchCollector.handleInitializeSuccess(FetchCollector.java:283)
at 
org.apache.kafka.clients.consumer.internals.FetchCollector.initialize(FetchCollector.java:226)
at 
org.apache.kafka.clients.consumer.internals.FetchCollector.collectFetch(FetchCollector.java:110)
at 
org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer.collectFetch(AsyncKafkaConsumer.java:1540)
at 
org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer.pollForFetches(AsyncKafkaConsumer.java:1525)
at 
org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer.poll(AsyncKafkaConsumer.java:711)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:874)
at 
kafka.tools.ConsoleConsumer$ConsumerWrapper.receive(ConsoleConsumer.scala:473)
at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:103)
at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:77)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:54)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala) {code}
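The race described above can be mimicked outside Kafka. Below is a deliberately simplified sketch in plain Java (the class, map, and method names are illustrative stand-ins, not Kafka's actual `SubscriptionState`): the partition is removed between the start and the end of the foreground initialization, so the final lookup fails with the same `IllegalStateException`.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the race, NOT Kafka's actual code.
public class AssignmentRaceSketch {
    // Stand-in for the consumer's per-partition assignment state.
    static final Map<String, Long> assignment = new ConcurrentHashMap<>();

    // Stand-in for SubscriptionState.updateHighWatermark: fails when the
    // partition is no longer in the current assignment.
    static long updateHighWatermark(String partition, long highWatermark) {
        Long state = assignment.get(partition);
        if (state == null) {
            throw new IllegalStateException(
                "No current assignment for partition " + partition);
        }
        return highWatermark;
    }

    public static void main(String[] args) {
        assignment.put("quickstart-events-2", 0L); // assigned by background thread
        // ... foreground thread starts initializing the partition (e.g. reset offset) ...
        assignment.remove("quickstart-events-2");  // removed by background thread
        try {
            // Initialization completes, but the partition no longer exists.
            updateHighWatermark("quickstart-events-2", 42L);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

In the real consumer the two steps run on different threads, which is why the failure is intermittent; the sketch just makes the interleaving deterministic.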
 

 





[jira] [Resolved] (KAFKA-15460) Add group type filter to ListGroups API

2024-02-05 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15460.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Add group type filter to ListGroups API
> ---
>
> Key: KAFKA-15460
> URL: https://issues.apache.org/jira/browse/KAFKA-15460
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: Ritika Reddy
>Priority: Major
> Fix For: 3.8.0
>
>






[jira] [Resolved] (KAFKA-16189) Extend admin to support ConsumerGroupDescribe API

2024-02-01 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16189.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Extend admin to support ConsumerGroupDescribe API
> -
>
> Key: KAFKA-16189
> URL: https://issues.apache.org/jira/browse/KAFKA-16189
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.8.0
>
>






[jira] [Resolved] (KAFKA-16168) Implement GroupCoordinator.onPartitionsDeleted

2024-02-01 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16168.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Implement GroupCoordinator.onPartitionsDeleted
> --
>
> Key: KAFKA-16168
> URL: https://issues.apache.org/jira/browse/KAFKA-16168
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.8.0
>
>






[jira] [Resolved] (KAFKA-16095) Update list group state type filter to include the states for the new consumer group type

2024-01-29 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16095.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Update list group state type filter to include the states for the new 
> consumer group type
> -
>
> Key: KAFKA-16095
> URL: https://issues.apache.org/jira/browse/KAFKA-16095
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ritika Reddy
>Assignee: Lan Ding
>Priority: Minor
> Fix For: 3.8.0
>
>
> # While using *--list --state*, the currently accepted values correspond to the 
> classic group type states. We need to include support for the new group type 
> states.
>  ## Consumer Group: Should list the state of the group. Accepted Values: 
>  ### _UNKNOWN(“unknown”)_
>  ### {_}EMPTY{_}("empty"),
>  ### *{_}ASSIGNING{_}("assigning"),*
>  ### *{_}RECONCILING{_}("reconciling"),*
>  ### {_}STABLE{_}("stable"),
>  ### {_}DEAD{_}("dead");
>  # 
>  ## Classic Group : Should list the state of the group. Accepted Values: 
>  ### {_}UNKNOWN{_}("Unknown"),
>  ### {_}EMPTY{_}("Empty");
>  ### *{_}PREPARING_REBALANCE{_}("PreparingRebalance"),*
>  ### *{_}COMPLETING_REBALANCE{_}("CompletingRebalance"),*
>  ### {_}STABLE{_}("Stable"),
>  ### {_}DEAD{_}("Dead")
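The filter validation the ticket asks for could look roughly like the sketch below. The state names are taken from the lists above; the class and method names are hypothetical, not Kafka's actual implementation.

```java
import java.util.Locale;
import java.util.Set;

// Hypothetical sketch of a --state filter that accepts both the new consumer
// group states and the classic group states (names taken from the ticket).
public class GroupStateFilter {
    private static final Set<String> CONSUMER_GROUP_STATES =
        Set.of("unknown", "empty", "assigning", "reconciling", "stable", "dead");
    private static final Set<String> CLASSIC_GROUP_STATES =
        Set.of("unknown", "empty", "preparingrebalance", "completingrebalance",
               "stable", "dead");

    // Case-insensitive check against both sets of accepted values.
    public static boolean isValidState(String state) {
        String s = state.toLowerCase(Locale.ROOT);
        return CONSUMER_GROUP_STATES.contains(s) || CLASSIC_GROUP_STATES.contains(s);
    }

    public static void main(String[] args) {
        System.out.println(isValidState("Reconciling"));        // new consumer group state
        System.out.println(isValidState("PreparingRebalance")); // classic group state
        System.out.println(isValidState("bogus"));              // rejected
    }
}
```

A real implementation would likely also need to report which group type each matched state belongs to, since "stable", "empty", "dead", and "unknown" appear in both lists.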





[jira] [Resolved] (KAFKA-14505) Implement TxnOffsetCommit API

2024-01-26 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-14505.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Implement TxnOffsetCommit API
> -
>
> Key: KAFKA-14505
> URL: https://issues.apache.org/jira/browse/KAFKA-14505
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>  Labels: kip-848-preview
> Fix For: 3.8.0
>
>
> Implement TxnOffsetCommit API in the new Group Coordinator.





[jira] [Created] (KAFKA-16194) KafkaConsumer.groupMetadata() should be correct when first records are returned

2024-01-25 Thread David Jacot (Jira)
David Jacot created KAFKA-16194:
---

 Summary: KafkaConsumer.groupMetadata() should be correct when 
first records are returned
 Key: KAFKA-16194
 URL: https://issues.apache.org/jira/browse/KAFKA-16194
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot


The following code returns records before the group metadata is updated. This 
fails the first transaction ever run by the Producer/Consumer pair.

 
{code:java}
Producer<String, String> txnProducer = new KafkaProducer<>(txnProducerProps);
Consumer<String, String> consumer = new KafkaConsumer<>(consumerProps);

txnProducer.initTransactions();
System.out.println("Init transactions called");

try {
    txnProducer.beginTransaction();
    System.out.println("Begin transactions called");

    consumer.subscribe(Collections.singletonList("input"));
    System.out.println("Consumer subscribed to topic -> KIP848-topic-2 ");

    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
    System.out.println("Returned " + records.count() + " records.");

    // Process and send txn messages.
    for (ConsumerRecord<String, String> processedRecord : records) {
        txnProducer.send(new ProducerRecord<>("output", processedRecord.key(),
            "Processed: " + processedRecord.value()));
    }

    ConsumerGroupMetadata groupMetadata = consumer.groupMetadata();
    System.out.println("Group metadata inside test" + groupMetadata);

    Map<TopicPartition, OffsetAndMetadata> offsetsToCommit = new HashMap<>();
    for (ConsumerRecord<String, String> record : records) {
        offsetsToCommit.put(new TopicPartition(record.topic(), record.partition()),
            new OffsetAndMetadata(record.offset() + 1));
    }
    System.out.println("Offsets to commit" + offsetsToCommit);

    // Send offsets to transaction with ConsumerGroupMetadata.
    txnProducer.sendOffsetsToTransaction(offsetsToCommit, groupMetadata);
    System.out.println("Send offsets to transaction done");

    // Commit the transaction.
    txnProducer.commitTransaction();
    System.out.println("Commit transaction done");
} catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
    e.printStackTrace();
    txnProducer.close();
} catch (KafkaException e) {
    e.printStackTrace();
    txnProducer.abortTransaction();
} finally {
    txnProducer.close();
    consumer.close();
} {code}
The issue seems to be that while it waits in `poll`, the event to update the 
group metadata is not processed.





[jira] [Resolved] (KAFKA-16107) Ensure consumer does not start fetching from added partitions until onPartitionsAssigned completes

2024-01-24 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16107.
-
  Reviewer: David Jacot
Resolution: Fixed

> Ensure consumer does not start fetching from added partitions until 
> onPartitionsAssigned completes
> --
>
> Key: KAFKA-16107
> URL: https://issues.apache.org/jira/browse/KAFKA-16107
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Lianet Magrans
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: kip-848-client-support
> Fix For: 3.8.0
>
>
> In the new consumer implementation, when new partitions are assigned, the 
> subscription state is updated and then #onPartitionsAssigned is triggered. 
> This sequence seems sensible, but we need to ensure that no data is fetched 
> until onPartitionsAssigned completes (where the user could be setting the 
> committed offsets it wants to start fetching from).
> We should pause the newly added partitions until onPartitionsAssigned 
> completes, similar to how it's done on revocation, to avoid positions 
> getting ahead of the committed offsets.
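The pause-until-callback-completes idea can be sketched with a toy model. This is illustrative only; `PauseUntilAssignedSketch` and its methods are hypothetical, not the consumer's real internals.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model of the proposal: newly assigned partitions are paused before the
// user callback runs and resumed only after it completes, so no fetch can
// start from stale positions. Illustrative only, not Kafka's implementation.
public class PauseUntilAssignedSketch {
    final Set<String> assigned = new HashSet<>();
    final Set<String> paused = new HashSet<>();

    void assignPartitions(List<String> newPartitions, Runnable onPartitionsAssigned) {
        assigned.addAll(newPartitions);  // subscription state is updated first
        paused.addAll(newPartitions);    // pause before invoking the callback
        onPartitionsAssigned.run();      // user may seek to committed offsets here
        paused.removeAll(newPartitions); // resume only once the callback completes
    }

    boolean isFetchable(String partition) {
        return assigned.contains(partition) && !paused.contains(partition);
    }

    public static void main(String[] args) {
        PauseUntilAssignedSketch s = new PauseUntilAssignedSketch();
        s.assignPartitions(List.of("input-0"),
            () -> System.out.println("fetchable during callback: " + s.isFetchable("input-0")));
        System.out.println("fetchable after callback: " + s.isFetchable("input-0"));
    }
}
```

This mirrors what the ticket says is already done on revocation: the fetcher simply skips paused partitions, so the positions cannot advance while the callback is still running.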





[jira] [Created] (KAFKA-16189) Extend admin to support ConsumerGroupDescribe API

2024-01-24 Thread David Jacot (Jira)
David Jacot created KAFKA-16189:
---

 Summary: Extend admin to support ConsumerGroupDescribe API
 Key: KAFKA-16189
 URL: https://issues.apache.org/jira/browse/KAFKA-16189
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot
Assignee: David Jacot








[jira] [Updated] (KAFKA-16147) Partition is assigned to two members at the same time

2024-01-22 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-16147:

Affects Version/s: 3.7.0

> Partition is assigned to two members at the same time
> -
>
> Key: KAFKA-16147
> URL: https://issues.apache.org/jira/browse/KAFKA-16147
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.7.0
>Reporter: Emanuele Sabellico
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.8.0
>
> Attachments: broker1.log, broker2.log, broker3.log, librdkafka.log, 
> server.properties, server1.properties, server2.properties
>
>
> While running [test 0113 of 
> librdkafka|https://github.com/confluentinc/librdkafka/blob/8b6357f872efe2a5a3a2fd2828e4133f85e6b023/tests/0113-cooperative_rebalance.cpp#L2384],
>  subtest _u_multiple_subscription_changes_ has received this error saying 
> that a partition is assigned to two members at the same time.
> {code:java}
> Error: C_6#consumer-9 is assigned rdkafkatest_rnd550f20623daba04c_0113u_2 [0] 
> which is already assigned to consumer C_5#consumer-8 {code}
> I've reconstructed this sequence:
> C_5 SUBSCRIBES TO T1
> {noformat}
> %7|1705403451.561|HEARTBEAT|C_5#consumer-8| [thrd:main]: GroupCoordinator/1: 
> Heartbeat of member id "RaTCu6RXQH-FiSl95iZzdw", group id 
> "rdkafkatest_rnd53b4eb0c2de343_0113u", generation id 6, group instance id 
> "(null)", current assignment "", subscribe topics 
> "rdkafkatest_rnd5a91902462d61c2e_0113u_1((null))[-1]"{noformat}
> C_5 ASSIGNMENT CHANGES TO T1-P7, T1-P8, T1-P12
> {noformat}
> [2024-01-16 12:10:51,562] INFO [GroupCoordinator id=1 
> topic=__consumer_offsets partition=7] [GroupId 
> rdkafkatest_rnd53b4eb0c2de343_0113u] Member RaTCu6RXQH-FiSl95iZzdw 
> transitioned from CurrentAssignment(memberEpoch=6, previousMemberEpoch=0, 
> targetMemberEpoch=6, state=assigning, assignedPartitions={}, 
> partitionsPendingRevocation={}, 
> partitionsPendingAssignment={IKXGrFR1Rv-Qes7Ummas6A=[3, 12]}) to 
> CurrentAssignment(memberEpoch=14, previousMemberEpoch=6, 
> targetMemberEpoch=14, state=stable, 
> assignedPartitions={IKXGrFR1Rv-Qes7Ummas6A=[7, 8, 12]}, 
> partitionsPendingRevocation={}, partitionsPendingAssignment={}). 
> (org.apache.kafka.coordinator.group.GroupMetadataManager){noformat}
>  
> C_5 RECEIVES TARGET ASSIGNMENT
> {noformat}
> %7|1705403451.565|HEARTBEAT|C_5#consumer-8| [thrd:main]: GroupCoordinator/1: 
> Heartbeat response received target assignment 
> "(null)(IKXGrFR1Rv+Qes7Ummas6A)[7], (null)(IKXGrFR1Rv+Qes7Ummas6A)[8], 
> (null)(IKXGrFR1Rv+Qes7Ummas6A)[12]"{noformat}
>  
> C_5 ACKS TARGET ASSIGNMENT
> {noformat}
> %7|1705403451.566|HEARTBEAT|C_5#consumer-8| [thrd:main]: GroupCoordinator/1: 
> Heartbeat of member id "RaTCu6RXQH-FiSl95iZzdw", group id 
> "rdkafkatest_rnd53b4eb0c2de343_0113u", generation id 14, group instance id 
> "NULL", current assignment 
> "rdkafkatest_rnd5a91902462d61c2e_0113u_1(IKXGrFR1Rv+Qes7Ummas6A)[7], 
> rdkafkatest_rnd5a91902462d61c2e_0113u_1(IKXGrFR1Rv+Qes7Ummas6A)[8], 
> rdkafkatest_rnd5a91902462d61c2e_0113u_1(IKXGrFR1Rv+Qes7Ummas6A)[12]", 
> subscribe topics "rdkafkatest_rnd5a91902462d61c2e_0113u_1((null))[-1]"
> %7|1705403451.567|HEARTBEAT|C_5#consumer-8| [thrd:main]: GroupCoordinator/1: 
> Heartbeat response received target assignment 
> "(null)(IKXGrFR1Rv+Qes7Ummas6A)[7], (null)(IKXGrFR1Rv+Qes7Ummas6A)[8], 
> (null)(IKXGrFR1Rv+Qes7Ummas6A)[12]"{noformat}
>  
> C_5 SUBSCRIBES TO T1,T2: T1 partitions are revoked, 5 T2 partitions are 
> pending 
> {noformat}
> %7|1705403452.612|HEARTBEAT|C_5#consumer-8| [thrd:main]: GroupCoordinator/1: 
> Heartbeat of member id "RaTCu6RXQH-FiSl95iZzdw", group id 
> "rdkafkatest_rnd53b4eb0c2de343_0113u", generation id 14, group instance id 
> "NULL", current assignment "NULL", subscribe topics 
> "rdkafkatest_rnd550f20623daba04c_0113u_2((null))[-1], 
> rdkafkatest_rnd5a91902462d61c2e_0113u_1((null))[-1]"
> [2024-01-16 12:10:52,615] INFO [GroupCoordinator id=1 
> topic=__consumer_offsets partition=7] [GroupId 
> rdkafkatest_rnd53b4eb0c2de343_0113u] Member RaTCu6RXQH-FiSl95iZzdw updated 
> its subscribed topics to: [rdkafkatest_rnd550f20623daba04c_0113u_2, 
> rdkafkatest_rnd5a91902462d61c2e_0113u_1]. 
> (org.apache.kafka.coordinator.group.GroupMetadataManager)
> [2024-01-16 12:10:52,616] INFO [GroupCoordinator id=1 
> topic=__consumer_offsets partition=7] [GroupId 
> rdkafkatest_rnd53b4eb0c2de343_0113u] Member RaTCu6RXQH-FiSl95iZzdw 
> transitioned from CurrentAssignment(memberEpoch=14, previousMemberEpoch=6, 
> targetMemberEpoch=14, state=stable, 
> assignedPartitions={IKXGrFR1Rv-Qes7Ummas6A=[7, 8, 12]}, 
> partitionsPendingRevocation={}, partitionsPendingAssignment={}) to 
> CurrentAssignment(memberEpoch=14, previousMemberEpoch=6, 
> targetMemberEpoch=16, state=revoking, 

[jira] [Resolved] (KAFKA-16147) Partition is assigned to two members at the same time

2024-01-22 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16147.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Partition is assigned to two members at the same time
> -
>
> Key: KAFKA-16147
> URL: https://issues.apache.org/jira/browse/KAFKA-16147
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Emanuele Sabellico
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.8.0
>
> Attachments: broker1.log, broker2.log, broker3.log, librdkafka.log, 
> server.properties, server1.properties, server2.properties
>
>
> While running [test 0113 of 
> librdkafka|https://github.com/confluentinc/librdkafka/blob/8b6357f872efe2a5a3a2fd2828e4133f85e6b023/tests/0113-cooperative_rebalance.cpp#L2384],
>  subtest _u_multiple_subscription_changes_ has received this error saying 
> that a partition is assigned to two members at the same time.
> {code:java}
> Error: C_6#consumer-9 is assigned rdkafkatest_rnd550f20623daba04c_0113u_2 [0] 
> which is already assigned to consumer C_5#consumer-8 {code}
> I've reconstructed this sequence:
> C_5 SUBSCRIBES TO T1
> {noformat}
> %7|1705403451.561|HEARTBEAT|C_5#consumer-8| [thrd:main]: GroupCoordinator/1: 
> Heartbeat of member id "RaTCu6RXQH-FiSl95iZzdw", group id 
> "rdkafkatest_rnd53b4eb0c2de343_0113u", generation id 6, group instance id 
> "(null)", current assignment "", subscribe topics 
> "rdkafkatest_rnd5a91902462d61c2e_0113u_1((null))[-1]"{noformat}
> C_5 ASSIGNMENT CHANGES TO T1-P7, T1-P8, T1-P12
> {noformat}
> [2024-01-16 12:10:51,562] INFO [GroupCoordinator id=1 
> topic=__consumer_offsets partition=7] [GroupId 
> rdkafkatest_rnd53b4eb0c2de343_0113u] Member RaTCu6RXQH-FiSl95iZzdw 
> transitioned from CurrentAssignment(memberEpoch=6, previousMemberEpoch=0, 
> targetMemberEpoch=6, state=assigning, assignedPartitions={}, 
> partitionsPendingRevocation={}, 
> partitionsPendingAssignment={IKXGrFR1Rv-Qes7Ummas6A=[3, 12]}) to 
> CurrentAssignment(memberEpoch=14, previousMemberEpoch=6, 
> targetMemberEpoch=14, state=stable, 
> assignedPartitions={IKXGrFR1Rv-Qes7Ummas6A=[7, 8, 12]}, 
> partitionsPendingRevocation={}, partitionsPendingAssignment={}). 
> (org.apache.kafka.coordinator.group.GroupMetadataManager){noformat}
>  
> C_5 RECEIVES TARGET ASSIGNMENT
> {noformat}
> %7|1705403451.565|HEARTBEAT|C_5#consumer-8| [thrd:main]: GroupCoordinator/1: 
> Heartbeat response received target assignment 
> "(null)(IKXGrFR1Rv+Qes7Ummas6A)[7], (null)(IKXGrFR1Rv+Qes7Ummas6A)[8], 
> (null)(IKXGrFR1Rv+Qes7Ummas6A)[12]"{noformat}
>  
> C_5 ACKS TARGET ASSIGNMENT
> {noformat}
> %7|1705403451.566|HEARTBEAT|C_5#consumer-8| [thrd:main]: GroupCoordinator/1: 
> Heartbeat of member id "RaTCu6RXQH-FiSl95iZzdw", group id 
> "rdkafkatest_rnd53b4eb0c2de343_0113u", generation id 14, group instance id 
> "NULL", current assignment 
> "rdkafkatest_rnd5a91902462d61c2e_0113u_1(IKXGrFR1Rv+Qes7Ummas6A)[7], 
> rdkafkatest_rnd5a91902462d61c2e_0113u_1(IKXGrFR1Rv+Qes7Ummas6A)[8], 
> rdkafkatest_rnd5a91902462d61c2e_0113u_1(IKXGrFR1Rv+Qes7Ummas6A)[12]", 
> subscribe topics "rdkafkatest_rnd5a91902462d61c2e_0113u_1((null))[-1]"
> %7|1705403451.567|HEARTBEAT|C_5#consumer-8| [thrd:main]: GroupCoordinator/1: 
> Heartbeat response received target assignment 
> "(null)(IKXGrFR1Rv+Qes7Ummas6A)[7], (null)(IKXGrFR1Rv+Qes7Ummas6A)[8], 
> (null)(IKXGrFR1Rv+Qes7Ummas6A)[12]"{noformat}
>  
> C_5 SUBSCRIBES TO T1,T2: T1 partitions are revoked, 5 T2 partitions are 
> pending 
> {noformat}
> %7|1705403452.612|HEARTBEAT|C_5#consumer-8| [thrd:main]: GroupCoordinator/1: 
> Heartbeat of member id "RaTCu6RXQH-FiSl95iZzdw", group id 
> "rdkafkatest_rnd53b4eb0c2de343_0113u", generation id 14, group instance id 
> "NULL", current assignment "NULL", subscribe topics 
> "rdkafkatest_rnd550f20623daba04c_0113u_2((null))[-1], 
> rdkafkatest_rnd5a91902462d61c2e_0113u_1((null))[-1]"
> [2024-01-16 12:10:52,615] INFO [GroupCoordinator id=1 
> topic=__consumer_offsets partition=7] [GroupId 
> rdkafkatest_rnd53b4eb0c2de343_0113u] Member RaTCu6RXQH-FiSl95iZzdw updated 
> its subscribed topics to: [rdkafkatest_rnd550f20623daba04c_0113u_2, 
> rdkafkatest_rnd5a91902462d61c2e_0113u_1]. 
> (org.apache.kafka.coordinator.group.GroupMetadataManager)
> [2024-01-16 12:10:52,616] INFO [GroupCoordinator id=1 
> topic=__consumer_offsets partition=7] [GroupId 
> rdkafkatest_rnd53b4eb0c2de343_0113u] Member RaTCu6RXQH-FiSl95iZzdw 
> transitioned from CurrentAssignment(memberEpoch=14, previousMemberEpoch=6, 
> targetMemberEpoch=14, state=stable, 
> assignedPartitions={IKXGrFR1Rv-Qes7Ummas6A=[7, 8, 12]}, 
> partitionsPendingRevocation={}, partitionsPendingAssignment={}) to 
> CurrentAssignment(memberEpoch=14, previousMemberEpoch=6, 
> targetMemberEpoch=16, state=revoking, 

[jira] [Assigned] (KAFKA-16148) Implement GroupMetadataManager#onUnloaded

2024-01-19 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot reassigned KAFKA-16148:
---

Assignee: Jeff Kim

> Implement GroupMetadataManager#onUnloaded
> -
>
> Key: KAFKA-16148
> URL: https://issues.apache.org/jira/browse/KAFKA-16148
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jeff Kim
>Assignee: Jeff Kim
>Priority: Major
>
> Complete all awaiting futures with NOT_COORDINATOR (for classic groups).
> Transition all groups to DEAD.
> Cancel all timers related to the unloaded group metadata manager.





[jira] [Created] (KAFKA-16168) Implement GroupCoordinator.onPartitionsDeleted

2024-01-19 Thread David Jacot (Jira)
David Jacot created KAFKA-16168:
---

 Summary: Implement GroupCoordinator.onPartitionsDeleted
 Key: KAFKA-16168
 URL: https://issues.apache.org/jira/browse/KAFKA-16168
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot
Assignee: David Jacot








[jira] [Assigned] (KAFKA-16147) Partition is assigned to two members at the same time

2024-01-17 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot reassigned KAFKA-16147:
---

Assignee: David Jacot

> Partition is assigned to two members at the same time
> -
>
> Key: KAFKA-16147
> URL: https://issues.apache.org/jira/browse/KAFKA-16147
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Emanuele Sabellico
>Assignee: David Jacot
>Priority: Major
> Attachments: broker1.log, broker2.log, broker3.log, librdkafka.log, 
> server.properties, server1.properties, server2.properties
>
>
> While running [test 0113 of 
> librdkafka|https://github.com/confluentinc/librdkafka/blob/8b6357f872efe2a5a3a2fd2828e4133f85e6b023/tests/0113-cooperative_rebalance.cpp#L2384],
>  subtest _u_multiple_subscription_changes_ has received this error saying 
> that a partition is assigned to two members at the same time.
> {code:java}
> Error: C_6#consumer-9 is assigned rdkafkatest_rnd550f20623daba04c_0113u_2 [0] 
> which is already assigned to consumer C_5#consumer-8 {code}
> I've reconstructed this sequence:
> C_5 SUBSCRIBES TO T1
> {noformat}
> %7|1705403451.561|HEARTBEAT|C_5#consumer-8| [thrd:main]: GroupCoordinator/1: 
> Heartbeat of member id "RaTCu6RXQH-FiSl95iZzdw", group id 
> "rdkafkatest_rnd53b4eb0c2de343_0113u", generation id 6, group instance id 
> "(null)", current assignment "", subscribe topics 
> "rdkafkatest_rnd5a91902462d61c2e_0113u_1((null))[-1]"{noformat}
> C_5 ASSIGNMENT CHANGES TO T1-P7, T1-P8, T1-P12
> {noformat}
> [2024-01-16 12:10:51,562] INFO [GroupCoordinator id=1 
> topic=__consumer_offsets partition=7] [GroupId 
> rdkafkatest_rnd53b4eb0c2de343_0113u] Member RaTCu6RXQH-FiSl95iZzdw 
> transitioned from CurrentAssignment(memberEpoch=6, previousMemberEpoch=0, 
> targetMemberEpoch=6, state=assigning, assignedPartitions={}, 
> partitionsPendingRevocation={}, 
> partitionsPendingAssignment={IKXGrFR1Rv-Qes7Ummas6A=[3, 12]}) to 
> CurrentAssignment(memberEpoch=14, previousMemberEpoch=6, 
> targetMemberEpoch=14, state=stable, 
> assignedPartitions={IKXGrFR1Rv-Qes7Ummas6A=[7, 8, 12]}, 
> partitionsPendingRevocation={}, partitionsPendingAssignment={}). 
> (org.apache.kafka.coordinator.group.GroupMetadataManager){noformat}
>  
> C_5 RECEIVES TARGET ASSIGNMENT
> {noformat}
> %7|1705403451.565|HEARTBEAT|C_5#consumer-8| [thrd:main]: GroupCoordinator/1: 
> Heartbeat response received target assignment 
> "(null)(IKXGrFR1Rv+Qes7Ummas6A)[7], (null)(IKXGrFR1Rv+Qes7Ummas6A)[8], 
> (null)(IKXGrFR1Rv+Qes7Ummas6A)[12]"{noformat}
>  
> C_5 ACKS TARGET ASSIGNMENT
> {noformat}
> %7|1705403451.566|HEARTBEAT|C_5#consumer-8| [thrd:main]: GroupCoordinator/1: 
> Heartbeat of member id "RaTCu6RXQH-FiSl95iZzdw", group id 
> "rdkafkatest_rnd53b4eb0c2de343_0113u", generation id 14, group instance id 
> "NULL", current assignment 
> "rdkafkatest_rnd5a91902462d61c2e_0113u_1(IKXGrFR1Rv+Qes7Ummas6A)[7], 
> rdkafkatest_rnd5a91902462d61c2e_0113u_1(IKXGrFR1Rv+Qes7Ummas6A)[8], 
> rdkafkatest_rnd5a91902462d61c2e_0113u_1(IKXGrFR1Rv+Qes7Ummas6A)[12]", 
> subscribe topics "rdkafkatest_rnd5a91902462d61c2e_0113u_1((null))[-1]"
> %7|1705403451.567|HEARTBEAT|C_5#consumer-8| [thrd:main]: GroupCoordinator/1: 
> Heartbeat response received target assignment 
> "(null)(IKXGrFR1Rv+Qes7Ummas6A)[7], (null)(IKXGrFR1Rv+Qes7Ummas6A)[8], 
> (null)(IKXGrFR1Rv+Qes7Ummas6A)[12]"{noformat}
>  
> C_5 SUBSCRIBES TO T1,T2: T1 partitions are revoked, 5 T2 partitions are 
> pending 
> {noformat}
> %7|1705403452.612|HEARTBEAT|C_5#consumer-8| [thrd:main]: GroupCoordinator/1: 
> Heartbeat of member id "RaTCu6RXQH-FiSl95iZzdw", group id 
> "rdkafkatest_rnd53b4eb0c2de343_0113u", generation id 14, group instance id 
> "NULL", current assignment "NULL", subscribe topics 
> "rdkafkatest_rnd550f20623daba04c_0113u_2((null))[-1], 
> rdkafkatest_rnd5a91902462d61c2e_0113u_1((null))[-1]"
> [2024-01-16 12:10:52,615] INFO [GroupCoordinator id=1 
> topic=__consumer_offsets partition=7] [GroupId 
> rdkafkatest_rnd53b4eb0c2de343_0113u] Member RaTCu6RXQH-FiSl95iZzdw updated 
> its subscribed topics to: [rdkafkatest_rnd550f20623daba04c_0113u_2, 
> rdkafkatest_rnd5a91902462d61c2e_0113u_1]. 
> (org.apache.kafka.coordinator.group.GroupMetadataManager)
> [2024-01-16 12:10:52,616] INFO [GroupCoordinator id=1 
> topic=__consumer_offsets partition=7] [GroupId 
> rdkafkatest_rnd53b4eb0c2de343_0113u] Member RaTCu6RXQH-FiSl95iZzdw 
> transitioned from CurrentAssignment(memberEpoch=14, previousMemberEpoch=6, 
> targetMemberEpoch=14, state=stable, 
> assignedPartitions={IKXGrFR1Rv-Qes7Ummas6A=[7, 8, 12]}, 
> partitionsPendingRevocation={}, partitionsPendingAssignment={}) to 
> CurrentAssignment(memberEpoch=14, previousMemberEpoch=6, 
> targetMemberEpoch=16, state=revoking, assignedPartitions={}, 
> 

[jira] [Updated] (KAFKA-16118) Coordinator unloading fails when replica is deleted

2024-01-15 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-16118:

Affects Version/s: 3.6.0
   (was: 3.7.0)

> Coordinator unloading fails when replica is deleted
> ---
>
> Key: KAFKA-16118
> URL: https://issues.apache.org/jira/browse/KAFKA-16118
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.6.0
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.7.0
>
>
> The new group coordinator always expects the leader epoch to be received when 
> it must unload the metadata for a partition. However, in KRaft, the leader 
> epoch is not passed when the replica is deleted (e.g. after reassignment).
> {noformat}
> java.lang.IllegalArgumentException: The leader epoch should always be 
> provided in KRaft.
>     at 
> org.apache.kafka.coordinator.group.GroupCoordinatorService.onResignation(GroupCoordinatorService.java:931)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$onMetadataUpdate$9(BrokerMetadataPublisher.scala:200)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$onMetadataUpdate$9$adapted(BrokerMetadataPublisher.scala:200)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$updateCoordinator$4(BrokerMetadataPublisher.scala:397)
>     at java.base/java.lang.Iterable.forEach(Iterable.java:75)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.updateCoordinator(BrokerMetadataPublisher.scala:396)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$onMetadataUpdate$7(BrokerMetadataPublisher.scala:200)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.onMetadataUpdate(BrokerMetadataPublisher.scala:186)
>     at 
> org.apache.kafka.image.loader.MetadataLoader.maybePublishMetadata(MetadataLoader.java:382)
>     at 
> org.apache.kafka.image.loader.MetadataBatchLoader.applyDeltaAndUpdate(MetadataBatchLoader.java:286)
>     at 
> org.apache.kafka.image.loader.MetadataBatchLoader.maybeFlushBatches(MetadataBatchLoader.java:222)
>     at 
> org.apache.kafka.image.loader.MetadataLoader.lambda$handleCommit$1(MetadataLoader.java:406)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:127)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:210)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:181)
>     at java.base/java.lang.Thread.run(Thread.java:1583)
>     at 
> org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:66){noformat}
> The side effect of this bug is that group coordinator loading/unloading fails.





[jira] [Updated] (KAFKA-16118) Coordinator unloading fails when replica is deleted

2024-01-15 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-16118:

Fix Version/s: 3.7.0
   (was: 3.8.0)

> Coordinator unloading fails when replica is deleted
> ---
>
> Key: KAFKA-16118
> URL: https://issues.apache.org/jira/browse/KAFKA-16118
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.7.0
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.7.0
>
>
> The new group coordinator always expects the leader epoch to be received when 
> it must unload the metadata for a partition. However, in KRaft, the leader 
> epoch is not passed when the replica is deleted (e.g. after reassignment).
> {noformat}
> java.lang.IllegalArgumentException: The leader epoch should always be 
> provided in KRaft.
>     at 
> org.apache.kafka.coordinator.group.GroupCoordinatorService.onResignation(GroupCoordinatorService.java:931)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$onMetadataUpdate$9(BrokerMetadataPublisher.scala:200)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$onMetadataUpdate$9$adapted(BrokerMetadataPublisher.scala:200)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$updateCoordinator$4(BrokerMetadataPublisher.scala:397)
>     at java.base/java.lang.Iterable.forEach(Iterable.java:75)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.updateCoordinator(BrokerMetadataPublisher.scala:396)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$onMetadataUpdate$7(BrokerMetadataPublisher.scala:200)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.onMetadataUpdate(BrokerMetadataPublisher.scala:186)
>     at 
> org.apache.kafka.image.loader.MetadataLoader.maybePublishMetadata(MetadataLoader.java:382)
>     at 
> org.apache.kafka.image.loader.MetadataBatchLoader.applyDeltaAndUpdate(MetadataBatchLoader.java:286)
>     at 
> org.apache.kafka.image.loader.MetadataBatchLoader.maybeFlushBatches(MetadataBatchLoader.java:222)
>     at 
> org.apache.kafka.image.loader.MetadataLoader.lambda$handleCommit$1(MetadataLoader.java:406)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:127)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:210)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:181)
>     at java.base/java.lang.Thread.run(Thread.java:1583)
>     at 
> org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:66){noformat}
> The side effect of this bug is that group coordinator loading/unloading fails.





[jira] [Updated] (KAFKA-16118) Coordinator unloading fails when replica is deleted

2024-01-14 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-16118:

Affects Version/s: 3.7.0

> Coordinator unloading fails when replica is deleted
> ---
>
> Key: KAFKA-16118
> URL: https://issues.apache.org/jira/browse/KAFKA-16118
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 3.7.0
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.8.0
>
>
> The new group coordinator always expects the leader epoch to be received when 
> it must unload the metadata for a partition. However, in KRaft, the leader 
> epoch is not passed when the replica is deleted (e.g. after reassignment).
> {noformat}
> java.lang.IllegalArgumentException: The leader epoch should always be 
> provided in KRaft.
>     at 
> org.apache.kafka.coordinator.group.GroupCoordinatorService.onResignation(GroupCoordinatorService.java:931)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$onMetadataUpdate$9(BrokerMetadataPublisher.scala:200)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$onMetadataUpdate$9$adapted(BrokerMetadataPublisher.scala:200)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$updateCoordinator$4(BrokerMetadataPublisher.scala:397)
>     at java.base/java.lang.Iterable.forEach(Iterable.java:75)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.updateCoordinator(BrokerMetadataPublisher.scala:396)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$onMetadataUpdate$7(BrokerMetadataPublisher.scala:200)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.onMetadataUpdate(BrokerMetadataPublisher.scala:186)
>     at 
> org.apache.kafka.image.loader.MetadataLoader.maybePublishMetadata(MetadataLoader.java:382)
>     at 
> org.apache.kafka.image.loader.MetadataBatchLoader.applyDeltaAndUpdate(MetadataBatchLoader.java:286)
>     at 
> org.apache.kafka.image.loader.MetadataBatchLoader.maybeFlushBatches(MetadataBatchLoader.java:222)
>     at 
> org.apache.kafka.image.loader.MetadataLoader.lambda$handleCommit$1(MetadataLoader.java:406)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:127)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:210)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:181)
>     at java.base/java.lang.Thread.run(Thread.java:1583)
>     at 
> org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:66){noformat}
> The side effect of this bug is that group coordinator loading/unloading fails.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
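The failure above comes from `onResignation` insisting on a leader epoch that KRaft does not provide when a replica is deleted. As a rough illustration of the idea behind the fix — treating a missing epoch as an unconditional unload instead of throwing — here is a small Python sketch; the class and method names only mimic the real Java code, and the epoch-comparison details are assumptions:

```python
from typing import Optional


class GroupCoordinatorService:
    """Toy model of coordinator partition ownership, not the real Kafka class."""

    def __init__(self) -> None:
        # partition index -> leader epoch at the time the partition was loaded
        self.owned = {}

    def on_resignation(self, partition: int, leader_epoch: Optional[int]) -> None:
        # Before the fix, a missing epoch raised IllegalArgumentException.
        # Treating None as "unload unconditionally" lets replica deletion
        # (which carries no leader epoch in KRaft) proceed.
        if leader_epoch is None or leader_epoch >= self.owned.get(partition, -1):
            self.owned.pop(partition, None)


svc = GroupCoordinatorService()
svc.owned[0] = 10
svc.on_resignation(0, None)  # replica deleted: no leader epoch available
print(0 in svc.owned)        # prints False: the partition was unloaded
```

A stale resignation (an epoch lower than the one we loaded with) is still ignored, which is the reason the epoch check exists in the first place.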


[jira] [Resolved] (KAFKA-16118) Coordinator unloading fails when replica is deleted

2024-01-14 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16118.
-
Fix Version/s: 3.8.0
   Resolution: Fixed

> Coordinator unloading fails when replica is deleted
> ---
>
> Key: KAFKA-16118
> URL: https://issues.apache.org/jira/browse/KAFKA-16118
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 3.8.0
>
>
> The new group coordinator always expects the leader epoch to be received when 
> it must unload the metadata for a partition. However, in KRaft, the leader 
> epoch is not passed when the replica is deleted (e.g. after reassignment).
> {noformat}
> java.lang.IllegalArgumentException: The leader epoch should always be 
> provided in KRaft.
>     at 
> org.apache.kafka.coordinator.group.GroupCoordinatorService.onResignation(GroupCoordinatorService.java:931)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$onMetadataUpdate$9(BrokerMetadataPublisher.scala:200)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$onMetadataUpdate$9$adapted(BrokerMetadataPublisher.scala:200)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$updateCoordinator$4(BrokerMetadataPublisher.scala:397)
>     at java.base/java.lang.Iterable.forEach(Iterable.java:75)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.updateCoordinator(BrokerMetadataPublisher.scala:396)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.$anonfun$onMetadataUpdate$7(BrokerMetadataPublisher.scala:200)
>     at 
> kafka.server.metadata.BrokerMetadataPublisher.onMetadataUpdate(BrokerMetadataPublisher.scala:186)
>     at 
> org.apache.kafka.image.loader.MetadataLoader.maybePublishMetadata(MetadataLoader.java:382)
>     at 
> org.apache.kafka.image.loader.MetadataBatchLoader.applyDeltaAndUpdate(MetadataBatchLoader.java:286)
>     at 
> org.apache.kafka.image.loader.MetadataBatchLoader.maybeFlushBatches(MetadataBatchLoader.java:222)
>     at 
> org.apache.kafka.image.loader.MetadataLoader.lambda$handleCommit$1(MetadataLoader.java:406)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:127)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:210)
>     at 
> org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:181)
>     at java.base/java.lang.Thread.run(Thread.java:1583)
>     at 
> org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:66){noformat}
> The side effect of this bug is that group coordinator loading/unloading fails.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16118) Coordinator unloading fails when replica is deleted

2024-01-12 Thread David Jacot (Jira)
David Jacot created KAFKA-16118:
---

 Summary: Coordinator unloading fails when replica is deleted
 Key: KAFKA-16118
 URL: https://issues.apache.org/jira/browse/KAFKA-16118
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot
Assignee: David Jacot


The new group coordinator always expects the leader epoch to be received when 
it must unload the metadata for a partition. However, in KRaft, the leader 
epoch is not passed when the replica is deleted (e.g. after reassignment).
{noformat}
java.lang.IllegalArgumentException: The leader epoch should always be provided 
in KRaft.
    at 
org.apache.kafka.coordinator.group.GroupCoordinatorService.onResignation(GroupCoordinatorService.java:931)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$onMetadataUpdate$9(BrokerMetadataPublisher.scala:200)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$onMetadataUpdate$9$adapted(BrokerMetadataPublisher.scala:200)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$updateCoordinator$4(BrokerMetadataPublisher.scala:397)
    at java.base/java.lang.Iterable.forEach(Iterable.java:75)
    at 
kafka.server.metadata.BrokerMetadataPublisher.updateCoordinator(BrokerMetadataPublisher.scala:396)
    at 
kafka.server.metadata.BrokerMetadataPublisher.$anonfun$onMetadataUpdate$7(BrokerMetadataPublisher.scala:200)
    at 
kafka.server.metadata.BrokerMetadataPublisher.onMetadataUpdate(BrokerMetadataPublisher.scala:186)
    at 
org.apache.kafka.image.loader.MetadataLoader.maybePublishMetadata(MetadataLoader.java:382)
    at 
org.apache.kafka.image.loader.MetadataBatchLoader.applyDeltaAndUpdate(MetadataBatchLoader.java:286)
    at 
org.apache.kafka.image.loader.MetadataBatchLoader.maybeFlushBatches(MetadataBatchLoader.java:222)
    at 
org.apache.kafka.image.loader.MetadataLoader.lambda$handleCommit$1(MetadataLoader.java:406)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:127)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:210)
    at 
org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:181)
    at java.base/java.lang.Thread.run(Thread.java:1583)
    at 
org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:66){noformat}
The side effect of this bug is that group coordinator loading/unloading fails.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15268) Consider replacing Subscription Metadata by a hash

2024-01-09 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot reassigned KAFKA-15268:
---

Assignee: David Jacot

> Consider replacing Subscription Metadata by a hash
> --
>
> Key: KAFKA-15268
> URL: https://issues.apache.org/jira/browse/KAFKA-15268
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>  Labels: kip-848-preview
>
> With the addition of the racks, the subscription metadata record is getting 
> large, too large in my opinion. We should consider replacing it with a hash. 
> The subscription metadata is mainly used to detect changes in metadata, and a 
> hash would provide similar functionality.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
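The proposal above only needs a stable digest over the subscribed topics' metadata: if the digest changes, the metadata changed. A minimal sketch of that idea follows; the helper name, the input shape, and the choice of SHA-256 are all assumptions for illustration, not the actual Kafka implementation:

```python
import hashlib


def subscription_metadata_hash(topics):
    """Digest a group's subscribed-topic metadata (name, partition count, racks).

    Hypothetical helper: a stable hash can detect metadata changes without
    persisting the full subscription metadata record, which is what the
    ticket proposes.
    """
    h = hashlib.sha256()
    for name in sorted(topics):  # sort for a deterministic, order-free digest
        meta = topics[name]
        h.update(name.encode())
        h.update(meta["num_partitions"].to_bytes(4, "big"))
        for rack in sorted(meta.get("racks", [])):
            h.update(rack.encode())
    return h.hexdigest()


before = subscription_metadata_hash({"t1": {"num_partitions": 3, "racks": ["a", "b"]}})
after = subscription_metadata_hash({"t1": {"num_partitions": 4, "racks": ["a", "b"]}})
print(before != after)  # prints True: the partition-count change is detected
```

The trade-off is that a hash can only say "something changed", not *what* changed, which is acceptable when the metadata is only used as a change detector.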


[jira] [Resolved] (KAFKA-15982) Move GenericGroup state metrics to `GroupCoordinatorMetricsShard`

2024-01-09 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15982.
-
Resolution: Duplicate

Done in KAFKA-15870.

> Move GenericGroup state metrics to `GroupCoordinatorMetricsShard`
> -
>
> Key: KAFKA-15982
> URL: https://issues.apache.org/jira/browse/KAFKA-15982
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jeff Kim
>Assignee: Jeff Kim
>Priority: Major
>
> Currently, the generic group state metrics exist inside 
> `GroupCoordinatorMetrics` as global metrics. This causes issues as during 
> unload, we need to traverse through all groups and decrement the group size 
> counters. 
> Move the generic group state metrics to the shard level so that when a 
> partition is unloaded we automatically remove the counter.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15870) Move new group coordinator metrics from Yammer to Metrics

2024-01-09 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15870.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

> Move new group coordinator metrics from Yammer to Metrics
> -
>
> Key: KAFKA-15870
> URL: https://issues.apache.org/jira/browse/KAFKA-15870
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jeff Kim
>Assignee: Jeff Kim
>Priority: Major
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14519) Add metrics to the new coordinator

2024-01-09 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-14519.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

> Add metrics to the new coordinator
> --
>
> Key: KAFKA-14519
> URL: https://issues.apache.org/jira/browse/KAFKA-14519
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: Jeff Kim
>Priority: Major
>  Labels: kip-848-preview
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16040) Rename `Generic` to `Classic`

2023-12-21 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16040.
-
Resolution: Fixed

> Rename `Generic` to `Classic`
> -
>
> Key: KAFKA-16040
> URL: https://issues.apache.org/jira/browse/KAFKA-16040
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Blocker
> Fix For: 3.7.0
>
>
> People have raised concerns about using {{Generic}} as a name to designate 
> the old rebalance protocol. We considered using {{Legacy}} but discarded it 
> because there are still applications, such as Connect, using the old 
> protocol. We settled on using {{Classic}} for the {{Classic Rebalance 
> Protocol}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15456) Add support for OffsetFetch version 9 in consumer

2023-12-21 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15456.
-
Resolution: Fixed

> Add support for OffsetFetch version 9 in consumer
> -
>
> Key: KAFKA-15456
> URL: https://issues.apache.org/jira/browse/KAFKA-15456
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: David Jacot
>Assignee: Lianet Magrans
>Priority: Major
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16030) new group coordinator should check if partition goes offline during load

2023-12-21 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16030.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

> new group coordinator should check if partition goes offline during load
> 
>
> Key: KAFKA-16030
> URL: https://issues.apache.org/jira/browse/KAFKA-16030
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jeff Kim
>Assignee: Jeff Kim
>Priority: Major
> Fix For: 3.7.0
>
>
> The new coordinator stops loading if the partition goes offline during load. 
> However, the partition is still considered active. Instead, we should return 
> a NOT_LEADER_OR_FOLLOWER error during load.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
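The gist of the ticket above is that an offline partition must abort the load with the standard retriable error rather than staying "active". A small Python sketch of that shape follows; the class names and the callback-based online check are assumptions made for the example, not the real loader API:

```python
class NotLeaderOrFollowerError(Exception):
    """Stand-in for Kafka's retriable NOT_LEADER_OR_FOLLOWER error."""


class CoordinatorLoader:
    def __init__(self, is_online):
        self.is_online = is_online  # callable: () -> bool, checked per record

    def load(self, records):
        loaded = []
        for record in records:
            if not self.is_online():
                # Partition went offline mid-load: surface the retriable
                # error so the caller stops treating the partition as active.
                raise NotLeaderOrFollowerError("partition went offline during load")
            loaded.append(record)
        return loaded


loader = CoordinatorLoader(is_online=lambda: True)
print(loader.load(["rec-1", "rec-2"]))  # prints ['rec-1', 'rec-2']
```

Raising instead of silently stopping is what lets clients retry against the new leader.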


[jira] [Updated] (KAFKA-16040) Rename `Generic` to `Classic`

2023-12-21 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot updated KAFKA-16040:

Description: People have raised concerns about using {{Generic}} as a name 
to designate the old rebalance protocol. We considered using {{Legacy}} but 
discarded it because there are still applications, such as Connect, using the 
old protocol. We settled on using {{Classic}} for the {{Classic Rebalance 
Protocol}}.

> Rename `Generic` to `Classic`
> -
>
> Key: KAFKA-16040
> URL: https://issues.apache.org/jira/browse/KAFKA-16040
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Blocker
> Fix For: 3.7.0
>
>
> People have raised concerns about using {{Generic}} as a name to designate 
> the old rebalance protocol. We considered using {{Legacy}} but discarded it 
> because there are still applications, such as Connect, using the old 
> protocol. We settled on using {{Classic}} for the {{Classic Rebalance 
> Protocol}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16040) Rename `Generic` to `Classic`

2023-12-21 Thread David Jacot (Jira)
David Jacot created KAFKA-16040:
---

 Summary: Rename `Generic` to `Classic`
 Key: KAFKA-16040
 URL: https://issues.apache.org/jira/browse/KAFKA-16040
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot
Assignee: David Jacot
 Fix For: 3.7.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16036) Add `group.coordinator.rebalance.protocols` and publish all new configs

2023-12-21 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-16036.
-
Resolution: Fixed

> Add `group.coordinator.rebalance.protocols` and publish all new configs
> ---
>
> Key: KAFKA-16036
> URL: https://issues.apache.org/jira/browse/KAFKA-16036
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Blocker
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16036) Add `group.coordinator.rebalance.protocols` and publish all new configs

2023-12-20 Thread David Jacot (Jira)
David Jacot created KAFKA-16036:
---

 Summary: Add `group.coordinator.rebalance.protocols` and publish 
all new configs
 Key: KAFKA-16036
 URL: https://issues.apache.org/jira/browse/KAFKA-16036
 Project: Kafka
  Issue Type: Sub-task
Reporter: David Jacot
Assignee: David Jacot
 Fix For: 3.7.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15971) Re-enable consumer integration tests for new consumer

2023-12-15 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15971.
-
Resolution: Fixed

> Re-enable consumer integration tests for new consumer
> -
>
> Key: KAFKA-15971
> URL: https://issues.apache.org/jira/browse/KAFKA-15971
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 3.7.0
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Major
>  Labels: consumer-threading-refactor, kip-848-preview
> Fix For: 3.7.0
>
>
> Re-enable the consumer integration tests for the new consumer making sure 
> that build stability is not impacted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15237) Implement write operation timeout

2023-12-13 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15237.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

> Implement write operation timeout
> -
>
> Key: KAFKA-15237
> URL: https://issues.apache.org/jira/browse/KAFKA-15237
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: Sagar Rao
>Priority: Major
>  Labels: kip-848-preview
> Fix For: 3.7.0
>
>
> In the Scala code, we rely on `offsets.commit.timeout.ms` to bound all the 
> writes. We should do the same in the new code. This is important to ensure 
> that the number of pending responses in the purgatory is bounded. The name of 
> the config is not ideal but we should keep it for backward-compatibility 
> reasons.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
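The bounding described above amounts to attaching a deadline to every pending write and expiring anything past it. A minimal sketch, assuming a simple min-heap purgatory and time injection for testability (the class name and API are invented for the example, not Kafka's actual purgatory):

```python
import heapq
import time


class WritePurgatory:
    """Pending coordinator writes, each bounded by one timeout.

    Illustrative sketch of reusing `offsets.commit.timeout.ms` to bound
    every write so the number of pending responses stays bounded.
    """

    def __init__(self, timeout_ms):
        self.timeout_s = timeout_ms / 1000.0
        self.pending = []  # min-heap of (deadline, operation id)

    def add(self, op_id, now=None):
        now = time.monotonic() if now is None else now
        heapq.heappush(self.pending, (now + self.timeout_s, op_id))

    def expire(self, now=None):
        # Pop every operation past its deadline; the caller would complete
        # them with a REQUEST_TIMED_OUT-style error.
        now = time.monotonic() if now is None else now
        expired = []
        while self.pending and self.pending[0][0] <= now:
            expired.append(heapq.heappop(self.pending)[1])
        return expired


p = WritePurgatory(timeout_ms=5000)
p.add(1, now=0.0)
print(p.expire(now=6.0))  # prints [1]: past the 5s deadline
```

Expiring with an error rather than blocking is what keeps the purgatory size bounded even when the log never catches up.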


[jira] [Resolved] (KAFKA-15981) update Group size only when groups size changes

2023-12-13 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15981.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

> update Group size only when groups size changes
> ---
>
> Key: KAFKA-15981
> URL: https://issues.apache.org/jira/browse/KAFKA-15981
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jeff Kim
>Assignee: Jeff Kim
>Priority: Major
> Fix For: 3.7.0
>
>
> Currently, we increment generic group metrics whenever we create a new Group 
> object while loading a partition. This is incorrect because the partition may 
> contain several records for the same group if they are in the active segment 
> or if the segment has not yet been compacted. 
> The same applies to removing groups; we can have multiple group tombstone 
> records. Instead, only increment the metric if we created a new group and 
> only decrement the metric if the group exists.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
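The fix described above is essentially a get-or-create pattern: mutate the counter only when the group map actually changes. A tiny Python sketch of that invariant, with invented names (this is not the real `GroupCoordinatorMetrics` code):

```python
class GroupMetrics:
    def __init__(self):
        self.num_groups = 0
        self.groups = {}

    def get_or_create(self, group_id):
        group = self.groups.get(group_id)
        if group is None:
            group = {"id": group_id}
            self.groups[group_id] = group
            self.num_groups += 1  # increment only on genuine creation
        return group              # replayed records hit the cached entry

    def maybe_remove(self, group_id):
        if self.groups.pop(group_id, None) is not None:
            self.num_groups -= 1  # ignore duplicate tombstone records


m = GroupMetrics()
m.get_or_create("g1")
m.get_or_create("g1")  # second record for the same group, e.g. uncompacted log
print(m.num_groups)    # prints 1
```

Tying the metric to the map mutation rather than the record replay makes the counter idempotent under duplicate records and duplicate tombstones alike.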


[jira] [Assigned] (KAFKA-15997) Ensure fairness in the uniform assignor

2023-12-12 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot reassigned KAFKA-15997:
---

Assignee: Ritika Reddy

> Ensure fairness in the uniform assignor
> ---
>
> Key: KAFKA-15997
> URL: https://issues.apache.org/jira/browse/KAFKA-15997
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Emanuele Sabellico
>Assignee: Ritika Reddy
>Priority: Minor
>
>  
>  
> Fairness has to be ensured in the uniform assignor as it was in the 
> cooperative-sticky one.
> There's a test, 0113 subtest u_multiple_subscription_changes in librdkafka, 
> where 8 consumers subscribe to the same topic, and it verifies that 
> all of them get 2 partitions assigned. But with the new protocol it seems 
> two consumers get assigned 3 partitions and one gets zero partitions. The test 
> doesn't configure any client.rack.
> {code:java}
> [0113_cooperative_rebalance  /478.183s] Consumer assignments 
> (subscription_variation 0) (stabilized) (no rebalance cb):
> [0113_cooperative_rebalance  /478.183s] Consumer C_0#consumer-3 assignment 
> (2): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [5] (2000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [8] (4000msgs)
> [0113_cooperative_rebalance  /478.183s] Consumer C_1#consumer-4 assignment 
> (3): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [0] (1000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [3] (2000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [13] (1000msgs)
> [0113_cooperative_rebalance  /478.184s] Consumer C_2#consumer-5 assignment 
> (2): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [6] (1000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [10] (2000msgs)
> [0113_cooperative_rebalance  /478.184s] Consumer C_3#consumer-6 assignment 
> (2): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [7] (1000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [9] (2000msgs)
> [0113_cooperative_rebalance  /478.184s] Consumer C_4#consumer-7 assignment 
> (2): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [11] (1000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [14] (3000msgs)
> [0113_cooperative_rebalance  /478.184s] Consumer C_5#consumer-8 assignment 
> (3): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [1] (2000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [2] (2000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [4] (1000msgs)
> [0113_cooperative_rebalance  /478.184s] Consumer C_6#consumer-9 assignment 
> (0): 
> [0113_cooperative_rebalance  /478.184s] Consumer C_7#consumer-10 assignment 
> (2): rdkafkatest_rnd24419cc75e59d8de_0113u_1 [12] (1000msgs), 
> rdkafkatest_rnd24419cc75e59d8de_0113u_1 [15] (1000msgs)
> [0113_cooperative_rebalance  /478.184s] 16/32 partitions assigned
> [0113_cooperative_rebalance  /478.184s] Consumer C_0#consumer-3 has 2 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_1#consumer-4 has 3 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_2#consumer-5 has 2 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_3#consumer-6 has 2 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_4#consumer-7 has 2 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_5#consumer-8 has 3 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_6#consumer-9 has 0 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [0113_cooperative_rebalance  /478.184s] Consumer C_7#consumer-10 has 2 
> assigned partitions (1 subscribed topic(s)), expecting 2 assigned partitions
> [                      /479.057s] 1 test(s) running: 
> 0113_cooperative_rebalance
> [                      /480.057s] 1 test(s) running: 
> 0113_cooperative_rebalance
> [                      /481.057s] 1 test(s) running: 
> 0113_cooperative_rebalance
> [0113_cooperative_rebalance  /482.498s] TEST FAILURE
> ### Test "0113_cooperative_rebalance (u_multiple_subscription_changes:2390: 
> use_rebalance_cb: 0, subscription_variation: 0)" failed at 
> test.c:1243:check_test_timeouts() at Thu Dec  7 15:52:15 2023: ###
> Test 0113_cooperative_rebalance (u_multiple_subscription_changes:2390: 
> use_rebalance_cb: 0, subscription_variation: 0) timed out (timeout set to 480 
> seconds)
> ./run-test.sh: line 62: 3512920 Killed                  $TEST $ARGS
> ###
> ### Test ./test-runner in bare mode FAILED! (return code 137) ###
> ###{code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15574) Update states and transitions for membership manager state machine

2023-12-11 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15574.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

> Update states and transitions for membership manager state machine
> --
>
> Key: KAFKA-15574
> URL: https://issues.apache.org/jira/browse/KAFKA-15574
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, consumer
>Reporter: Kirk True
>Assignee: Lianet Magrans
>Priority: Blocker
>  Labels: kip-848, kip-848-client-support, kip-848-e2e, 
> kip-848-preview
> Fix For: 3.7.0
>
>
> This task is to update the state machine so that it correctly acts as the 
> glue between the heartbeat request manager and the assignment reconciler.
> The state machine transitions from one state to another in response to 
> heartbeats, callback completion, errors, unsubscribing, and other external 
> events. A given transition may kick off one or more actions that are 
> implemented outside of the state machine.
> Steps:
>  # Update the set of states in the code as [defined in the diagrams on the 
> wiki|https://cwiki.apache.org/confluence/display/KAFKA/Consumer+rebalance#Consumerrebalance-RebalanceStateMachine]
>  # Ensure the correct state transitions are performed as responses to 
> external input
>  # _Define_ any actions that should be taken as a result of the above 
> transitions (commit before revoking partitions, stop fetching from partitions 
> being revoked, allow members that do not join a group)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
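The steps above (states, legal transitions, and transition-triggered actions) can be sketched as a transition table. The state and event names below are illustrative only — loosely inspired by the rebalance state machine linked in the ticket, not the real enum:

```python
# (current state, event) -> next state; anything absent is an illegal transition.
TRANSITIONS = {
    ("UNSUBSCRIBED", "subscribe"): "JOINING",
    ("JOINING", "heartbeat_ok"): "RECONCILING",
    ("RECONCILING", "callbacks_done"): "STABLE",
    ("STABLE", "new_assignment"): "RECONCILING",
    ("STABLE", "unsubscribe"): "LEAVING",
    ("LEAVING", "heartbeat_ok"): "UNSUBSCRIBED",
}


def transition(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} in state {state}")


print(transition("UNSUBSCRIBED", "subscribe"))  # prints JOINING
```

Keeping the table data-driven makes "ensure the correct state transitions" testable by enumeration, while the actions a transition kicks off (commit before revoke, stop fetching, etc.) live outside the table.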


[jira] [Resolved] (KAFKA-15978) New consumer sends OffsetCommit with empty member ID

2023-12-10 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15978.
-
Resolution: Fixed

> New consumer sends OffsetCommit with empty member ID
> 
>
> Key: KAFKA-15978
> URL: https://issues.apache.org/jira/browse/KAFKA-15978
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.7.0
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Major
>  Labels: CTR
> Fix For: 3.7.0
>
>
> Running the trogdor tests with the new consumer, it seemed that offsets were 
> not being committed correctly, although the records were being fetched 
> successfully. Upon investigation, it seems that the commit request manager 
> uses a cached member ID which means that its OffsetCommit requests are 
> rejected by the group coordinator.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14516) Implement static membership

2023-12-08 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-14516.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

> Implement static membership
> --
>
> Key: KAFKA-14516
> URL: https://issues.apache.org/jira/browse/KAFKA-14516
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: Sagar Rao
>Priority: Major
>  Labels: kip-848-preview
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15989) Upgrade existing generic group to consumer group

2023-12-08 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot reassigned KAFKA-15989:
---

Assignee: David Jacot

> Upgrade existing generic group to consumer group
> 
>
> Key: KAFKA-15989
> URL: https://issues.apache.org/jira/browse/KAFKA-15989
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Emanuele Sabellico
>Assignee: David Jacot
>Priority: Minor
>
> It should be possible to upgrade an existing generic group to a new consumer 
> group, in case it was using either the previous generic protocol or manual 
> partition assignment and commit.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15989) Upgrade existing generic group to consumer group

2023-12-08 Thread David Jacot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17794680#comment-17794680
 ] 

David Jacot commented on KAFKA-15989:
-

I discussed this with Emanuele offline. The issue was that a consumer with a 
group id and manually assigned partitions committed offsets. This creates a 
so-called simple (generic) group in the group coordinator. Later, the same 
consumer switched to using a subscription with the new consumer group protocol. 
It sent a heartbeat request to the group coordinator and the group coordinator 
denied it. The issue is that the group coordinator does not have any logic to 
upgrade the simple (generic) group to a consumer group yet. We plan to have full 
upgrade/downgrade support between the protocols. In the meantime, it seems that 
we could handle this case without waiting on that work: when a generic group is 
empty, we could already replace it with a consumer group.

> Upgrade existing generic group to consumer group
> 
>
> Key: KAFKA-15989
> URL: https://issues.apache.org/jira/browse/KAFKA-15989
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Emanuele Sabellico
>Priority: Minor
>
> It should be possible to upgrade an existing generic group to a new consumer 
> group, in case it was using either the previous generic protocol or manual 
> partition assignment and commit.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
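The interim approach in the comment above — replace an *empty* generic group with a consumer group when a new-protocol heartbeat arrives — can be sketched roughly as follows. All names, the group representation, and the error strings are assumptions for illustration; the real coordinator logic is far more involved:

```python
class GroupCoordinator:
    def __init__(self):
        self.groups = {}  # group id -> {"type": ..., "members": set()}

    def commit_offsets(self, group_id):
        # Manual assignment + commit creates a so-called simple (classic) group.
        self.groups.setdefault(group_id, {"type": "classic", "members": set()})

    def consumer_group_heartbeat(self, group_id, member_id):
        group = self.groups.get(group_id)
        if group is not None and group["type"] == "classic":
            if group["members"]:
                # Non-empty classic groups need real upgrade support.
                raise RuntimeError("cannot upgrade a non-empty classic group")
            # Empty classic group: replace it with a consumer group in place.
            group = {"type": "consumer", "members": set()}
            self.groups[group_id] = group
        elif group is None:
            group = {"type": "consumer", "members": set()}
            self.groups[group_id] = group
        group["members"].add(member_id)
        return group


c = GroupCoordinator()
c.commit_offsets("g1")                      # simple group from manual commits
g = c.consumer_group_heartbeat("g1", "m1")  # same id, new protocol
print(g["type"])                            # prints consumer
```

The key observation from the comment is that an empty group carries no membership state, so replacing it wholesale is safe; non-empty groups still need the full upgrade path.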


[jira] [Resolved] (KAFKA-15910) New group coordinator needs to generate snapshots while loading

2023-12-06 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15910.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

> New group coordinator needs to generate snapshots while loading
> ---
>
> Key: KAFKA-15910
> URL: https://issues.apache.org/jira/browse/KAFKA-15910
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jeff Kim
>Assignee: Jeff Kim
>Priority: Major
> Fix For: 3.7.0
>
>
> After the new coordinator loads a __consumer_offsets partition, it logs the 
> following exception when making a read operation (fetch/list groups, etc):
>  
> {noformat}
> java.lang.RuntimeException: No in-memory snapshot for epoch 740745. Snapshot 
> epochs are:
>     at 
> org.apache.kafka.timeline.SnapshotRegistry.getSnapshot(SnapshotRegistry.java:178)
>     at 
> org.apache.kafka.timeline.SnapshottableHashTable.snapshottableIterator(SnapshottableHashTable.java:407)
>     at 
> org.apache.kafka.timeline.TimelineHashMap$ValueIterator.(TimelineHashMap.java:283)
>     at 
> org.apache.kafka.timeline.TimelineHashMap$Values.iterator(TimelineHashMap.java:271)
>     ...{noformat}
>  
> This happens because we don't have a snapshot at the last updated high 
> watermark after loading. We cannot generate a snapshot at the high watermark 
> after loading all batches because they may contain records that have not yet 
> been committed. We also don't know how far the high watermark will advance, 
> so we need to generate a snapshot for each offset the loader observes that 
> is greater than the current high watermark. Then, once we add the high 
> watermark listener and the high watermark is updated, we can delete all 
> snapshots prior to it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15705) Add integration tests for Heartbeat API and GroupLeave API

2023-12-05 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15705.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

> Add integration tests for Heartbeat API and GroupLeave API
> --
>
> Key: KAFKA-15705
> URL: https://issues.apache.org/jira/browse/KAFKA-15705
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dongnuo Lyu
>Assignee: Dongnuo Lyu
>Priority: Major
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15061) CoordinatorPartitionWriter should reuse buffer

2023-12-04 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15061.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

> CoordinatorPartitionWriter should reuse buffer
> --
>
> Key: KAFKA-15061
> URL: https://issues.apache.org/jira/browse/KAFKA-15061
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>  Labels: kip-848-preview
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15061) CoordinatorPartitionWriter should reuse buffer

2023-11-30 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot reassigned KAFKA-15061:
---

Assignee: David Jacot

> CoordinatorPartitionWriter should reuse buffer
> --
>
> Key: KAFKA-15061
> URL: https://issues.apache.org/jira/browse/KAFKA-15061
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>  Labels: kip-848-preview
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15856) Add integration tests for JoinGroup API and SyncGroup API

2023-11-23 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-15856.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

> Add integration tests for JoinGroup API and SyncGroup API
> -
>
> Key: KAFKA-15856
> URL: https://issues.apache.org/jira/browse/KAFKA-15856
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dongnuo Lyu
>Assignee: Dongnuo Lyu
>Priority: Major
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

